Sample records for sensitivity analysis code

  1. Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III

    1996-01-01

    Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.

  2. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.

    2016-12-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities, so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities are, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
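
    The Sobol' analysis described above can be reproduced in outline with a generic sensitivity-analysis library. The sketch below uses the Python package SALib with made-up parameter names, bounds, and a stand-in model function in place of an actual Noah-MP run; it only illustrates how first-order and total-order Sobol' indices are obtained from parameter samples.

    ```python
    # Minimal Sobol' sensitivity sketch (illustrative only; parameter names, bounds,
    # and the model are placeholders for an actual Noah-MP simulation).
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["soil_surface_resistance", "soil_b_exponent", "snow_albedo_decay"],
        "bounds": [[0.5, 2.0], [2.0, 12.0], [0.1, 1.0]],
    }

    # N * (2D + 2) parameter sets from the Saltelli sampling scheme
    param_values = saltelli.sample(problem, 1024)

    def model(x):
        # Stand-in for one Noah-MP run returning, e.g., mean latent heat
        return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[0] * x[2]

    Y = np.array([model(row) for row in param_values])

    Si = sobol.analyze(problem, Y)
    print(Si["S1"])   # first-order indices
    print(Si["ST"])   # total-order indices
    ```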

  3. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
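
    As a rough illustration of the kind of problem LSENS handles, the sketch below integrates a toy kinetics equation with a stiff-capable BDF method (standing in for LSODE) and carries the forward sensitivity of the solution to a rate coefficient alongside the state. LSENS itself is a Fortran code using the decoupled direct method, so this is only a structural analogue in Python with an invented single-reaction model.

    ```python
    # Toy forward-sensitivity analogue of a kinetics + sensitivity calculation.
    import numpy as np
    from scipy.integrate import solve_ivp

    k = 100.0   # rate coefficient of the single reaction A -> B

    def rhs(t, z):
        # z = [y, s], where y is the species concentration and s = dy/dk
        y, s = z
        dy = -k * y            # kinetics:  dy/dt = -k y
        ds = -k * s - y        # direct (forward) sensitivity equation
        return [dy, ds]

    sol = solve_ivp(rhs, (0.0, 0.05), [1.0, 0.0], method="BDF",
                    rtol=1e-8, atol=1e-12)
    y_end, s_end = sol.y[:, -1]
    print(y_end, s_end)        # analytic: exp(-k t) and -t exp(-k t)
    ```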

  4. Sensitivity Analysis and Uncertainty Quantification for the LAMMPS Molecular Dynamics Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Picard, Richard Roy; Bhat, Kabekode Ghanasham

    2017-07-18

    We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.

  5. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  6. Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2010-06-01

    The need for a defendable and systematic Uncertainty and Sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.

  7. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
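
    A minimal sketch of the adjoint variable idea for a sizing variable, using a toy two-degree-of-freedom stiffness matrix rather than IFAD output; the point is that the sensitivity of a compliance functional is assembled purely from postprocessing quantities (displacements, an adjoint solve, and the stiffness derivative).

    ```python
    # Adjoint design sensitivity of compliance for a toy 2-DOF "structure".
    import numpy as np

    K0 = np.array([[4.0, -1.0], [-1.0, 3.0]])   # base stiffness (made up)
    f = np.array([1.0, 2.0])                     # load vector
    b = 1.5                                      # sizing variable, K(b) = b * K0

    K = b * K0
    u = np.linalg.solve(K, f)                    # state solve (from the FE code)
    psi = f @ u                                  # compliance functional

    # Adjoint solve: K lam = d psi / d u = f (for compliance, lam equals u)
    lam = np.linalg.solve(K, f)

    dK_db = K0                                   # analytic stiffness derivative
    dpsi_db = -lam @ dK_db @ u                   # adjoint sensitivity, postprocessing only

    # Finite-difference check
    eps = 1e-6
    u_p = np.linalg.solve((b + eps) * K0, f)
    print(dpsi_db, (f @ u_p - psi) / eps)
    ```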

  8. Experiences on p-Version Time-Discontinuous Galerkin's Method for Nonlinear Heat Transfer Analysis and Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2004-01-01

    The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply in the form of A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
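
    A scalar analogue of the procedure described above, assuming a made-up temperature-dependent coefficient: Newton-Raphson solves A(x)x = c, and direct differentiation of the converged residual gives the sensitivity of x to a model parameter.

    ```python
    # Newton-Raphson solve of a nonlinear equation A(x) x = c, then direct
    # differentiation for the sensitivity dx/dk1 (toy coefficients, not the paper's model).
    k0, k1, c = 2.0, 0.5, 10.0

    def residual(x):
        return (k0 + k1 * x) * x - c

    def jacobian(x):                      # dR/dx used in the Newton iteration
        return k0 + 2.0 * k1 * x

    x = 1.0                               # initial guess
    for _ in range(50):                   # Newton-Raphson on R(x) = 0
        dx = -residual(x) / jacobian(x)
        x += dx
        if abs(dx) < 1e-12:
            break

    # Direct differentiation: (dR/dx) * dx/dk1 + dR/dk1 = 0, with dR/dk1 = x**2
    dx_dk1 = -(x ** 2) / jacobian(x)

    # Finite-difference check at a perturbed k1
    eps = 1e-7
    k1p, xp = k1 + eps, x
    for _ in range(50):
        xp -= ((k0 + k1p * xp) * xp - c) / (k0 + 2.0 * k1p * xp)
    print(x, dx_dk1, (xp - x) / eps)
    ```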

  9. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two-dimensional thin-layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy, based on the finite element method and an elastic membrane representation of the computational domain, is successfully tested; it circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having significantly improved performance in the aerodynamic response of interest.

  10. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  11. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
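
    The core idea can be sketched with a small regular-expression substitution: every "value +/- tolerance" field in an input deck is replaced by a random draw, producing one perturbed deck per Monte Carlo sample. The file fields, variable names, and uniform distribution below are illustrative assumptions, not the actual LAURA/HARA/FIAT input formats.

    ```python
    # Sketch of natural-language tolerance handling for Monte Carlo sensitivity analysis.
    import re
    import random

    # Hypothetical input-file fragment with drawing-style tolerances
    deck = """
    wall_temperature = 300.0 +/- 5.0
    emissivity       = 0.85 +/- 0.02
    chord_length     = 5.25 +/- 0.01
    """

    TOL = re.compile(r"([-+]?\d*\.?\d+)\s*\+/-\s*([-+]?\d*\.?\d+)")

    def realize(text, rng):
        """Replace every 'value +/- tol' with one random draw (uniform here)."""
        def draw(m):
            nominal, tol = float(m.group(1)), float(m.group(2))
            return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
        return TOL.sub(draw, text)

    rng = random.Random(42)
    samples = [realize(deck, rng) for _ in range(3)]   # one input deck per MC sample
    print(samples[0])
    ```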

  12. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
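
    One simple way to exploit cheap sensitivity derivatives, consistent with the spirit of the abstract though not necessarily the paper's exact estimator, is to use the first-order Taylor expansion of the response as a control variate whose mean is known in closed form; the response function and input distribution below are invented for illustration.

    ```python
    # Sensitivity-derivative control variate for a Monte Carlo mean estimate.
    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):                          # stand-in for an expensive analysis output
        return np.sin(x[0]) + 0.5 * x[1] ** 2

    x0 = np.array([0.2, 0.1])          # nominal input
    grad = np.array([np.cos(x0[0]), x0[1]])   # "cheap" sensitivity derivatives at x0

    mu, sig, n = x0, 0.05, 2000
    X = rng.normal(mu, sig, size=(n, 2))

    F = np.array([f(x) for x in X])
    G = f(x0) + (X - x0) @ grad        # first-order Taylor control variate
    EG = f(x0) + (mu - x0) @ grad      # its mean is known in closed form

    plain = F.mean()
    cv = (F - G).mean() + EG           # control-variate estimator
    print(plain, cv, F.var(), (F - G).var())   # corrected samples have lower variance
    ```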

  13. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  14. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among the many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require large amounts of memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order thermal derivatives. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
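
    ADIFOR operates on Fortran source, but the same first- and second-order derivative information can be illustrated in Python with JAX; the response function below is only a stand-in for a p-version finite element thermal response.

    ```python
    # First- and second-order derivatives via automatic differentiation (JAX sketch).
    import jax
    import jax.numpy as jnp

    def peak_temperature(p):
        # Stand-in for a finite element response; p = (conductivity, heat load)
        k, q = p
        return q / k + 0.1 * jnp.sin(k * q)

    p0 = jnp.array([2.0, 5.0])
    grad = jax.grad(peak_temperature)(p0)      # first-order derivatives
    hess = jax.hessian(peak_temperature)(p0)   # second-order derivatives
    print(grad, hess)
    ```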

  15. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  16. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  17. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.

  18. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  19. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reactions, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.

  20. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large-aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments, including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and processes, and optical mode generation.

  1. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.

  2. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
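
    A compact sketch of weighted nonlinear least squares with a prior-information penalty, written in Python with SciPy rather than VBA; the breakthrough-curve model, weights, and prior values are invented for illustration, and the Jacobian at the optimum doubles as a local sensitivity matrix.

    ```python
    # Weighted nonlinear least squares with a penalty (prior) term, plus a crude
    # parameter-uncertainty estimate from the Jacobian at the optimum.
    import numpy as np
    from scipy.optimize import least_squares

    t = np.linspace(0.5, 10.0, 30)

    def model(theta, t):
        v, D = theta                          # "velocity" and "dispersion" stand-ins
        return 0.5 * (1.0 + np.tanh((v * t - 5.0) / np.sqrt(4.0 * D * t)))

    true = np.array([1.0, 0.2])
    rng = np.random.default_rng(1)
    obs = model(true, t) + rng.normal(0.0, 0.01, t.size)   # synthetic tracer data

    w = 1.0 / 0.01                            # observation weights (1 / sigma)
    prior, prior_w = np.array([0.9, 0.3]), 2.0

    def residuals(theta):
        fit = w * (model(theta, t) - obs)     # weighted data misfit
        pen = prior_w * (theta - prior)       # penalty / prior-information term
        return np.concatenate([fit, pen])

    sol = least_squares(residuals, x0=np.array([0.5, 0.5]))
    J = sol.jac                               # local sensitivity (Jacobian) at optimum
    cov = np.linalg.inv(J.T @ J)              # approximate parameter covariance
    print(sol.x, np.sqrt(np.diag(cov)))
    ```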

  3. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  4. New technologies for advanced three-dimensional optimum shape design in aeronautics

    NASA Astrophysics Data System (ADS)

    Dervieux, Alain; Lanteri, Stéphane; Malé, Jean-Michel; Marco, Nathalie; Rostaing-Schmidt, Nicole; Stoufflet, Bruno

    1999-05-01

    The analysis of complex flows around realistic aircraft geometries is becoming more and more predictive. In order to obtain this result, the complexity of flow analysis codes has been constantly increasing, involving more refined fluid models and sophisticated numerical methods. These codes can only run on top computers, exhausting their memory and CPU capabilities. It is, therefore, difficult to introduce the best analysis codes in a shape optimization loop: most previous works in the optimum shape design field used only simplified analysis codes. Moreover, as the most popular optimization methods are the gradient-based ones, the more complex the flow solver, the more difficult it is to compute the sensitivity code. However, emerging technologies are contributing to make such an ambitious project, of including a state-of-the-art flow analysis code into an optimization loop, feasible. Among those technologies, there are three important issues that this paper wishes to address: shape parametrization, automated differentiation and parallel computing. Shape parametrization allows faster optimization by reducing the number of design variables; in this work, it relies on a hierarchical multilevel approach. The sensitivity code can be obtained using automated differentiation. The automated approach is based on software manipulation tools, which allow the differentiation to be quick and the resulting differentiated code to be rather fast and reliable. In addition, the parallel algorithms implemented in this work allow the resulting optimization software to run on increasingly larger geometries.

  5. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  6. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI, which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of the forward and inverse problem, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  7. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Branch, Oliver; Attinger, Sabine; Thober, Stephan

    2016-09-01

    Land surface models incorporate a large number of process descriptions, containing a multitude of parameters. These parameters are typically read from tabulated input files. Some of these parameters might be fixed numbers in the computer code though, which hinder model agility during calibration. Here we identified 139 hard-coded parameters in the model code of the Noah land surface model with multiple process options (Noah-MP). We performed a Sobol' global sensitivity analysis of Noah-MP for a specific set of process options, which includes 42 out of the 71 standard parameters and 75 out of the 139 hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated at 12 catchments within the United States with very different hydrometeorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its applicable standard parameters (i.e., Sobol' indexes above 1%). The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for direct evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities because of their tight coupling via the water balance. A calibration of Noah-MP against either of these fluxes should therefore give comparable results. Moreover, these fluxes are sensitive to both plant and soil parameters. Calibrating, for example, only soil parameters hence limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.

  8. Validation of ICD-9 Codes for Stable Miscarriage in the Emergency Department.

    PubMed

    Quinley, Kelly E; Falck, Ailsa; Kallan, Michael J; Datner, Elizabeth M; Carr, Brendan G; Schreiber, Courtney A

    2015-07-01

    International Classification of Disease, Ninth Revision (ICD-9) diagnosis codes have not been validated for identifying cases of missed abortion, where a pregnancy is no longer viable but the cervical os remains closed. Our goal was to assess whether ICD-9 code "632" for missed abortion has high sensitivity and positive predictive value (PPV) in identifying patients in the emergency department (ED) with cases of stable early pregnancy failure (EPF). We studied females ages 13-50 years presenting to the ED of an urban academic medical center. We approached our analysis from two perspectives, evaluating both the sensitivity and PPV of ICD-9 code "632" in identifying patients with stable EPF. All patients with chief complaints "pregnant and bleeding" or "pregnant and cramping" over a 12-month period were identified. We randomly reviewed two months of patient visits and calculated the sensitivity of ICD-9 code "632" for true cases of stable miscarriage. To establish the PPV of ICD-9 code "632" for capturing missed abortions, we identified patients whose visits from the same time period were assigned ICD-9 code "632," and identified those with actual cases of stable EPF. We reviewed 310 patient records (17.6% of 1,762 sampled). The ICD-9 code for missed abortion was assigned to 13 of the 31 patient records with true stable EPF (sensitivity=41.9%), and 140 of the 142 patients without EPF were not assigned the ICD-9 code "632" (specificity=98.6%). Of the 52 eligible patients identified by ICD-9 code "632," 39 cases met the criteria for stable EPF (PPV=75.0%). ICD-9 code "632" has low sensitivity for identifying stable EPF, but its high specificity and moderately high PPV are valuable for studying cases of stable EPF in epidemiologic studies using administrative data.
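
    The reported test characteristics follow directly from the counts in the abstract; a short check, with the counts interpreted as stated (13 of 31 true stable-EPF records carried the code, 140 of 142 non-EPF visits did not, and 39 of 52 coded visits were true cases):

    ```python
    # Re-derive the reported test characteristics from the counts in the abstract.
    def percent(num, den):
        return 100.0 * num / den

    sensitivity = percent(13, 31)    # true stable-EPF cases that carried code "632"
    specificity = percent(140, 142)  # non-EPF visits that were not assigned code "632"
    ppv = percent(39, 52)            # code-"632" visits that were true stable EPF

    print(f"sensitivity={sensitivity:.1f}%  specificity={specificity:.1f}%  PPV={ppv:.1f}%")
    # -> sensitivity=41.9%  specificity=98.6%  PPV=75.0%
    ```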

  9. Industrial Code Development

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1991-01-01

    The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user-friendly interaction, context-sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. The first three codes to be completed and which are presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.

  10. Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel

    NASA Astrophysics Data System (ADS)

    Edelmann, Paul G.

    There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal, was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.

  11. Impact of the hard-coded parameters on the hydrologic fluxes of the land surface model Noah-MP

    NASA Astrophysics Data System (ADS)

    Cuntz, Matthias; Mai, Juliane; Samaniego, Luis; Clark, Martyn; Wulfmeyer, Volker; Attinger, Sabine; Thober, Stephan

    2016-04-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The process descriptions contain a number of parameters that can be soil or plant type dependent and are typically read from tabulated input files. Land surface models may have, however, process descriptions that contain fixed, hard-coded numbers in the computer code, which are not identified as model parameters. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the importance of the fixed values on restricting the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options, which are mostly spatially constant values. This is in addition to the 71 standard parameters of Noah-MP, which mostly get distributed spatially by given vegetation and soil input maps. We performed a Sobol' global sensitivity analysis of Noah-MP to variations of the standard and hard-coded parameters for a specific set of process options. 42 standard parameters and 75 hard-coded parameters were active with the chosen process options. The sensitivities of the hydrologic output fluxes latent heat and total runoff as well as their component fluxes were evaluated. These sensitivities were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's hydrologic output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Surface runoff is sensitive to almost all hard-coded parameters of the snow processes and the meteorological inputs. These parameter sensitivities diminish in total runoff. Assessing these parameters in model calibration would require detailed snow observations or the calculation of hydrologic signatures of the runoff data. Latent heat and total runoff exhibit very similar sensitivities towards standard and hard-coded parameters in Noah-MP because of their tight coupling via the water balance. Calibrating Noah-MP against either latent heat observations or river runoff data should therefore give comparable results. Latent heat and total runoff are sensitive to both plant and soil parameters. Calibrating only a sub-set of parameters, for example only soil parameters, thus limits the ability to derive realistic model parameters. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.

  12. Discrete sensitivity derivatives of the Navier-Stokes equations with a parallel Krylov solver

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Taylor, Arthur C., III

    1994-01-01

    This paper solves an 'incremental' form of the sensitivity equations derived by differentiating the discretized thin-layer Navier Stokes equations with respect to certain design variables of interest. The equations are solved with a parallel, preconditioned Generalized Minimal RESidual (GMRES) solver on a distributed-memory architecture. The 'serial' sensitivity analysis code is parallelized by using the Single Program Multiple Data (SPMD) programming model, domain decomposition techniques, and message-passing tools. Sensitivity derivatives are computed for low and high Reynolds number flows over a NACA 1406 airfoil on a 32-processor Intel Hypercube, and found to be identical to those computed on a single-processor Cray Y-MP. It is estimated that the parallel sensitivity analysis code has to be run on 40-50 processors of the Intel Hypercube in order to match the single-processor processing time of a Cray Y-MP.
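
    Structurally, each such sensitivity solve has the form (dR/dQ) dQ/dbeta = -dR/dbeta, a large sparse nonsymmetric linear system. The sketch below solves a system of that shape with SciPy's (serial) preconditioned GMRES using a stand-in tridiagonal Jacobian; the parallel SPMD and domain-decomposition aspects of the paper are not represented.

    ```python
    # Preconditioned GMRES solve of a stand-in discrete sensitivity system.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import gmres, spilu, LinearOperator

    n = 200
    # Stand-in for the flow Jacobian dR/dQ (sparse, nonsymmetric)
    A = diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
    # Stand-in for -dR/dbeta, the explicit dependence on one design variable
    b = np.ones(n)

    # ILU preconditioner, a common pairing with GMRES for such systems
    ilu = spilu(A)
    M = LinearOperator((n, n), matvec=ilu.solve)

    dQ_dbeta, info = gmres(A, b, M=M)
    print(info, np.linalg.norm(A @ dQ_dbeta - b))   # info == 0 means converged
    ```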

  13. Comparative sequence analysis of acid sensitive/resistance proteins in Escherichia coli and Shigella flexneri

    PubMed Central

    Manikandan, Selvaraj; Balaji, Seetharaaman; Kumar, Anil; Kumar, Rita

    2007-01-01

    The molecular basis for the survival of bacteria under extreme conditions in which growth is inhibited is a question of great current interest. A preliminary study was carried out to determine residue pattern conservation among the antiporters of enteric bacteria responsible for extreme acid sensitivity, especially in Escherichia coli and Shigella flexneri. Here we found molecular evidence that proves the relationship between E. coli and S. flexneri. Multiple sequence alignment of the gadC-coded acid-sensitive antiporter showed many conserved residue patterns at regular intervals in the N-terminal region. It was observed that as the alignment approaches the C-terminal, the number of conserved residues decreases, indicating that the N-terminal region of this protein has a much more active role than the carboxyl terminal. The motif, FHLVFFLLLGG, is well conserved within the entire gadC-coded protein at the amino terminal. The motif is also partially conserved among other antiporters (which are not coded by gadC) that are involved in the acid sensitivity/resistance mechanism. Phylogenetic cluster analysis proves the relationship of Escherichia coli and Shigella flexneri. The gadC-coded proteins converge as a clade and diverge from other antiporters belonging to the amino acid-polyamine-organocation (APC) superfamily. PMID:21670792

  14. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert one analysis's output to the other's input, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
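
    A minimal sketch of Sobieski-style Global Sensitivity Equations for two coupled disciplines, with invented scalar partial sensitivities: the local (partial) derivatives from each analysis are assembled into one linear system whose solution gives the coupled total derivatives.

    ```python
    # Global Sensitivity Equations for two coupled scalar disciplines (toy values).
    import numpy as np

    dA_dS = np.array([[0.3]])   # d(aero output)/d(structural deflection)
    dS_dA = np.array([[0.5]])   # d(deflection)/d(aero output)
    dA_dx = np.array([[1.0]])   # explicit dependence of aero output on design var x
    dS_dx = np.array([[0.2]])   # explicit dependence of structural output on x

    # [  I      -dA_dS ] [dA/dx]   [dA_dx]
    # [ -dS_dA    I    ] [dS/dx] = [dS_dx]
    I = np.eye(1)
    lhs = np.block([[I, -dA_dS], [-dS_dA, I]])
    rhs = np.vstack([dA_dx, dS_dx])

    total = np.linalg.solve(lhs, rhs)
    print(total)   # coupled (total) derivatives of both responses w.r.t. x
    ```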

  15. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  16. Jet-A reaction mechanism study for combustion application

    NASA Technical Reports Server (NTRS)

    Lee, Chi-Ming; Kundu, Krishna; Acosta, Waldo

    1991-01-01

    Simplified chemical kinetic reaction mechanisms for the combustion of Jet A fuel were studied. Initially, 40 reacting species and 118 elementary chemical reactions were chosen based on a literature review. Through a sensitivity analysis with the use of the LSENS General Kinetics and Sensitivity Analysis Code, 16 species and 21 elementary chemical reactions were determined. This mechanism is first justified by comparison of calculated ignition delay times with the available shock tube data, then it is validated by comparison of calculated emissions from the plug flow reactor code with in-house flame tube data.

  17. The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

    PubMed Central

    Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

    2012-01-01

    Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908

  18. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method, which gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. The result of design sensitivity analysis is used to carry out design optimization of a built-up structure.

  19. CAFNA®, coded aperture fast neutron analysis for contraband detection: Preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L.; Lanza, R.C.

    1999-12-01

    The authors have developed a near-field coded aperture imaging system for use with fast neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast neutron interactions with the object being examined. The position of the nuclear elements is determined by the location of the gamma emitters. For existing fast neutron techniques, in Pulsed Fast Neutron Analysis (PFNA), neutrons are used with very low efficiency; in Fast Neutron Analysis (FNA), the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) the authors have developed, the efficiency for both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n³ volume elements (voxels) in a cube of n resolution elements on a side, they can compare the sensitivity with other neutron probing techniques. As compared to PFNA, the improvement for neutron utilization is n², where the total number of voxels in the object being examined is n³. Compared to FNA, the improvement for gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n²/2, where n² is the number of total detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system depends also on the nature and distribution of background events, and this comparison may reduce somewhat the effective sensitivity of CAFNA. They have performed analysis, Monte Carlo simulations, and preliminary experiments using low and high energy gamma-ray sources. The results show that a high sensitivity 3-D contraband imaging and detection system can be realized by using CAFNA.

  20. Optimization Issues with Complex Rotorcraft Comprehensive Analysis

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.

    1998-01-01

    This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
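
    ADIFOR itself performs source-to-source transformation of FORTRAN and is not reproduced here; the short Python sketch below only illustrates the forward-mode idea it relies on (propagating exact derivatives alongside values, with no step-size error), using a hypothetical function f.

      # Minimal sketch of the forward-mode idea behind AD tools such as ADIFOR:
      # propagate (value, derivative) pairs exactly through the arithmetic,
      # so derivatives carry no finite-difference step-size error.
      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.der + other.der)
          __radd__ = __add__
          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.der * other.val + self.val * other.der)
          __rmul__ = __mul__

      def f(x):
          # any code built from the overloaded operations is differentiated exactly
          return 3.0 * x * x + 2.0 * x + 1.0

      x = Dual(2.0, 1.0)      # seed dx/dx = 1
      y = f(x)
      print(y.val, y.der)     # value 17.0, exact derivative dy/dx = 14.0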

  1. SensA: web-based sensitivity analysis of SBML models.

    PubMed

    Floettmann, Max; Uhlendorf, Jannis; Scharp, Till; Klipp, Edda; Spiesser, Thomas W

    2014-10-01

    SensA is a web-based application for sensitivity analysis of mathematical models. The sensitivity analysis is based on metabolic control analysis, computing the local, global and time-dependent properties of model components. Interactive visualization facilitates interpretation of usually complex results. SensA can contribute to the analysis, adjustment and understanding of mathematical models for dynamic systems. SensA is available at http://gofid.biologie.hu-berlin.de/ and can be used with any modern browser. The source code can be found at https://bitbucket.org/floettma/sensa/ (MIT license) © The Author 2014. Published by Oxford University Press.
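
    SensA is used through its web interface, so the snippet below is not its code; it is a generic sketch of the kind of scaled local sensitivity coefficient used in metabolic control analysis, C = (p/y) dy/dp, applied to a hypothetical Michaelis-Menten rate with an assumed central-difference approximation.

      # Minimal sketch (not SensA's code): scaled local sensitivity coefficients,
      #   C = (p / y(p)) * dy/dp,
      # approximated with a central finite difference. The toy model and the
      # parameter names below are illustrative only.
      def rate(params):
          vmax, km, s = params["vmax"], params["km"], params["s"]
          return vmax * s / (km + s)

      def scaled_sensitivity(model, params, name, rel_step=1e-6):
          p0, y0 = params[name], model(params)
          h = rel_step * p0
          up, down = dict(params, **{name: p0 + h}), dict(params, **{name: p0 - h})
          dydp = (model(up) - model(down)) / (2.0 * h)
          return (p0 / y0) * dydp

      params = {"vmax": 1.0, "km": 0.5, "s": 2.0}
      for name in params:
          print(name, scaled_sensitivity(rate, params, name))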

  2. CFD Sensitivity Analysis of a Modern Civil Transport Near Buffet-Onset Conditions

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Allison, Dennis O.; Biedron, Robert T.; Buning, Pieter G.; Gainer, Thomas G.; Morrison, Joseph H.; Rivers, S. Melissa; Mysko, Stephen J.; Witkowski, David P.

    2001-01-01

    A computational fluid dynamics (CFD) sensitivity analysis is conducted for a modern civil transport at several conditions ranging from mostly attached flow to flow with substantial separation. Two different Navier-Stokes computer codes and four different turbulence models are utilized, and results are compared both to wind tunnel data at flight Reynolds number and to flight data. In-depth CFD sensitivities to grid, code, spatial differencing method, aeroelastic shape, and turbulence model are described for conditions near buffet onset (a condition at which significant separation exists). In summary, given a grid of sufficient density for a given aeroelastic wing shape, the combined approximate error band in CFD at conditions near buffet onset due to code, spatial differencing method, and turbulence model is: 6% in lift, 7% in drag, and 16% in moment. The two biggest contributors to this uncertainty are turbulence model and code. Computed results agree well with wind tunnel surface pressure measurements both for an overspeed 'cruise' case as well as a case with small trailing edge separation. At and beyond buffet onset, computed results agree well over the inner half of the wing, but shock location is predicted too far aft at some of the outboard stations. Lift, drag, and moment curves are predicted in good agreement with experimental results from the wind tunnel.

  3. Space station integrated wall design and penetration damage control

    NASA Technical Reports Server (NTRS)

    Coronado, A. R.; Gibbins, M. N.; Wright, M. A.; Stern, P. H.

    1987-01-01

    The analysis code BUMPER executes a numerical solution to the problem of calculating the probability of no penetration (PNP) of a spacecraft subject to man-made orbital debris or meteoroid impact. The codes were developed on a DEC VAX 11/780 computer running the Virtual Memory System (VMS) operating system and are written in FORTRAN 77 with no VAX extensions. To help illustrate the steps involved, a single sample analysis is performed. The example used is the space station reference configuration. The finite element model (FEM) of this configuration is relatively complex but demonstrates many BUMPER features. The computer tools and guidelines are described for constructing a FEM for the space station under consideration. The methods used to analyze the sensitivity of PNP to variations in design are described. Ways are suggested for developing contour plots of the sensitivity study data. Additional BUMPER analysis examples are provided, including FEMs, command inputs, and data outputs. The mathematical theory used as the basis for the code is described, along with the data flow within the analysis.
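
    BUMPER's inputs are full finite element models and environment definitions, which are not reproduced here; the sketch below only illustrates the standard Poisson-style PNP bookkeeping commonly used in debris/meteoroid risk analysis, with purely illustrative element areas and penetrating fluxes.

      # Minimal sketch (not BUMPER itself) of Poisson-based probability of no
      # penetration: PNP = exp(-sum_i N_i), where N_i is the expected number of
      # penetrating impacts on finite element i over the exposure time. The
      # element areas and penetrating fluxes below are illustrative assumptions.
      import math

      elements = [
          # (area [m^2], penetrating-particle flux [impacts / m^2 / yr])
          (12.0, 1.2e-5),
          (30.0, 4.0e-6),
          (8.5,  9.0e-6),
      ]
      exposure_years = 10.0

      expected_hits = sum(area * flux * exposure_years for area, flux in elements)
      pnp = math.exp(-expected_hits)
      print(f"expected penetrations: {expected_hits:.4f}, PNP: {pnp:.4f}")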

  4. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    NASA Astrophysics Data System (ADS)

    Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.

    2017-04-01

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.

  5. What Do Differences Between Multi-voxel and Univariate Analysis Mean? How Subject-, Voxel-, and Trial-level Variance Impact fMRI Analysis

    PubMed Central

    Davis, Tyler; LaRocque, Karen F.; Mumford, Jeanette; Norman, Kenneth A.; Wagner, Anthony D.; Poldrack, Russell A.

    2014-01-01

    Multi-voxel pattern analysis (MVPA) has led to major changes in how fMRI data are analyzed and interpreted. Many studies now report both MVPA results and results from standard univariate voxel-wise analysis, often with the goal of drawing different conclusions from each. Because MVPA results can be sensitive to latent multidimensional representations and processes whereas univariate voxel-wise analysis cannot, one conclusion that is often drawn when MVPA and univariate results differ is that the activation patterns underlying MVPA results contain a multidimensional code. In the current study, we conducted simulations to formally test this assumption. Our findings reveal that MVPA tests are sensitive to the magnitude of voxel-level variability in the effect of a condition within subjects, even when the same linear relationship is coded in all voxels. We also find that MVPA is insensitive to subject-level variability in mean activation across an ROI, which is the primary variance component of interest in many standard univariate tests. Together, these results illustrate that differences between MVPA and univariate tests do not afford conclusions about the nature or dimensionality of the neural code. Instead, targeted tests of the informational content and/or dimensionality of activation patterns are critical for drawing strong conclusions about the representational codes that are indicated by significant MVPA results. PMID:24768930

  6. Analysis of the temperature sensitivity of Japanese rubella vaccine strain TO-336.vac and its effect on immunogenicity in the guinea pig.

    PubMed

    Okamoto, Kiyoko; Ami, Yasushi; Suzaki, Yuriko; Otsuki, Noriyuki; Sakata, Masafumi; Takeda, Makoto; Mori, Yoshio

    2016-04-01

    The marker of Japanese domestic rubella vaccines is their lack of immunogenicity in guinea pigs. This has long been thought to be related to the temperature sensitivity of the viruses, but supporting evidence has not been described. In this study, we generated infectious clones of TO-336.vac, a Japanese domestic vaccine, TO-336.GMK5, the parental virus of TO-336.vac, and their mutants, and determined the molecular bases of their temperature sensitivity and immunogenicity in guinea pigs. The results revealed that Ser(1159) in the non-structural protein-coding region was dominantly responsible for the temperature sensitivity of TO-336.vac, while the structural protein-coding region affected the temperature sensitivity subordinately. The findings further suggested that the temperature sensitivity of TO-336.vac affected antibody induction in guinea pigs after subcutaneous inoculation. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Validation of Carotid Artery Revascularization Coding in Ontario Health Administrative Databases.

    PubMed

    Hussain, Mohamad A; Mamdani, Muhammad; Saposnik, Gustavo; Tu, Jack V; Turkel-Parrella, David; Spears, Julian; Al-Omran, Mohammed

    2016-04-02

    The positive predictive value (PPV) of carotid endarterectomy (CEA) and carotid artery stenting (CAS) procedure and post-operative complication coding were assessed in Ontario health administrative databases. Between 1 April 2002 and 31 March 2014, a random sample of 428 patients was identified using Canadian Classification of Health Intervention (CCI) procedure codes and Ontario Health Insurance Plan (OHIP) billing codes from administrative data. A blinded chart review was conducted at two high-volume vascular centers to assess the level of agreement between the administrative records and the corresponding patients' hospital charts. PPV was calculated with 95% confidence intervals (CIs) to estimate the validity of CEA and CAS coding, utilizing hospital charts as the gold standard. The sensitivity of CEA and CAS coding was also assessed by linking two independent databases of 540 CEA-treated patients (Ontario Stroke Registry) and 140 CAS-treated patients (single-center CAS database) to administrative records. PPV for CEA ranged from 99% to 100% and sensitivity ranged from 81.5% to 89.6% using CCI and OHIP codes. A CCI code with a PPV of 87% (95% CI, 78.8-92.9) and sensitivity of 92.9% (95% CI, 87.4-96.1) in identifying CAS was also identified. PPV for post-admission complication diagnosis coding was 71.4% (95% CI, 53.7-85.4) for stroke/transient ischemic attack, and 82.4% (95% CI, 56.6-96.2) for myocardial infarction. Our analysis demonstrated that the codes used in administrative databases accurately identify CEA- and CAS-treated patients. Researchers can confidently use administrative data to conduct population-based studies of CEA and CAS.
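
    As a hypothetical illustration of the validation arithmetic reported above, the sketch below computes PPV and sensitivity with normal-approximation 95% confidence intervals from 2x2 chart-review counts; the counts themselves are made up, not the study's data.

      # Minimal sketch: PPV and sensitivity with normal-approximation 95% CIs
      # from chart-review validation counts. The counts below are hypothetical.
      import math

      def proportion_ci(successes, total, z=1.96):
          p = successes / total
          half = z * math.sqrt(p * (1.0 - p) / total)
          return p, max(0.0, p - half), min(1.0, p + half)

      tp, fp, fn = 415, 4, 45   # true positives, false positives, false negatives
      ppv = proportion_ci(tp, tp + fp)
      sens = proportion_ci(tp, tp + fn)
      print("PPV  %.3f (95%% CI %.3f-%.3f)" % ppv)
      print("Sens %.3f (95%% CI %.3f-%.3f)" % sens)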

  8. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  9. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
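
    SBML-SAT is a self-contained package and its internals are not shown here; the sketch below is a generic example of the Sobol global sensitivity analysis it implements, run on a toy model with the separate SALib Python package and made-up parameter names and bounds.

      # Minimal sketch of a Sobol global sensitivity analysis on a toy model,
      # using the SALib package; this is not SBML-SAT's code. Parameter names
      # and bounds are illustrative.
      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["k1", "k2", "k3"],
          "bounds": [[0.1, 1.0], [0.1, 1.0], [0.1, 1.0]],
      }

      def model(x):
          k1, k2, k3 = x
          return k1 / (k2 + k3)      # stand-in for a steady-state model output

      X = saltelli.sample(problem, 1024)          # Saltelli sampling scheme
      Y = np.array([model(row) for row in X])
      Si = sobol.analyze(problem, Y)
      print(dict(zip(problem["names"], Si["S1"])))   # first-order indices
      print(dict(zip(problem["names"], Si["ST"])))   # total-order indices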

  10. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.

  11. A PARAMETRIC STUDY OF BCS RF SURFACE IMPEDANCE WITH MAGNETIC FIELD USING THE XIAO CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reece, Charles E.; Xiao, Binping

    2013-09-01

    A recent analysis of field-dependent BCS rf surface impedance based on moving Cooper pairs has been presented [1]. Using this analysis, coded in Mathematica™, survey calculations have been completed that examine the sensitivities of this surface impedance to variation of the BCS material parameters and temperature. The results present a refined description of the "best theoretical" performance available to potential applications with corresponding materials.

  12. Validation of Case Finding Algorithms for Hepatocellular Cancer from Administrative Data and Electronic Health Records using Natural Language Processing

    PubMed Central

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2013-01-01

    Background Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods We identified a cohort of patients with ICD-9 codes for HCC during 2005–2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403

  13. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    PubMed

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.

  14. Eigenvalue Contributon Estimator for Sensitivity Calculations with TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Williams, Mark L

    2007-01-01

    Since the release of the Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) codes in SCALE [1], the use of sensitivity and uncertainty analysis techniques for criticality safety applications has greatly increased within the user community. In general, sensitivity and uncertainty analysis is transitioning from a technique used only by specialists to a practical tool in routine use. With the desire to use the tool more routinely comes the need to improve the solution methodology to reduce the input and computational burden on the user. This paper reviews the current solution methodology of the Monte Carlo eigenvalue sensitivity analysis sequence TSUNAMI-3D, describes an alternative approach, and presents results from both methodologies.

  15. FY17 Status Report on NEAMS Neutronics Activities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, C. H.; Jung, Y. S.; Smith, M. A.

    2017-09-30

    Under the U.S. DOE NEAMS program, the high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.

  16. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The three dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite difference approach; and (5) a graphics package.
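
    As a hedged illustration of the quasi-analytical idea described above (not the ZEBRA-based code), the sketch below differentiates the discrete residual equations R(Q, D) = 0 of a toy 2x2 system, solving (dR/dQ) dQ/dD = -dR/dD and checking the result against finite differences.

      # Minimal sketch of the quasi-analytical approach: for discrete residuals
      # R(Q, D) = 0, implicit differentiation gives (dR/dQ) dQ/dD = -dR/dD.
      # The toy 2x2 system below is illustrative, not a flow solver.
      import numpy as np

      def solve(D):
          A = np.array([[2.0 + D, 1.0], [1.0, 3.0]])
          b = np.array([1.0, D * D])
          return np.linalg.solve(A, b)

      D = 0.7
      Q = solve(D)
      dRdQ = np.array([[2.0 + D, 1.0], [1.0, 3.0]])       # Jacobian wrt Q
      dRdD = np.array([Q[0], -2.0 * D])                    # partial wrt design var
      dQdD_qa = np.linalg.solve(dRdQ, -dRdD)               # quasi-analytical

      h = 1e-6
      dQdD_fd = (solve(D + h) - solve(D - h)) / (2.0 * h)  # finite difference
      print(dQdD_qa, dQdD_fd)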

  17. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in getting a better understanding of the sensitivity, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications and 4) provide an illustration on a real field case where the above mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity on spatial variability in the models. A large oil/gas reservoir case is used as illustration of these ideas.

  18. Hybrid Raman/Brillouin-optical-time-domain-analysis-distributed optical fiber sensors based on cyclic pulse coding.

    PubMed

    Taki, M; Signorini, A; Oton, C J; Nannipieri, T; Di Pasquale, F

    2013-10-15

    We experimentally demonstrate the use of cyclic pulse coding for distributed strain and temperature measurements in hybrid Raman/Brillouin optical time-domain analysis (BOTDA) optical fiber sensors. The highly integrated proposed solution effectively addresses the strain/temperature cross-sensitivity issue affecting standard BOTDA sensors, allowing for simultaneous meter-scale strain and temperature measurements over 10 km of standard single mode fiber using a single narrowband laser source only.

  19. JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY

    EPA Science Inventory

    The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.

  20. Sensitivity analysis of Monju using ERANOS with JENDL-4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency - JAEA - released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in quantification of uncertainties due to basic nuclear data. For sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This is achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO in order to create a library for the ECCO cell code (part of ERANOS). In order to make sure of the accuracy of the new ECCO library, two benchmark experiments have been analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010. We have obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes ²³⁹Pu, ²³⁸U, ²⁴¹Am and ²⁴¹Pu account for a major part of the observed differences. (authors)

  1. Shape design sensitivity analysis and optimization of three dimensional elastic solids using geometric modeling and automatic regridding. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Yao, Tse-Min; Choi, Kyung K.

    1987-01-01

    An automatic regridding method and a three dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.

  2. Sensitivity analysis of tall buildings in Semarang, Indonesia due to fault earthquakes with maximum 7 Mw

    NASA Astrophysics Data System (ADS)

    Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian

    2017-11-01

    Fault is one of the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia 2010 and 2016, the Lasem, Demak and Semarang faults are the three closest earthquake sources surrounding Semarang. The ground motion from those three earthquake sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, with a minimum height of 40 meters, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis with emphasis on the prediction of deformation and inter-story drift of existing tall buildings within the city against fault earthquakes. The analysis was performed by conducting dynamic structural analysis of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw. Modified acceleration time histories were used because recorded time history data for these three fault earthquakes are inadequate. The sensitivity of a building to earthquakes can be assessed by comparing the surface response spectra calculated using the seismic code with the surface response spectra calculated from the acceleration time histories of a specific earthquake event. If the surface response spectra calculated using the seismic code are greater than those calculated from the acceleration time histories, the structure will be stable enough to resist the earthquake forces.

  3. Probabilistic structural analysis methods for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library is present.

  4. Efficient genome-wide association in biobanks using topic modeling identifies multiple novel disease loci.

    PubMed

    McCoy, Thomas H; Castro, Victor M; Snapper, Leslie A; Hart, Kamber L; Perlis, Roy H

    2017-08-31

    Biobanks and national registries represent a powerful tool for genomic discovery, but rely on diagnostic codes that may be unreliable and fail to capture the relationship between related diagnoses. We developed an efficient means of conducting genome-wide association studies using combinations of diagnostic codes from electronic health records (EHR) for 10,845 participants in a biobanking program at two large academic medical centers. Specifically, we applied latent Dirichlet allocation to fit 50 disease topics based on diagnostic codes, then conducted genome-wide common-variant association for each topic. In sensitivity analysis, these results were contrasted with those obtained from traditional single-diagnosis phenome-wide association analysis, as well as those in which only a subset of diagnostic codes are included per topic. In meta-analysis across three biobank cohorts, we identified 23 disease-associated loci with p<1e-15, including previously associated autoimmune disease loci. In all cases, observed significant associations were of greater magnitude than for single phenome-wide diagnostic codes, and incorporation of less strongly-loading diagnostic codes enhanced association. This strategy provides a more efficient means of phenome-wide association in biobanks with coded clinical data.
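
    As a hypothetical sketch of the topic-modeling step (not the study's pipeline), the snippet below fits diagnosis-code topics with latent Dirichlet allocation from scikit-learn, treating each patient's set of codes as a bag-of-words document; the patient code lists are invented.

      # Minimal sketch (not the study's pipeline): fitting diagnosis-code "topics"
      # with latent Dirichlet allocation, treating each patient's ICD codes as a
      # bag-of-words document. The patient code lists are hypothetical.
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      patients = [
          "E11 I10 E78 N18",      # each string: one patient's diagnostic codes
          "J45 J30 L20",
          "I10 I25 E78 E11",
          "F32 F41 G47",
      ]

      counts = CountVectorizer(token_pattern=r"\S+").fit_transform(patients)
      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

      # per-patient topic loadings, usable as phenotypes for association testing
      topic_loadings = lda.transform(counts)
      print(topic_loadings.round(2))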

  5. Efficient Genome-wide Association in Biobanks Using Topic Modeling Identifies Multiple Novel Disease Loci

    PubMed Central

    McCoy, Thomas H; Castro, Victor M; Snapper, Leslie A; Hart, Kamber L; Perlis, Roy H

    2017-01-01

    Biobanks and national registries represent a powerful tool for genomic discovery, but rely on diagnostic codes that can be unreliable and fail to capture relationships between related diagnoses. We developed an efficient means of conducting genome-wide association studies using combinations of diagnostic codes from electronic health records for 10,845 participants in a biobanking program at two large academic medical centers. Specifically, we applied latent Dirichlet allocation to fit 50 disease topics based on diagnostic codes, then conducted a genome-wide common-variant association for each topic. In sensitivity analysis, these results were contrasted with those obtained from traditional single-diagnosis phenome-wide association analysis, as well as those in which only a subset of diagnostic codes were included per topic. In meta-analysis across three biobank cohorts, we identified 23 disease-associated loci with p < 1e-15, including previously associated autoimmune disease loci. In all cases, observed significant associations were of greater magnitude than single phenome-wide diagnostic codes, and incorporation of less strongly loading diagnostic codes enhanced association. This strategy provides a more efficient means of identifying phenome-wide associations in biobanks with coded clinical data. PMID:28861588

  6. Postoperative complications following colectomy for ulcerative colitis: A validation study

    PubMed Central

    2012-01-01

    Background Ulcerative colitis (UC) patients failing medical management require colectomy. This study compares risk estimates for predictors of postoperative complication derived from administrative data against that of chart review and evaluates the accuracy of administrative coding for this population. Methods Hospital administrative databases were used to identify adults with UC undergoing colectomy from 1996–2007. Medical charts were reviewed and regression analyses comparing chart versus administrative data were performed to assess the effect of age, emergent operation, and Charlson comorbidities on the occurrence of postoperative complications. Sensitivity, specificity, and positive/negative predictive values of administrative coding for identifying the study population, Charlson comorbidities, and postoperative complications were assessed. Results Compared to chart review, administrative data estimated a higher magnitude of effect for emergent admission (OR 2.52 [95% CI: 1.80–3.52] versus 1.49 [1.06–2.09]) and Charlson comorbidities (OR 2.91 [1.86–4.56] versus 1.50 [1.05–2.15]) as predictors of postoperative complications. Administrative data correctly identified UC and colectomy in 85.9% of cases. The administrative database was 37% sensitive in identifying patients with ≥ 1 Charlson comorbidity. Restricting analysis to active comorbidities increased the sensitivity to 63%. The sensitivity of identifying patients with at least one postoperative complication was 68%; restricting analysis to more severe complications improved the sensitivity to 84%. Conclusions Administrative data identified the same risk factors for postoperative complications as chart review, but overestimated the magnitude of risk. This discrepancy may be explained by coding inaccuracies that selectively identify the most serious complications and comorbidities. PMID:22943760

  7. Design for cyclic loading endurance of composites

    NASA Technical Reports Server (NTRS)

    Shiao, Michael C.; Murthy, Pappu L. N.; Chamis, Christos C.; Liaw, Leslie D. G.

    1993-01-01

    The application of the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures) to aircraft wing type structures is described. The code performs a complete probabilistic analysis for composites taking into account the uncertainties in geometry, boundary conditions, material properties, laminate lay-ups, and loads. Results of the analysis are presented in terms of cumulative distribution functions (CDF) and probability density function (PDF) of the fatigue life of a wing type composite structure under different hygrothermal environments subjected to the random pressure. The sensitivity of the fatigue life to a number of critical structural/material variables is also computed from the analysis.

  8. Noise Analysis of Spatial Phase coding in analog Acoustooptic Processors

    NASA Technical Reports Server (NTRS)

    Gary, Charles K.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    Optical beams can carry information in their amplitude and phase; however, optical analog numerical calculators such as an optical matrix processor use incoherent light to achieve linear operation. Thus, the phase information is lost and only the magnitude can be used. This limits such processors to the representation of positive real numbers. Many systems have been devised to overcome this deficit through the use of digital number representations, but they all operate at a greatly reduced efficiency in contrast to analog systems. The most widely accepted method to achieve sign coding in analog optical systems has been the use of an offset for the zero level. Unfortunately, this results in increased noise sensitivity for small numbers. In this paper, we examine the use of spatially coherent sign coding in acoustooptical processors, a method first developed for digital calculations by D. V. Tigin. This coding technique uses spatial coherence for the representation of signed numbers, while temporal incoherence allows for linear analog processing of the optical information. We show how spatial phase coding reduces noise sensitivity for signed analog calculations.

  9. [MODIS Investigation

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1996-01-01

    Our first activity is based on delivery of code to Bob Evans (University of Miami) for integration and eventual delivery to the MODIS Science Data Support Team. As we noted in our previous semi-annual report, coding required the development and analysis of an end-to-end model of fluorescence line height (FLH) errors and sensitivity. This model is described in a paper in press in Remote Sensing of the Environment. Once the code was delivered to Miami, we continue to use this error analysis to evaluate proposed changes in MODIS sensor specifications and performance. Simply evaluating such changes on a band by band basis may obscure the true impacts of changes in sensor performance that are manifested in the complete algorithm. This is especially true with FLH that is sensitive to band placement and width. The error model will be used by Howard Gordon (Miami) to evaluate the effects of absorbing aerosols on the FLH algorithm performance. Presently, FLH relies only on simple corrections for atmospheric effects (viewing geometry, Rayleigh scattering) without correcting for aerosols. Our analysis suggests that aerosols should have a small impact relative to changes in the quantum yield of fluorescence in phytoplankton. However, the effect of absorbing aerosol is a new process and will be evaluated by Gordon.

  10. Billing code algorithms to identify cases of peripheral artery disease from administrative data

    PubMed Central

    Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J

    2013-01-01

    Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724

  11. Applications of automatic differentiation in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Carle, A.; Bischof, C.; Haigler, Kara J.; Newman, Perry A.

    1994-01-01

    Automatic differentiation (AD) is a powerful computational method that provides for computing exact sensitivity derivatives (SD) from existing computer programs for multidisciplinary design optimization (MDO) or in sensitivity analysis. A pre-compiler AD tool for FORTRAN programs called ADIFOR has been developed. The ADIFOR tool has been easily and quickly applied by NASA Langley researchers to assess the feasibility and computational impact of AD in MDO with several different FORTRAN programs. These include a state-of-the-art three dimensional multigrid Navier-Stokes flow solver for wings or aircraft configurations in transonic turbulent flow. With ADIFOR the user specifies sets of independent and dependent variables within an existing computer code. ADIFOR then traces the dependency path throughout the code, applies the chain rule to formulate derivative expressions, and generates new code to compute the required SD matrix. The resulting codes have been verified to compute exact non-geometric and geometric SD for a variety of cases, in less time than is required to compute the SD matrix using centered divided differences.

  12. Does the Genetic Code Have A Eukaryotic Origin?

    PubMed Central

    Zhang, Zhang; Yu, Jun

    2013-01-01

    In the RNA world, RNA is assumed to be the dominant macromolecule performing most, if not all, core “house-keeping” functions. The ribo-cell hypothesis suggests that the genetic code and the translation machinery may both be born of the RNA world, and the introduction of DNA to ribo-cells may take over the informational role of RNA gradually, such as a mature set of genetic code and mechanism enabling stable inheritance of sequence and its variation. In this context, we modeled the genetic code in two content variables—GC and purine contents—of protein-coding sequences and measured the purine content sensitivities for each codon when the sensitivity (% usage) is plotted as a function of GC content variation. The analysis leads to a new pattern—the symmetric pattern—where the sensitivity of purine content variation shows diagonally symmetry in the codon table more significantly in the two GC content invariable quarters in addition to the two existing patterns where the table is divided into either four GC content sensitivity quarters or two amino acid diversity halves. The most insensitive codon sets are GUN (valine) and CAN (CAR for asparagine and CAY for aspartic acid) and the most biased amino acid is valine (always over-estimated) followed by alanine (always under-estimated). The unique position of valine and its codons suggests its key roles in the final recruitment of the complete codon set of the canonical table. The distinct choice may only be attributable to sequence signatures or signals of splice sites for spliceosomal introns shared by all extant eukaryotes. PMID:23402863

  13. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at the Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomenon pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air. Other fluids can be handled by the code provided that the user inputs information that relates the fluid transport properties to the temperature.

  14. MADSpython

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Malley, Daniel; Vesselinov, Velimir V.

    MADSpython (Model analysis and decision support tools in Python) is a Python code that streamlines the process of using data and models for analysis and decision support with the code MADS. MADS is open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). MADS can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. The Python scripts in MADSpython facilitate the generation of the input and output files needed by MADS as well as by the external simulators, which include FEHM and PFLOTRAN. MADSpython enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. MADSpython will be released under a GPL V3 license. MADSpython will be distributed as a Git repo at gitlab.com and github.com. The MADSpython manual and documentation will be posted at http://madspy.lanl.gov.

  15. Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines

    NASA Astrophysics Data System (ADS)

    Massa, Luca

    A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time-varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed. Two of its formulations are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite-difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
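
    As a generic illustration of the CTSE (complex-step) derivative mentioned above, not of the turbine code itself, the sketch below evaluates f'(x) ≈ Im(f(x + ih))/h for an assumed test function and compares it with a central finite difference.

      # Minimal sketch of the complex Taylor series expansion (complex-step)
      # derivative: for real-analytic f,
      #     f'(x) ~= Im(f(x + i*h)) / h,
      # with no subtractive cancellation, so h can be taken extremely small.
      # The test function is illustrative, not the turbine-performance code.
      import cmath

      def f(x):
          return cmath.exp(x) / cmath.sqrt(cmath.sin(x) ** 3 + cmath.cos(x) ** 3)

      x0, h = 1.5, 1e-200
      d_cs = (f(x0 + 1j * h)).imag / h                       # complex step
      d_fd = (f(x0 + 1e-6).real - f(x0 - 1e-6).real) / 2e-6  # central difference
      print(d_cs, d_fd)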

  16. Mads.jl

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinov, Velimir; O'Malley, Daniel; Lin, Youzuo

    2016-07-01

    Mads.jl (Model analysis and decision support in Julia) is a code that streamlines the process of using data and models for analysis and decision support. It is based on another open-source code developed at LANL and written in C/C++ (MADS; http://mads.lanl.gov; LA-CC-11-035). Mads.jl can work with external models of arbitrary complexity as well as built-in models of flow and transport in porous media. It enables a number of data- and model-based analyses including model calibration, sensitivity analysis, uncertainty quantification, and decision analysis. The code also can use a series of alternative adaptive computational techniques for Bayesian sampling, Monte Carlo, and Bayesian Information-Gap Decision Theory. The code is implemented in the Julia programming language, and has high-performance (parallel) and memory management capabilities. The code uses a series of third party modules developed by others. The code development will also include contributions to the existing third party modules written in Julia; these contributions will be important for the efficient implementation of the algorithms used by Mads.jl. The code also uses a series of LANL-developed modules written by Dan O'Malley; these modules will also be part of the Mads.jl release. Mads.jl will be released under a GPL V3 license. The code will be distributed as a Git repo at gitlab.com and github.com. The Mads.jl manual and documentation will be posted at madsjulia.lanl.gov.

  17. Tradeoffs of Using Administrative Claims and Medical Records to Identify the Use of Personalized Medicine for Patients with Breast Cancer

    PubMed Central

    Liang, Su-Ying; Phillips, Kathryn A.; Wang, Grace; Keohane, Carol; Armstrong, Joanne; Morris, William M.; Haas, Jennifer S.

    2012-01-01

    Background Administrative claims and medical records are important data sources to examine healthcare utilization and outcomes. Little is known about identifying personalized medicine technologies in these sources. Objectives To describe agreement, sensitivity, and specificity of administrative claims compared to medical records for two pairs of targeted tests and treatments for breast cancer. Research Design Retrospective analysis of medical records linked to administrative claims from a large health plan. We examined whether agreement varied by factors that facilitate tracking in claims (coding and cost) and that enhance medical record completeness (records from multiple providers). Subjects Women (35 – 65 years) with incident breast cancer diagnosed in 2006–2007 (n=775). Measures Use of human epidermal growth factor receptor 2 (HER2) and gene expression profiling (GEP) testing, trastuzumab and adjuvant chemotherapy in claims and medical records. Results Agreement between claims and records was substantial for GEP, trastuzumab, and chemotherapy, and lowest for HER2 tests. GEP, an expensive test with unique billing codes, had higher agreement (91.6% vs. 75.2%), sensitivity (94.9% vs. 76.7%), and specificity (90.1% vs. 29.2%) than HER2, a test without unique billing codes. Trastuzumab, a treatment with unique billing codes, had slightly higher agreement (95.1% vs. 90%) and sensitivity (98.1% vs. 87.9%) than adjuvant chemotherapy. Conclusions Higher agreement and specificity were associated with services that had unique billing codes and high cost. Administrative claims may be sufficient for examining services with unique billing codes. Medical records provide better data for identifying tests lacking specific codes and for research requiring detailed clinical information. PMID:21422962

  18. Tunable wavefront coded imaging system based on detachable phase mask: Mathematical analysis, optimization and underlying applications

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Wei, Jingxuan

    2014-09-01

    The key to the concept of tunable wavefront coding lies in detachable phase masks. Ojeda-Castaneda et al. (Progress in Electronics Research Symposium Proceedings, Cambridge, USA, July 5-8, 2010) described a typical design in which two components with cosinusoidal phase variation operate together to make defocus sensitivity tunable. The present study proposes an improved design and makes three contributions: (1) A mathematical derivation based on the stationary phase method explains why the detachable phase mask of Ojeda-Castaneda et al. tunes the defocus sensitivity. (2) The mathematical derivations show that the effective bandwidth of the wavefront coded imaging system is also tunable by making each component of the detachable phase mask move asymmetrically. An improved Fisher information-based optimization procedure was also designed to ascertain the optimal mask parameters corresponding to a specific bandwidth. (3) Possible applications of the tunable bandwidth are demonstrated by simulated imaging.

  19. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description from a user's and programmer's perspective of the highly modular, flexible and extendable software package ASKI-Analysis of Sensitivity and Kernel Inversion-recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are solved by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates, respectively, are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low costs applying different kinds of model regularization or re-selecting/weighting the inverted dataset without need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, it is well documented and freely available under terms of the GNU General Public License (http://www.rub.de/aski).

  20. Isotope-coded ESI-enhancing derivatization reagents for differential analysis, quantification and profiling of metabolites in biological samples by LC/MS: A review.

    PubMed

    Higashi, Tatsuya; Ogawa, Shoujiro

    2016-10-25

    The analysis of the qualitative and quantitative changes of metabolites in body fluids and tissues yields valuable information for the diagnosis, pathological analysis and treatment of many diseases. Recently, liquid chromatography/electrospray ionization-(tandem) mass spectrometry [LC/ESI-MS(/MS)] has been widely used for these purposes due to the high separation capability of LC, broad coverage of ESI for various compounds and high specificity of MS(/MS). However, there are still two major problems to be solved regarding biological sample analysis: lack of sensitivity and limited availability of stable isotope-labeled analogues (internal standards, ISs) for most metabolites. Stable isotope-coded derivatization (ICD) can answer both problems. In ICD, different isotope-coded moieties are introduced into the metabolites, and one of the resulting derivatives can serve as the IS, which minimizes matrix effects. Furthermore, the derivatization can improve the ESI efficiency, fragmentation properties in MS/MS and chromatographic behavior of the metabolites, improvements that lead to high sensitivity and specificity in the various detection modes. Based on this background, this article reviews the recently reported isotope-coded ESI-enhancing derivatization (ICEED) reagents, which are key components of ICD-based LC/MS(/MS) studies, and their applications to the detection, identification, quantification and profiling of metabolites in human and animal samples. LC/MS(/MS) using ICEED reagents is a powerful method, especially for the differential analysis (relative quantification) of metabolites in two comparative samples, simultaneous quantification of multiple metabolites whose stable isotope-labeled ISs are not available, and submetabolome profiling. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Numerical Simulation of a Double-anode Magnetron Injection Gun for 110 GHz, 1 MW Gyrotron

    NASA Astrophysics Data System (ADS)

    Singh, Udaybir; Kumar, Nitin; Purohit, L. P.; Sinha, Ashok K.

    2010-07-01

    A 40 A double-anode magnetron injection gun (MIG) for a 1 MW, 110 GHz gyrotron has been designed. The preliminary design was obtained by using trade-off equations. The electron beam analysis was performed with the commercially available code EGUN and the in-house developed code MIGANS. The operating mode of the gyrotron is TE22,6 and it operates at the fundamental harmonic. An electron beam with a low maximum transverse velocity spread (δβ⊥max = 2.26%) and a transverse-to-axial velocity ratio of α = 1.37 is obtained. The simulated results for the MIG obtained with the EGUN code have been validated against another trajectory code, TRAK. The design output parameters obtained with both codes are in good agreement. A sensitivity analysis has been carried out by varying the different gun parameters to determine the fabrication tolerances.

  2. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-Resolution Coded-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.

    2016-01-01

    Wide-field (greater than or approximately equal to 100 degrees squared) hard X-ray coded-aperture telescopes with high angular resolution (less than or approximately equal to 2 arcminutes) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes to enable rapid follow-up studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or model the systematic uncertainty in a timescale where the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics-based probabilistic approach to evaluate the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
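
    As a hedged illustration of the second technique (not the ProtoEXIST2 pipeline itself), the significance of a candidate source can be scored directly from Poisson statistics, with no background subtraction; the counts and background expectation below are invented.

```python
# Illustrative Poisson significance test for a coded-aperture sky pixel.
from scipy.stats import norm, poisson

observed_counts = 42        # hypothetical counts attributed to the candidate source position
expected_background = 25.0  # hypothetical background expectation from the self-background model

# Probability of seeing >= observed_counts from background alone (no subtraction performed)
p_value = poisson.sf(observed_counts - 1, expected_background)
significance = norm.isf(p_value)  # express as an equivalent Gaussian sigma

print(f"p-value = {p_value:.3g} (~{significance:.1f} sigma)")
```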

  3. Nanoparticle based bio-bar code technology for trace analysis of aflatoxin B1 in Chinese herbs.

    PubMed

    Yu, Yu-Yan; Chen, Yuan-Yuan; Gao, Xuan; Liu, Yuan-Yuan; Zhang, Hong-Yan; Wang, Tong-Ying

    2018-04-01

    A novel and sensitive assay for aflatoxin B1 (AFB1) detection has been developed using the bio-bar code assay (BCA). The method relies on gold nanoparticle (NP) probes modified with polyclonal antibodies and bar-code DNA, and magnetic microparticle (MMP) probes modified with monoclonal antibodies, with subsequent detection of the amplified target in the form of the bio-bar code by fluorescent quantitative polymerase chain reaction (FQ-PCR). First, NP probes encoded with DNA unique to AFB1 and MMP probes with monoclonal antibodies that bind AFB1 specifically were prepared. Then, MMP-AFB1-NP sandwich complexes were formed; dehybridization of the oligonucleotides on the nanoparticle surface allows the presence of AFB1 to be determined by identifying, through FQ-PCR, the oligonucleotide sequence released from the NP. A bio-bar code system for detecting AFB1 was thus established, with a sensitivity limit of about 10⁻⁸ ng/mL, comparable to ELISA assays for the same target; this shows that AFB1 can be detected at low attomolar levels with the bio-bar-code amplification approach. This is also the first demonstration of a bio-bar code type assay for the detection of AFB1 in Chinese herbs. Copyright © 2017. Published by Elsevier B.V.

  4. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Small Business Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of small businesses in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each business establishment size category to each Instrumental Intensity level. The analysis concerns the direct effect of the earthquake on small businesses. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by business establishment size.
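
    The exposure and sensitivity measures defined above reduce to simple groupby arithmetic once Instrumental Intensities are assigned to ZIP codes; the sketch below uses invented column names and values to show the calculation and is not the CA EDD dataset.

```python
# Toy exposure/sensitivity tabulation for ZIP-code labor data joined to Instrumental Intensity.
import pandas as pd

df = pd.DataFrame({
    "zip": ["90001", "90002", "92324", "93501"],
    "intensity": ["VII", "VIII", "IX", "VI"],    # Instrumental Intensity assigned to the ZIP code
    "size_class": ["1-4", "1-4", "5-9", "1-4"],  # business establishment size category
    "employees": [120, 310, 95, 60],
})

# Exposure: absolute counts of employees at each intensity level, by establishment size
exposure = df.groupby(["size_class", "intensity"])["employees"].sum()

# Sensitivity: percentage of each size class's employment exposed to each intensity level
totals = df.groupby("size_class")["employees"].sum()
sensitivity = 100 * exposure.div(totals, level="size_class")

print(exposure, sensitivity.round(1), sep="\n\n")
```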

  5. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. The minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
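
    The correlation step of such an analysis is straightforward once the sampled inputs and the resulting figures of merit are in hand; the sketch below computes Pearson and Spearman coefficients on synthetic samples (the toy model and numbers are invented, not VERA-CS output).

```python
# Synthetic example: rank input parameters by correlation with a figure of merit (MDNBR).
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n = 200
inlet_temp = rng.normal(565.0, 2.0, n)   # hypothetical coolant inlet temperature samples (K)
power = rng.normal(1.00, 0.01, n)        # hypothetical relative core power samples
mdnbr = 2.1 - 0.03 * (inlet_temp - 565.0) - 0.5 * (power - 1.0) + rng.normal(0.0, 0.01, n)

for name, x in [("inlet_temp", inlet_temp), ("power", power)]:
    r, _ = pearsonr(x, mdnbr)
    rho, _ = spearmanr(x, mdnbr)
    print(f"{name:>10}: Pearson={r:+.2f}, Spearman={rho:+.2f}")
```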

  6. Hypersonic Shock Interactions About a 25 deg/65 deg Sharp Double Cone

    NASA Technical Reports Server (NTRS)

    Moss, James N.; LeBeau, Gerald J.; Glass, Christopher E.

    2002-01-01

    This paper presents the results of a numerical study of shock interactions resulting from Mach 10 air flow about a sharp double cone. Computations are made with the direct simulation Monte Carlo (DSMC) method by using two different codes: the G2 code of Bird and the DAC (DSMC Analysis Code) code of LeBeau. The flow conditions are the pretest nominal free-stream conditions specified for the ONERA R5Ch low-density wind tunnel. The focus is on the sensitivity of the interactions to grid resolution while providing information concerning the flow structure and surface results for the extent of separation, heating, pressure, and skin friction.

  7. Identifying clinical features in primary care electronic health record studies: methods for codelist development.

    PubMed

    Watson, Jessica; Nicholson, Brian D; Hamilton, Willie; Price, Sarah

    2017-11-22

    Analysis of routinely collected electronic health record (EHR) data from primary care is reliant on the creation of codelists to define clinical features of interest. To improve scientific rigour, transparency and replicability, we describe and demonstrate a standardised reproducible methodology for clinical codelist development. We describe a three-stage process for developing clinical codelists. First, the clear definition a priori of the clinical feature of interest using reliable clinical resources. Second, development of a list of potential codes using statistical software to comprehensively search all available codes. Third, a modified Delphi process to reach consensus between primary care practitioners on the most relevant codes, including the generation of an 'uncertainty' variable to allow sensitivity analysis. These methods are illustrated by developing a codelist for shortness of breath in a primary care EHR sample, including modifiable syntax for commonly used statistical software. The codelist was used to estimate the frequency of shortness of breath in a cohort of 28 216 patients aged over 18 years who received an incident diagnosis of lung cancer between 1 January 2000 and 30 November 2016 in the Clinical Practice Research Datalink (CPRD). Of 78 candidate codes, 29 were excluded as inappropriate. Complete agreement was reached for 44 (90%) of the remaining codes, with partial disagreement over 5 (10%). 13 091 episodes of shortness of breath were identified in the cohort of 28 216 patients. Sensitivity analysis demonstrates that codes with the greatest uncertainty tend to be rarely used in clinical practice. Although initially time consuming, using a rigorous and reproducible method for codelist generation 'future-proofs' findings and an auditable, modifiable syntax for codelist generation enables sharing and replication of EHR studies. Published codelists should be badged by quality and report the methods of codelist generation including: definitions and justifications associated with each codelist; the syntax or search method; the number of candidate codes identified; and the categorisation of codes after Delphi review. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  8. The accuracy of burn diagnosis codes in health administrative data: A validation study.

    PubMed

    Mason, Stephanie A; Nathens, Avery B; Byrne, James P; Fowler, Rob; Gonzalez, Alejandro; Karanicolas, Paul J; Moineddin, Rahim; Jeschke, Marc G

    2017-03-01

    Health administrative databases may provide rich sources of data for the study of outcomes following burn. We aimed to determine the accuracy of International Classification of Diseases diagnoses codes for burn in a population-based administrative database. Data from a regional burn center's clinical registry of patients admitted between 2006-2013 were linked to administrative databases. Burn total body surface area (TBSA), depth, mechanism, and inhalation injury were compared between the registry and administrative records. The sensitivity, specificity, and positive and negative predictive values were determined, and coding agreement was assessed with the kappa statistic. 1215 burn center patients were linked to administrative records. TBSA codes were highly sensitive and specific for ≥10 and ≥20% TBSA (89/93% sensitive and 95/97% specific), with excellent agreement (κ, 0.85/κ, 0.88). Codes were weakly sensitive (68%) in identifying ≥10% TBSA full-thickness burn, though highly specific (86%) with moderate agreement (κ, 0.46). Codes for inhalation injury had limited sensitivity (43%) but high specificity (99%) with moderate agreement (κ, 0.54). Burn mechanism had excellent coding agreement (κ, 0.84). Administrative data diagnosis codes accurately identify burn by burn size and mechanism, while identification of inhalation injury or full-thickness burns is less sensitive but highly specific. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
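
    The kappa statistics quoted above weight observed agreement against chance agreement; a minimal sketch of the calculation for a binary variable (with invented counts) is given below.

```python
# Cohen's kappa for a 2x2 agreement table (counts are illustrative, not study data).
def cohens_kappa(a, b, c, d):
    """a: both sources yes, b: codes yes/registry no, c: codes no/registry yes, d: both no."""
    n = a + b + c + d
    p_observed = (a + d) / n
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

print(f"kappa = {cohens_kappa(110, 6, 13, 1086):.2f}")
```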

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Marshall, William BJ J

    In the course of criticality code validation, outlier cases are frequently encountered. Historically, the causes of these unexpected results could be diagnosed only through comparison with other similar cases or through the known presence of a unique component of the critical experiment. The sensitivity and uncertainty (S/U) analysis tools available in the SCALE 6.1 code system provide a much broader range of options to examine underlying causes of outlier cases. This paper presents some case studies performed as a part of the recent validation of the KENO codes in SCALE 6.1 using S/U tools to examine potential causes of biases.

  10. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis

    PubMed Central

    Johnson, Jeffrey S.; Yin, Pingbo; O'Connor, Kevin N.

    2012-01-01

    Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1. PMID:22422997

  11. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  12. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  13. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.; Nguyen, Duc T.; Baddourah, Majdi; Qin, Jiangning

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigensolution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization search analysis and domain decomposition. The source code for many of these algorithms is available.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonen, E.P.; Johnson, K.I.; Simonen, F.A.

    The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position, and flaw inspection were among the sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation, and added error in estimating irradiated properties caused by the trend curve correlation error.

  15. Moral sensitivity relating to the application of the code of ethics.

    PubMed

    Kim, Yong-Soon; Kang, Se-Won; Ahn, Jeong-Ah

    2013-06-01

    This study investigated the clinical application of the 2006 Third Revised Korean Nurses' Code of Ethics and the moral sensitivity of nurses. A total of 303 clinical nurses in South Korea participated in the survey in May and June 2011. As instruments of this study, we used the 15 statements of the Korean Nurses' Code of Ethics and Korean Moral Sensitivity Questionnaire. The mean score for application was 3.77 ± 0.59 (out of 5), and the mean score for moral sensitivity was 5.14 ± 0.55 (out of 7). The correlation coefficient (r) of the application and moral sensitivity was 0.336 (p < 0.001). Nurses who scored high on moral sensitivity also scored high on application (t = -5.018, p < 0.001). In clinical settings, educational programmes to develop the moral sensitivity of nurses are necessary for improving the application of the code of ethics.

  16. Uncertainty Quantification Techniques of SCALE/TSUNAMI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Mueller, Don

    2011-01-01

    The Standardized Computer Analysis for Licensing Evaluation (SCALE) code system developed at Oak Ridge National Laboratory (ORNL) includes Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI). The TSUNAMI code suite can quantify the predicted change in system responses, such as k-eff, reactivity differences, or ratios of fluxes or reaction rates, due to changes in the energy-dependent, nuclide-reaction-specific cross-section data. Where uncertainties in the neutron cross-section data are available, the sensitivity of the system to the cross-section data can be applied to propagate the uncertainties in the cross-section data to an uncertainty in the system response. Uncertainty quantification is useful for identifying potential sources of computational biases and highlighting parameters important to code validation. Traditional validation techniques often examine one or more average physical parameters to characterize a system and identify applicable benchmark experiments. However, with TSUNAMI, correlation coefficients are developed by propagating the uncertainties in neutron cross-section data to uncertainties in the computed responses for experiments and safety applications through sensitivity coefficients. The bias in the experiments, as a function of their correlation coefficient with the intended application, is extrapolated to predict the bias and bias uncertainty in the application through trending analysis or generalized linear least squares techniques, often referred to as 'data adjustment.' Even with advanced tools to identify benchmark experiments, analysts occasionally find that the application models include some feature or material for which adequately similar benchmark experiments do not exist to support validation. For example, a criticality safety analyst may want to take credit for the presence of fission products in spent nuclear fuel. In such cases, analysts sometimes rely on 'expert judgment' to select an additional administrative margin to account for a gap in the validation data or to conclude that the impact on the calculated bias and bias uncertainty is negligible. As a result of advances in computer programs and the evolution of cross-section covariance data, analysts can use the sensitivity and uncertainty analysis tools in the TSUNAMI codes to estimate the potential impact on the application-specific bias and bias uncertainty resulting from nuclides not represented in available benchmark experiments. This paper presents the application of methods described in a companion paper.
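
    The propagation step described above is commonly written as the first-order "sandwich rule" (shown here in generic S/U notation as a sketch, not necessarily TSUNAMI's exact symbols):

```latex
\left(\frac{\sigma_R}{R}\right)^{2} = \mathbf{S}\,\mathbf{C}_{\alpha\alpha}\,\mathbf{S}^{\mathsf{T}},
\qquad
S_i = \frac{\alpha_i}{R}\,\frac{\partial R}{\partial \alpha_i},
```

    where R is the computed response (e.g., k-eff), S is the row vector of relative sensitivity coefficients with respect to the cross sections α_i, and C_αα is their relative covariance matrix.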

  17. 75 FR 763 - Dibenzylidene Sorbitol; Exemption from the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-06

    ... sensitization study in guinea pigs determined that DBS is not a sensitizer. A primary dermal irritation study in...: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code...

  18. Use of SCALE Continuous-Energy Monte Carlo Tools for Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    The TSUNAMI code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The CLUTCH and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE KENO framework to generate the capability for TSUNAMI-3D to perform eigenvalue sensitivity calculations in continuous-energy applications. This work explores the improvements in accuracy that can be gained in eigenvalue and eigenvalue sensitivity calculations through the use of the SCALE CE KENO and CE TSUNAMI continuous-energy Monte Carlo tools as compared to multigroup tools. The CE KENO and CE TSUNAMI tools were used to analyze two difficult models of critical benchmarks, and produced eigenvalue and eigenvalue sensitivity coefficient results that showed a marked improvement in accuracy. The CLUTCH sensitivity method in particular excelled in terms of efficiency and computational memory requirements.

  19. Manual versus automated coding of free-text self-reported medication data in the 45 and Up Study: a validation study.

    PubMed

    Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily

    2015-03-30

    Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
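
    The level-wise sensitivity figures above correspond to comparing progressively truncated ATC strings; the sketch below shows the idea with two hypothetical codes.

```python
# Compare an automated ATC assignment against the manual (gold standard) code
# at the 7-, 5-, 4- and 3-digit levels. Codes here are illustrative examples only.
def atc_match(manual: str, automated: str, digits: int) -> bool:
    return manual[:digits] == automated[:digits]

manual, automated = "C09AA02", "C09AA05"
levels = [(7, "chemical substance"), (5, "chemical subgroup"),
          (4, "pharmacological subgroup"), (3, "therapeutic subgroup")]
for digits, label in levels:
    print(f"{label:<24} ({digits}-digit): match={atc_match(manual, automated, digits)}")
```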

  20. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials.

    PubMed

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin; Cui, Tie Jun

    2017-09-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only are different phase responses of the unit cells considered, but different phase sensitivities are also required. Due to different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent low and high phase sensitivities, respectively. By this merit, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments.

  1. Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis

    NASA Technical Reports Server (NTRS)

    Kallman, Tim

    2006-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
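
    A toy version of the numerical experiment described here, with an arbitrary stand-in model rather than a real spectrum synthesis code, looks like the following: each rate is multiplied by a known random factor and the change in an observable is recorded.

```python
# Toy sensitivity experiment: perturb rate coefficients by known random factors
# and record how a derived observable (stand-in for a line flux) responds.
import numpy as np

rng = np.random.default_rng(1)

def observable(rates):
    """Arbitrary smooth stand-in for a spectrum-synthesis output."""
    return rates[0] * rates[1] / (rates[1] + rates[2])

nominal = np.array([1.0, 2.0, 0.5])
baseline = observable(nominal)

for trial in range(5):
    factors = rng.lognormal(mean=0.0, sigma=0.2, size=nominal.size)  # known perturbations
    change = observable(nominal * factors) / baseline - 1.0
    print(f"trial {trial}: factors={np.round(factors, 2)}, observable change={change:+.1%}")
```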

  2. Sensitivity Analysis Applied to Atomic Data Used for X-ray Spectrum Synthesis

    NASA Technical Reports Server (NTRS)

    Kallman, Tim

    2006-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest the interpretation of a quantity which is observed, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focussing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.

  3. Surgical Site Infections Following Pediatric Ambulatory Surgery: An Epidemiologic Analysis.

    PubMed

    Rinke, Michael L; Jan, Dominique; Nassim, Janelle; Choi, Jaeun; Choi, Steven J

    2016-08-01

    OBJECTIVE To identify surgical site infection (SSI) rates following pediatric ambulatory surgery, SSI outcomes and risk factors, and sensitivity and specificity of SSI administrative billing codes. DESIGN Retrospective chart review of pediatric ambulatory surgeries with International Classification of Diseases, Ninth Revision (ICD-9) codes for SSI, and a systematic random sampling of 5% of surgeries without SSI ICD-9 codes, all adjudicated for SSI on the basis of an ambulatory-adapted National Healthcare Safety Network definition. SETTING Urban pediatric tertiary care center, April 1, 2009-March 31, 2014. METHODS SSI rates and sensitivity and specificity of ICD-9 codes were estimated using the sampling design, and risk factors were analyzed in case-rest-of-cohort and case-control designs. RESULTS In 15,448 pediatric ambulatory surgeries, 34 patients had ICD-9 codes for SSI and 25 met the adapted National Healthcare Safety Network criteria. One additional SSI was identified with systematic random sampling. The SSI rate following pediatric ambulatory surgery was 2.9 per 1,000 surgeries (95% CI, 1.2-6.9). Otolaryngology surgeries demonstrated significantly lower SSI rates compared with endocrine (P=.001), integumentary (P=.001), male genital (P<.0001), and respiratory (P=.01) surgeries. Almost half of patients with an SSI were admitted, 88% received antibiotics, and 15% returned to the operating room. No risk factors were associated with SSI. The sensitivity of ICD-9 codes for SSI following ambulatory surgery was 55.31% (95% CI, 12.69%-91.33%) and specificity was 99.94% (99.89%-99.97%). CONCLUSIONS SSI following pediatric ambulatory surgery occurs at an appreciable rate and causes morbidity in children. Infect Control Hosp Epidemiol 2016;37:931-938.

  4. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.

  5. Initial Results: An Ultra-Low-Background Germanium Crystal Array

    DTIC Science & Technology

    2010-09-01

    data (focused on γ-γ coincidence signatures) (Smith et al., 2004) and the Multi-Isotope Coincidence Analysis code (MICA) (Warren et al., 2006). ... The follow-on "CASCADES" project aims to develop a multicoincidence data-analysis package and make robust fission-product demonstration measurements ... sensitivity. This effort is focused on improving gamma analysis capabilities for nuclear detonation detection (NDD) applications, e.g., nuclear treaty ...

  6. Design oriented structural analysis

    NASA Technical Reports Server (NTRS)

    Giles, Gary L.

    1994-01-01

    Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.

  7. PEBBED Uncertainty and Sensitivity Analysis of the CRP-5 PBMR DLOFC Transient Benchmark with the SUSA Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2011-01-01

    The need for a defendable and systematic uncertainty and sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This report summarizes the results of the initial investigations performed with SUSA, utilizing a typical High Temperature Reactor benchmark (the IAEA CRP-5 PBMR 400MW Exercise 2) and the PEBBED-THERMIX suite of codes. The following steps were performed as part of the uncertainty and sensitivity analysis: 1. Eight PEBBED-THERMIX model input parameters were selected for inclusion in the uncertainty study: the total reactor power, inlet gas temperature, decay heat, and the specific heat capacity and thermal conductivity of the fuel, pebble bed and reflector graphite. 2. The input parameter variations and probability density functions were specified, and a total of 800 PEBBED-THERMIX model calculations were performed, divided into 4 sets of 100 and 2 sets of 200 steady state and Depressurized Loss of Forced Cooling (DLOFC) transient calculations each. 3. The steady state and DLOFC maximum fuel temperature, as well as the daily pebble fuel load rate data, were supplied to SUSA as model output parameters of interest. The 6 data sets were statistically analyzed to determine the 5% and 95% percentile values for each of the 3 output parameters with a 95% confidence level, and typical statistical indicators were also generated (e.g. Kendall, Pearson and Spearman coefficients). 4. A SUSA sensitivity study was performed to obtain correlation data between the input and output parameters, and to identify the primary contributors to the output data uncertainties. It was found that the uncertainties in the decay heat, pebble bed and reflector thermal conductivities were responsible for the bulk of the propagated uncertainty in the DLOFC maximum fuel temperature. It was also determined that the two standard deviation (2σ) uncertainty on the maximum fuel temperature was between ±58 °C (3.6%) and ±76 °C (4.7%) on a mean value of 1604 °C. These values depended mostly on the selection of the distribution types, and not on the number of model calculations above the required Wilks criterion (a (95%, 95%) statement would usually require 93 model runs).
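
    The "93 model runs" quoted for a (95%, 95%) statement comes from Wilks' tolerance-limit formula; as a sketch, for two-sided first-order limits the smallest sample size n satisfies

```latex
1 - \gamma^{n} - n\,(1-\gamma)\,\gamma^{\,n-1} \;\ge\; \beta ,
```

    which yields n = 93 for coverage γ = 0.95 at confidence β = 0.95 (the corresponding one-sided criterion, 1 − γ^n ≥ β, gives n = 59).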

  8. Differentiation of ileostomy from colostomy procedures: assessing the accuracy of current procedural terminology codes and the utility of natural language processing.

    PubMed

    Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K

    2013-08-01

    Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differ dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is an NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (75.2% and 80.9%). NLP performed with greater sensitivity, specificity, and accuracy than CPT codes in identifying stoma procedures and stoma types. Major differences where NLP outperformed CPT included identifying ileostomy (specificity 95.8%, sensitivity 88.3%, and accuracy 91.5%) and colostomy (97.6%, 90.5%, and 92.8%, respectively). CPT codes can effectively identify patients who have had stoma procedures and are adequate in distinguishing between formation and reversal; however, CPT codes cannot differentiate ileostomy from colostomy. NLP can be used to differentiate between ileostomy- and colostomy-related procedures. The role of NLP in conjunction with electronic medical records in data retrieval warrants further investigation. Published by Mosby, Inc.
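
    A minimal illustration of the text-based end of this comparison is a keyword rule over the operative report; the actual study used the Automated Retrieval Console rather than anything this simple, so the sketch below is only a conceptual stand-in.

```python
# Toy rule-based classifier distinguishing ileostomy from colostomy operative reports.
import re

def classify_stoma(report_text: str) -> str:
    text = report_text.lower()
    if re.search(r"\bileostom", text):
        return "ileostomy"
    if re.search(r"\bcolostom", text):
        return "colostomy"
    return "unclassified"

print(classify_stoma("Loop ileostomy was matured in the right lower quadrant."))
print(classify_stoma("End colostomy created following Hartmann procedure."))
```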

  9. Color-Coded Audio Computer-Assisted Self-Interviews (C-ACASI) for Poorly Educated Men and Women in a Semi-rural Area of South India: “Good, Scary and Thrilling”

    PubMed Central

    Bhatnagar, Tarun; Brown, Joelle; Saravanamurthy, P. Sakthivel; Kumar, Raju Mohan; Detels, Roger

    2013-01-01

    It is challenging to collect accurate and complete data on sensitive issues such as sexual behaviors. Our objective was to explore experience and perceptions regarding the use of a locally programmed color-coded audio computer-assisted self interview (C-ACASI) system among men and women in a semi-rural setting in south India. We conducted a mixed-methods cross-sectional survey using semi-structured interviews among 89 truck drivers and 101 truck driver wives who had participated earlier in the C-ACASI survey across a predominantly rural district in Tamil Nadu. To assess the color-coded format used, descriptive quantitative analysis was coupled with thematic content analysis of qualitative data. Only 10 % of participants had ever used a computer before. Nearly 75 % did not report any problem in using C-ACASI. The length of the C-ACASI survey was acceptable to 98 % of participants. Overall, 87 % of wives and 73 % of truck drivers stated that C-ACASI was user-friendly and felt comfortable in responding to the sensitive questions. Nearly all (97 %) participants reported that using C-ACASI encouraged them to respond honestly compared to face-to-face personal interviews. Both the drivers and wives expressed that C-ACASI provided confidentiality, privacy, anonymity, and an easy mechanism for responding truthfully to potentially embarrassing questions about their personal sexual relationships. It is feasible and acceptable to use C-ACASI for collecting sensitive data from poorly computer-literate, non-English-speaking, predominantly rural populations of women and men. Our findings support the implementation of effective and culturally sensitive C-ACASI for data collection, albeit with additional validation. PMID:23361948

  10. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1994-01-01

    The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
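
    The mechanics of automatic differentiation that the hybrid scheme relies on can be illustrated with forward-mode dual numbers; the fragment below is a self-contained toy (an invented response function, not the flow code) showing how derivative propagation is generated mechanically alongside the function value.

```python
# Minimal forward-mode automatic differentiation via dual numbers.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def _wrap(self, other):
        return other if isinstance(other, Dual) else Dual(other)

    def __add__(self, other):
        other = self._wrap(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = self._wrap(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)

    __rmul__ = __mul__

def response(alpha):
    """Hypothetical smooth response of one design variable (stand-in only)."""
    return 0.1 + 2.0 * alpha + 0.5 * alpha * alpha

x = Dual(0.3, 1.0)             # seed d(alpha)/d(alpha) = 1
out = response(x)
print(out.value, out.deriv)    # value and sensitivity d(response)/d(alpha) = 2 + alpha = 2.3
```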

  11. Color-coded audio computer-assisted self-interviews (C-ACASI) for poorly educated men and women in a semi-rural area of South India: "good, scary and thrilling".

    PubMed

    Bhatnagar, Tarun; Brown, Joelle; Saravanamurthy, P Sakthivel; Kumar, Raju Mohan; Detels, Roger

    2013-07-01

    It is challenging to collect accurate and complete data on sensitive issues such as sexual behaviors. Our objective was to explore experience and perceptions regarding the use of a locally programmed color-coded audio computer-assisted self interview (C-ACASI) system among men and women in a semi-rural setting in south India. We conducted a mixed-methods cross-sectional survey using semi-structured interviews among 89 truck drivers and 101 truck driver wives who had participated earlier in the C-ACASI survey across a predominantly rural district in Tamil Nadu. To assess the color-coded format used, descriptive quantitative analysis was coupled with thematic content analysis of qualitative data. Only 10% of participants had ever used a computer before. Nearly 75% did not report any problem in using C-ACASI. The length of the C-ACASI survey was acceptable to 98% of participants. Overall, 87% of wives and 73% of truck drivers stated that C-ACASI was user-friendly and felt comfortable in responding to the sensitive questions. Nearly all (97%) participants reported that using C-ACASI encouraged them to respond honestly compared to face-to-face personal interviews. Both the drivers and wives expressed that C-ACASI provided confidentiality, privacy, anonymity, and an easy mechanism for responding truthfully to potentially embarrassing questions about their personal sexual relationships. It is feasible and acceptable to use C-ACASI for collecting sensitive data from poorly computer-literate, non-English-speaking, predominantly rural populations of women and men. Our findings support the implementation of effective and culturally sensitive C-ACASI for data collection, albeit with additional validation.

  12. Efficient sensitivity analysis and optimization of a helicopter rotor

    NASA Technical Reports Server (NTRS)

    Lim, Joon W.; Chopra, Inderjit

    1989-01-01

    Aeroelastic optimization of a system essentially consists of the determination of the optimum values of design variables which minimize the objective function and satisfy certain aeroelastic and geometric constraints. The process of aeroelastic optimization analysis is illustrated. To carry out aeroelastic optimization effectively, one needs a reliable analysis procedure to determine the steady response and stability of a rotor system in forward flight. The rotor dynamic analysis used in the present study, developed in-house at the University of Maryland, is based on finite elements in space and time. The analysis consists of two major phases: vehicle trim and rotor steady response (coupled trim analysis), and aeroelastic stability of the blade. For a reduction of helicopter vibration, the optimization process requires the sensitivity derivatives of the objective function and aeroelastic stability constraints. For this, the derivatives of steady response, hub loads and blade stability roots are calculated using a direct analytical approach. An automated optimization procedure is developed by coupling the rotor dynamic analysis, design sensitivity analysis and the constrained optimization code CONMIN.

  13. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  14. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based, menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including database, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software packages have been integrated into the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  15. NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.

    2009-02-28

    The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic, analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.

  16. Computational mechanics analysis tools for parallel-vector supercomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Nguyen, D. T.; Baddourah, M. A.; Qin, J.

    1993-01-01

    Computational algorithms for structural analysis on parallel-vector supercomputers are reviewed. These parallel algorithms, developed by the authors, are for the assembly of structural equations, 'out-of-core' strategies for linear equation solution, massively distributed-memory equation solution, unsymmetric equation solution, general eigen-solution, geometrically nonlinear finite element analysis, design sensitivity analysis for structural dynamics, optimization algorithms, and domain decomposition. The source code for many of these algorithms is available from NASA Langley.

  17. Validity of Heart Failure Diagnoses in Administrative Databases: A Systematic Review and Meta-Analysis

    PubMed Central

    McCormick, Natalie; Lacaille, Diane; Bhole, Vidula; Avina-Zubieta, J. Antonio

    2014-01-01

    Objective Heart failure (HF) is an important covariate and outcome in studies of elderly populations and cardiovascular disease cohorts, among others. Administrative data is increasingly being used for long-term clinical research in these populations. We aimed to conduct the first systematic review and meta-analysis of studies reporting on the validity of diagnostic codes for identifying HF in administrative data. Methods MEDLINE and EMBASE were searched (inception to November 2010) for studies: (a) Using administrative data to identify HF; or (b) Evaluating the validity of HF codes in administrative data; and (c) Reporting validation statistics (sensitivity, specificity, positive predictive value [PPV], negative predictive value, or Kappa scores) for HF, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2011) of original papers. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Using a random-effects model, pooled sensitivity and specificity values were produced, along with estimates of the positive (LR+) and negative (LR−) likelihood ratios, and diagnostic odds ratios (DOR = LR+/LR−) of HF codes. Results Nineteen studies published from 1999–2009 were included in the qualitative review. Specificity was ≥95% in all studies and PPV was ≥87% in the majority, but sensitivity was lower (≥69% in ≥50% of studies). In a meta-analysis of the 11 studies reporting sensitivity and specificity values, the pooled sensitivity was 75.3% (95% CI: 74.7–75.9) and specificity was 96.8% (95% CI: 96.8–96.9). The pooled LR+ was 51.9 (20.5–131.6), the LR− was 0.27 (0.20–0.37), and the DOR was 186.5 (96.8–359.2). Conclusions While most HF diagnoses in administrative databases do correspond to true HF cases, about one-quarter of HF cases are not captured. The use of broader search parameters, along with laboratory and prescription medication data, may help identify more cases. PMID:25126761
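    As a rough illustration of the validation statistics reported above, the short sketch below computes sensitivity, specificity, predictive values, likelihood ratios, and the diagnostic odds ratio from a single hypothetical 2x2 table (administrative HF code versus a chart-review reference). The counts are invented for illustration; the pooled values in the abstract come from a random-effects meta-analysis, not from a single table.

    ```python
    # Sketch: validation statistics from a hypothetical 2x2 table comparing
    # administrative HF codes (test) against chart review (reference standard).
    def validation_stats(tp, fp, fn, tn):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        lr_pos = sens / (1.0 - spec)          # positive likelihood ratio
        lr_neg = (1.0 - sens) / spec          # negative likelihood ratio
        dor = lr_pos / lr_neg                 # diagnostic odds ratio
        return dict(sensitivity=sens, specificity=spec, ppv=ppv, npv=npv,
                    lr_pos=lr_pos, lr_neg=lr_neg, dor=dor)

    # Hypothetical counts, not taken from any study in the review.
    print(validation_stats(tp=150, fp=10, fn=50, tn=790))
    ```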

  18. Convergence Estimates for Multidisciplinary Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Arian, Eyal

    1997-01-01

    A quantitative analysis of coupling between systems of equations is introduced. This analysis is then applied to problems in multidisciplinary analysis, sensitivity, and optimization. For the sensitivity and optimization problems, both multidisciplinary and single-discipline feasibility schemes are considered. In all these cases a "convergence factor" is estimated in terms of the Jacobians and Hessians of the system; thus, it can also be approximated by existing disciplinary analysis and optimization codes. The convergence factor is identified as a measure of the "coupling" between the disciplines in the system. Applications to algorithm development are discussed. Demonstrations of the convergence estimates and numerical results are given for a system composed of two non-linear algebraic equations, and for a system composed of two PDEs modeling aeroelasticity.
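    A minimal sketch of the idea, under simplifying assumptions: for two coupled scalar equations iterated in a block (Gauss-Seidel style) fashion, a convergence factor can be approximated from the product of the cross-derivatives at the converged point. The functions g1 and g2 below are toy disciplines, and the finite-difference estimator is only an illustration of the Jacobian-based factor discussed in the abstract, not the paper's exact formulation.

    ```python
    # Sketch: estimating a "convergence factor" for a block iteration between
    # two coupled scalar equations x = g1(y), y = g2(x). The factor is
    # approximated by |g1'(y) * g2'(x)| at the converged point, using
    # finite-difference derivatives.
    import numpy as np

    def g1(y):  # discipline 1: x as a function of y
        return 0.5 * np.cos(y)

    def g2(x):  # discipline 2: y as a function of x
        return 0.3 * x**2 + 0.1

    # Fixed-point (block) iteration to convergence.
    x, y = 0.0, 0.0
    for _ in range(100):
        x = g1(y)
        y = g2(x)

    h = 1e-6
    dg1_dy = (g1(y + h) - g1(y - h)) / (2 * h)
    dg2_dx = (g2(x + h) - g2(x - h)) / (2 * h)
    factor = abs(dg1_dy * dg2_dx)
    print(f"converged point: x={x:.6f}, y={y:.6f}, coupling factor = {factor:.4f}")
    ```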

  19. Controlling Energy Radiations of Electromagnetic Waves via Frequency Coding Metamaterials

    PubMed Central

    Wu, Haotian; Liu, Shuo; Wan, Xiang; Zhang, Lei; Wang, Dan; Li, Lianlin

    2017-01-01

    Metamaterials are artificial structures composed of subwavelength unit cells to control electromagnetic (EM) waves. The spatial coding representation of a metamaterial has the ability to describe the material in a digital way. Spatial coding metamaterials are typically constructed from unit cells that have similar shapes with fixed functionality. Here, the concept of the frequency coding metamaterial is proposed, which achieves different controls of EM energy radiations with a fixed spatial coding pattern when the frequency changes. In this case, not only the phase responses of the unit cells but also their phase sensitivities must be considered. Due to the different frequency sensitivities of unit cells, two units with the same phase response at the initial frequency may have different phase responses at a higher frequency. To describe the frequency coding property of a unit cell, a digitalized frequency sensitivity is proposed, in which the units are encoded with digits "0" and "1" to represent the low and high phase sensitivities, respectively. In this way, two degrees of freedom, spatial coding and frequency coding, are obtained to control the EM energy radiations by a new class of frequency-spatial coding metamaterials. The above concepts and physical phenomena are confirmed by numerical simulations and experiments. PMID:28932671

  20. Ascertainment of Outpatient Visits by Patients with Diabetes: The National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS)

    PubMed Central

    Asao, Keiko; McEwen, Laura N.; Lee, Joyce M.; Herman, William H.

    2015-01-01

    Aims To estimate and evaluate the sensitivity and specificity of providers’ diagnosis codes and medication lists to identify outpatient visits by patients with diabetes. Methods We used data from the 2006 to 2010 National Ambulatory Medical Care Survey and National Hospital Ambulatory Medical Care Survey. We assessed the sensitivity and specificity of providers’ diagnoses and medication lists to identify patients with diabetes, using the checkbox for diabetes as the gold standard. We then examined differences in sensitivity by patients’ characteristics using multivariate logistic regression models. Results The checkbox identified 12,647 outpatient visits by adults with diabetes among the 70,352 visits used for this analysis. The sensitivity and specificity of providers’ diagnoses or listed diabetes medications were 72.3% (95% CI: 70.8% to 73.8%) and 99.2% (99.1% to 99.4%), respectively. Diabetic patients ≥75 years of age, women, non-Hispanics, and those with private insurance or Medicare were more likely to be missed by providers’ diagnoses and medication lists. Diabetic patients who had more diagnosis codes and medications recorded, had glucose or hemoglobin A1c measured, or made office- rather than hospital-outpatient visits were less likely to be missed. Conclusions Providers’ diagnosis codes and medication lists fail to identify approximately one quarter of outpatient visits by patients with diabetes. PMID:25891975
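    The sensitivity/specificity computation described above reduces to tallying a confusion matrix of visit-level flags against the gold-standard checkbox. The sketch below shows that tally on a few made-up visit records; the field names (checkbox_diabetes, dx_code_diabetes, diabetes_med_listed) are hypothetical and do not correspond to actual NAMCS/NHAMCS variable names.

    ```python
    # Sketch: visit-level sensitivity/specificity of a diagnosis/medication flag
    # against a gold-standard diabetes checkbox. Field names are hypothetical.
    def sens_spec(visits):
        tp = fp = fn = tn = 0
        for v in visits:
            flagged = v["dx_code_diabetes"] or v["diabetes_med_listed"]
            if v["checkbox_diabetes"]:
                tp += flagged
                fn += not flagged
            else:
                fp += flagged
                tn += not flagged
        return tp / (tp + fn), tn / (tn + fp)

    visits = [
        {"checkbox_diabetes": True,  "dx_code_diabetes": True,  "diabetes_med_listed": False},
        {"checkbox_diabetes": True,  "dx_code_diabetes": False, "diabetes_med_listed": False},
        {"checkbox_diabetes": False, "dx_code_diabetes": False, "diabetes_med_listed": False},
        {"checkbox_diabetes": False, "dx_code_diabetes": True,  "diabetes_med_listed": False},
    ]
    sensitivity, specificity = sens_spec(visits)
    print(sensitivity, specificity)
    ```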

  1. Benchmarking NNWSI flow and transport codes: COVE 1 results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hayden, N.K.

    1985-06-01

    The code verification (COVE) activity of the Nevada Nuclear Waste Storage Investigations (NNWSI) Project is the first step in certification of flow and transport codes used for NNWSI performance assessments of a geologic repository for disposing of high-level radioactive wastes. The goals of the COVE activity are (1) to demonstrate and compare the numerical accuracy and sensitivity of certain codes, (2) to identify and resolve problems in running typical NNWSI performance assessment calculations, and (3) to evaluate computer requirements for running the codes. This report describes the work done for COVE 1, the first step in benchmarking some of the codes. Isothermal calculations for the COVE 1 benchmarking have been completed using the hydrologic flow codes SAGUARO, TRUST, and GWVIP; the radionuclide transport codes FEMTRAN and TRUMP; and the coupled flow and transport code TRACR3D. This report presents the results of three cases of the benchmarking problem solved for COVE 1, a comparison of the results, questions raised regarding sensitivities to modeling techniques, and conclusions drawn regarding the status and numerical sensitivities of the codes. 30 refs.

  2. 76 FR 55268 - Chromobacterium subtsugae Strain PRAA4-1T

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-07

    ... (irritation symptoms cleared by 24 hours; Toxicity Category IV). 9. Dermal sensitization--guinea pig... that Chromobacterium subtsugae strain PRAA4-1\\T\\ was not a dermal sensitizer to guinea pigs. IV... production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311...

  3. Administrative database code accuracy did not vary notably with changes in disease prevalence.

    PubMed

    van Walraven, Carl; English, Shane; Austin, Peter C

    2016-11-01

    Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly with disease prevalence. This study determined whether the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably within clinically sensible disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
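    The contrast reported above (stable sensitivity and specificity, prevalence-dependent predictive values) follows directly from Bayes' rule. The sketch below holds sensitivity and specificity fixed at illustrative values and shows how PPV and NPV shift as prevalence varies; the numbers are not taken from the study.

    ```python
    # Sketch: with sensitivity and specificity held fixed, predictive values
    # still vary strongly with disease prevalence (Bayes' rule).
    def predictive_values(sens, spec, prevalence):
        ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
        npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
        return ppv, npv

    for prev in (0.01, 0.05, 0.20, 0.50):
        ppv, npv = predictive_values(sens=0.80, spec=0.95, prevalence=prev)
        print(f"prevalence={prev:.2f}  PPV={ppv:.3f}  NPV={npv:.3f}")
    ```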

  4. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides efficient and accurate chemical-kinetics computations and sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.

  5. An empirical comparison of a dynamic software testability metric to static cyclomatic complexity

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey M.; Miller, Keith W.; Payne, Jeffrey E.

    1993-01-01

    This paper compares the dynamic testability prediction technique termed 'sensitivity analysis' to the static testability technique termed cyclomatic complexity. The application that we chose in this empirical study is a CASE generated version of a B-737 autoland system. For the B-737 system we analyzed, we isolated those functions that we predict are more prone to hide errors during system/reliability testing. We also analyzed the code with several other well-known static metrics. This paper compares and contrasts the results of sensitivity analysis to the results of the static metrics.

  6. Sensitivity of the Boundary Plasma to the Plasma-Material Interface

    DOE PAGES

    Canik, John M.; Tang, X. -Z.

    2017-01-01

    While the sensitivity of the scrape-off layer and divertor plasma to the highly uncertain cross-field transport assumptions is widely recognized, the plasma is also sensitive to the details of the plasma-material interface (PMI) models used as part of comprehensive predictive simulations. Here, these PMI sensitivities are studied by varying the relevant sub-models within the SOLPS plasma transport code. Two aspects are explored: the sheath model used as a boundary condition in SOLPS, and fast particle reflection rates for ions impinging on a material surface. Both of these have been the subject of recent high-fidelity simulation efforts aimed at improving the understanding and prediction of these phenomena. It is found that in both cases quantitative changes to the plasma solution result from modification of the PMI model, with a larger impact in the case of the reflection coefficient variation. This indicates the necessity to better quantify the uncertainties within the PMI models themselves and to perform thorough sensitivity analysis to propagate these throughout the boundary model; this is especially important for validation against experiment, where the error in the simulation is a critical and less-studied piece of the code-experiment comparison.

  7. UCODE_2005 and six other computer codes for universal sensitivity analysis, calibration, and uncertainty evaluation constructed using the JUPITER API

    USGS Publications Warehouse

    Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen

    2006-01-01

    This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
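    As a hedged illustration of the modified Gauss-Newton regression described above, the sketch below estimates two parameters of a generic analytic "process model" by minimizing a weighted least-squares objective, with sensitivities obtained by forward-difference perturbation. It is not UCODE_2005 (which wraps arbitrary external models through input/output files); the model, data, and weights are invented.

    ```python
    # Sketch: weighted least-squares parameter estimation by Gauss-Newton, with
    # sensitivities obtained by forward-difference perturbation of a generic
    # "process model" (here a simple analytic function standing in for one).
    import numpy as np

    def process_model(params, x):
        a, b = params
        return a * np.exp(-b * x)          # stand-in for simulated equivalents

    def jacobian_fd(params, x, h=1e-6):
        base = process_model(params, x)
        J = np.empty((x.size, params.size))
        for j in range(params.size):
            p = params.copy()
            p[j] += h
            J[:, j] = (process_model(p, x) - base) / h
        return J

    def gauss_newton(params, x, obs, weights, iters=20):
        W = np.diag(weights)
        for _ in range(iters):
            r = obs - process_model(params, x)     # residuals of the weighted objective
            J = jacobian_fd(params, x)
            step = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
            params = params + step
            if np.linalg.norm(step) < 1e-10:
                break
        return params

    x = np.linspace(0.0, 4.0, 20)
    true = np.array([2.0, 0.7])
    obs = process_model(true, x) + 0.01 * np.random.default_rng(0).standard_normal(x.size)
    print(gauss_newton(np.array([1.0, 1.0]), x, obs, weights=np.ones(x.size)))
    ```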

  8. 78 FR 35143 - 1,3-Propanediol; Exemptions From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-12

    ... rabbits. Dermal sensitization studies on guinea pigs showed that 1,3-propanediol is not a sensitizer. In a... whether this document applies to them. Potentially affected entities may include: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed, and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
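    A minimal sketch of the correlation-based sensitivity analysis described above: sampled inputs are compared against a figure of merit using Pearson and Spearman coefficients. The input names, distributions, and the toy response standing in for the MDNBR are assumptions for illustration, not VERA-CS outputs.

    ```python
    # Sketch: correlation-based sensitivity ranking of sampled inputs against a
    # figure of merit (a toy surrogate stands in for the coupled-code output).
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(42)
    n = 500
    inputs = {
        "inlet_temperature": rng.normal(565.0, 2.0, n),   # K, illustrative
        "inlet_flow_rate":   rng.normal(88.0, 1.5, n),
        "core_power":        rng.normal(200.0, 4.0, n),
    }
    # Toy response standing in for the MDNBR: decreases with inlet temperature
    # and power, increases with flow, plus a little noise.
    mdnbr = (3.0
             - 0.02 * (inputs["inlet_temperature"] - 565.0)
             + 0.01 * (inputs["inlet_flow_rate"] - 88.0)
             - 0.005 * (inputs["core_power"] - 200.0)
             + rng.normal(0.0, 0.01, n))

    for name, samples in inputs.items():
        r_p, _ = pearsonr(samples, mdnbr)
        r_s, _ = spearmanr(samples, mdnbr)
        print(f"{name:18s}  Pearson={r_p:+.3f}  Spearman={r_s:+.3f}")
    ```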

  10. Identification of Trends into Dose Calculations for Astronauts through Performing Sensitivity Analysis on Calculational Models Used by the Radiation Health Office

    NASA Technical Reports Server (NTRS)

    Adams, Thomas; VanBaalen, Mary

    2009-01-01

    The Radiation Health Office (RHO) determines each astronaut's cancer risk by using models to relate cancer risk to the radiation dose that astronauts receive from spaceflight missions. The baryon transport code (BRYNTRN), the high charge (Z) and energy transport code (HZETRN), and computer risk models are used to determine the effective dose received by astronauts in Low Earth orbit (LEO). This code uses an approximation of the Boltzmann transport equation. The purpose of the project is to run this code for various International Space Station (ISS) flight parameters in order to gain a better understanding of how this code responds to different scenarios. The project will determine how variations in one set of parameters, such as the point in the solar cycle and the altitude, can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.

  11. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    NASA Astrophysics Data System (ADS)

    Hartini, Entin; Andiwijayakusuma, Dinan

    2014-09-01

    This research concerns the development of a code for uncertainty analysis based on a statistical approach to assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The code was developed as a Python script coupled to MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core, with MCNPX at a power of 54 MW and UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format; interfaces were therefore developed to obtain nuclear data in ACE format from ENDF through a dedicated NJOY calculation for temperature changes over a certain range.

  12. Development code for sensitivity and uncertainty analysis of input on the MCNPX for neutronic calculation in PWR core

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hartini, Entin, E-mail: entin@batan.go.id; Andiwijayakusuma, Dinan, E-mail: entin@batan.go.id

    2014-09-30

    This research concerns the development of a code for uncertainty analysis based on a statistical approach to assessing the uncertainty of input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code. The uncertainty method is based on the probability density function. The code was developed as a Python script coupled to MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core, with MCNPX at a power of 54 MW and UO2 pellet fuel. The calculation uses the continuous-energy cross-section data library ENDF/B-VI. MCNPX requires nuclear data in ACE format; interfaces were therefore developed to obtain nuclear data in ACE format from ENDF through a dedicated NJOY calculation for temperature changes over a certain range.
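    A hedged sketch of the sampling side of such a coupling script is shown below: the three uncertain inputs are drawn from assumed normal distributions and written into a per-case input file from a template. The template text, placeholder names, distribution parameters, and file paths are illustrative only and do not reproduce the actual MCNPX deck format or the authors' script.

    ```python
    # Sketch: sampling uncertain inputs (fuel density, coolant density, fuel
    # temperature) from assumed normal distributions and writing them into a
    # hypothetical input template for each case. Template text, placeholder
    # names, and file paths are illustrative only, not an actual MCNPX deck.
    import numpy as np

    rng = np.random.default_rng(1)
    n_cases = 10
    template = (
        "c  case {case:03d}\n"
        "c  fuel_density     = {fuel_density:.4f}  g/cm3\n"
        "c  coolant_density  = {coolant_density:.4f}  g/cm3\n"
        "c  fuel_temperature = {fuel_temperature:.1f}  K\n"
    )

    for i in range(n_cases):
        sample = {
            "case": i,
            "fuel_density":     rng.normal(10.4, 0.05),   # assumed mean/std
            "coolant_density":  rng.normal(0.71, 0.01),
            "fuel_temperature": rng.normal(900.0, 15.0),
        }
        with open(f"case_{i:03d}.inp", "w") as f:
            f.write(template.format(**sample))
        # Each deck would then be run through the transport code and the
        # results collected for statistical processing.
    ```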

  13. Engineering Overview of a Multidisciplinary HSCT Design Framework Using Medium-Fidelity Analysis Codes

    NASA Technical Reports Server (NTRS)

    Weston, R. P.; Green, L. L.; Salas, A. O.; Samareh, J. A.; Townsend, J. C.; Walsh, J. L.

    1999-01-01

    An objective of the HPCC Program at NASA Langley has been to promote the use of advanced computing techniques to more rapidly solve the problem of multidisciplinary optimization of a supersonic transport configuration. As a result, a software system has been designed and is being implemented to integrate a set of existing discipline analysis codes, some of them CPU-intensive, into a distributed computational framework for the design of a High Speed Civil Transport (HSCT) configuration. The proposed paper will describe the engineering aspects of integrating these analysis codes and additional interface codes into an automated design system. The objective of the design problem is to optimize the aircraft weight for given mission conditions, range, and payload requirements, subject to aerodynamic, structural, and performance constraints. The design variables include both thicknesses of structural elements and geometric parameters that define the external aircraft shape. An optimization model has been adopted that uses the multidisciplinary analysis results and the derivatives of the solution with respect to the design variables to formulate a linearized model that provides input to the CONMIN optimization code, which outputs new values for the design variables. The analysis process begins by deriving the updated geometries and grids from the baseline geometries and grids using the new values for the design variables. This free-form deformation approach provides internal FEM (finite element method) grids that are consistent with aerodynamic surface grids. The next step involves using the derived FEM and section properties in a weights process to calculate detailed weights and the center of gravity location for specified flight conditions. The weights process computes the as-built weight, weight distribution, and weight sensitivities for given aircraft configurations at various mass cases. Currently, two mass cases are considered: cruise and gross take-off weight (GTOW). Weights information is obtained from correlations of data from three sources: 1) as-built initial structural and non-structural weights from an existing database, 2) theoretical FEM structural weights and sensitivities from Genesis, and 3) empirical as-built weight increments, non-structural weights, and weight sensitivities from FLOPS. For the aeroelastic analysis, a variable-fidelity aerodynamic analysis has been adopted. This approach uses infrequent CPU-intensive non-linear CFD to calculate a non-linear correction relative to a linear aero calculation for the same aerodynamic surface at an angle of attack that results in the same configuration lift. For efficiency, this nonlinear correction is applied after each subsequent linear aero solution during the iterations between the aerodynamic and structural analyses. Convergence is achieved when the vehicle shape being used for the aerodynamic calculations is consistent with the structural deformations caused by the aerodynamic loads. To make the structural analyses more efficient, a linearized structural deformation model has been adopted, in which a single stiffness matrix can be used to solve for the deformations under all the load conditions. Using the converged aerodynamic loads, a final set of structural analyses are performed to determine the stress distributions and the buckling conditions for constraint calculation. 
    Performance constraints are obtained by running FLOPS using drag polars computed from results of the non-linear corrections to the linear aero code, plus several codes that provide drag increments due to skin friction, wave drag, and other miscellaneous drag contributions. The status of the integration effort will be presented in the proposed paper, and results will be provided that illustrate the degree of accuracy in the linearizations that have been employed.
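    The variable-fidelity correction described above can be sketched as a simple defect-correction loop: an expensive nonlinear evaluation is performed infrequently to compute a correction that is then reused with every cheap linear evaluation until the next update. The two toy scalar models and the coupling update below are hypothetical stand-ins, not the HSCT aero-structural system.

    ```python
    # Sketch of the variable-fidelity (defect-correction) idea: an expensive
    # "nonlinear" model is evaluated infrequently to compute a correction that
    # is then added to every cheap "linear" evaluation until the next update.
    # Both models here are toy scalar functions of angle of attack.
    def linear_model(alpha):
        return 0.11 * alpha                     # cheap linear lift estimate

    def nonlinear_model(alpha):
        return 0.11 * alpha - 0.002 * alpha**2  # expensive stand-in with nonlinearity

    alpha = 4.0
    correction = 0.0
    for it in range(10):
        if it % 5 == 0:                         # infrequent expensive update
            correction = nonlinear_model(alpha) - linear_model(alpha)
        lift = linear_model(alpha) + correction
        # ... structural deflections would update alpha/shape here ...
        alpha = 4.0 + 0.05 * lift               # toy coupling update
    print(f"converged alpha={alpha:.4f}, corrected lift={lift:.4f}")
    ```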

  14. FY17 Status Report on the Initial Development of a Constitutive Model for Grade 91 Steel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messner, M. C.; Phan, V. -T.; Sham, T. -L.

    Grade 91 is a candidate structural material for high temperature advanced reactor applications. Existing ASME Section III, Subsection HB, Subpart B simplified design rules based on elastic analysis are set up as conservative screening tools, with the intent to supplement these screening rules with full inelastic analysis when required. The Code provides general guidelines for suitable inelastic models but does not provide constitutive model implementations. This report describes the development of an inelastic constitutive model for Gr. 91 steel aimed at fulfilling the ASME Code requirements and being included in a new Section III Code appendix, HBB-Z. A large database of over 300 experiments on Gr. 91 was collected and converted to a standard XML form. Five families of Gr. 91 material models were identified in the literature. Of these five, two are potentially suitable for use in the ASME Code. These two models were implemented and evaluated against the experimental database. Both models have deficiencies, so the report develops a framework for developing and calibrating an improved model. This required creating a new modeling method for representing changes in material rate sensitivity across the full ASME allowable temperature range for Gr. 91 structural components: room temperature to 650 °C. On top of this framework for rate sensitivity, the report describes calibrating a model for work hardening and softening in the material using genetic algorithm optimization. Future work will focus on improving this trial model by including the tension/compression asymmetry observed in experiments, which is necessary to capture material ratcheting under zero mean stress, and by improving the optimization and analysis framework.

  15. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2003-01-01

    An efficient incremental iterative approach for differentiating advanced flow codes is successfully demonstrated on a two-dimensional inviscid model problem. The method employs the reverse-mode capability of the automatic differentiation software tool ADIFOR 3.0 and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient noniterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.

  16. Some Advanced Concepts in Discrete Aerodynamic Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Green, Lawrence L.; Newman, Perry A.; Putko, Michele M.

    2001-01-01

    An efficient incremental-iterative approach for differentiating advanced flow codes is successfully demonstrated on a 2D inviscid model problem. The method employs the reverse-mode capability of the automatic-differentiation software tool ADIFOR 3.0, and is proven to yield accurate first-order aerodynamic sensitivity derivatives. A substantial reduction in CPU time and computer memory is demonstrated in comparison with results from a straightforward, black-box reverse-mode application of ADIFOR 3.0 to the same flow code. An ADIFOR-assisted procedure for accurate second-order aerodynamic sensitivity derivatives is successfully verified on an inviscid transonic lifting airfoil example problem. The method requires that first-order derivatives are calculated first using both the forward (direct) and reverse (adjoint) procedures; then, a very efficient non-iterative calculation of all second-order derivatives can be accomplished. Accurate second derivatives (i.e., the complete Hessian matrices) of lift, wave-drag, and pitching-moment coefficients are calculated with respect to geometric shape, angle of attack, and freestream Mach number.
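    For readers unfamiliar with forward-mode differentiation, the toy sketch below propagates derivatives with dual numbers through a small surrogate function. It is only a Python analogue of the idea; ADIFOR is a Fortran source-transformation tool, and the surrogate function, inputs, and seeding here are invented for illustration.

    ```python
    # Minimal forward-mode differentiation with dual numbers, to illustrate the
    # idea behind automatic-differentiation tools (this is not ADIFOR, which
    # transforms Fortran source; it is only a toy Python analogue).
    import math

    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def dual_sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    def lift_surrogate(alpha, mach):
        # Toy stand-in for an aerodynamic output depending on two inputs.
        return 2.0 * alpha + 0.3 * dual_sin(alpha * mach)

    # d(lift)/d(alpha) at (alpha, mach) = (0.1, 0.8): seed alpha's derivative to 1.
    out = lift_surrogate(Dual(0.1, 1.0), Dual(0.8, 0.0))
    print(out.val, out.der)
    ```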

  17. Assessment of uncertainties of the models used in thermal-hydraulic computer codes

    NASA Astrophysics Data System (ADS)

    Gricay, A. S.; Migrov, Yu. A.

    2015-09-01

    The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.

  18. Critical evaluation of Jet-A spray combustion using propane chemical kinetics in gas turbine combustion simulated by KIVA-2

    NASA Technical Reports Server (NTRS)

    Nguyen, H. L.; Ying, S.-J.

    1990-01-01

    Jet-A spray combustion has been evaluated in gas turbine combustion with the use of propane chemical kinetics as the first approximation for the chemical reactions. Here, the numerical solutions are obtained by using the KIVA-2 computer code. The KIVA-2 code is the most developed of the available multidimensional combustion computer programs for application to the in-cylinder combustion dynamics of internal combustion engines. The released version of KIVA-2 assumes that 12 chemical species are present; the code uses an Arrhenius kinetic-controlled combustion model governed by a four-step global chemical reaction and six equilibrium reactions. The researchers' efforts involve the addition of Jet-A thermophysical properties and the implementation of detailed reaction mechanisms for propane oxidation. Three different detailed reaction mechanism models are considered. The first model consists of 131 reactions and 45 species. This is considered the full mechanism, which was developed through the study of the chemical kinetics of propane combustion in an enclosed chamber. The full mechanism is evaluated by comparing calculated ignition delay times with available shock tube data. However, these detailed reactions occupy too much computer memory and CPU time for the computation. Therefore, the full mechanism serves only as a benchmark case by which to evaluate other simplified models. Two possible simplified models were tested in the existing computer code KIVA-2 for the same conditions as used with the full mechanism. One model is obtained through a sensitivity analysis using LSENS, the general kinetics and sensitivity analysis program code of D. A. Bittker and K. Radhakrishnan. This model consists of 45 chemical reactions and 27 species. The other model is based on the work published by C. K. Westbrook and F. L. Dryer.

  19. Transferability and within- and between-laboratory reproducibilities of EpiSensA for predicting skin sensitization potential in vitro: A ring study in three laboratories.

    PubMed

    Mizumachi, Hideyuki; Sakuma, Megumi; Ikezumi, Mayu; Saito, Kazutoshi; Takeyoshi, Midori; Imai, Noriyasu; Okutomi, Hiroko; Umetsu, Asami; Motohashi, Hiroko; Watanabe, Mika; Miyazawa, Masaaki

    2018-05-03

    The epidermal sensitization assay (EpiSensA) is an in vitro skin sensitization test method based on gene expression of four markers related to the induction of skin sensitization; the assay uses commercially available reconstructed human epidermis. EpiSensA has exhibited an accuracy of 90% for 72 chemicals, including lipophilic chemicals and pre-/pro-haptens, when compared with the results of the murine local lymph node assay. In this work, a ring study was performed by one lead and two naive laboratories to evaluate the transferability, as well as within- and between-laboratory reproducibilities, of EpiSensA. Three non-coded chemicals (two lipophilic sensitizers and one non-sensitizer) were tested for the assessment of transferability and 10 coded chemicals (seven sensitizers and three non-sensitizers, including four lipophilic chemicals) were tested for the assessment of reproducibility. In the transferability phase, the non-coded chemicals (two sensitizers and one non-sensitizer) were correctly classified at the two naive laboratories, indicating that the EpiSensA protocol was transferred successfully. For the within-laboratory reproducibility, the data generated with three coded chemicals tested in three independent experiments in each laboratory gave consistent predictions within laboratories. For the between-laboratory reproducibility, 9 of the 10 coded chemicals tested once in each laboratory provided consistent predictions among the three laboratories. These results suggested that EpiSensA has good transferability, as well as within- and between-laboratory reproducibility. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Coupled reactors analysis: New needs and advances using Monte Carlo methodology

    DOE PAGES

    Aufiero, M.; Palmiotti, G.; Salvatores, M.; ...

    2016-08-20

    Coupled reactors and the coupling features of large or heterogeneous core reactors can be investigated with the Avery theory, which allows a physics understanding of the main features of these systems. However, the complex geometries that are often encountered in association with coupled reactors require a detailed geometry description that can be easily provided by modern Monte Carlo (MC) codes. This implies a MC calculation of the coupling parameters defined by Avery and of the sensitivity coefficients that allow further detailed physics analysis. The results presented in this paper show that the MC code SERPENT has been successfully modified to meet the required capabilities.

  1. Application of the JENDL-4.0 nuclear data set for uncertainty analysis of the prototype FBR Monju

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with uncertainty analysis of the Monju reactor using JENDL-4.0 and the ERANOS code [1]. In 2010 the Japan Atomic Energy Agency (JAEA) released the JENDL-4.0 nuclear data set. This new evaluation contains improved values of cross-sections and emphasizes accurate covariance matrices. Also in 2010, JAEA restarted the sodium-cooled fast reactor prototype Monju after about 15 years of shutdown. The long shutdown time resulted in a build-up of 241Am by natural decay from the initially loaded Pu. As well as improved covariance matrices, JENDL-4.0 is announced to contain improved data for minor actinides [2]. The choice of the Monju reactor as an application of the new evaluation seems then even more relevant. The uncertainty analysis requires the determination of sensitivity coefficients. The well-established ERANOS code was chosen because of its integrated modules that allow users to perform sensitivity and uncertainty analysis. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original ENDF files for the ECCO cell code (part of ERANOS). For confirmation of the newly made library, calculations of a benchmark core were performed. These calculations used the MZA and MZB benchmarks and showed results consistent with other libraries. Calculations for the Monju reactor were performed using hexagonal 3D geometry and PN transport theory. However, the ERANOS sensitivity modules cannot use the resulting fluxes, as these modules require finite-difference-based fluxes, obtained from RZ SN-transport or 3D diffusion calculations. The corresponding geometrical models have been made and the results verified with Monju restart experimental data [4]. Uncertainty analysis was performed using the RZ model. The JENDL-4.0 uncertainty analysis showed a significant reduction of the uncertainty related to the fission cross-section of Pu, along with an increase of the uncertainty related to the capture cross-section of 238U compared with the previous JENDL-3.3 version. Covariance data recently added in JENDL-4.0 for 241Am appears to have a non-negligible contribution. (authors)

  2. Examining the relationship between comprehension and production processes in code-switched language

    PubMed Central

    Guzzardo Tamargo, Rosa E.; Valdés Kroff, Jorge R.; Dussias, Paola E.

    2016-01-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish–English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants’ comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension. PMID:28670049

  3. Examining the relationship between comprehension and production processes in code-switched language.

    PubMed

    Guzzardo Tamargo, Rosa E; Valdés Kroff, Jorge R; Dussias, Paola E

    2016-08-01

    We employ code-switching (the alternation of two languages in bilingual communication) to test the hypothesis, derived from experience-based models of processing (e.g., Boland, Tanenhaus, Carlson, & Garnsey, 1989; Gennari & MacDonald, 2009), that bilinguals are sensitive to the combinatorial distributional patterns derived from production and that they use this information to guide processing during the comprehension of code-switched sentences. An analysis of spontaneous bilingual speech confirmed the existence of production asymmetries involving two auxiliary + participle phrases in Spanish-English code-switches. A subsequent eye-tracking study with two groups of bilingual code-switchers examined the consequences of the differences in distributional patterns found in the corpus study for comprehension. Participants' comprehension costs mirrored the production patterns found in the corpus study. Findings are discussed in terms of the constraints that may be responsible for the distributional patterns in code-switching production and are situated within recent proposals of the links between production and comprehension.

  4. Comprehensive analysis of transport aircraft flight performance

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2008-04-01

    This paper reviews the state-of-the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.

  5. Sensitivity of Claims-Based Algorithms to Ascertain Smoking Status More Than Doubled with Meaningful Use.

    PubMed

    Huo, Jinhai; Yang, Ming; Tina Shih, Ya-Chen

    2018-03-01

    The "meaningful use of certified electronic health record" policy requires eligible professionals to record smoking status for more than 50% of all individuals aged 13 years or older in 2011 to 2012. To explore whether the coding to document smoking behavior has increased over time and to assess the accuracy of smoking-related diagnosis and procedure codes in identifying previous and current smokers. We conducted an observational study with 5,423,880 enrollees from the year 2009 to 2014 in the Truven Health Analytics database. Temporal trends of smoking coding, sensitivity, specificity, positive predictive value, and negative predictive value were measured. The rate of coding of smoking behavior improved significantly by the end of the study period. The proportion of patients in the claims data recorded as current smokers increased 2.3-fold and the proportion of patients recorded as previous smokers increased 4-fold during the 6-year period. The sensitivity of each International Classification of Diseases, Ninth Revision, Clinical Modification code was generally less than 10%. The diagnosis code of tobacco use disorder (305.1X) was the most sensitive code (9.3%) for identifying smokers. The specificities of these codes and the Current Procedural Terminology codes were all more than 98%. A large improvement in the coding of current and previous smoking behavior has occurred since the inception of the meaningful use policy. Nevertheless, the use of diagnosis and procedure codes to identify smoking behavior in administrative data is still unreliable. This suggests that quality improvements toward medical coding on smoking behavior are needed to enhance the capability of claims data for smoking-related outcomes research. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Probabilistic boundary element method

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Raveendra, S. T.

    1989-01-01

    The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM), which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results with respect to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite element codes, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible, and therefore volume modeling is generally required.

  7. Methodology, status and plans for development and assessment of Cathare code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bestion, D.; Barre, F.; Faydide, B.

    1997-07-01

    This paper presents the methodology, status, and plans for the development, assessment, and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF, and FRAMATOME for PWR safety analysis. First, the status of code development and assessment is presented, along with the general strategy used for the development and assessment of the code. Analytical experiments with separate effect tests and component tests are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future development of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time, full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Also, physical improvements are required in the field of low-pressure transients and in the modeling for the 3-D model.

  8. Sensitivity of Combustion-Acoustic Instabilities to Boundary Conditions for Premixed Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo

    1995-01-01

    Premixed combustors, which are being considered for low NOx engines, are susceptible to instabilities due to feedback between pressure perturbations and combustion. This feedback can cause damaging mechanical vibrations of the system as well as degrade the emissions characteristics and combustion efficiency. In a lean combustor instabilities can also lead to blowout. A model was developed to perform linear combustion-acoustic stability analysis using detailed chemical kinetic mechanisms. The Lewis Kinetics and Sensitivity Analysis Code, LSENS, was used to calculate the sensitivities of the heat release rate to perturbations in density and temperature. In the present work, an assumption was made that the mean flow velocity was small relative to the speed of sound. Results of this model showed the regions of growth of perturbations to be most sensitive to the reflectivity of the boundary when reflectivities were close to unity.

  9. Refractive collimation beam shaper design and sensitivity analysis using a free-form profile construction method.

    PubMed

    Tsai, Chung-Yu

    2017-07-01

    A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.

  10. Continuous-energy eigenvalue sensitivity coefficient calculations in TSUNAMI-3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, C. M.; Rearden, B. T.

    2013-07-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several test problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and a low memory footprint, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations. (authors)

  11. Development of a SCALE Tool for Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2013-01-01

    Two methods for calculating eigenvalue sensitivity coefficients in continuous-energy Monte Carlo applications were implemented in the KENO code within the SCALE code package. The methods were used to calculate sensitivity coefficients for several criticality safety problems and produced sensitivity coefficients that agreed well with both reference sensitivities and multigroup TSUNAMI-3D sensitivity coefficients. The newly developed CLUTCH method was observed to produce sensitivity coefficients with high figures of merit and low memory requirements, and both continuous-energy sensitivity methods met or exceeded the accuracy of the multigroup TSUNAMI-3D calculations.

  12. An easily implemented static condensation method for structural sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gangadharan, S. N.; Haftka, R. T.; Nikolaidis, E.

    1990-01-01

    A black-box approach to static condensation for sensitivity analysis is presented with illustrative examples of a cube and a car structure. The sensitivity of the structural response with respect to a joint stiffness parameter is calculated using the direct method, forward-difference, and central-difference schemes. The efficiency of the various methods for identifying joint stiffness parameters from measured static deflections of these structures is compared. The results indicate that the use of static condensation can reduce computation times significantly and the black-box approach is only slightly less efficient than the standard implementation of static condensation. The ease of implementation of the black-box approach recommends it for use with general-purpose finite element codes that do not have a built-in facility for static condensation.
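    As a concrete illustration of static condensation (Guyan reduction) used in a black-box fashion, the sketch below condenses the interior DOFs of a small spring-chain stiffness matrix onto one retained DOF and differentiates the retained-DOF deflection with respect to a joint stiffness parameter by forward differences. The matrix, partitioning, load, and parameter values are invented for illustration, not taken from the paper's cube or car models.

```python
import numpy as np

def stiffness(kj, k1=2.0e6, k3=1.5e6):
    """3-DOF spring chain (ground-k1-node1-kj-node2-k3-node3); kj is the joint stiffness."""
    return np.array([
        [k1 + kj, -kj,      0.0],
        [-kj,      kj + k3, -k3],
        [0.0,     -k3,       k3],
    ])

def condensed_response(kj, load=1000.0):
    """Guyan-condense interior DOFs (0, 1) onto the retained DOF (2) and solve."""
    K = stiffness(kj)
    ii, cc = [0, 1], [2]                               # interior / retained partitions
    Kii, Kic = K[np.ix_(ii, ii)], K[np.ix_(ii, cc)]
    Kci, Kcc = K[np.ix_(cc, ii)], K[np.ix_(cc, cc)]
    Kred = Kcc - Kci @ np.linalg.solve(Kii, Kic)       # condensed stiffness
    return np.linalg.solve(Kred, np.array([load]))[0]

kj0, h = 5.0e5, 1.0            # parameter value and forward-difference step (assumed)
u0 = condensed_response(kj0)
du_dkj = (condensed_response(kj0 + h) - u0) / h
print(f"tip deflection u = {u0:.6e} m, du/dkj = {du_dkj:.3e}")
```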

  13. CFD Variability for a Civil Transport Aircraft Near Buffet-Onset Conditions

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Morrison, Joseph H.; Biedron, Robert T.

    2003-01-01

    A CFD sensitivity analysis is conducted for an aircraft at several conditions, including flow with substantial separation (buffet onset). The sensitivity is studied using two different Navier-Stokes computer codes, three different turbulence models, and two different grid treatments of the wing trailing edge. This effort is a follow-on to an earlier study of CFD variation over a different aircraft in buffet onset conditions. Similar to the earlier study, the turbulence model is found to have the largest effect, with a variation of 3.8% in lift at the buffet onset angle of attack. Drag and moment variations are 2.9% and 23.6%, respectively. The variations due to code and trailing edge cap grid are smaller than those due to the turbulence model. Overall, the combined approximate error band in CFD due to code, turbulence model, and trailing edge treatment at the buffet onset angle of attack is: 4% in lift, 3% in drag, and 31% in moment. The CFD results show similar trends to flight test data, but also exhibit a lift curve break not seen in the data.

  14. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
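    For readers unfamiliar with the kind of problem LSENS solves, the minimal sketch below integrates a single first-order reaction A -> B in a static system and estimates the sensitivity of the A concentration to the rate coefficient by finite differences. The mechanism, rate constant, and initial condition are illustrative only, and LSENS computes such coefficients from the governing ODEs rather than by re-running the integration.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    """Single irreversible first-order reaction A -> B (toy mechanism)."""
    a, b = y
    return [-k * a, k * a]

def a_at_time(k, t_end=2.0, a0=1.0):
    sol = solve_ivp(rhs, (0.0, t_end), [a0, 0.0], args=(k,), rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

k0 = 0.75                      # assumed rate coefficient, 1/s
dk = 1e-5 * k0
# Normalized sensitivity (k/A) dA/dk by central differences; analytic value is -k*t.
sens = k0 * (a_at_time(k0 + dk) - a_at_time(k0 - dk)) / (2 * dk * a_at_time(k0))
print(f"(k/A) dA/dk at t=2 s: {sens:.4f}  (analytic: {-k0 * 2.0:.4f})")
```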

  15. Analysis of JT-60SA operational scenarios

    NASA Astrophysics Data System (ADS)

    Garzotti, L.; Barbato, E.; Garcia, J.; Hayashi, N.; Voitsekhovitch, I.; Giruzzi, G.; Maget, P.; Romanelli, M.; Saarelma, S.; Stankiewitz, R.; Yoshida, M.; Zagórski, R.

    2018-02-01

    Reference scenarios for the JT-60SA tokamak have been simulated with one-dimensional transport codes to assess the stationary state of the flat-top phase and provide a profile database for further physics studies (e.g. MHD stability, gyrokinetic analysis) and diagnostics design. The types of scenario considered vary from pulsed standard H-mode to advanced non-inductive steady-state plasmas. In this paper we present the results obtained with the ASTRA, CRONOS, JINTRAC and TOPICS codes equipped with the Bohm/gyro-Bohm, CDBM and GLF23 transport models. The scenarios analysed here are: a standard ELMy H-mode, a hybrid scenario and a non-inductive steady state plasma, with operational parameters from the JT-60SA research plan. Several simulations of the scenarios under consideration have been performed with the above mentioned codes and transport models. The results from the different codes are in broad agreement and the main plasma parameters generally agree well with the zero dimensional estimates reported previously. The sensitivity of the results to different transport models and, in some cases, to the ELM/pedestal model has been investigated.

  16. Utilization of an agility assessment module in analysis and optimization of preliminary fighter configuration

    NASA Technical Reports Server (NTRS)

    Ngan, Angelen; Biezad, Daniel

    1996-01-01

    A study has been conducted to develop and to analyze a FORTRAN computer code for performing agility analysis on fighter aircraft configurations. This program is one of the modules of the NASA Ames ACSYNT (AirCraft SYNThesis) design code. The background of the agility research in the aircraft industry and a survey of a few agility metrics are discussed. The methodology, techniques, and models developed for the code are presented. The validity of the existing code was evaluated by comparison with existing flight test data. A FORTRAN program was developed for a specific metric, PM (Pointing Margin), as part of the agility module. Example trade studies using the agility module along with ACSYNT were conducted using a McDonnell Douglas F/A-18 Hornet aircraft model. The sensitivity of the agility criteria to thrust loading, wing loading, and thrust vectoring was investigated. The module can compare the agility potential between different configurations and has the capability to optimize agility performance in the preliminary design process. This research provides a new and useful design tool for analyzing fighter performance during air combat engagements in preliminary design.

  17. Structural reliability methods: Code development status

    NASA Astrophysics Data System (ADS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
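    The fast probability integration step can be pictured with a simple limit-state example: given a response g = R - S with random resistance R and load S, estimate P(g < 0). The Monte Carlo sketch below is only a stand-in for the NESSUS/FPI algorithms, and the distributions and parameter values are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Assumed distributions for a resistance - load limit state g = R - S.
R = rng.lognormal(mean=np.log(500.0), sigma=0.08, size=n)   # resistance
S = rng.normal(loc=350.0, scale=40.0, size=n)               # load effect

pf = np.mean(R - S < 0.0)                # probability of failure
se = np.sqrt(pf * (1 - pf) / n)          # sampling standard error
print(f"P(failure) ~ {pf:.5f} +/- {se:.5f}")
```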

  18. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  19. An Initial Study of the Sensitivity of Aircraft Vortex Spacing System (AVOSS) Spacing Sensitivity to Weather and Configuration Input Parameters

    NASA Technical Reports Server (NTRS)

    Riddick, Stephen E.; Hinton, David A.

    2000-01-01

    A study has been performed on a computer code modeling an aircraft wake vortex spacing system during final approach. This code represents an initial engineering model of a system to calculate reduced approach separation criteria needed to increase airport productivity. This report evaluates model sensitivity toward various weather conditions (crosswind, crosswind variance, turbulent kinetic energy, and thermal gradient), code configurations (approach corridor option, and wake demise definition), and post-processing techniques (rounding of provided spacing values, and controller time variance).

  20. SENSITIVITY OF BLIND PULSAR SEARCHES WITH THE FERMI LARGE AREA TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dormody, M.; Johnson, R. P.; Atwood, W. B.

    2011-12-01

    We quantitatively establish the sensitivity to the detection of young to middle-aged, isolated, gamma-ray pulsars through blind searches of Fermi Large Area Telescope (LAT) data using a Monte Carlo simulation. We detail a sensitivity study of the time-differencing blind search code used to discover gamma-ray pulsars in the first year of observations. We simulate 10,000 pulsars across a broad parameter space and distribute them across the sky. We replicate the analysis in the Fermi LAT First Source Catalog to localize the sources, and the blind search analysis to find the pulsars. We analyze the results and discuss the effect of positional error and spin frequency on gamma-ray pulsar detections. Finally, we construct a formula to determine the sensitivity of the blind search and present a sensitivity map assuming a standard set of pulsar parameters. The results of this study can be applied to population studies and are useful in characterizing unidentified LAT sources.

  1. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  2. Development of an Automatic Differentiation Version of the FPX Rotor Code

    NASA Technical Reports Server (NTRS)

    Hu, Hong

    1996-01-01

    The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. The automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference-generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
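    The contrast between automatic differentiation and divided differences can be seen with a few lines of forward-mode AD using dual numbers. This is a generic illustration, not ADIFOR2.0 or the FPX code, and the test function is a stand-in for a real analysis output.

```python
import math

class Dual:
    """Minimal forward-mode dual number: value + derivative seed."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dual_sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

def f(x):
    """Toy 'analysis code' output; stands in for a CFD response."""
    return 3.0 * x * x + dual_sin(x) if isinstance(x, Dual) else 3.0 * x * x + math.sin(x)

x0 = 1.3
ad = f(Dual(x0, 1.0)).der                          # derivative via forward-mode AD
h = 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)             # divided-difference estimate
print(f"AD: {ad:.10f}  central difference: {fd:.10f}  exact: {6*x0 + math.cos(x0):.10f}")
```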

  3. Design enhancement tools in MSC/NASTRAN

    NASA Technical Reports Server (NTRS)

    Wallerstein, D. V.

    1984-01-01

    Design sensitivity is the calculation of derivatives of constraint functions with respect to design variables. While a knowledge of these derivatives is useful in its own right, the derivatives are required in many efficient optimization methods. Constraint derivatives are also required in some reanalysis methods. It is shown where the sensitivity coefficients fit into the scheme of a basic organization of an optimization procedure. The analyzer is taken to be MSC/NASTRAN. The terminator program monitors the termination criteria and ends the optimization procedure when the criteria are satisfied. This program can reside in several places: in the optimizer itself, in user-written code, or as part of MSC/EOS (Engineering Operating System), currently under development. Since several excellent optimization codes exist and require very specialized technical knowledge, the optimizer under the new MSC/EOS is assumed to be selected and supplied by the user to meet his specific needs and preferences. The one exception to this is a fully stressed design (FSD) based on simple scaling. The gradients are currently supplied by various design sensitivity options now existing in MSC/NASTRAN's design sensitivity analysis (DSA).

  4. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data

    PubMed Central

    2011-01-01

    Background Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Methods Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and Kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. Results For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Conclusions Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system. PMID:21849089
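    The agreement statistics reported here can be reproduced from a 2x2 table of chart review (reference standard) versus administrative coding. The sketch below computes sensitivity, positive predictive value, and Cohen's kappa for one condition; the counts are invented for illustration and are not taken from the study.

```python
def coding_accuracy(tp, fp, fn, tn):
    """Sensitivity, PPV and Cohen's kappa for one co-morbidity (chart review = reference)."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    ppv = tp / (tp + fp)
    p_obs = (tp + tn) / n
    # Expected agreement by chance, from the marginal totals.
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, ppv, kappa

# Hypothetical counts for one condition in one discharge year.
sens, ppv, kappa = coding_accuracy(tp=42, fp=11, fn=58, tn=889)
print(f"sensitivity={sens:.1%}  PPV={ppv:.1%}  kappa={kappa:.2f}")
```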

  5. Improved accuracy of co-morbidity coding over time after the introduction of ICD-10 administrative data.

    PubMed

    Januel, Jean-Marie; Luthi, Jean-Christophe; Quan, Hude; Borst, François; Taffé, Patrick; Ghali, William A; Burnand, Bernard

    2011-08-18

    Co-morbidity information derived from administrative data needs to be validated to allow its regular use. We assessed evolution in the accuracy of coding for Charlson and Elixhauser co-morbidities at three time points over a 5-year period, following the introduction of the International Classification of Diseases, 10th Revision (ICD-10), coding of hospital discharges. Cross-sectional time trend evaluation study of coding accuracy using hospital chart data of 3499 randomly selected patients who were discharged in 1999, 2001 and 2003, from two teaching and one non-teaching hospital in Switzerland. We measured sensitivity, positive predictive values and Kappa values for agreement between administrative data coded with ICD-10 and chart data as the 'reference standard' for recording 36 co-morbidities. For the 17 Charlson co-morbidities, the sensitivity - median (min-max) - was 36.5% (17.4-64.1) in 1999, 42.5% (22.2-64.6) in 2001 and 42.8% (8.4-75.6) in 2003. For the 29 Elixhauser co-morbidities, the sensitivity was 34.2% (1.9-64.1) in 1999, 38.6% (10.5-66.5) in 2001 and 41.6% (5.1-76.5) in 2003. Between 1999 and 2003, sensitivity estimates increased for 30 co-morbidities and decreased for 6 co-morbidities. The increase in sensitivities was statistically significant for six conditions and the decrease significant for one. Kappa values increased for 29 co-morbidities and decreased for seven. Accuracy of administrative data in recording clinical conditions improved slightly between 1999 and 2003. These findings are of relevance to all jurisdictions introducing new coding systems, because they demonstrate a phenomenon of improved administrative data accuracy that may relate to a coding 'learning curve' with the new coding system.

  6. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
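    The correlation part of such a study is straightforward to reproduce: given a matrix of sampled inputs and a vector of responses, Pearson and Spearman coefficients rank the inputs by influence. The sketch below does this for a synthetic test function; the input names, response model, and sample size are invented, and Dakota's Sobol' indices (which require a dedicated sampling design) are not reproduced here.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n = 300
names = ["fuel_conductivity", "gap_thickness", "linear_power"]   # hypothetical inputs

X = rng.uniform(size=(n, 3))
# Synthetic 'fuel centerline temperature' response with unequal input influence.
y = 900 + 120 * X[:, 0] + 40 * X[:, 1] ** 2 + 300 * X[:, 2] + rng.normal(0, 5, n)

for j, name in enumerate(names):
    r_p, _ = pearsonr(X[:, j], y)     # linear correlation
    r_s, _ = spearmanr(X[:, j], y)    # rank (monotonic) correlation
    print(f"{name:20s} Pearson={r_p:+.2f}  Spearman={r_s:+.2f}")
```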

  7. Sensitivity Analysis of Cf-252 (sf) Neutron and Gamma Observables in CGMF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, Austin Lewis; Talou, Patrick; Stetcu, Ionel

    CGMF is a Monte Carlo code that simulates the decay of primary fission fragments by emission of neutrons and gamma rays, according to the Hauser-Feshbach equations. As the CGMF code was recently integrated into the MCNP6.2 transport code, great emphasis has been placed on providing optimal parameters to CGMF such that many different observables are accurately represented. Of these observables, the prompt neutron spectrum, prompt neutron multiplicity, prompt gamma spectrum, and prompt gamma multiplicity are crucial for accurate transport simulations of criticality and nonproliferation applications. This contribution to the ongoing efforts to improve CGMF presents a study of the sensitivity of various neutron and gamma observables to several input parameters for Californium-252 spontaneous fission. Among the most influential parameters are those that affect the input yield distributions in fragment mass and total kinetic energy (TKE). A new scheme for representing Y(A,TKE) was implemented in CGMF using three fission modes, S1, S2 and SL. The sensitivity profiles were calculated for 17 total parameters, which show that the neutron multiplicity distribution is strongly affected by the TKE distribution of the fragments. The total excitation energy (TXE) of the fragments is shared according to a parameter RT, which is defined as the ratio of the light to heavy initial temperatures. The sensitivity profile of the neutron multiplicity shows a second order effect of RT on the mean neutron multiplicity. A final sensitivity profile was produced for the parameter alpha, which affects the spin of the fragments. Higher values of alpha lead to higher fragment spins, which inhibit the emission of neutrons. Understanding the sensitivity of the prompt neutron and gamma observables to the many CGMF input parameters provides a platform for the optimization of these parameters.

  8. Assessment of algorithms to identify patients with thrombophilia following venous thromboembolism.

    PubMed

    Delate, Thomas; Hsiao, Wendy; Kim, Benjamin; Witt, Daniel M; Meyer, Melissa R; Go, Alan S; Fang, Margaret C

    2016-01-01

    Routine testing for thrombophilia following venous thromboembolism (VTE) is controversial. The use of large datasets to study the clinical impact of thrombophilia testing on patterns of care and patient outcomes may enable more efficient analysis of this practice in a wide range of settings. We set out to examine how accurately algorithms using International Classification of Diseases 9th Revision (ICD-9) codes and/or pharmacy data reflect laboratory-confirmed thrombophilia diagnoses. A random sample of adult Kaiser Permanente Colorado patients diagnosed with unprovoked VTE between 1/2004 and 12/2010 underwent medical record abstraction of thrombophilia test results. Algorithms using "ICD-9" (positive if a thrombophilia ICD-9 code was present), "Extended anticoagulation (AC)" (positive if AC therapy duration was >6 months), and "ICD-9 & Extended AC" (positive for both) criteria to identify possible thrombophilia cases were tested. Using positive thrombophilia laboratory results as the gold standard, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value of each algorithm were calculated, along with 95% confidence intervals (CIs). In our cohort of 636 patients, sensitivities were low (<50%) for each algorithm. "ICD-9" yielded the highest PPV (41.5%, 95% CI 26.3-57.9%) and a high specificity (95.9%, 95% CI 94.0-97.4%). "Extended AC" had the highest sensitivity but lowest specificity, and "ICD-9 & Extended AC" had the highest specificity but lowest sensitivity. ICD-9 codes for thrombophilia are highly specific for laboratory-confirmed cases, but all algorithms had low sensitivities. Further development of methods to identify thrombophilia patients in large datasets is warranted. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. System statistical reliability model and analysis

    NASA Technical Reports Server (NTRS)

    Lekach, V. S.; Rood, H.

    1973-01-01

    A digital computer code was developed to simulate the time-dependent behavior of the 5-kWe reactor thermoelectric system. The code was used to determine lifetime sensitivity coefficients for a number of system design parameters, such as thermoelectric module efficiency and degradation rate, radiator absorptivity and emissivity, fuel element barrier defect constant, beginning-of-life reactivity, etc. A probability distribution (mean and standard deviation) was estimated for each of these design parameters. Then, error analysis was used to obtain a probability distribution for the system lifetime (mean = 7.7 years, standard deviation = 1.1 years). From this, the probability that the system will achieve the design-goal lifetime of 5 years is 0.993. This value represents an estimate of the degradation reliability of the system.
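    The reported reliability figure can be checked directly from the propagated mean and standard deviation: treating lifetime as normally distributed with mean 7.7 years and standard deviation 1.1 years (an assumption implied, not stated, by the error analysis), P(lifetime > 5 years) is the upper-tail probability at that threshold.

```python
from scipy.stats import norm

mean, sd, goal = 7.7, 1.1, 5.0                    # years, from the abstract
p_meet_goal = norm.sf(goal, loc=mean, scale=sd)   # upper-tail probability
print(f"P(lifetime > {goal} yr) = {p_meet_goal:.3f}")   # ~0.993
```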

  10. Probabilistic structural analysis of a truss typical for space station

    NASA Technical Reports Server (NTRS)

    Pai, Shantaram S.

    1990-01-01

    A three-bay space cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the response uncertainties and the respective sensitivities associated with uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions such as the Weibull, exponential, normal, log-normal, etc. The cumulative distribution functions (CDFs) for the response functions considered and the sensitivities associated with the primitive variables for a given response are investigated. These sensitivities help in determining the dominating primitive variables for that response.

  11. Performance and limitations of administrative data in the identification of AKI.

    PubMed

    Grams, Morgan E; Waikar, Sushrut S; MacMahon, Blaithin; Whelton, Seamus; Ballew, Shoshana H; Coresh, Josef

    2014-04-01

    Billing codes are frequently used to identify AKI events in epidemiologic research. The goals of this study were to validate billing code-identified AKI against the current AKI consensus definition and to ascertain whether sensitivity and specificity vary by patient characteristic or over time. The study population included 10,056 Atherosclerosis Risk in Communities study participants hospitalized between 1996 and 2008. Billing code-identified AKI was compared with the 2012 Kidney Disease Improving Global Outcomes (KDIGO) creatinine-based criteria (AKIcr) and an approximation of the 2012 KDIGO creatinine- and urine output-based criteria (AKIcr_uop) in a subset with available outpatient data. Sensitivity and specificity of billing code-identified AKI were evaluated over time and according to patient age, race, sex, diabetes status, and CKD status in 546 charts selected for review, with estimates adjusted for sampling technique. A total of 34,179 hospitalizations were identified; 1353 had a billing code for AKI. The sensitivity of billing code-identified AKI was 17.2% (95% confidence interval [95% CI], 13.2% to 21.2%) compared with AKIcr (n=1970 hospitalizations) and 11.7% (95% CI, 8.8% to 14.5%) compared with AKIcr_uop (n=1839 hospitalizations). Specificity was >98% in both cases. Sensitivity was significantly higher in the more recent time period (2002-2008) and among participants aged 65 years and older. Billing code-identified AKI captured a more severe spectrum of disease than did AKIcr and AKIcr_uop, with a larger proportion of patients with stage 3 AKI (34.9%, 19.7%, and 11.5%, respectively) and higher in-hospital mortality (41.2%, 18.7%, and 12.8%, respectively). The use of billing codes to identify AKI has low sensitivity compared with the current KDIGO consensus definition, especially when the urine output criterion is included, and results in the identification of a more severe phenotype. Epidemiologic studies using billing codes may benefit from a high specificity, but the variation in sensitivity may result in bias, particularly when trends over time are the outcome of interest.

  12. Analysis Of FEL Optical Systems With Grazing Incidence Mirrors

    NASA Astrophysics Data System (ADS)

    Knapp, C. E.; Viswanathan, V. K.; Bender, S. C.; Appert, Q. D.; Lawrence, G.; Barnard, C.

    1986-11-01

    The use of grazing incidence optics in resonators alleviates the problem of damage to the optical elements and permits higher powers in cavities of reasonable dimensions for a free electron laser (FEL). The design and manufacture of a grazing incidence beam expander for the Los Alamos FEL mock up has been completed. In this paper, we describe the analysis of a bare cavity, grazing incidence optical beam expander for an FEL system. Since the existing geometrical and physical optics codes were inadequate for such an analysis, the GLAD code was modified to include global coordinates, exact conic representation, raytracing, and exact aberration features to determine the alignment sensitivities of laser resonators. A resonator cavity has been manufactured and experimentally setup in the Optical Evaluation Laboratory at Los Alamos. Calculated performance is compared with the laboratory measurements obtained so far.

  13. Rotordynamics on the PC: Further Capabilities of ARDS

    NASA Technical Reports Server (NTRS)

    Fleming, David P.

    1997-01-01

    Rotordynamics codes for personal computers are now becoming available. One of the most capable codes is Analysis of RotorDynamic Systems (ARDS) which uses the component mode synthesis method to analyze a system of up to 5 rotating shafts. ARDS was originally written for a mainframe computer but has been successfully ported to a PC; its basic capabilities for steady-state and transient analysis were reported in an earlier paper. Additional functions have now been added to the PC version of ARDS. These functions include: 1) Estimation of the peak response following blade loss without resorting to a full transient analysis; 2) Calculation of response sensitivity to input parameters; 3) Formulation of optimum rotor and damper designs to place critical speeds in desirable ranges or minimize bearing loads; 4) Production of Poincaré plots so the presence of chaotic motion can be ascertained. ARDS produces printed and plotted output. The executable code uses the full array sizes of the mainframe version and fits on a high density floppy disc. Examples of all program capabilities are presented and discussed.

  14. On Parametric Sensitivity of Reynolds-Averaged Navier-Stokes SST Turbulence Model: 2D Hypersonic Shock-Wave Boundary Layer Interactions

    NASA Technical Reports Server (NTRS)

    Brown, James L.

    2014-01-01

    Examined is sensitivity of separation extent, wall pressure and heating to variation of primary input flow parameters, such as Mach and Reynolds numbers and shock strength, for 2D and Axisymmetric Hypersonic Shock Wave Turbulent Boundary Layer interactions obtained by Navier-Stokes methods using the SST turbulence model. Baseline parametric sensitivity response is provided in part by comparison with vetted experiments, and in part through updated correlations based on free interaction theory concepts. A recent database compilation of hypersonic 2D shock-wave/turbulent boundary layer experiments extensively used in a prior related uncertainty analysis provides the foundation for this updated correlation approach, as well as for more conventional validation. The primary CFD method for this work is DPLR, one of NASA's real-gas aerothermodynamic production RANS codes. Comparisons are also made with CFL3D, one of NASA's mature perfect-gas RANS codes. Deficiencies in predicted separation response of RANS/SST solutions to parametric variations of test conditions are summarized, along with recommendations as to future turbulence approach.

  15. A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.

    2004-01-01

    The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.

  16. Mixing-model Sensitivity to Initial Conditions in Hydrodynamic Predictions

    NASA Astrophysics Data System (ADS)

    Bigelow, Josiah; Silva, Humberto; Truman, C. Randall; Vorobieff, Peter

    2017-11-01

    Amagat and Dalton mixing-models were studied to compare their thermodynamic prediction of shock states. Numerical simulations with the Sandia National Laboratories shock hydrodynamic code CTH modeled University of New Mexico (UNM) shock tube laboratory experiments shocking a 1:1 molar mixture of helium (He) and sulfur hexafluoride (SF6). Five input parameters were varied for sensitivity analysis: driver section pressure, driver section density, test section pressure, test section density, and mixture ratio (mole fraction). We show via incremental Latin hypercube sampling (LHS) analysis that significant differences exist between Amagat and Dalton mixing-model predictions. The differences observed in predicted shock speeds, temperatures, and pressures grow more pronounced with higher shock speeds. Supported by NNSA Grant DE-0002913.
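    Latin hypercube sampling of the five varied inputs can be sketched with scipy's quasi-Monte Carlo module. The parameter bounds below are placeholders, not the UNM shock-tube values, and the sketch stops at the sampling step (the CTH runs and mixing-model comparison are not reproduced).

```python
import numpy as np
from scipy.stats import qmc

# Placeholder bounds for the five varied inputs (driver pressure/density,
# test-section pressure/density, He mole fraction); not the actual UNM values.
names = ["p_driver", "rho_driver", "p_test", "rho_test", "x_He"]
lower = np.array([1.0e6, 1.0, 5.0e4, 0.5, 0.45])
upper = np.array([2.0e6, 2.0, 1.5e5, 1.5, 0.55])

sampler = qmc.LatinHypercube(d=5, seed=1)
unit = sampler.random(n=64)                  # 64 LHS points in the unit hypercube
samples = qmc.scale(unit, lower, upper)      # map to physical bounds

for name, col in zip(names, samples.T):
    print(f"{name:10s} min={col.min():.3g} max={col.max():.3g}")
```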

  17. Code manual for CONTAIN 2.0: A computer code for nuclear reactor containment analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murata, K.K.; Williams, D.C.; Griffith, R.O.

    1997-12-01

    The CONTAIN 2.0 computer code is an integrated analysis tool used for predicting the physical conditions, chemical compositions, and distributions of radiological materials inside a containment building following the release of material from the primary system in a light-water reactor accident. It can also predict the source term to the environment. CONTAIN 2.0 is intended to replace the earlier CONTAIN 1.12, which was released in 1991. The purpose of this Code Manual is to provide full documentation of the features and models in CONTAIN 2.0. Besides complete descriptions of the models, this Code Manual provides a complete description of the input and output from the code. CONTAIN 2.0 is a highly flexible and modular code that can run problems that are either quite simple or highly complex. An important aspect of CONTAIN is that the interactions among thermal-hydraulic phenomena, aerosol behavior, and fission product behavior are taken into account. The code includes atmospheric models for steam/air thermodynamics, intercell flows, condensation/evaporation on structures and aerosols, aerosol behavior, and gas combustion. It also includes models for reactor cavity phenomena such as core-concrete interactions and coolant pool boiling. Heat conduction in structures, fission product decay and transport, radioactive decay heating, and the thermal-hydraulic and fission product decontamination effects of engineered safety features are also modeled. To the extent possible, the best available models for severe accident phenomena have been incorporated into CONTAIN, but it is intrinsic to the nature of accident analysis that significant uncertainty exists regarding numerous phenomena. In those cases, sensitivity studies can be performed with CONTAIN by means of user-specified input parameters. Thus, the code can be viewed as a tool designed to assist the knowledgeable reactor safety analyst in evaluating the consequences of specific modeling assumptions.

  18. Shell stability analysis in a computer aided engineering (CAE) environment

    NASA Technical Reports Server (NTRS)

    Arbocz, J.; Hol, J. M. A. M.

    1993-01-01

    The development of 'DISDECO', the Delft Interactive Shell DEsign COde is described. The purpose of this project is to make the accumulated theoretical, numerical and practical knowledge of the last 25 years or so readily accessible to users interested in the analysis of buckling sensitive structures. With this open ended, hierarchical, interactive computer code the user can access from his workstation successively programs of increasing complexity. The computational modules currently operational in DISDECO provide the prospective user with facilities to calculate the critical buckling loads of stiffened anisotropic shells under combined loading, to investigate the effects the various types of boundary conditions will have on the critical load, and to get a complete picture of the degrading effects the different shapes of possible initial imperfections might cause, all in one interactive session. Once a design is finalized, its collapse load can be verified by running a large refined model remotely from behind the workstation with one of the current generation 2-dimensional codes, with advanced capabilities to handle both geometric and material nonlinearities.

  19. Under-coding of secondary conditions in coded hospital health data: Impact of co-existing conditions, death status and number of codes in a record.

    PubMed

    Peng, Mingkai; Southern, Danielle A; Williamson, Tyler; Quan, Hude

    2017-12-01

    This study examined the coding validity of hypertension, diabetes, obesity and depression in relation to the presence of their co-existing conditions, death status and the number of diagnosis codes in a hospital discharge abstract database. We randomly selected 4007 discharge abstract database records from four teaching hospitals in Alberta, Canada and reviewed their charts to extract 31 conditions listed in the Charlson and Elixhauser comorbidity indices. Conditions associated with the four study conditions were identified through multivariable logistic regression. Coding validity (i.e. sensitivity, positive predictive value) of the four conditions was related to the presence of their associated conditions. Sensitivity increased with an increasing number of diagnosis codes. The impact of death status on coding validity was minimal. Coding validity of conditions is closely related to their clinical importance and the complexity of patients' case mix. We recommend mandatory coding of certain secondary diagnoses to meet the needs of health research based on administrative health data.

  20. CASL L1 Milestone report : CASL.P4.01, sensitivity and uncertainty analysis for CIPS with VIPRE-W and BOA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sung, Yixing; Adams, Brian M.; Secker, Jeffrey R.

    2011-12-01

    The CASL Level 1 Milestone CASL.P4.01, successfully completed in December 2011, aimed to 'conduct, using methodologies integrated into VERA, a detailed sensitivity analysis and uncertainty quantification of a crud-relevant problem with baseline VERA capabilities (ANC/VIPRE-W/BOA).' The VUQ focus area led this effort, in partnership with AMA, and with support from VRI. DAKOTA was coupled to existing VIPRE-W thermal-hydraulics and BOA crud/boron deposit simulations representing a pressurized water reactor (PWR) that previously experienced crud-induced power shift (CIPS). This work supports understanding of CIPS by exploring the sensitivity and uncertainty in BOA outputs with respect to uncertain operating and model parameters. This report summarizes work coupling the software tools, characterizing uncertainties, and analyzing the results of iterative sensitivity and uncertainty studies. These studies focused on sensitivity and uncertainty of CIPS indicators calculated by the current version of the BOA code used in the industry. Challenges with this kind of analysis are identified to inform follow-on research goals and VERA development targeting crud-related challenge problems.

  1. Modal Test/Analysis Correlation of Space Station Structures Using Nonlinear Sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlation. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.
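    The eigenvalue part of that sensitivity has a compact first-order form: for an M-normalized mode phi_i, dlambda_i/dp = phi_i^T (dK/dp - lambda_i dM/dp) phi_i. The numpy sketch below checks this against a direct perturbation for a small spring-mass model; the matrices and the design parameter are invented, and the paper's iterative update of eigenvector derivatives is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def system(k_joint):
    """2-DOF spring-mass chain; k_joint is the design parameter of interest."""
    K = np.array([[2.0e4 + k_joint, -k_joint],
                  [-k_joint,          k_joint + 1.0e4]])
    M = np.diag([1.5, 2.0])
    return K, M

p0 = 5.0e3
K, M = system(p0)
lam, phi = eigh(K, M)                 # generalized eigenproblem, M-normalized modes
dK_dp = np.array([[1.0, -1.0], [-1.0, 1.0]])   # exact derivative of K w.r.t. k_joint
dM_dp = np.zeros((2, 2))

# First-order analytic sensitivities vs. a central-difference check.
for i in range(2):
    analytic = phi[:, i] @ (dK_dp - lam[i] * dM_dp) @ phi[:, i]
    h = 1.0
    lp = eigh(*system(p0 + h), eigvals_only=True)[i]
    lm = eigh(*system(p0 - h), eigvals_only=True)[i]
    print(f"mode {i+1}: dlambda/dp analytic={analytic:.4f}  finite diff={(lp-lm)/(2*h):.4f}")
```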

  2. Modal test/analysis correlation of Space Station structures using nonlinear sensitivity

    NASA Technical Reports Server (NTRS)

    Gupta, Viney K.; Newell, James F.; Berke, Laszlo; Armand, Sasan

    1992-01-01

    The modal correlation problem is formulated as a constrained optimization problem for validation of finite element models (FEM's). For large-scale structural applications, a pragmatic procedure for substructuring, model verification, and system integration is described to achieve effective modal correlations. The space station substructure FEM's are reduced using Lanczos vectors and integrated into a system FEM using Craig-Bampton component modal synthesis. The optimization code is interfaced with MSC/NASTRAN to solve the problem of modal test/analysis correlation; that is, the problem of validating FEM's for launch and on-orbit coupled loads analysis against experimentally observed frequencies and mode shapes. An iterative perturbation algorithm is derived and implemented to update nonlinear sensitivity (derivatives of eigenvalues and eigenvectors) during optimizer iterations, which reduced the number of finite element analyses.

  3. The application of coded excitation technology in medical ultrasonic Doppler imaging

    NASA Astrophysics Data System (ADS)

    Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin

    2008-03-01

    Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. Applying coded excitation technology in a medical ultrasonic Doppler imaging system offers higher SNR and deeper penetration depth than a conventional pulse-echo imaging system; it also improves image quality and enhances the sensitivity to weak signals, and a properly chosen coded excitation benefits the received Doppler signal spectrum. This paper first reviews the application of coded excitation technology in medical ultrasonic Doppler imaging, showing its advantages and promise, and then introduces the principle and theory of coded excitation. Second, we compare several coded sequences (including chirp and fake-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio and Doppler signal sensitivity, we choose Barker codes as the coded sequence. Finally, we design the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantages of applying coded excitation technology in a digital medical ultrasonic Doppler endoscope imaging system.
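    The appeal of Barker codes for this application is their low autocorrelation sidelobes after pulse compression. The minimal check below demonstrates that property for the length-13 Barker code; the matched-filter framing is a simplified stand-in for the actual transmit/receive chain described in the paper.

```python
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Matched filtering (pulse compression) is correlation of the code with itself.
compressed = np.correlate(barker13, barker13, mode="full")

mainlobe = compressed.max()                                           # = 13 (code length)
sidelobe = np.abs(np.delete(compressed, compressed.argmax())).max()   # = 1 for Barker codes
print(f"mainlobe={mainlobe:.0f}, peak sidelobe={sidelobe:.0f}, "
      f"peak-sidelobe level={20*np.log10(sidelobe/mainlobe):.1f} dB")
```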

  4. Use of the International Classification of Diseases, 9th revision, coding in identifying chronic hepatitis B virus infection in health system data: implications for national surveillance.

    PubMed

    Mahajan, Reena; Moorman, Anne C; Liu, Stephen J; Rupp, Loralee; Klevens, R Monina

    2013-05-01

    With the increasing use of electronic health records (EHR) in the USA, we examined the predictive values of the International Classification of Diseases, 9th revision (ICD-9) coding system for surveillance of chronic hepatitis B virus (HBV) infection. The chronic HBV cohort from the Chronic Hepatitis Cohort Study was created based on electronic health records (EHR) of adult patients who accessed services from 2006 to 2008 from four healthcare systems in the USA. Using the gold standard of abstractor review to confirm HBV cases, we calculated the sensitivity, specificity, and positive and negative predictive values of using one qualifying ICD-9 code versus using two qualifying ICD-9 codes separated by 6 months or greater. Of 1 652 055 adult patients, 2202 (0.1%) were confirmed as having chronic HBV. Use of one ICD-9 code had a sensitivity of 83.9%, a positive predictive value of 61.0%, and specificity and negative predictive values greater than 99%. Use of two hepatitis B-specific ICD-9 codes resulted in a sensitivity of 58.4% and a positive predictive value of 89.9%. Use of one or two hepatitis B ICD-9 codes can identify cases of chronic HBV infection with varying sensitivity and positive predictive values. As the USA increases the use of EHR, surveillance using ICD-9 codes may be reliable for determining the burden of chronic HBV infection and would be useful for improving reporting by state and local health departments.

  5. Differential DNA methylation profiles of coding and non-coding genes define hippocampal sclerosis in human temporal lobe epilepsy

    PubMed Central

    Miller-Delaney, Suzanne F.C.; Bryan, Kenneth; Das, Sudipto; McKiernan, Ross C.; Bray, Isabella M.; Reynolds, James P.; Gwinn, Ryder; Stallings, Raymond L.

    2015-01-01

    Temporal lobe epilepsy is associated with large-scale, wide-ranging changes in gene expression in the hippocampus. Epigenetic changes to DNA are attractive mechanisms to explain the sustained hyperexcitability of chronic epilepsy. Here, through methylation analysis of all annotated C-phosphate-G islands and promoter regions in the human genome, we report a pilot study of the methylation profiles of temporal lobe epilepsy with or without hippocampal sclerosis. Furthermore, by comparative analysis of expression and promoter methylation, we identify methylation sensitive non-coding RNA in human temporal lobe epilepsy. A total of 146 protein-coding genes exhibited altered DNA methylation in temporal lobe epilepsy hippocampus (n = 9) when compared to control (n = 5), with 81.5% of the promoters of these genes displaying hypermethylation. Unique methylation profiles were evident in temporal lobe epilepsy with or without hippocampal sclerosis, in addition to a common methylation profile regardless of pathology grade. Gene ontology terms associated with development, neuron remodelling and neuron maturation were over-represented in the methylation profile of Watson Grade 1 samples (mild hippocampal sclerosis). In addition to genes associated with neuronal, neurotransmitter/synaptic transmission and cell death functions, differential hypermethylation of genes associated with transcriptional regulation was evident in temporal lobe epilepsy, but overall few genes previously associated with epilepsy were among the differentially methylated. Finally, a panel of 13, methylation-sensitive microRNA were identified in temporal lobe epilepsy including MIR27A, miR-193a-5p (MIR193A) and miR-876-3p (MIR876), and the differential methylation of long non-coding RNA documented for the first time. The present study therefore reports select, genome-wide DNA methylation changes in human temporal lobe epilepsy that may contribute to the molecular architecture of the epileptic brain. PMID:25552301

  6. Advanced imaging techniques in brain tumors

    PubMed Central

    2009-01-01

    Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in both research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps to region-of-interest analysis and analysis of signal intensity curves, are being applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS and the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, along with the correction and normalization methods that can be applied. Combining and applying these different imaging techniques in a multi-parametric, algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287

  7. 75 FR 26668 - Flutriafol; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-12

    ... skin sensitizer when tested in guinea pigs. The pattern of toxicity attributed to flutriafol exposure... production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311...

  8. [Hospital costs of invasive pneumococcal pneumonia in adults in a general hospital in Chile].

    PubMed

    Alarcón, Álvaro; Lagos, Isabel; Fica, Alberto

    2016-08-01

    Pneumococcal infections are important for their morbidity and economic burden, but there are no economic data on adult patients in Chile. The aims were to estimate the direct medical costs of bacteremic pneumococcal pneumonia among adult patients hospitalized in a general hospital and to evaluate the sensitivity of ICD-10 discharge codes for capturing infections by this pathogen. Hospital charges were analyzed by component in a group of patients admitted for bacteremic pneumococcal pneumonia, with values corrected for inflation and converted from CLP to US$. Data were collected from 59 patients admitted during 2005-2010, with a mean age of 71.9 years. Average hospital charges for those managed in general wards reached 2,756 US$, 8,978 US$ for those managed in critical care units (CCU), and 6,025 US$ for the whole group. Charges were higher in the CCU (p < 0.001), and patients managed in these units generated 78.3% of the whole cost (n = 31; 52.5% of total). The median cost was 1,558 US$ in general wards and 3,993 US$ in the CCU. The main components were bed occupancy (37.8% of charges) and medications (27.4%). There were no differences associated with age, comorbidities, severity scores or mortality. No single ICD discharge code identified an S. pneumoniae bacteremic case (0% sensitivity) and only 2 cases were coded as pneumococcal pneumonia (3.4%). Mean hospital charges (~6,000 US dollars) and median values (~2,400 US dollars) were high, underscoring the economic impact of this condition. Costs were higher among patients managed in the CCU. Recognition of bacteremic pneumococcal infections by ICD-10 discharge codes has a very low sensitivity.

  9. Tutorial: Parallel Computing of Simulation Models for Risk Analysis.

    PubMed

    Reilly, Allison C; Staid, Andrea; Gao, Michael; Guikema, Seth D

    2016-10-01

    Simulation models are widely used in risk analysis to study the effects of uncertainties on outcomes of interest in complex problems. Often, these models are computationally complex and time consuming to run. This latter point may be at odds with time-sensitive evaluations or may limit the number of parameters that are considered. In this article, we give an introductory tutorial focused on parallelizing simulation code to better leverage modern computing hardware, enabling risk analysts to better utilize simulation-based methods for quantifying uncertainty in practice. This article is aimed primarily at risk analysts who use simulation methods but do not yet utilize parallelization to decrease the computational burden of these models. The discussion is focused on conceptual aspects of embarrassingly parallel computer code and software considerations. Two complementary examples are shown using the languages MATLAB and R. A brief discussion of hardware considerations is located in the Appendix. © 2016 Society for Risk Analysis.
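    The article's worked examples are in MATLAB and R; an equivalent embarrassingly parallel pattern in Python (standard library multiprocessing plus numpy, with the risk metric and parameter values invented for illustration) looks like the sketch below. Each replicate is independent, so the work maps cleanly onto a process pool.

```python
import numpy as np
from multiprocessing import Pool

def one_replicate(seed):
    """One independent simulation replicate; returns a scalar risk metric."""
    rng = np.random.default_rng(seed)
    demand = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)   # assumed demand model
    capacity = 35.0                                            # assumed capacity threshold
    return float(np.mean(demand > capacity))                   # exceedance probability

if __name__ == "__main__":
    seeds = range(200)                      # 200 independent replicates
    with Pool(processes=4) as pool:
        results = pool.map(one_replicate, seeds)
    print(f"mean risk = {np.mean(results):.4f}, 95% CI half-width ~ "
          f"{1.96 * np.std(results, ddof=1) / np.sqrt(len(results)):.4f}")
```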

  10. Improved Correction of Misclassification Bias With Bootstrap Imputation.

    PubMed

    van Walraven, Carl

    2018-07-01

    Diagnostic codes used in administrative database research can create bias due to misclassification. Quantitative bias analysis (QBA) can correct for this bias, requires only code sensitivity and specificity, but may return invalid results. Bootstrap imputation (BI) can also address misclassification bias but traditionally requires multivariate models to accurately estimate disease probability. This study compared misclassification bias correction using QBA and BI. Serum creatinine measures were used to determine severe renal failure status in 100,000 hospitalized patients. Prevalence of severe renal failure in 86 patient strata and its association with 43 covariates was determined and compared with results in which renal failure status was determined using diagnostic codes (sensitivity 71.3%, specificity 96.2%). Differences in results (misclassification bias) were then corrected with QBA or BI (using progressively more complex methods to estimate disease probability). In total, 7.4% of patients had severe renal failure. Imputing disease status with diagnostic codes exaggerated prevalence estimates [median relative change (range), 16.6% (0.8%-74.5%)] and its association with covariates [median (range) exponentiated absolute parameter estimate difference, 1.16 (1.01-2.04)]. QBA produced invalid results 9.3% of the time and increased bias in estimates of both disease prevalence and covariate associations. BI decreased misclassification bias with increasingly accurate disease probability estimates. QBA can produce invalid results and increase misclassification bias. BI avoids invalid results and can importantly decrease misclassification bias when accurate disease probability estimates are used.
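    The prevalence half of a simple QBA correction is the standard Rogan-Gladen adjustment, which uses only the code's sensitivity and specificity. The sketch below applies it with the sensitivity and specificity quoted in the abstract and invented observed prevalences, and shows how the correction can return an invalid (out-of-range) estimate when the observed prevalence falls below the false-positive rate; the full bootstrap-imputation comparison in the study is not reproduced.

```python
def rogan_gladen(p_observed, sensitivity, specificity):
    """Misclassification-corrected prevalence; can fall outside [0, 1] (invalid QBA result)."""
    return (p_observed + specificity - 1.0) / (sensitivity + specificity - 1.0)

sens, spec = 0.713, 0.962          # diagnostic-code accuracy quoted in the abstract

for p_obs in (0.090, 0.030):       # invented observed (coded) prevalences
    p_true = rogan_gladen(p_obs, sens, spec)
    flag = "" if 0.0 <= p_true <= 1.0 else "  <-- invalid, outside [0, 1]"
    print(f"observed {p_obs:.3f} -> corrected {p_true:.3f}{flag}")
```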

  11. The Cloud Feedback Model Intercomparison Project (CFMIP) Diagnostic Codes Catalogue – metrics, diagnostics and methodologies to evaluate, understand and improve the representation of clouds and cloud feedbacks in climate models

    DOE PAGES

    Tsushima, Yoko; Brient, Florent; Klein, Stephen A.; ...

    2017-11-27

    The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.

  12. The Cloud Feedback Model Intercomparison Project (CFMIP) Diagnostic Codes Catalogue – metrics, diagnostics and methodologies to evaluate, understand and improve the representation of clouds and cloud feedbacks in climate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsushima, Yoko; Brient, Florent; Klein, Stephen A.

    The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.

  13. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic composites (CMC): internally pressurized tube and uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.

  14. 75 FR 74634 - Spiroxamine; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-01

    ... is a skin sensitizer when tested in guinea pigs and is a severe dermal irritant. Spiroxamine... production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311...

  15. Sensitive determination of thiols in wine samples by a stable isotope-coded derivatization reagent d0/d4-acridone-10-ethyl-N-maleimide coupled with high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry analysis.

    PubMed

    Lv, Zhengxian; You, Jinmao; Lu, Shuaimin; Sun, Weidi; Ji, Zhongyin; Sun, Zhiwei; Song, Cuihua; Chen, Guang; Li, Guoliang; Hu, Na; Zhou, Wu; Suo, Yourui

    2017-03-31

    As the key aroma compounds, varietal thiols are the crucial odorants responsible for the flavor of wines. Quantitative analysis of thiols can provide crucial information for the aroma profiles of different wine styles. In this study, a rapid and sensitive method for the simultaneous determination of six thiols in wine using d0/d4-acridone-10-ethyl-N-maleimide (d0/d4-AENM) as stable isotope-coded derivatization reagent (SICD) by high performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) has been developed. Quantification of thiols was performed by using d4-AENM labeled thiols as the internal standards (IS), followed by stable isotope dilution HPLC-ESI-MS/MS analysis. The AENM derivatization combined with multiple reaction monitoring (MRM) not only allowed trace analysis of thiols due to the extremely high sensitivity, but also efficiently corrected the matrix effects during HPLC-MS/MS and the fluctuation in MS/MS signal intensity due to the instrument. The obtained internal standard calibration curves for six thiols were linear over the range of 25-10,000 pmol/L (R² ≥ 0.9961). Detection limits (LODs) for most of the analytes were below 6.3 pmol/L. The proposed method was successfully applied for the simultaneous determination of six kinds of thiols in wine samples with precisions ≤ 3.5% and recoveries ≥ 78.1%. In conclusion, the developed method is expected to be a promising tool for detection of trace thiols in wine and also in other complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Star Cluster Properties in Two LEGUS Galaxies Computed with Stochastic Stellar Population Synthesis Models

    NASA Astrophysics Data System (ADS)

    Krumholz, Mark R.; Adamo, Angela; Fumagalli, Michele; Wofford, Aida; Calzetti, Daniela; Lee, Janice C.; Whitmore, Bradley C.; Bright, Stacey N.; Grasha, Kathryn; Gouliermis, Dimitrios A.; Kim, Hwihyun; Nair, Preethi; Ryon, Jenna E.; Smith, Linda J.; Thilker, David; Ubeda, Leonardo; Zackrisson, Erik

    2015-10-01

    We investigate a novel Bayesian analysis method, based on the Stochastically Lighting Up Galaxies (slug) code, to derive the masses, ages, and extinctions of star clusters from integrated light photometry. Unlike many analysis methods, slug correctly accounts for incomplete initial mass function (IMF) sampling, and returns full posterior probability distributions rather than simply probability maxima. We apply our technique to 621 visually confirmed clusters in two nearby galaxies, NGC 628 and NGC 7793, that are part of the Legacy Extragalactic UV Survey (LEGUS). LEGUS provides Hubble Space Telescope photometry in the NUV, U, B, V, and I bands. We analyze the sensitivity of the derived cluster properties to choices of prior probability distribution, evolutionary tracks, IMF, metallicity, treatment of nebular emission, and extinction curve. We find that slug's results for individual clusters are insensitive to most of these choices, but that the posterior probability distributions we derive are often quite broad, and sometimes multi-peaked and quite sensitive to the choice of priors. In contrast, the properties of the cluster population as a whole are relatively robust against all of these choices. We also compare our results from slug to those derived with a conventional non-stochastic fitting code, Yggdrasil. We show that slug's stochastic models are generally a better fit to the observations than the deterministic ones used by Yggdrasil. However, the overall properties of the cluster populations recovered by both codes are qualitatively similar.

  17. Validation of Living Donor Nephrectomy Codes

    PubMed Central

    Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.

    2018-01-01

    Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
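
    As a small illustration of the operating characteristics reported above, the sketch below computes sensitivity and PPV for a hypothetical code-based algorithm against a chart-review reference standard; the patient identifiers and counts are invented.

```python
def sensitivity_and_ppv(flagged, reference):
    """Compare an administrative-code algorithm (flagged) with a chart-review
    reference standard (reference); both are sets of patient identifiers."""
    true_pos = len(flagged & reference)
    sensitivity = true_pos / len(reference)          # found / all true donors
    ppv = true_pos / len(flagged)                    # true donors / all flagged
    return sensitivity, ppv

# Toy data: 10 reference-standard donors, the algorithm flags 9 of them plus 1 non-donor.
reference = {f"pt{i}" for i in range(10)}
flagged = {f"pt{i}" for i in range(1, 10)} | {"pt99"}
print(sensitivity_and_ppv(flagged, reference))       # (0.9, 0.9)
```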

  18. Multisensory Control of Stabilization Reflexes

    DTIC Science & Technology

    2012-08-22

    Dr Simon Schultz (Neural Coding), Dr Manos Drakakis (Low-power VLSI technology), and Dr Reiko Tanaka (Compound Control). To study the functional...Krapp H.G., and Schultz S.R.: Spike-triggered independent component analysis: application to a fly motion-sensitive neuron. Visual Neuroscience, 8...Tanaka, RI.: Characterization of insect gaze control systems. 18th World Congress of International Federation of Automated Control (IFAC), Milan

  19. Post-Test Analysis of 11% Break at PSB-VVER Experimental Facility using Cathare 2 Code

    NASA Astrophysics Data System (ADS)

    Sabotinov, Luben; Chevrier, Patrick

    The best estimate French thermal-hydraulic computer code CATHARE 2 Version 2.5_1 was used for post-test analysis of the experiment “11% upper plenum break”, conducted at the large-scale test facility PSB-VVER in Russia. The PSB rig is a 1:300 scaled model of a VVER-1000 NPP. A computer model has been developed for CATHARE 2 V2.5_1, taking into account all important components of the PSB facility: reactor model (lower plenum, core, bypass, upper plenum, downcomer), 4 separated loops, pressurizer, horizontal multitube steam generators, break section. The secondary side is represented by a recirculation model. A large number of sensitivity calculations have been performed regarding break modeling, reactor pressure vessel modeling, counter-current flow modeling, hydraulic losses, and heat losses. The comparison between calculated and experimental results shows good prediction of the basic thermal-hydraulic phenomena and parameters such as pressures, temperatures, void fractions, loop seal clearance, etc. The experimental and calculated results are very sensitive with regard to the fuel cladding temperature, which shows a periodic behavior. With the applied CATHARE 1D modeling, the global thermal-hydraulic parameters and the core heat-up have been reasonably predicted.

  20. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.

  1. Theoretical approaches to maternal-infant interaction: which approach best discriminates between mothers with and without postpartum depression?

    PubMed

    Logsdon, M Cynthia; Mittelberg, Meghan; Morrison, David; Robertson, Ashley; Luther, James F; Wisniewski, Stephen R; Confer, Andrea; Eng, Heather; Sit, Dorothy K Y; Wisner, Katherine L

    2014-12-01

    The purpose of this study was to determine which of the four common approaches to coding maternal-infant interaction best discriminates between mothers with and without postpartum depression. After extensive training, four research assistants coded 83 three-minute videotapes of maternal-infant interaction at 12-month postpartum visits. Four theoretical approaches to coding (Maternal Behavior Q-Sort, the Dyadic Mini Code, Ainsworth Maternal Sensitivity Scale, and the Child-Caregiver Mutual Regulation Scale) were used. Twelve-month data were chosen to allow the maximum possible exposure of the infant to maternal depression during the first postpartum year. The videotapes were created in a laboratory with standard procedures. Inter-rater reliabilities for each coding method ranged from .7 to .9. The coders were blind to the depression status of the mother. Twenty-seven of the women had major depressive disorder during the 12-month postpartum period. Receiver operating characteristic analysis indicated that none of the four methods of analyzing maternal-infant interaction discriminated between mothers with and without major depressive disorder. Limitations of the study include the cross-sectional design and the low number of women with major depressive disorder. Further analysis should include data from videotapes at earlier postpartum time periods, and alternative coding approaches should be considered. Nurses should continue to examine culturally appropriate ways in which new mothers can be supported in how to best nurture their babies. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety validation. It uses the sensitivity profile data for an application as computed by MCNP6 along with covariance files for the nuclear data to determine a baseline upper-subcritical-limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6, and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation – making full use of today’s computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.

  3. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barani, T.; Bruschi, E.; Pizzocri, D.

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.

  4. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D. T. Clark; M. J. Russell; R. E. Spears

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component’s flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of Allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.

  5. Potential Effects of a Scenario Earthquake on the Economy of Southern California: Labor Market Exposure and Sensitivity Analysis to a Magnitude 7.8 Earthquake

    USGS Publications Warehouse

    Sherrouse, Benson C.; Hester, David J.; Wein, Anne M.

    2008-01-01

    The Multi-Hazards Demonstration Project (MHDP) is a collaboration between the U.S. Geological Survey (USGS) and various partners from the public and private sectors and academia, meant to improve Southern California's resiliency to natural hazards (Jones and others, 2007). In support of the MHDP objectives, the ShakeOut Scenario was developed. It describes a magnitude 7.8 (M7.8) earthquake along the southernmost 300 kilometers (200 miles) of the San Andreas Fault, identified by geoscientists as a plausible event that will cause moderate to strong shaking over much of the eight-county (Imperial, Kern, Los Angeles, Orange, Riverside, San Bernardino, San Diego, and Ventura) Southern California region. This report contains an exposure and sensitivity analysis of economic Super Sectors in terms of labor and employment statistics. Exposure is measured as the absolute counts of labor market variables anticipated to experience each level of Instrumental Intensity (a proxy measure of damage). Sensitivity is the percentage of the exposure of each Super Sector to each Instrumental Intensity level. The analysis concerns the direct effect of the scenario earthquake on economic sectors and provides a baseline for the indirect and interactive analysis of an input-output model of the regional economy. The analysis is inspired by the Bureau of Labor Statistics (BLS) report that analyzed the labor market losses (exposure) of a M6.9 earthquake on the Hayward fault by overlaying geocoded labor market data on Instrumental Intensity values. The method used here is influenced by the ZIP-code-level data provided by the California Employment Development Department (CA EDD), which requires the assignment of Instrumental Intensities to ZIP codes. The ZIP-code-level labor market data includes the number of business establishments, employees, and quarterly payroll categorized by the North American Industry Classification System. According to the analysis results, nearly 225,000 business establishments, or 44 percent of all establishments, would experience Instrumental Intensities between VII (7) and X (10). This represents more than 4 million employees earning over $45 billion in quarterly payroll. Over 57,000 of these establishments, employing over 1 million employees earning over $10 billion in quarterly payroll, would experience Instrumental Intensities of IX (9) or X (10). Based upon absolute counts and percentages, the Trade, Transportation, and Utilities Super Sector and the Manufacturing Super Sector are estimated to have the greatest exposure and sensitivity respectively. The Information and the Natural Resources and Mining Super Sectors are estimated to be the least impacted. Areas estimated to experience an Instrumental Intensity of X (10) account for approximately 3 percent of the region's labor market.
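
    The exposure and sensitivity measures described above can be illustrated with a short sketch that aggregates hypothetical ZIP-code-level employment records by Super Sector and Instrumental Intensity; the field names and numbers are invented, not the CA EDD data.

```python
from collections import defaultdict

def exposure_and_sensitivity(zip_records):
    """zip_records: iterable of (super_sector, instrumental_intensity, employees).
    Exposure = employee counts per sector and intensity level;
    sensitivity = that exposure as a share of the sector's total employment."""
    exposure = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    for sector, mmi, employees in zip_records:
        exposure[sector][mmi] += employees
        totals[sector] += employees
    sensitivity = {s: {mmi: n / totals[s] for mmi, n in levels.items()}
                   for s, levels in exposure.items()}
    return exposure, sensitivity

# Invented records: (Super Sector, Instrumental Intensity, employees in that ZIP code)
records = [("Manufacturing", 9, 1200), ("Manufacturing", 7, 800), ("Information", 7, 300)]
exp, sens = exposure_and_sensitivity(records)
print(sens["Manufacturing"][9])   # 0.6 of the sector's employees exposed at intensity IX
```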

  6. Use of Systematic Methods to Improve Disease Identification in Administrative Data: The Case of Severe Sepsis.

    PubMed

    Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher

    2017-03-01

    Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response Syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of the sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that helps reliable selection of ICD codes for conditions that require complex coding.

  7. Development of a generalized perturbation theory method for sensitivity analysis using continuous-energy Monte Carlo methods

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.

    2016-03-01

    The sensitivity and uncertainty analysis tools of the ORNL SCALE nuclear modeling and simulation code system that have been developed over the last decade have proven indispensable for numerous application and design studies for nuclear criticality safety and reactor physics. SCALE contains tools for analyzing the uncertainty in the eigenvalue of critical systems, but cannot quantify uncertainty in important neutronic parameters such as multigroup cross sections, fuel fission rates, activation rates, and neutron fluence rates with realistic three-dimensional Monte Carlo simulations. A more complete understanding of the sources of uncertainty in these design-limiting parameters could lead to improvements in process optimization and reactor safety, and help inform regulators when setting operational safety margins. A novel approach for calculating eigenvalue sensitivity coefficients, known as the CLUTCH method, was recently explored as academic research and has been found to accurately and rapidly calculate sensitivity coefficients in criticality safety applications. The work presented here describes a new method, known as the GEAR-MC method, which extends the CLUTCH theory for calculating eigenvalue sensitivity coefficients to enable sensitivity coefficient calculations and uncertainty analysis for a generalized set of neutronic responses using high-fidelity continuous-energy Monte Carlo calculations. Here, several criticality safety systems were examined to demonstrate proof of principle for the GEAR-MC method, and GEAR-MC was seen to produce response sensitivity coefficients that agreed well with reference direct perturbation sensitivity coefficients.
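
    The abstract benchmarks GEAR-MC against direct perturbation results; as a minimal sketch of that reference approach (not of GEAR-MC itself), the relative sensitivity coefficient S = (x/R) dR/dx can be estimated with a central difference around a nominal parameter value, using a toy response function in place of a real Monte Carlo calculation.

```python
import math

def direct_perturbation_sensitivity(response, x0, rel_step=0.01):
    """Relative sensitivity coefficient S = (x/R) dR/dx via central differences.
    'response' is any model wrapper, e.g. a function that reruns a transport
    calculation with one input scaled by the given factor (a toy stand-in here)."""
    r_plus = response(x0 * (1.0 + rel_step))
    r_minus = response(x0 * (1.0 - rel_step))
    r0 = response(x0)
    return (r_plus - r_minus) / (2.0 * rel_step * r0)

# Toy response standing in for an eigenvalue calculation: R(x) = 1.0 + 0.2*ln(x).
print(direct_perturbation_sensitivity(lambda x: 1.0 + 0.2 * math.log(x), 1.0))  # ~0.2
```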

  8. Structure-related statistical singularities along protein sequences: a correlation study.

    PubMed

    Colafranceschi, Mauro; Colosimo, Alfredo; Zbilut, Joseph P; Uversky, Vladimir N; Giuliani, Alessandro

    2005-01-01

    A data set composed of 1141 proteins representative of all eukaryotic protein sequences in the Swiss-Prot Protein Knowledgebase was coded by seven physicochemical properties of amino acid residues. The resulting numerical profiles were submitted to correlation analysis after the application of a linear (simple mean) and a nonlinear (Recurrence Quantification Analysis, RQA) filter. The main RQA variables, Recurrence and Determinism, were subsequently analyzed by Principal Component Analysis. The RQA descriptors showed that (i) specific information is embedded within protein sequences that is present neither in the codes nor in the amino acid composition and (ii) the most sensitive code for detecting ordered recurrent (deterministic) patterns of residues in protein sequences is the Miyazawa-Jernigan hydrophobicity scale. The most deterministic proteins in terms of autocorrelation properties of primary structures were found (i) to be involved in protein-protein and protein-DNA interactions and (ii) to display a significantly higher proportion of structural disorder with respect to the average of the data set. A study of the scaling behavior of the average determinism with the setting parameters of RQA (embedding dimension and radius) allows for the identification of patterns of minimal length (six residues) as possible markers of zones specifically prone to inter- and intramolecular interactions.
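
    A minimal sketch of the two RQA variables named above, Recurrence and Determinism, computed from a numerically coded sequence; the embedding dimension, radius, minimum line length, and the toy "hydrophobicity profile" are illustrative choices, not the settings used in the study.

```python
import numpy as np

def rqa_rec_det(series, dim=3, radius=0.5, lmin=2):
    """Recurrence (%REC) and Determinism (%DET) of a numeric series, as fractions,
    in the spirit of Recurrence Quantification Analysis."""
    x = np.asarray(series, dtype=float)
    emb = np.array([x[i:i + dim] for i in range(len(x) - dim + 1)])    # time-delay embedding
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    n = len(emb)
    rec = (dist <= radius) & ~np.eye(n, dtype=bool)                    # exclude the main diagonal
    n_rec = rec.sum()
    if n_rec == 0:
        return 0.0, 0.0
    n_det = 0                                                          # recurrent points on diagonal
    for k in range(-(n - 1), n):                                       # lines of length >= lmin
        run = 0
        for v in list(np.diag(rec, k)) + [False]:                      # sentinel flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    n_det += run
                run = 0
    return n_rec / (n * (n - 1)), n_det / n_rec

# Toy "hydrophobicity profile": a strictly periodic pattern yields high determinism.
profile = [0.1, 0.9, 0.5] * 10
print(rqa_rec_det(profile))
```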

  9. [MODIS Investigation

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1996-01-01

    The objectives of the last six months were to: (1) complete sensitivity analysis of fluorescence line height algorithms; (2) deliver fluorescence algorithm code and test data to the University of Miami for integration; (3) complete analysis of bio-optical data from the Southern Ocean cruise; (4) conduct laboratory experiments based on analyses of field data; (5) analyze data from the bio-optical mooring off Hawaii; (6) develop a calibration/validation plan for MODIS fluorescence data; (7) respond to the Japanese Research Announcement for GLI; and (8) continue to review plans for EOSDIS and assist the ECS contractor.
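
    Since the first objective concerns fluorescence line height (FLH) algorithms, a minimal sketch of the usual FLH baseline-subtraction calculation is shown below; the band centers are approximate MODIS values and the radiances are illustrative.

```python
def fluorescence_line_height(L1, L2, L3, lam1=667.0, lam2=678.0, lam3=746.0):
    """Fluorescence line height: radiance in the fluorescence band (lam2) above a
    linear baseline interpolated between two flanking bands (lam1, lam3).
    Band centers are approximate MODIS values; radiances L1..L3 are illustrative."""
    baseline = L1 + (L3 - L1) * (lam2 - lam1) / (lam3 - lam1)
    return L2 - baseline

# Toy radiances (arbitrary units): a small peak at 678 nm sitting above the baseline.
print(fluorescence_line_height(L1=1.20, L2=1.18, L3=0.90))   # positive FLH, ~0.02
```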

  10. Radiological performance assessment for the E-Area Vaults Disposal Facility. Appendices A through M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, J.R.

    1994-04-15

    This document contains appendices A-M for the performance assessment. They are A: details of models and assumptions, B: computer codes, C: data tabulation, D: geochemical interactions, E: hydrogeology of the Savannah River Site, F: software QA plans, G: completeness review guide, H: performance assessment peer review panel recommendations, I: suspect soil performance analysis, J: sensitivity/uncertainty analysis, K: vault degradation study, L: description of naval reactor waste disposal, M: PORFLOW input file. (GHH)

  11. Sensitivity curves for searches for gravitational-wave backgrounds

    NASA Astrophysics Data System (ADS)

    Thrane, Eric; Romano, Joseph D.

    2013-12-01

    We propose a graphical representation of detector sensitivity curves for stochastic gravitational-wave backgrounds that takes into account the increase in sensitivity that comes from integrating over frequency in addition to integrating over time. This method is valid for backgrounds that have a power-law spectrum in the analysis band. We call these graphs “power-law integrated curves.” For simplicity, we consider cross-correlation searches for unpolarized and isotropic stochastic backgrounds using two or more detectors. We apply our method to construct power-law integrated sensitivity curves for second-generation ground-based detectors such as Advanced LIGO, space-based detectors such as LISA and the Big Bang Observer, and timing residuals from a pulsar timing array. The code used to produce these plots is available at https://dcc.ligo.org/LIGO-P1300115/public for researchers interested in constructing similar sensitivity curves.
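
    A minimal sketch of the construction described above: for each power-law index, find the amplitude that yields a reference signal-to-noise ratio after integrating over frequency and observation time, then take the pointwise envelope; the effective sensitivity curve and all parameters below are illustrative placeholders, not a real detector noise model.

```python
import numpy as np

def power_law_integrated_curve(freqs, omega_eff, T_obs, snr_ref=1.0,
                               betas=np.arange(-8, 9), f_ref=25.0):
    """Envelope of power-law spectra amp*(f/f_ref)**beta, each chosen to give
    SNR = snr_ref for a cross-correlation search with effective sensitivity
    omega_eff(f) and observation time T_obs (seconds)."""
    curves = []
    for beta in betas:
        integrand = (freqs / f_ref) ** (2 * beta) / omega_eff ** 2
        amp = snr_ref / np.sqrt(2.0 * T_obs * np.trapz(integrand, freqs))
        curves.append(amp * (freqs / f_ref) ** beta)
    return np.max(curves, axis=0)           # pointwise envelope = PI curve

# Toy effective sensitivity: a broad bowl in log-frequency, one year of data.
f = np.logspace(1, 3, 400)                              # 10 Hz - 1 kHz
omega_eff = 1e-8 * (1.0 + np.log10(f / 100.0) ** 2)     # made-up shape
pi_curve = power_law_integrated_curve(f, omega_eff, T_obs=3.15e7)
```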

  12. Perceiving Group Behavior: Sensitive Ensemble Coding Mechanisms for Biological Motion of Human Crowds

    ERIC Educational Resources Information Center

    Sweeny, Timothy D.; Haroz, Steve; Whitney, David

    2013-01-01

    Many species, including humans, display group behavior. Thus, perceiving crowds may be important for social interaction and survival. Here, we provide the first evidence that humans use ensemble-coding mechanisms to perceive the behavior of a crowd of people with surprisingly high sensitivity. Observers estimated the headings of briefly presented…

  13. The Sensitivity of Coded Mask Telescopes

    NASA Technical Reports Server (NTRS)

    Skinner, Gerald K.

    2008-01-01

    Simple formulae are often used to estimate the sensitivity of coded mask X-ray or gamma-ray telescopes, but these are strictly only applicable if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available a procedure is given which allows the calculation of the sensitivity. We consider certain aspects of the optimisation of the design of a coded mask telescope and show that when the detector spatial resolution and the mask to detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.
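
    For reference, the sketch below encodes the simple Poisson-limited estimate that the abstract takes as its starting point, assuming a balanced mask with open fraction 0.5 and perfect detector resolution, so that the detection significance is approximately C_S/sqrt(C_S + C_B); the numerical inputs are illustrative.

```python
import math

def coded_mask_min_flux(n_sigma, bkg_counts, area_cm2, eff, t_s, open_frac=0.5):
    """Minimum detectable flux under the idealized assumptions stated in the lead-in:
    significance ~ C_S / sqrt(C_S + C_B), so we solve C_S = n*sqrt(C_S + C_B) for the
    required source counts C_S and convert to a flux (photons / cm^2 / s)."""
    c_s = 0.5 * (n_sigma**2 + n_sigma * math.sqrt(n_sigma**2 + 4.0 * bkg_counts))
    return c_s / (open_frac * area_cm2 * eff * t_s)

# Illustrative numbers: 1e6 background counts, 1000 cm^2, 50% efficiency, 1 day, 5 sigma.
print(coded_mask_min_flux(5.0, 1e6, 1000.0, 0.5, 86400.0))
```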

  14. Auditory brainstem response (ABR) profiling tests as diagnostic support for schizophrenia and adult attention-deficit hyperactivity disorder (ADHD).

    PubMed

    Juselius Baghdassarian, Eva; Nilsson Markhed, Maria; Lindström, Eva; Nilsson, Björn M; Lewander, Tommy

    2018-06-01

    To evaluate the performances of two auditory brainstem response (ABR) profiling tests as potential biomarkers and diagnostic support for schizophrenia and adult attention-deficit hyperactivity disorder (ADHD), respectively, in an investigator-initiated blinded study design. Male and female patients with schizophrenia (n=26) and adult ADHD (n=24) meeting Diagnostic and Statistical Manual of Mental Disorders Fourth Edition (DSM IV) diagnostic criteria and healthy controls (n=58) comprised the analysis set (n=108) of the total number of study participants (n=119). Coded sets of randomized ABR recordings were analysed by an independent party blinded to clinical diagnoses before a joint code-breaking session. The ABR profiling test for schizophrenia identified schizophrenia patients versus controls with a sensitivity of 84.6% and a specificity of 93.1%. The ADHD test identified patients with adult ADHD versus controls with a sensitivity of 87.5% and a specificity of 91.4%. The ABR profiling tests discriminated schizophrenia and ADHD versus healthy controls with high sensitivity and specificity. The methods deserve to be further explored in larger clinical studies including a broad range of psychiatric disorders to determine their utility as potential diagnostic biomarkers.

  15. Environmental performance of green building code and certification systems.

    PubMed

    Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua

    2014-01-01

    We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database of NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoules (TJ) of primary energy and 328 million liters of water over its life-cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models measured in life-cycle impact reduction were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to the behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).
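
    The dimensionless sensitivity coefficients quoted above (relative change in impact per relative change in an input) can be computed as in the sketch below; the toy life-cycle GHG model and its numbers are invented and do not reproduce the study's values.

```python
def sensitivity_coefficient(impact_fn, x0, rel_step=0.1):
    """Dimensionless sensitivity coefficient: (% change in impact) / (% change in input)."""
    base = impact_fn(x0)
    pert = impact_fn(x0 * (1.0 + rel_step))
    return ((pert - base) / base) / rel_step

# Toy life-cycle GHG model (t CO2e): a fixed embodied part plus a part that scales with
# occupancy-phase electricity use over a 75-year study period; numbers are illustrative only.
ghg = lambda kwh_per_yr: 5000.0 + 0.0004 * kwh_per_yr * 75
print(sensitivity_coefficient(ghg, 450000.0))    # ~0.73 for this toy model
```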

  16. X-ray spectral signatures of photoionized plasmas. [astrophysics

    NASA Technical Reports Server (NTRS)

    Liedahl, Duane A.; Kahn, Steven M.; Osterheld, Albert L.; Goldstein, William H.

    1990-01-01

    Plasma emission codes have become a standard tool for the analysis of spectroscopic data from cosmic X-ray sources. However, the assumption of collisional equilibrium, typically invoked in these codes, renders them inapplicable to many important astrophysical situations, particularly those involving X-ray photoionized nebulae. This point is illustrated by comparing model spectra which have been calculated under conditions appropriate to both coronal plasmas and X-ray photoionized plasmas. It is shown that the (3s-2p)/(3d-2p) line ratios in the Fe L-shell spectrum can be used to effectively discriminate between these two cases. This diagnostic will be especially useful for data analysis associated with AXAF and XMM, which will carry spectroscopic instrumentation with sufficient sensitivity and resolution to identify X-ray photoionized nebulae in a wide range of astrophysical environments.

  17. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
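
    The PEST-style coupling described above can be sketched as a small wrapper that fills an ASCII template, runs the external model as a system command, and parses an ASCII output file; all file names, marker syntax, and the model command below are hypothetical, not the actual PEST or iTOUGH2 conventions.

```python
import subprocess

def run_external_model(params, template="model.in.tpl", infile="model.in",
                       outfile="model.out", command=("./my_model", "model.in")):
    """PEST-style coupling to a 'black-box' model (all names here are hypothetical):
    1) substitute parameter values for @name@ markers in an ASCII template,
    2) run the model executable as a system command,
    3) parse the ASCII output file for 'name value' pairs and return them."""
    with open(template) as f:
        text = f.read()
    for name, value in params.items():               # 1) fill the template
        text = text.replace(f"@{name}@", f"{value:.6e}")
    with open(infile, "w") as f:
        f.write(text)
    subprocess.run(command, check=True)              # 2) run the forward model
    results = {}
    with open(outfile) as f:
        for line in f:                               # 3) extract observations
            name, value = line.split()[:2]
            results[name] = float(value)
    return results

# e.g. run_external_model({"permeability": 1.3e-13, "porosity": 0.35})
```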

  18. Validation of a next-generation sequencing assay for clinical molecular oncology.

    PubMed

    Cottrell, Catherine E; Al-Kateb, Hussam; Bredemeyer, Andrew J; Duncavage, Eric J; Spencer, David H; Abel, Haley J; Lockwood, Christina M; Hagemann, Ian S; O'Guin, Stephanie M; Burcea, Lauren C; Sawyer, Christopher S; Oschwald, Dayna M; Stratman, Jennifer L; Sher, Dorie A; Johnson, Mark R; Brown, Justin T; Cliften, Paul F; George, Bijoy; McIntosh, Leslie D; Shrivastava, Savita; Nguyen, Tudung T; Payton, Jacqueline E; Watson, Mark A; Crosby, Seth D; Head, Richard D; Mitra, Robi D; Nagarajan, Rakesh; Kulkarni, Shashikant; Seibert, Karen; Virgin, Herbert W; Milbrandt, Jeffrey; Pfeifer, John D

    2014-01-01

    Currently, oncology testing includes molecular studies and cytogenetic analysis to detect genetic aberrations of clinical significance. Next-generation sequencing (NGS) allows rapid analysis of multiple genes for clinically actionable somatic variants. The WUCaMP assay uses targeted capture for NGS analysis of 25 cancer-associated genes to detect mutations at actionable loci. We present clinical validation of the assay and a detailed framework for design and validation of similar clinical assays. Deep sequencing of 78 tumor specimens (≥ 1000× average unique coverage across the capture region) achieved high sensitivity for detecting somatic variants at low allele fraction (AF). Validation revealed sensitivities and specificities of 100% for detection of single-nucleotide variants (SNVs) within coding regions, compared with SNP array sequence data (95% CI = 83.4-100.0 for sensitivity and 94.2-100.0 for specificity) or whole-genome sequencing (95% CI = 89.1-100.0 for sensitivity and 99.9-100.0 for specificity) of HapMap samples. Sensitivity for detecting variants at an observed 10% AF was 100% (95% CI = 93.2-100.0) in HapMap mixes. Analysis of 15 masked specimens harboring clinically reported variants yielded concordant calls for 13/13 variants at AF of ≥ 15%. The WUCaMP assay is a robust and sensitive method to detect somatic variants of clinical significance in molecular oncology laboratories, with reduced time and cost of genetic analysis allowing for strategic patient management. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  19. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    DOE PAGES

    Barani, T.; Bruschi, E.; Pizzocri, D.; ...

    2017-01-03

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. Experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of burst release in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of diffusion-based models to allow for the burst release effect. The concept and governing equations of the model are presented, and the effect of the newly introduced parameters is evaluated through an analytic sensitivity analysis. Then, the model is assessed for application to integral fuel rod analysis. The approach that we take for model assessment involves implementation in two structurally different fuel performance codes, namely, BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D semi-analytic code). The model is validated against 19 Light Water Reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the qualitative representation of the FGR kinetics and the quantitative predictions of integral fuel rod FGR, relative to the canonical, purely diffusion-based models, with both codes. The overall quantitative improvement of the FGR predictions in the two codes is comparable. Furthermore, calculated radial profiles of xenon concentration are investigated and compared to experimental data, demonstrating the representation of the underlying mechanisms of burst release by the new model.

  20. Accuracy of external cause-of-injury coding in VA polytrauma patient discharge records.

    PubMed

    Carlson, Kathleen F; Nugent, Sean M; Grill, Joseph; Sayer, Nina A

    2010-01-01

    Valid and efficient methods of identifying the etiology of treated injuries are critical for characterizing patient populations and developing prevention and rehabilitation strategies. We examined the accuracy of external cause-of-injury codes (E-codes) in Veterans Health Administration (VHA) administrative data for a population of injured patients. Chart notes and E-codes were extracted for 566 patients treated at any one of four VHA Polytrauma Rehabilitation Center sites between 2001 and 2006. Two expert coders, blinded to VHA E-codes, used chart notes to assign "gold standard" E-codes to injured patients. The accuracy of VHA E-coding was examined based on these gold standard E-codes. Only 382 of 517 (74%) injured patients were assigned E-codes in VHA records. Sensitivity of VHA E-codes varied significantly by site (range: 59%-91%, p < 0.001). Sensitivity was highest for combat-related injuries (81%) and lowest for fall-related injuries (60%). Overall specificity of E-codes was high (92%). E-coding accuracy was markedly higher when we restricted analyses to records that had been assigned VHA E-codes. E-codes may not be valid for ascertaining source-of-injury data for all injuries among VHA rehabilitation inpatients at this time. Enhanced training and policies may ensure more widespread, standardized use and accuracy of E-codes for injured veterans treated in the VHA.

  1. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  2. Reducing EnergyPlus Run Time For Code Compliance Tools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Athalye, Rahul A.; Gowri, Krishnan; Schultz, Robert W.

    2014-09-12

    Integration of the EnergyPlus™ simulation engine into performance-based code compliance software raises a concern about simulation run time, which impacts timely feedback of compliance results to the user. EnergyPlus annual simulations for proposed and code baseline building models, and mechanical equipment sizing, result in simulation run times beyond acceptable limits. This paper presents a study that compares the results of a shortened simulation time period using 4 weeks of hourly weather data (one per quarter), to an annual simulation using the full 52 weeks of hourly weather data. Three representative building types based on DOE Prototype Building Models and three climate zones were used for determining the validity of using a shortened simulation run period. Further sensitivity analysis and run time comparisons were made to evaluate the robustness and run time savings of using this approach. The results of this analysis show that the shortened simulation run period provides compliance index calculations within 1% of those predicted using annual simulation results, and typically saves about 75% of simulation run time.
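
    A minimal sketch of the annualization step implied by the shortened run period: results from four one-week simulations, one per quarter, are scaled up to a 52-week estimate before forming the compliance index; the uniform 13-weeks-per-quarter weighting and all energy numbers are assumptions for illustration.

```python
def annualize_from_quarterly_weeks(weekly_energy_kwh, weeks_per_quarter=13):
    """Scale results from four one-week simulations (one per quarter) to an annual
    estimate. Uniform weighting of 13 weeks per quarter is an assumption; the paper's
    tools may select and weight the representative weeks differently."""
    assert len(weekly_energy_kwh) == 4
    return sum(e * weeks_per_quarter for e in weekly_energy_kwh)

# Illustrative weekly site-energy results for proposed and baseline models (kWh):
proposed = annualize_from_quarterly_weeks([2100, 1650, 2450, 1800])
baseline = annualize_from_quarterly_weeks([2400, 1900, 2800, 2050])
print("compliance index (proposed/baseline):", round(proposed / baseline, 3))
```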

  3. Postflight aerothermodynamic analysis of Pegasus(tm) using computational fluid dynamic techniques

    NASA Technical Reports Server (NTRS)

    Kuhn, Gary D.

    1992-01-01

    The objective was to validate the computational capability of the NASA Ames Navier-Stokes code, F3D, for flows at high Mach numbers using comparisons with flight test data from the Pegasus (tm) air-launched, winged space booster. Comparisons were made with temperatures and heat fluxes estimated from measurements on the wing surfaces and wing-fuselage fairings. Tests were conducted for solution convergence, sensitivity to grid density, and effects of distributing grid points to provide high density near temperature and heat flux sensors. The measured temperatures were from sensors embedded in the ablating thermal protection system. Surface heat fluxes were from plugs fabricated of highly insulative, nonablating material, and mounted level with the surface of the surrounding ablative material. As a preflight design tool, the F3D code produces accurate predictions of heat transfer and other aerodynamic properties, and it can provide detailed data for assessment of boundary layer separation, shock waves, and vortex formation. As a postflight analysis tool, the code provides a way to clarify and interpret the measured results.

  4. Knowledge management: Role of the Radiation Safety Information Computational Center (RSICC)

    NASA Astrophysics Data System (ADS)

    Valentine, Timothy

    2017-09-01

    The Radiation Safety Information Computational Center (RSICC) at Oak Ridge National Laboratory (ORNL) is an information analysis center that collects, archives, evaluates, synthesizes and distributes information, data and codes that are used in various nuclear technology applications. RSICC retains more than 2,000 software packages that have been provided by code developers from various federal and international agencies. RSICC's customers (scientists, engineers, and students from around the world) obtain access to such computing codes (source and/or executable versions) and processed nuclear data files to promote on-going research, to ensure nuclear and radiological safety, and to advance nuclear technology. The role of such information analysis centers is critical for supporting and sustaining nuclear education and training programs both domestically and internationally, as the majority of RSICC's customers are students attending U.S. universities. Additionally, RSICC operates a secure CLOUD computing system to provide access to sensitive export-controlled modeling and simulation (M&S) tools that support both domestic and international activities. This presentation will provide a general review of RSICC's activities, services, and systems that support knowledge management and education and training in the nuclear field.

  5. Long Non-Coding RNA (lncRNA) Urothelial Carcinoma-Associated 1 (UCA1) Enhances Tamoxifen Resistance in Breast Cancer Cells via Inhibiting mTOR Signaling Pathway.

    PubMed

    Wu, Chihua; Luo, Jing

    2016-10-21

    BACKGROUND Long non-coding RNA (lncRNA) UCA1 is an oncogene in breast cancer. The purpose of this study was to investigate the role of UCA1 in tamoxifen resistance of estrogen receptor-positive breast cancer cells. MATERIAL AND METHODS Tamoxifen-sensitive MCF-7 cells were transfected for UCA1 overexpression, while tamoxifen-resistant LCC2 and LCC9 cells were transfected with UCA1 siRNA for UCA1 knockdown. qRT-PCR was performed to analyze UCA1 expression. CCK-8 assay, immunofluorescence staining of cleaved caspase-9, and flow cytometric analysis of Annexin V/PI staining were used to assess tamoxifen sensitivity. Western blot analysis was performed to detect p-AKT and p-mTOR expression. RESULTS LncRNA UCA1 was significantly upregulated in tamoxifen-resistant breast cancer cells compared to tamoxifen-sensitive cells. LCC2 and LCC9 cells transfected with UCA1 siRNA had a significantly higher ratio of apoptosis after tamoxifen treatment. UCA1 siRNA significantly decreased the protein levels of p-AKT and p-mTOR in LCC2 and LCC9 cells. Enforced UCA1 expression substantially reduced tamoxifen-induced apoptosis in MCF-7 cells, while rapamycin treatment abrogated the protective effect of UCA1. CONCLUSIONS UCA1 upregulation was associated with tamoxifen resistance in breast cancer. Mechanistically, UCA1 confers tamoxifen resistance to breast cancer cells partly via activating the mTOR signaling pathway.

  6. IUS solid rocket motor contamination prediction methods

    NASA Technical Reports Server (NTRS)

    Mullen, C. R.; Kearnes, J. H.

    1980-01-01

    A series of computer codes was developed to predict solid rocket motor-produced contamination of sensitive spacecraft surfaces. Subscale and flight test data have confirmed some of the analytical results. Application of the analysis tools to a typical spacecraft has provided early identification of potential spacecraft contamination problems and provided insight into their solutions, e.g., flight plan modifications, plume or outgassing shields, and/or contamination covers.

  7. Modularized seismic full waveform inversion based on waveform sensitivity kernels - The software package ASKI

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang; Lamara, Samir; Gutt, Phillip; Paffrath, Marcel

    2015-04-01

    We present a seismic full waveform inversion concept for applications ranging from seismological to engineering contexts, based on sensitivity kernels for full waveforms. The kernels are derived from Born scattering theory as the Fréchet derivatives of linearized frequency-domain full waveform data functionals, quantifying the influence of elastic earth model parameters and density on the data values. For a specific source-receiver combination, the kernel is computed from the displacement and strain field spectrum originating from the source evaluated throughout the inversion domain, as well as the Green function spectrum and its strains originating from the receiver. By storing the wavefield spectra of specific sources/receivers, they can be re-used for kernel computation for different specific source-receiver combinations, optimizing the total number of required forward simulations. In the iterative inversion procedure, the solution of the forward problem, the computation of sensitivity kernels, and the derivation of a model update are kept completely separate. In particular, the model description for the forward problem and the description of the inverted model update are kept independent. Hence, the resolution of the inverted model as well as the complexity of solving the forward problem can be iteratively increased (with increasing frequency content of the inverted data subset). This may regularize the overall inverse problem and optimize the computational effort of both solving the forward problem and computing the model update. The required interconnection of arbitrary unstructured volume and point grids is realized by generalized high-order integration rules and 3D-unstructured interpolation methods. The model update is inferred by solving a minimization problem in a least-squares sense, resulting in Gauss-Newton convergence of the overall inversion process. The inversion method was implemented in the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion), which provides a generalized interface to arbitrary external forward modelling codes. So far, the 3D spectral-element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995), in both Cartesian and spherical frameworks, are supported. The creation of interfaces to further forward codes is planned in the near future. ASKI is freely available under the terms of the GPL at www.rub.de/aski. Since the independent modules of ASKI must communicate via file output/input, large storage capacities need to be accessible conveniently. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion. In the presentation, we will show some aspects of the theory behind the full waveform inversion method and its practical realization by the software package ASKI, as well as synthetic and real-data applications from different scales and geometries.
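
    The damped least-squares (Gauss-Newton) model update described above can be illustrated with a short sketch. The kernel matrix, residual vector, and damping value below are hypothetical stand-ins rather than ASKI's actual data structures; the sketch only shows the normal-equations solve that yields one model update.

```python
import numpy as np

def gauss_newton_update(K, residual, damping=1e-2):
    """One damped Gauss-Newton model update dm, minimizing
    ||K dm - residual||^2 + damping * ||dm||^2 in a least-squares sense.

    K        : (n_data, n_model) sensitivity (kernel) matrix
    residual : (n_data,) observed-minus-synthetic data misfit
    """
    n_model = K.shape[1]
    lhs = K.T @ K + damping * np.eye(n_model)
    rhs = K.T @ residual
    return np.linalg.solve(lhs, rhs)

# Hypothetical example: 200 data values, 50 model cells
rng = np.random.default_rng(0)
K = rng.normal(size=(200, 50))
residual = rng.normal(size=200)
dm = gauss_newton_update(K, residual)
print(dm.shape)  # (50,)
```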

  8. Accuracy of ICD-10 Coding System for Identifying Comorbidities and Infectious Conditions Using Data from a Thai University Hospital Administrative Database.

    PubMed

    Rattanaumpawan, Pinyo; Wongkamhla, Thanyarak; Thamlikitkul, Visanu

    2016-04-01

    To determine the accuracy of the International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) coding system in identifying comorbidities and infectious conditions using data from a Thai university hospital administrative database. A retrospective cross-sectional study was conducted among patients hospitalized in six general medicine wards at Siriraj Hospital. ICD-10 code data was identified and retrieved directly from the hospital administrative database. Patient comorbidities were captured using the ICD-10 coding algorithm for the Charlson comorbidity index. Infectious conditions were captured using groups of ICD-10 diagnostic codes that were carefully prepared by two independent infectious disease specialists. Accuracy of ICD-10 codes combined with microbiological data for the diagnosis of urinary tract infection (UTI) and bloodstream infection (BSI) was evaluated. Clinical data gathered from chart review was considered the gold standard in this study. Between February 1 and May 31, 2013, a chart review of 546 hospitalization records was conducted. The mean age of hospitalized patients was 62.8 ± 17.8 years and 65.9% of patients were female. Median length of stay [range] was 10.0 [1.0-353.0] days and hospital mortality was 21.8%. Conditions with ICD-10 codes that had good sensitivity (90% or higher) were diabetes mellitus and HIV infection. Conditions with ICD-10 codes that had good specificity (90% or higher) were cerebrovascular disease, chronic lung disease, diabetes mellitus, cancer, HIV infection, and all infectious conditions. By combining ICD-10 codes with microbiological results, sensitivity increased from 49.5 to 66% for UTI and from 78.3 to 92.8% for BSI. The ICD-10 coding algorithm is reliable only in some selected conditions, including underlying diabetes mellitus and HIV infection. Combining microbiological results with ICD-10 codes increased the sensitivity of ICD-10 codes for identifying BSI. Future research is needed to improve the accuracy of the hospital administrative coding system in Thailand.
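
    The performance measures reported above (sensitivity and specificity against a chart-review gold standard) reduce to simple counts over a confusion matrix. A minimal sketch, using toy data rather than the study's records:

```python
def sensitivity_specificity(gold, coded):
    """Sensitivity and specificity of a coding algorithm against a
    chart-review gold standard; both inputs are lists of booleans."""
    tp = sum(g and c for g, c in zip(gold, coded))
    fn = sum(g and not c for g, c in zip(gold, coded))
    tn = sum((not g) and (not c) for g, c in zip(gold, coded))
    fp = sum((not g) and c for g, c in zip(gold, coded))
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Toy example, not the study data
gold  = [True, True, False, False, True, False]
coded = [True, False, False, False, True, True]
print(sensitivity_specificity(gold, coded))
```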

  9. Application of CFX-10 to the Investigation of RPV Coolant Mixing in VVER Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moretti, Fabio; Melideo, Daniele; Terzuoli, Fulvio

    2006-07-01

    Coolant mixing phenomena occurring in the pressure vessel of a nuclear reactor constitute one of the main objectives of investigation by researchers concerned with nuclear reactor safety. For instance, mixing plays a relevant role in reactivity-induced accidents initiated by de-boration or boron dilution events, followed by transport of a de-borated slug into the vessel of a pressurized water reactor. Another example is constituted by temperature mixing, which may significantly affect the consequences of a pressurized thermal shock scenario. Predictive analysis of mixing phenomena is strongly improved by the availability of computational tools able to cope with the inherent three-dimensionality of such problems, like system codes with three-dimensional capabilities and Computational Fluid Dynamics (CFD) codes. The present paper deals with numerical analyses of coolant mixing in the reactor pressure vessel of a VVER-1000 reactor, performed with the ANSYS CFX-10 CFD code. In particular, the 'swirl' effect that has been observed to take place in the downcomer of this kind of reactor has been addressed, with the aim of assessing the capability of the code to predict that effect and of understanding the reasons for its occurrence. Results have been compared against experimental data from the V1000CT-2 Benchmark. Moreover, a boron mixing problem has been investigated, under the hypothesis that a de-borated slug, transported by natural circulation, enters the vessel. Sensitivity analyses have been conducted on some geometrical features, model parameters, and boundary conditions. (authors)

  10. An innovative expression model of human health risk based on the quantitative analysis of soil metals sources contribution in different spatial scales.

    PubMed

    Zhang, Yimei; Li, Shuai; Wang, Fei; Chen, Zhuang; Chen, Jie; Wang, Liqun

    2018-09-01

    Toxicity of heavy metals from industrialization poses a critical concern, and analysis of sources associated with potential human health risks is of unique significance. Assessing the human health risk of pollution sources (factored health risk) concurrently in the whole region and in sub-regions can provide more instructive information to protect specific potential victims. In this research, we establish a new expression model of human health risk based on quantitative analysis of source contributions at different spatial scales. The larger-scale grids and their spatial codes are used to initially identify the level of pollution risk, the type of pollution source, and the sensitive population at high risk. The smaller-scale grids and their spatial codes are used to identify the contribution of various sources of pollution to each sub-region (larger grid) and to assess the health risks posed by each source for each sub-region. The results of the case study show that, for children (a sensitive population, with school and residential areas as the major regions of activity), the major pollution sources are the abandoned lead-acid battery plant (ALP), traffic emission, and agricultural activity. The new models and results of this research provide effective spatial information and a useful model for quantifying the hazards of source categories to human health at complex industrial systems in the future. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. The sensitivity and specificity of frozen-section histopathology in the management of benign oral and maxillofacial lesions.

    PubMed

    Aronovich, Sharon; Kim, Roderick Y

    2014-05-01

    The management of odontogenic cysts and tumors typically requires a biopsy, which may present significant challenges and prompt an additional visit to the operating room before definitive treatment. The aim of this study was to determine the validity of frozen-section diagnosis in the management of benign oral and maxillofacial lesions, allowing intraoperative diagnosis followed by definitive treatment under the same general anesthetic. A retrospective chart review of patients treated at the University of Michigan Health System was performed. Patients of all ages who had a diagnosis of a benign maxillofacial lesion by frozen-section and permanent histopathology reports were included for analysis. Patients were identified using the Current Procedural Terminology code for enucleation and curettage and International Classification of Diseases, Ninth Revision codes for benign cysts or tumors of skull, face, or lower jaw. Of 450 patients reviewed, 214 had intraoperative frozen-section examination available for comparison with permanent histopathology. There were 121 men (56.5%) and 93 women (43.5%), with a mean age of 41 years. Compared with final permanent histopathology, the overall sensitivity of frozen sections was 92.1%. Frozen-section histopathology had a sensitivity greater than 90% and a specificity greater than 95% for the diagnosis of dentigerous cyst and keratocystic odontogenic tumor. In this study of 214 patients with benign maxillofacial lesions, frozen-section histopathology was found to be a valid diagnostic modality with high sensitivity, specificity, and positive and negative predictive values. These results and analysis support the use of frozen-section histopathology for the treatment of benign maxillofacial lesions and underscore its value in the management of these lesions. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  12. LSENS - GENERAL CHEMICAL KINETICS AND SENSITIVITY ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.

    1994-01-01

    LSENS has been developed for solving complex, homogeneous, gas-phase, chemical kinetics problems. The motivation for the development of this program is the continuing interest in developing detailed chemical reaction mechanisms for complex reactions such as the combustion of fuels and pollutant formation and destruction. A reaction mechanism is the set of all elementary chemical reactions that are required to describe the process of interest. Mathematical descriptions of chemical kinetics problems constitute sets of coupled, nonlinear, first-order ordinary differential equations (ODEs). The number of ODEs can be very large because of the numerous chemical species involved in the reaction mechanism. Further complicating the situation are the many simultaneous reactions needed to describe the chemical kinetics of practical fuels. For example, the mechanism describing the oxidation of the simplest hydrocarbon fuel, methane, involves over 25 species participating in nearly 100 elementary reaction steps. Validating a chemical reaction mechanism requires repetitive solutions of the governing ODEs for a variety of reaction conditions. Analytical solutions to the systems of ODEs describing chemistry are not possible, except for the simplest cases, which are of little or no practical value. Consequently, there is a need for fast and reliable numerical solution techniques for chemical kinetics problems. In addition to solving the ODEs describing chemical kinetics, it is often necessary to know what effects variations in either initial condition values or chemical reaction mechanism parameters have on the solution. Such a need arises in the development of reaction mechanisms from experimental data. The rate coefficients are often not known with great precision and, in general, the experimental data are not sufficiently detailed to accurately estimate the rate coefficient parameters. The development of a reaction mechanism is facilitated by a systematic sensitivity analysis which provides the relationships between the predictions of a kinetics model and the input parameters of the problem. LSENS provides for efficient and accurate chemical kinetics computations and includes sensitivity analysis for a variety of problems, including nonisothermal conditions. LSENS replaces the previous NASA general chemical kinetics codes GCKP and GCKP84. LSENS is designed for flexibility, convenience and computational efficiency. A variety of chemical reaction models can be considered. The models include static system, steady one-dimensional inviscid flow, reaction behind an incident shock wave including boundary layer correction, and the perfectly stirred (highly backmixed) reactor. In addition, computations of equilibrium properties can be performed for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static problems, LSENS computes sensitivity coefficients with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of each chemical reaction. To integrate the ODEs describing chemical kinetics problems, LSENS uses the packaged code LSODE, the Livermore Solver for Ordinary Differential Equations, because it has been shown to be the most efficient and accurate code for solving such problems. The sensitivity analysis computations use the decoupled direct method, as implemented by Dunker and modified by Radhakrishnan. 
This method has shown greater efficiency and stability with equal or better accuracy than other methods of sensitivity analysis. LSENS is written in FORTRAN 77 with the exception of the NAMELIST extensions used for input. While this makes the code fairly machine independent, execution times on IBM PC compatibles would be unacceptable to most users. LSENS has been successfully implemented on a Sun4 running SunOS and a DEC VAX running VMS. With minor modifications, it should also be easily implemented on other platforms with FORTRAN compilers which support NAMELIST input. LSENS required 4Mb of RAM under SunOS 4.1.1 and 3.4Mb of RAM under VMS 5.5.1. The standard distribution medium for LSENS is a .25 inch streaming magnetic tape cartridge (QIC-24) in UNIX tar format. It is also available on a 1600 BPI 9-track magnetic tape or a TK50 tape cartridge in DEC VAX BACKUP format. Alternate distribution media and formats are available upon request. LSENS was developed in 1992.
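
    The direct-method idea of integrating sensitivity coefficients alongside the kinetics ODEs can be sketched for a single reaction. This is a generic illustration under stated assumptions, not LSENS or LSODE themselves: a first-order decay with a hypothetical rate constant, where the state is augmented with the sensitivity dy/dk and both are integrated with SciPy.

```python
import numpy as np
from scipy.integrate import solve_ivp

# First-order decay A -> B with rate constant k:
#   dy/dt = -k*y,  y(0) = 1
# The direct-method sensitivity s = dy/dk obeys
#   ds/dt = -k*s - y,  s(0) = 0
def rhs(t, z, k):
    y, s = z
    return [-k * y, -k * s - y]

k = 2.0
sol = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], args=(k,), rtol=1e-8, atol=1e-10)
y_end, s_end = sol.y[:, -1]
print(y_end, s_end)                        # numerical solution and sensitivity at t = 2
print(np.exp(-2 * k), -2 * np.exp(-2 * k))  # analytic y and dy/dk for comparison
```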

  13. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
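
    A loosely coupled sampling study of the kind Dakota automates can be sketched in a few lines: draw a space-filling design, run a black-box simulator at each point, and summarize the outputs. The simulator function and bounds below are hypothetical placeholders and do not use Dakota's interface.

```python
import numpy as np
from scipy.stats import qmc

def simulator(x):
    """Stand-in for an external simulation code (hypothetical)."""
    return np.sin(x[0]) + 0.5 * x[1] ** 2

# Latin hypercube sample of 2 inputs on [0, 1] x [-1, 1]
sampler = qmc.LatinHypercube(d=2, seed=1)
unit = sampler.random(n=64)
samples = qmc.scale(unit, l_bounds=[0.0, -1.0], u_bounds=[1.0, 1.0])

outputs = np.array([simulator(x) for x in samples])
print("mean:", outputs.mean(), "std:", outputs.std(ddof=1))
```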

  14. Anisotropic constitutive model for nickel base single crystal alloys: Development and finite element implementation

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Stouffer, D. C.

    1986-01-01

    A tool for the mechanical analysis of nickel base single crystal superalloys, specifically Rene N4, used in gas turbine engine components is developed. This is achieved by a rate-dependent anisotropic constitutive model implemented in a nonlinear three-dimensional finite element code. The constitutive model is developed from metallurgical concepts utilizing a crystallographic approach. A non-Schmid's law formulation is used to model the tension/compression asymmetry and orientation dependence in octahedral slip. Schmid's law is a good approximation to the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response, and strain rate sensitivity of these alloys. Methods for deriving the material constants from standard tests are presented. The finite element implementation utilizes an initial strain method and twenty-noded isoparametric solid elements. The ability to model piecewise linear load histories is included in the finite element code. The constitutive equations are accurately and economically integrated using a second-order Adams-Moulton predictor-corrector method with a dynamic time incrementing procedure. Computed results from the finite element code are compared with experimental data for tensile, creep and cyclic tests at 760 deg C. The strain rate sensitivity and stress relaxation capabilities of the model are evaluated.
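
    The second-order Adams-Moulton predictor-corrector integration mentioned above can be sketched for a scalar ODE. This is a generic fixed-step illustration, not the constitutive code's actual integrator (which also adapts the time increment):

```python
import numpy as np

def abm2(f, y0, t0, t1, n_steps):
    """Second-order Adams-Bashforth predictor / Adams-Moulton (trapezoidal)
    corrector with a fixed step; dynamic step control is omitted here."""
    h = (t1 - t0) / n_steps
    t, y = t0, float(y0)
    f_old = f(t, y)
    # bootstrap the two-step method with a single Heun (RK2) step
    y = y + 0.5 * h * (f_old + f(t + h, y + h * f_old))
    t += h
    for _ in range(n_steps - 1):
        f_now = f(t, y)
        y_pred = y + 0.5 * h * (3.0 * f_now - f_old)   # AB2 predictor
        y = y + 0.5 * h * (f_now + f(t + h, y_pred))   # AM2 corrector
        f_old = f_now
        t += h
    return y

# dy/dt = -y with y(0) = 1; compare against the exact value exp(-1) at t = 1
print(abm2(lambda t, y: -y, 1.0, 0.0, 1.0, 100), np.exp(-1.0))
```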

  15. Sensitivity Analysis to Turbulent Combustion Models for Combustor-Turbine Interactions

    NASA Astrophysics Data System (ADS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2017-11-01

    The recently updated Open National Combustion Code (Open NCC), equipped with large-eddy simulation (LES), is applied to model the flow field inside the Energy Efficient Engine (EEE) in conjunction with a sensitivity analysis to turbulent combustion models. In this study, we consider three different turbulence-combustion interaction models, the Eddy-Breakup model (EBU), the Linear-Eddy Model (LEM) and the Probability Density Function (PDF) model, as well as the laminar chemistry model. A comprehensive comparison of the flow field and the flame structure will be provided. One of our main interests is to understand how each model predicts thermal variation on the surface of the first stage vane. Considering that these models are often used in the combustor/turbine communities, this study should provide some guidelines on numerical modeling of combustor-turbine interactions.

  16. Chemical derivatization for enhancing sensitivity during LC/ESI-MS/MS quantification of steroids in biological samples: a review.

    PubMed

    Higashi, Tatsuya; Ogawa, Shoujiro

    2016-09-01

    Sensitive and specific methods for the detection, characterization and quantification of endogenous steroids in body fluids or tissues are necessary for the diagnosis, pathological analysis and treatment of many diseases. Recently, liquid chromatography/electrospray ionization-tandem mass spectrometry (LC/ESI-MS/MS) has been widely used for these purposes due to its specificity and versatility. However, the ESI efficiency and fragmentation behavior of some steroids are poor, which leads to low sensitivity. Chemical derivatization is one of the most effective methods to improve the detection characteristics of steroids in ESI-MS/MS. Against this background, this article reviews the recent advances in chemical derivatization for the trace quantification of steroids in biological samples by LC/ESI-MS/MS. The derivatization in ESI-MS/MS is based on tagging a proton-affinitive or permanently charged moiety on the target steroid. Introduction/formation of a fragmentable moiety suitable for selected reaction monitoring by the derivatization also enhances the sensitivity. The stable isotope-coded derivatization procedures for steroid analysis are also described. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Documentation for a Structural Optimization Procedure Developed Using the Engineering Analysis Language (EAL)

    NASA Technical Reports Server (NTRS)

    Martin, Carl J., Jr.

    1996-01-01

    This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. There is also an approximate sensitivity update method included which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.
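
    The semi-analytic sensitivity method referred to above differentiates the stiffness matrix by finite differences and then solves for the response derivative analytically. A minimal sketch on a hypothetical two-degree-of-freedom model (not the EAL processors themselves):

```python
import numpy as np

def semi_analytic_sensitivity(assemble_K, F, x, dx=1e-6):
    """Semi-analytic displacement sensitivity du/dx for a linear system
    K(x) u = F: dK/dx by central differences, then solve K du = -(dK/dx) u."""
    K = assemble_K(x)
    u = np.linalg.solve(K, F)
    dK = (assemble_K(x + dx) - assemble_K(x - dx)) / (2.0 * dx)
    du = np.linalg.solve(K, -dK @ u)
    return u, du

# Hypothetical 2-DOF spring model: stiffness scales linearly with the sizing
# variable x, so u ~ 1/x and the exact sensitivity is du/dx = -u/x.
def assemble_K(x):
    return x * np.array([[2.0, -1.0], [-1.0, 2.0]])

F = np.array([1.0, 0.0])
u, du = semi_analytic_sensitivity(assemble_K, F, x=3.0)
print(du, -u / 3.0)  # the two columns should agree
```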

  18. Can color-coded parametric maps improve dynamic enhancement pattern analysis in MR mammography?

    PubMed

    Baltzer, P A; Dietzel, M; Vag, T; Beger, S; Freiberg, C; Herzog, A B; Gajda, M; Camara, O; Kaiser, W A

    2010-03-01

    Post-contrast enhancement characteristics (PEC) are a major criterion for differential diagnosis in MR mammography (MRM). Manual placement of regions of interest (ROIs) to obtain time/signal intensity curves (TSIC) is the standard approach to assess dynamic enhancement data. Computers can automatically calculate the TSIC in every lesion voxel and combine these data to form one color-coded parametric map (CCPM). Thus, the TSIC of the whole lesion can be assessed. This investigation was conducted to compare the diagnostic accuracy (DA) of CCPM with TSIC for the assessment of PEC. 329 consecutive patients with 469 histologically verified lesions were examined. MRM was performed according to a standard protocol (1.5 T, 0.1 mmol/kg bw Gd-DTPA). ROIs were drawn manually within any lesion to calculate the TSIC. CCPMs were created in all patients using dedicated software (CAD Sciences). Both methods were rated by 2 observers in consensus on an ordinal scale. Receiver operating characteristic (ROC) analysis was used to compare both methods. The area under the curve (AUC) was significantly (p=0.026) higher for CCPM (0.829) than TSIC (0.749). The sensitivity was 88.5% (CCPM) vs. 82.8% (TSIC), whereas equal specificity levels were found (CCPM: 63.7%, TSIC: 63.0%). The color-coded parametric maps (CCPMs) showed a significantly higher DA compared to TSIC; in particular, the sensitivity could be increased. Therefore, the CCPM method is a feasible approach to assessing dynamic data in MRM and condenses several imaging series into one parametric map. © Georg Thieme Verlag KG Stuttgart · New York.
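
    The ROC comparison of the two reading methods amounts to computing an AUC from ordinal ratings against the histological reference. A toy sketch with invented ratings (not the study data), using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical ordinal readings (1-5) for the same lesions under two methods
truth = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])   # 1 = malignant on histology
tsic  = np.array([4, 2, 3, 5, 1, 3, 4, 2, 2, 1])    # ROI-based curve rating
ccpm  = np.array([5, 1, 4, 5, 2, 2, 4, 1, 3, 1])    # parametric-map rating

print("AUC TSIC:", roc_auc_score(truth, tsic))
print("AUC CCPM:", roc_auc_score(truth, ccpm))
```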

  19. End-to-end imaging information rate advantages of various alternative communication systems

    NASA Technical Reports Server (NTRS)

    Rice, R. F.

    1982-01-01

    The efficiency of various deep space communication systems which are required to transmit both imaging and a typically error sensitive class of data called general science and engineering (gse) are compared. The approach jointly treats the imaging and gse transmission problems, allowing comparisons of systems which include various channel coding and data compression alternatives. Actual system comparisons include an advanced imaging communication system (AICS) which exhibits the rather significant advantages of sophisticated data compression coupled with powerful yet practical channel coding. For example, under certain conditions the improved AICS efficiency could provide as much as two orders of magnitude increase in imaging information rate compared to a single channel uncoded, uncompressed system while maintaining the same gse data rate in both systems. Additional details describing AICS compression and coding concepts as well as efforts to apply them are provided in support of the system analysis.

  20. 76 FR 36342 - 2-methyl-2,4-pentanediol; Exemption from the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-22

    ... skin sensitizer in guinea pigs. It has low inhalation toxicity, with an LC 50 of 160 parts per million...: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code...

  1. Enabling Handicapped Nonreaders to Independently Obtain Information: Initial Development of an Inexpensive Bar Code Reader System.

    ERIC Educational Resources Information Center

    VanBiervliet, Alan

    A project to develop and evaluate a bar code reader system as a self-directed information and instructional aid for handicapped nonreaders is described. The bar code technology involves passing a light sensitive pen or laser over a printed code with bars which correspond to coded numbers. A system would consist of a portable device which could…

  2. User's Guide for RESRAD-OFFSITE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gnanapragasam, E.; Yu, C.

    2015-04-01

    The RESRAD-OFFSITE code can be used to model the radiological dose or risk to an offsite receptor. This User’s Guide for RESRAD-OFFSITE Version 3.1 is an update of the User’s Guide for RESRAD-OFFSITE Version 2 contained in the Appendix A of the User’s Manual for RESRAD-OFFSITE Version 2 (ANL/EVS/TM/07-1, DOE/HS-0005, NUREG/CR-6937). This user’s guide presents the basic information necessary to use Version 3.1 of the code. It also points to the help file and other documents that provide more detailed information about the inputs, the input forms and features/tools in the code; two of the features (overriding the source term and computing area factors) are discussed in the appendices to this guide. Section 2 describes how to download and install the code and then verify the installation of the code. Section 3 shows ways to navigate through the input screens to simulate various exposure scenarios and to view the results in graphics and text reports. Section 4 has screen shots of each input form in the code and provides basic information about each parameter to increase the user’s understanding of the code. Section 5 outlines the contents of all the text reports and the graphical output. It also describes the commands in the two output viewers. Section 6 deals with the probabilistic and sensitivity analysis tools available in the code. Section 7 details the various ways of obtaining help in the code.

  3. Cognitive Sensitivity in Sibling Interactions: Development of the Construct and Comparison of Two Coding Methodologies

    ERIC Educational Resources Information Center

    Prime, Heather; Perlman, Michal; Tackett, Jennifer L.; Jenkins, Jennifer M.

    2014-01-01

    Research Findings: The goal of this study was to develop a construct of sibling cognitive sensitivity, which describes the extent to which children take their siblings' knowledge and cognitive abilities into account when working toward a joint goal. In addition, the study compared 2 coding methodologies for measuring the construct: a thin…

  4. Identifying Vasopressor and Inotrope Use for Health Services Research

    PubMed Central

    Fawzy, Ashraf; Bradford, Mark; Lindenauer, Peter K.

    2016-01-01

    Rationale: Identifying vasopressor and inotrope (vasopressor) use from administrative claims data may provide an important resource to study the epidemiology of shock. Objectives: Determine accuracy of identifying vasopressor use using International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) coding. Methods: Using administrative data enriched with pharmacy billing files (Premier, Inc., Charlotte, NC), we identified two cohorts: adult patients admitted with a diagnosis of sepsis from 2010 to 2013 or pulmonary embolism (PE) from 2008 to 2011. Vasopressor administration was obtained using pharmacy billing files (dopamine, dobutamine, epinephrine, milrinone, norepinephrine, phenylephrine, vasopressin) and compared with ICD-9-CM procedure code for vasopressor administration (00.17). We estimated performance characteristics of the ICD-9-CM code and compared patients’ characteristics and mortality rates according to vasopressor identification method. Measurements and Main Results: Using either pharmacy data or the ICD-9-CM procedure code, 29% of 541,144 patients in the sepsis cohort and 5% of 81,588 patients in the PE cohort were identified as receiving a vasopressor. In the sepsis cohort, the ICD-9-CM procedure code had low sensitivity (9.4%; 95% confidence interval, 9.2–9.5), which increased over time. Results were similar in the PE cohort (sensitivity, 5.8%; 95% confidence interval, 5.1–6.6). The ICD-9-CM code exhibited high specificity in the sepsis (99.8%) and PE (100%) cohorts. However, patients identified as receiving vasopressors by ICD-9-CM code had significantly higher unadjusted in-hospital mortality, had more acute organ failures, and were more likely hospitalized in the Northeast and West. Conclusions: The ICD-9-CM procedure code for vasopressor administration has low sensitivity and selects for higher severity of illness in studies of shock. Temporal changes in sensitivity would likely make longitudinal shock surveillance using ICD-9-CM inaccurate. PMID:26653145

  5. General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets

    NASA Technical Reports Server (NTRS)

    Marchen, Luis F.

    2011-01-01

    The Coronagraph Performance Error Budget (CPEB) tool automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. The tool uses a Code V prescription of the optical train, and uses MATLAB programs to call ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled fine-steering mirrors (FSMs). The sensitivity matrices are imported by macros into Excel 2007, where the error budget is evaluated. The user specifies the particular optics of interest, and chooses the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions, and combines that with the sensitivity matrices to generate an error budget for the system. CPEB also contains a combination of form and ActiveX controls with Visual Basic for Applications code to allow for user interaction in which the user can perform trade studies such as changing engineering requirements, and identifying and isolating stringent requirements. It contains summary tables and graphics that can be instantly used for reporting results in view graphs. The entire process to obtain a coronagraphic telescope performance error budget has been automated into three stages: conversion of optical prescription from Zemax or Code V to MACOS (in-house optical modeling and analysis tool), a linear models process, and an error budget tool process. The first process was improved by developing a MATLAB package based on the Class Constructor Method with a number of user-defined functions that allow the user to modify the MACOS optical prescription. The second process was modified by creating a MATLAB package that contains user-defined functions that automate the process. The user interfaces with the process by utilizing an initialization file where the user defines the parameters of the linear model computations. Other than this, the process is fully automated. The third process was developed based on the Terrestrial Planet Finder coronagraph Error Budget Tool, but was fully automated by using VBA code, form, and ActiveX controls.

  6. Probabilistic Analysis of Large-Scale Composite Structures Using the IPACS Code

    NASA Technical Reports Server (NTRS)

    Lemonds, Jeffrey; Kumar, Virendra

    1995-01-01

    An investigation was performed to ascertain the feasibility of using IPACS (Integrated Probabilistic Assessment of Composite Structures) for probabilistic analysis of a composite fan blade, the development of which is being pursued by various industries for the next generation of aircraft engines. A model representative of the class of fan blades used in the GE90 engine has been chosen as the structural component to be analyzed with IPACS. In this study, typical uncertainties are assumed at the ply level, and structural responses for ply stresses and frequencies are evaluated in the form of cumulative probability density functions. Because of the geometric complexity of the blade, the number of plies varies from several hundred at the root to about a hundred at the tip. This represents an extremely complex composites application for the IPACS code. A sensitivity study with respect to various random variables is also performed.

  7. Design of self-coded combinatorial libraries to facilitate direct analysis of ligands by mass spectrometry.

    PubMed

    Hughes, I

    1998-09-24

    The direct analysis of selected components from combinatorial libraries by sensitive methods such as mass spectrometry is potentially more efficient than deconvolution and tagging strategies since additional steps of resynthesis or introduction of molecular tags are avoided. A substituent selection procedure is described that eliminates the mass degeneracy commonly observed in libraries prepared by "split-and-mix" methods, without recourse to high-resolution mass measurements. A set of simple rules guides the choice of substituents such that all components of the library have unique nominal masses. Additional rules extend the scope by ensuring that characteristic isotopic mass patterns distinguish isobaric components. The method is applicable to libraries having from two to four varying substituent groups and can encode from a few hundred to several thousand components. No restrictions are imposed on the manner in which the "self-coded" library is synthesized or screened.
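
    The core check behind a "self-coded" library is that every combination of substituents yields a unique nominal mass. A small sketch with hypothetical substituent masses, not the rule set described in the paper:

```python
from itertools import product
from collections import Counter

def mass_degenerate(substituent_sets, scaffold_mass=0):
    """Return nominal masses shared by more than one library member.
    substituent_sets: one list of integer nominal masses per variable position
    (the values used below are illustrative only)."""
    totals = Counter(scaffold_mass + sum(combo)
                     for combo in product(*substituent_sets))
    return {m: n for m, n in totals.items() if n > 1}

# Two variable positions; the second set is spaced more widely than the span
# of the first so that no two combinations share a nominal mass.
r1 = [15, 29, 43, 57]
r2 = [77, 127, 177, 227]
print(mass_degenerate([r1, r2], scaffold_mass=100))  # {} means no degeneracy
```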

  8. A universal Model-R Coupler to facilitate the use of R functions for model calibration and analysis

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Yan, Wende

    2014-01-01

    Mathematical models are useful in various fields of science and engineering. However, it is a challenge to make a model utilize the open and growing set of functions (e.g., for model inversion) available on the R platform, due to the requirement of accessing and revising the model's source code. To overcome this barrier, we developed a universal tool that aims to convert a model developed in any computer language to an R function using the template and instruction concept of the Parameter ESTimation program (PEST) and the operational structure of the R-Soil and Water Assessment Tool (R-SWAT). The developed tool (Model-R Coupler) is promising because users of any model can connect an external algorithm (written in R) with their model to implement various model behavior analyses (e.g., parameter optimization, sensitivity and uncertainty analysis, performance evaluation, and visualization) without accessing or modifying the model's source code.

  9. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline-level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of quantities such as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivative compared to the quantity itself.
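
    The global Chebyshev fit of the pressure field described above can be illustrated in one dimension with NumPy's Chebyshev utilities; the chordwise pressure distribution below is a made-up stand-in for CFD output:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical chordwise pressure-coefficient samples at panel centers
x = np.linspace(-1.0, 1.0, 40)                        # normalized chord
cp = -1.2 * np.exp(-8.0 * (x + 0.6) ** 2) + 0.3 * x   # stand-in for CFD data

coeffs = C.chebfit(x, cp, deg=12)    # global Chebyshev least-squares fit
cp_fit = C.chebval(x, coeffs)        # evaluate the fit at the sample points
print("max fit error:", np.abs(cp_fit - cp).max())
```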

  10. 75 FR 19261 - Alkyl (C12-C16) Dimethyl Ammonio Acetate; Exemption From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-04-14

    ..., it gave a negative response for skin sensitization in in vivo guinea pigs as determined by Magnusson.... Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532...

  11. 75 FR 42318 - Poly(oxy-1,2-ethanediyl), α-isotridecyl-ω-methoxy; Exemption from the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-21

    ....2600) in guinea pigs showed skin sensitization when exposed to poly(oxy-1,2-ethanediyl), [alpha.... Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532...

  12. SPS market analysis

    NASA Astrophysics Data System (ADS)

    Goff, H. C.

    1980-05-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  13. The Initial Atmospheric Transport (IAT) Code: Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morrow, Charles W.; Bartel, Timothy James

    The Initial Atmospheric Transport (IAT) computer code was developed at Sandia National Laboratories as part of their nuclear launch accident consequences analysis suite of computer codes. The purpose of IAT is to predict the initial puff/plume rise resulting from either a solid rocket propellant or liquid rocket fuel fire. The code generates initial conditions for subsequent atmospheric transport calculations. The Initial Atmospheric Transport (IAT) code has been compared to two data sets which are appropriate to the design space of space launch accident analyses. The primary model uncertainties are the entrainment coefficients for the extended Taylor model. The Titan 34D accident (1986) was used to calibrate these entrainment settings for a prototypic liquid propellant accident, while the recent Johns Hopkins University Applied Physics Laboratory (JHU/APL, or simply APL) large propellant block tests (2012) were used to calibrate the entrainment settings for prototypic solid propellant accidents. North American Mesoscale (NAM)-formatted weather data profiles are used by IAT to determine the local buoyancy force balance. The IAT comparisons for the APL solid propellant tests illustrate the sensitivity of the plume elevation to the weather profiles; that is, the weather profile is a dominant factor in determining the plume elevation. The IAT code performed remarkably well and is considered validated for neutral weather conditions.

  14. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches, based on the computer code only, are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.
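
    A Gaussian-process predictor with a polynomial trend as its mean can be sketched as follows. This is a plain universal-kriging-style illustration (a polynomial trend plus an RBF-kernel GP on the residuals), not the paper's nested composition of two polynomials; all data below are synthetic.

```python
import numpy as np

def rbf(a, b, length=0.3, var=1.0):
    """Squared-exponential covariance between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

# Training data from a stand-in for an expensive code
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 12))
y = np.sin(6 * x) + 0.5 * x ** 2

# Degree-2 polynomial trend used as the GP mean
trend = np.polynomial.Polynomial.fit(x, y, deg=2)
resid = y - trend(x)

# GP interpolation of the residuals, with a small nugget for stability
K = rbf(x, x) + 1e-8 * np.eye(x.size)
alpha = np.linalg.solve(K, resid)

x_new = np.linspace(0, 1, 5)
y_pred = trend(x_new) + rbf(x_new, x) @ alpha
print(np.c_[x_new, y_pred, np.sin(6 * x_new) + 0.5 * x_new ** 2])
```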

  15. Using Administrative Mental Health Indicators in Heart Failure Outcomes Research: Comparison of Clinical Records and International Classification of Disease Coding.

    PubMed

    Bender, Miriam; Smith, Tyler C

    2016-01-01

    Use of mental health indicators in health outcomes research is of growing interest to researchers. This study, as part of a larger research program, quantified agreement between administrative International Classification of Diseases (ICD-9) coding for, and "gold standard" clinician documentation of, mental health issues (MHIs) in hospitalized heart failure (HF) patients to determine the validity of mental health administrative data for use in HF outcomes research. A 13% random sample (n = 504) was selected from all unique patients (n = 3,769) hospitalized with a primary HF diagnosis at 4 San Diego County community hospitals during 2009-2012. MHI was defined as ICD-9 discharge diagnostic coding 290-319. Records were audited for clinician documentation of MHI. A total of 43% (n = 216) had mental health clinician documentation; 33% (n = 164) had ICD-9 coding for MHI. The ICD-9 code bundle 290-319 had 0.70 sensitivity, 0.97 specificity, and a kappa of 0.69 (95% confidence interval 0.61-0.79). More specific ICD-9 MHI code bundles had kappas ranging from 0.44 to 0.82 and sensitivities ranging from 42% to 82%. Agreement between ICD-9 coding and clinician documentation for a broadly defined MHI is substantial, and can validly "rule in" MHI for hospitalized patients with heart failure. More specific MHI code bundles had fair to almost perfect agreement, with a wide range of sensitivities for identifying patients with an MHI. Copyright © 2016 Elsevier Inc. All rights reserved.
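
    The agreement statistic reported above, Cohen's kappa for two binary classifications, is straightforward to compute. A sketch on toy data (not the study sample):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (e.g. ICD-9 MHI coding vs.
    clinician chart documentation); a and b are equal-length lists of 0/1."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    pa1, pb1 = sum(a) / n, sum(b) / n
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)                   # chance agreement
    return (po - pe) / (1 - pe)

# Toy data only
chart = [1, 1, 0, 0, 1, 0, 1, 0, 0, 1]
icd   = [1, 0, 0, 0, 1, 0, 1, 0, 1, 1]
print(round(cohens_kappa(chart, icd), 3))
```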

  16. Uncertainty analysis on reactivity and discharged inventory for a pressurized water reactor fuel assembly due to ²³⁵,²³⁸U nuclear data uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Da Cruz, D. F.; Rochman, D.; Koning, A. J.

    2012-07-01

    This paper discusses the uncertainty analysis on reactivity and inventory for a typical PWR fuel element as a result of uncertainties in ²³⁵,²³⁸U nuclear data. A typical Westinghouse 3-loop fuel assembly fuelled with UO₂ fuel with 4.8% enrichment has been selected. The Total Monte-Carlo method has been applied using the deterministic transport code DRAGON. This code allows the generation of the few-groups nuclear data libraries by directly using data contained in the nuclear data evaluation files. The nuclear data used in this study is from the JEFF3.1 evaluation, and the nuclear data files for ²³⁸U and ²³⁵U (randomized for the generation of the various DRAGON libraries) are taken from the nuclear data library TENDL. The total uncertainty (obtained by randomizing all ²³⁸U and ²³⁵U nuclear data in the ENDF files) on the reactor parameters has been split into different components (different nuclear reaction channels). Results show that the TMC method in combination with a deterministic transport code constitutes a powerful tool for performing uncertainty and sensitivity analysis of reactor physics parameters. (authors)

  17. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples including adsorption, desorption, and diffusion Kinetic Monte Carlo that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
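
    The variance-reduction idea behind coupled finite-difference sensitivity estimators can be illustrated outside the KMC setting: using common random numbers for the perturbed and unperturbed simulations correlates them and shrinks the estimator variance. The toy stochastic model below is hypothetical and is not the paper's goal-oriented coupling of continuous-time Markov chains.

```python
import numpy as np

def observable(theta, rng):
    """Toy stochastic model: sample mean of exponential draws with rate theta."""
    return rng.exponential(1.0 / theta, size=200).mean()

def fd_sensitivity(theta, eps, n_rep, coupled):
    """Finite-difference estimate of d E[observable]/d theta, using either
    independent streams or common random numbers ('coupled')."""
    est = []
    for i in range(n_rep):
        if coupled:
            r1 = np.random.default_rng(i)
            r2 = np.random.default_rng(i)        # same stream -> correlated runs
        else:
            r1 = np.random.default_rng(2 * i)
            r2 = np.random.default_rng(2 * i + 1)
        est.append((observable(theta + eps, r1) - observable(theta, r2)) / eps)
    est = np.array(est)
    return est.mean(), est.var(ddof=1)

# Exact derivative of E[observable] = 1/theta is -1/theta**2 = -0.25 at theta = 2
for coupled in (False, True):
    m, v = fd_sensitivity(theta=2.0, eps=0.05, n_rep=500, coupled=coupled)
    print("coupled" if coupled else "independent", "mean:", round(m, 3), "var:", round(v, 5))
```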

  18. A numerical similarity approach for using retired Current Procedural Terminology (CPT) codes for electronic phenotyping in the Scalable Collaborative Infrastructure for a Learning Health System (SCILHS).

    PubMed

    Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N

    2015-12-11

    Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically year to year: codes are retired and replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision using a gold standard of optimal placement ("optimality precision"). High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and the successful grouping of retired with non-retired codes.
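
    A simplified version of placing a retired code into the most specific numeric "grouper" can be sketched as below; the grouper labels and ranges are illustrative, not the BioPortal CPT hierarchy, and the published method also included manual review of placements.

```python
def place_retired_code(code, groupers):
    """Assign a retired CPT code to the most specific grouper whose numeric
    range contains it, falling back to the numerically nearest range.
    `groupers` maps a label to an inclusive (low, high) range of code numbers."""
    value = int(code)
    containing = [(hi - lo, label) for label, (lo, hi) in groupers.items()
                  if lo <= value <= hi]
    if containing:
        return min(containing)[1]           # narrowest containing range wins
    return min(groupers, key=lambda g: min(abs(value - groupers[g][0]),
                                           abs(value - groupers[g][1])))

# Hypothetical grouper ranges for illustration only
groupers = {
    "Surgery / musculoskeletal": (20000, 29999),
    "Surgery / spine-specific":  (22010, 22899),
    "Radiology":                 (70000, 79999),
}
print(place_retired_code("22305", groupers))   # -> "Surgery / spine-specific"
```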

  19. Risk Factors for Hearing Decrement Among U.S. Air Force Aviation-Related Personnel.

    PubMed

    Greenwell, Brandon M; Tvaryanas, Anthony P; Maupin, Genny M

    2018-02-01

    The purpose of this study was to analyze historical hearing sensitivity data to determine factors associated with an occupationally significant change in hearing sensitivity in U.S. Air Force aviation-related personnel. This study was a longitudinal, retrospective cohort analysis of audiogram records for Air Force aviation-related personnel on active duty during calendar year 2013 without a diagnosis of non-noise-related hearing loss. The outcomes of interest were raw change in hearing sensitivity from initial baseline to 2013 audiogram and initial occurrence of a significant threshold shift (STS) and non-H1 audiogram profile. Potential predictor variables included age and elapsed time in cohort for each audiogram, gender, and Air Force Specialty Code. Random forest analyses conducted on a learning sample were used to identify relevant predictor variables. Mixed effects models were fitted to a separate validation sample to make statistical inferences. The final dataset included 167,253 nonbaseline audiograms on 10,567 participants. Only the interaction between time since baseline audiogram and age was significantly associated with raw change in hearing sensitivity by STS metric. None of the potential predictors were associated with the likelihood for an STS. Time since baseline audiogram, age, and their interaction were significantly associated with the likelihood for a non-H1 hearing profile. In this study population, age and elapsed time since baseline audiogram were modestly associated with decreased hearing sensitivity and increased likelihood for a non-H1 hearing profile. Aircraft type, as determined from Air Force Specialty Code, was not associated with changes in hearing sensitivity by STS metric.Greenwell BM, Tvaryanas AP, Maupin GM. Risk factors for hearing decrement among U.S. Air Force aviation-related personnel. Aerosp Med Hum Perform. 2018; 89(2):80-86.

  20. Analytical modeling of intumescent coating thermal protection system in a JP-5 fuel fire environment

    NASA Technical Reports Server (NTRS)

    Clark, K. J.; Shimizu, A. B.; Suchsland, K. E.; Moyer, C. B.

    1974-01-01

    The thermochemical response of Coating 313 when exposed to a fuel fire environment was studied to provide a tool for predicting the reaction time. The existing Aerotherm Charring Material Thermal Response and Ablation (CMA) computer program was modified to treat swelling materials. The modified code is now designated Aerotherm Transient Response of Intumescing Materials (TRIM) code. In addition, thermophysical property data for Coating 313 were analyzed and reduced for use in the TRIM code. An input data sensitivity study was performed, and performance tests of Coating 313/steel substrate models were carried out. The end product is a reliable computational model, the TRIM code, which was thoroughly validated for Coating 313. The tasks reported include: generation of input data, development of swell model and implementation in TRIM code, sensitivity study, acquisition of experimental data, comparisons of predictions with data, and predictions with intermediate insulation.

  1. Testing Photoionization Calculations Using Chandra X-ray Spectra

    NASA Technical Reports Server (NTRS)

    Kallman, Tim

    2008-01-01

    A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in analysis of astrophysical spectra. But in many situations of interest, the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be nonlinear and, if so, cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.

  2. Performance Measures of Diagnostic Codes for Detecting Opioid Overdose in the Emergency Department.

    PubMed

    Rowe, Christopher; Vittinghoff, Eric; Santos, Glenn-Milo; Behar, Emily; Turner, Caitlin; Coffin, Phillip O

    2017-04-01

    Opioid overdose mortality has tripled in the United States since 2000 and opioids are responsible for more than half of all drug overdose deaths, which reached an all-time high in 2014. Opioid overdoses resulting in death, however, represent only a small fraction of all opioid overdose events and efforts to improve surveillance of this public health problem should include tracking nonfatal overdose events. International Classification of Disease (ICD) diagnosis codes, increasingly used for the surveillance of nonfatal drug overdose events, have not been rigorously assessed for validity in capturing overdose events. The present study aimed to validate the use of ICD, 9th revision, Clinical Modification (ICD-9-CM) codes in identifying opioid overdose events in the emergency department (ED) by examining multiple performance measures, including sensitivity and specificity. Data on ED visits from January 1, 2012, to December 31, 2014, including clinical determination of whether the visit constituted an opioid overdose event, were abstracted from electronic medical records for patients prescribed long-term opioids for pain from any of six safety net primary care clinics in San Francisco, California. Combinations of ICD-9-CM codes were validated in the detection of overdose events as determined by medical chart review. Both sensitivity and specificity of different combinations of ICD-9-CM codes were calculated. Unadjusted logistic regression models with robust standard errors and accounting for clustering by patient were used to explore whether overdose ED visits with certain characteristics were more or less likely to be assigned an opioid poisoning ICD-9-CM code by the documenting physician. Forty-four (1.4%) of 3,203 ED visits among 804 patients were determined to be opioid overdose events. Opioid-poisoning ICD-9-CM codes (E850.2-E850.2, 965.00-965.09) identified overdose ED visits with a sensitivity of 25.0% (95% confidence interval [CI] = 13.6% to 37.8%) and specificity of 99.9% (95% CI = 99.8% to 100.0%). Expanding the ICD-9-CM codes to include both nonspecified and general (i.e., without a decimal modifier) drug poisoning and drug abuse codes identified overdose ED visits with a sensitivity of 56.8% (95% CI = 43.6%-72.7%) and specificity of 96.2% (95% CI = 94.8%-97.2%). Additional ICD-9-CM codes not explicitly relevant to opioid overdose were necessary to further enhance sensitivity. Among the 44 overdose ED visits, neither naloxone administration during the visit, whether the patient responded to the naloxone, nor the specific opioids involved were associated with the assignment of an opioid poisoning ICD-9-CM code (p ≥ 0.05). Tracking opioid overdose ED visits by diagnostic coding is fairly specific but insensitive, and coding was not influenced by administration of naloxone or the specific opioids involved. The reason for the high rate of missed cases is uncertain, although these results suggest that a more clearly defined case definition for overdose may be necessary to ensure effective opioid overdose surveillance. Changes in coding practices under ICD-10 might help to address these deficiencies. © 2016 by the Society for Academic Emergency Medicine.
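
    For readers unfamiliar with the metrics reported above, the sketch below computes sensitivity and specificity from 2x2 confusion-matrix counts. The individual cell counts are inferred for illustration from the reported totals (44 true overdose visits out of 3,203, 25% sensitivity for the narrow opioid-poisoning codes) and are not taken from the study.

    ```python
    def sensitivity_specificity(tp, fp, fn, tn):
        """Return (sensitivity, specificity) from 2x2 confusion-matrix counts."""
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        return sensitivity, specificity

    # Counts roughly consistent with the abstract; the false-positive count is
    # an assumption chosen to reproduce the reported ~99.9% specificity.
    tp, fn = 11, 33          # coded vs. missed true overdose visits
    fp, tn = 3, 3156         # false alarms vs. correctly uncoded visits (assumed)

    sens, spec = sensitivity_specificity(tp, fp, fn, tn)
    print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
    ```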

  3. (U) Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses Using Ray-Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Favorite, Jeffrey A.

    The Second-Level Adjoint Sensitivity System (2nd-LASS) that yields the second-order sensitivities of a response of uncollided particles with respect to isotope densities, cross sections, and source emission rates is derived in Refs. 1 and 2. In Ref. 2, we solved problems for the uncollided leakage from a homogeneous sphere and a multiregion cylinder using the PARTISN multigroup discrete-ordinates code. In this memo, we derive solutions of the 2nd-LASS for the particular case when the response is a flux or partial current density computed at a single point on the boundary, and the inner products are computed using ray-tracing. Both the PARTISN approach and the ray-tracing approach are implemented in a computer code, SENSPG. The next section of this report presents the equations of the 1st- and 2nd-LASS for uncollided particles and the first- and second-order sensitivities that use the solutions of the 1st- and 2nd-LASS. Section III presents solutions of the 1st- and 2nd-LASS equations for the case of ray-tracing from a detector point. Section IV presents specific solutions of the 2nd-LASS and derives the ray-trace form of the inner products needed for second-order sensitivities. Numerical results for the total leakage from a homogeneous sphere are presented in Sec. V and for the leakage from one side of a two-region slab in Sec. VI. Section VII is a summary and conclusions.
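
    For context, the first- and second-order sensitivities referred to here are, in generic notation (not a derivation from the memo), the partial derivatives of a detector response R with respect to the model parameters α_i:

    ```latex
    S_i \equiv \frac{\partial R}{\partial \alpha_i},
    \qquad
    S_{ij} \equiv \frac{\partial^2 R}{\partial \alpha_i\,\partial \alpha_j},
    \qquad i, j = 1, \dots, N .
    ```

    Adjoint formulations such as the 2nd-LASS obtain these derivatives by solving auxiliary adjoint problems rather than by re-running the forward transport calculation once per parameter, which is what makes second-order studies with many parameters tractable.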

  4. Development of an agility assessment module for preliminary fighter design

    NASA Technical Reports Server (NTRS)

    Ngan, Angelen; Bauer, Brent; Biezad, Daniel; Hahn, Andrew

    1996-01-01

    A FORTRAN computer program is presented to perform agility analysis on fighter aircraft configurations. This code is one of the modules of the NASA Ames ACSYNT (AirCraft SYNThesis) design code. The background of the agility research in the aircraft industry and a survey of a few agility metrics are discussed. The methodology, techniques, and models developed for the code are presented. FORTRAN programs were developed for two specific metrics, CCT (Combat Cycle Time) and PM (Pointing Margin), as part of the agility module. The validity of the code was evaluated by comparison with existing flight test data. Example trade studies using the agility module along with ACSYNT were conducted using Northrop F-20 Tigershark and McDonnell Douglas F/A-18 Hornet aircraft models. The sensitivity of the agility criteria to thrust loading and wing loading was investigated. The module can compare the agility potential between different configurations and has the capability to optimize agility performance in the preliminary design process. This research provides a new and useful design tool for analyzing fighter performance during air combat engagements.

  5. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Technical Reports Server (NTRS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-01-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on user-provided fault trees and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses with an example.

  6. NESTEM-QRAS: A Tool for Estimating Probability of Failure

    NASA Astrophysics Data System (ADS)

    Patel, Bhogilal M.; Nagpal, Vinod K.; Lalli, Vincent A.; Pai, Shantaram; Rusick, Jeffrey J.

    2002-10-01

    An interface between two NASA GRC specialty codes, NESTEM and QRAS, has been developed. This interface enables users to estimate, in advance, the risk of failure of a component, a subsystem, and/or a system under given operating conditions. This capability would provide a needed input for estimating the success rate for any mission. The NESTEM code, under development for the last 15 years at NASA Glenn Research Center, has the capability of estimating the probability of failure of components under varying loading and environmental conditions. This code performs sensitivity analysis of all the input variables and provides their influence on the response variables in the form of cumulative distribution functions. QRAS, also developed by NASA, assesses the risk of failure of a system or a mission based on the quantitative information provided by NESTEM or other similar codes, and on user-provided fault trees and modes of failure. This paper briefly describes the capabilities of NESTEM, QRAS, and the interface, and illustrates the stepwise process the interface uses with an example.

  7. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  8. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/ . teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
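
    As a quick reference for the overlap metrics mentioned in both records above (an illustrative sketch, not code from the region-templates framework), Dice and Jaccard coefficients for two binary segmentation masks can be computed as follows:

    ```python
    import numpy as np

    def dice_jaccard(pred, truth):
        """Dice and Jaccard coefficients for two boolean segmentation masks."""
        pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
        intersection = np.logical_and(pred, truth).sum()
        union = np.logical_or(pred, truth).sum()
        dice = 2.0 * intersection / (pred.sum() + truth.sum())
        jaccard = intersection / union
        return dice, jaccard

    # Tiny illustrative masks (1 = pixel labeled as the tissue region).
    pred  = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    truth = np.array([[1, 1, 0], [0, 1, 1], [0, 0, 0]])
    print("Dice = %.3f, Jaccard = %.3f" % dice_jaccard(pred, truth))
    ```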

  9. A coding single-nucleotide polymorphism in lysine demethylase KDM4A associates with increased sensitivity to mTOR inhibitors.

    PubMed

    Van Rechem, Capucine; Black, Joshua C; Greninger, Patricia; Zhao, Yang; Donado, Carlos; Burrowes, Paul D; Ladd, Brendon; Christiani, David C; Benes, Cyril H; Whetstine, Johnathan R

    2015-03-01

    SNPs occur within chromatin-modulating factors; however, little is known about how these variants within the coding sequence affect cancer progression or treatment. Therefore, there is a need to establish their biochemical and/or molecular contribution, their use in subclassifying patients, and their impact on therapeutic response. In this report, we demonstrate that coding SNP-A482 within the lysine tridemethylase gene KDM4A/JMJD2A has different allelic frequencies across ethnic populations, associates with differential outcome in patients with non-small cell lung cancer (NSCLC), and promotes KDM4A protein turnover. Using an unbiased drug screen against 87 preclinical and clinical compounds, we demonstrate that homozygous SNP-A482 cells have increased mTOR inhibitor sensitivity. mTOR inhibitors significantly reduce SNP-A482 protein levels, which parallels the increased drug sensitivity observed with KDM4A depletion. Our data emphasize the importance of using variant status as candidate biomarkers and highlight the importance of studying SNPs in chromatin modifiers to achieve better targeted therapy. This report documents the first coding SNP within a lysine demethylase that associates with worse outcome in patients with NSCLC. We demonstrate that this coding SNP alters the protein turnover and associates with increased mTOR inhibitor sensitivity, which identifies a candidate biomarker for mTOR inhibitor therapy and a therapeutic target for combination therapy. ©2015 American Association for Cancer Research.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, H.; Fullwood, R.; Glancy, J.

    This is the second volume of a two volume report on the VISA method for evaluating safeguards at fixed-site facilities. This volume contains appendices that support the description of the VISA concept and the initial working version of the method, VISA-1, presented in Volume I. The information is separated into four appendices, each describing details of one of the four analysis modules that comprise the analysis sections of the method. The first appendix discusses Path Analysis methodology, applies it to a Model Fuel Facility, and describes the computer codes that are being used. Introductory material on Path Analysis is given in Chapter 3.2.1 and Chapter 4.2.1 of Volume I. The second appendix deals with Detection Analysis, specifically the schemes used in VISA-1 for classifying adversaries and the methods proposed for evaluating individual detection mechanisms in order to build the data base required for detection analysis. Examples of evaluations on identity-access systems, SNM portal monitors, and intrusion devices are provided. The third appendix describes the Containment Analysis overt-segment path ranking, the Monte Carlo engagement model, the network simulation code, the delay mechanism data base, and the results of a sensitivity analysis. The last appendix presents general equations used in Interruption Analysis for combining covert-overt segments and compares them with equations given in Volume I, Chapter 3.

  11. 75 FR 50914 - Flubendiamide; Pesticide Tolerances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... skin sensitizer under the conditions of the guinea pig maximization test. In the mammalian toxicology... activities: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS...

  12. Fluid Structure Interaction in a Turbine Blade

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.

    2004-01-01

    An unsteady, three dimensional Navier-Stokes solution in rotating frame formulation for turbomachinery applications is presented. Casting the governing equations in a rotating frame enabled the freezing of grid motion and resulted in substantial savings in computer time. The turbine blade was computationally simulated and probabilistically evaluated in view of several uncertainties in the aerodynamic, structural, material and thermal variables that govern the turbine blade. The interconnection between the computational fluid dynamics code and finite element structural analysis code was necessary to couple the thermal profiles with the structural design. The stresses and their variations were evaluated at critical points on the turbine blade. Cumulative distribution functions and sensitivity factors were computed for stress responses due to aerodynamic, geometric, mechanical and thermal random variables.

  13. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
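
    In compact, generic notation (standard adjoint formalism, not transcribed from the paper), let R(h, α) = 0 be the discretized steady-state flow equations with heads h and parameters α, and let P(h, α) be a scalar performance measure. Introducing an adjoint state λ gives the full derivative of P without solving for dh/dα one parameter at a time:

    ```latex
    \frac{dP}{d\alpha}
      = \frac{\partial P}{\partial \alpha}
      + \lambda^{\mathsf T}\,\frac{\partial R}{\partial \alpha},
    \qquad \text{where} \quad
    \left(\frac{\partial R}{\partial h}\right)^{\!\mathsf T} \lambda
      = -\left(\frac{\partial P}{\partial h}\right)^{\!\mathsf T}.
    ```

    One adjoint solve per performance measure therefore yields sensitivities with respect to every parameter at once, which is what makes the approach attractive for regional models with many hydraulic-conductivity and boundary-condition parameters.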

  14. Using individual differences to test the role of temporal and place cues in coding frequency modulation

    PubMed Central

    Whiteford, Kelly L.; Oxenham, Andrew J.

    2015-01-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783

  15. Using individual differences to test the role of temporal and place cues in coding frequency modulation.

    PubMed

    Whiteford, Kelly L; Oxenham, Andrew J

    2015-11-01

    The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding.

  16. 76 FR 26194 - Metarhizium anisopliae Strain F52; Exemption From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-06

    ... sensitization--guinea pig (Harmonized Guideline 870.2600; MRID No. 448447-15). An acceptable dermal... pesticide manufacturer. Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...

  17. Identifying Pediatric Severe Sepsis and Septic Shock: Accuracy of Diagnosis Codes.

    PubMed

    Balamuth, Fran; Weiss, Scott L; Hall, Matt; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Centkowski, Sierra; Baumer-Mouradian, Shannon; Weiser, Jason; Hayes, Katie; Shah, Samir S; Alpern, Elizabeth R

    2015-12-01

    To evaluate accuracy of 2 established administrative methods of identifying children with sepsis using a medical record review reference standard. Multicenter retrospective study at 6 US children's hospitals. Subjects were children >60 days to <19 years of age and identified in 4 groups based on International Classification of Diseases, Ninth Revision, Clinical Modification codes: (1) severe sepsis/septic shock (sepsis codes); (2) infection plus organ dysfunction (combination codes); (3) subjects without codes for infection, organ dysfunction, or severe sepsis; and (4) infection but not severe sepsis or organ dysfunction. Combination codes were allowed, but not required within the sepsis codes group. We determined the presence of reference standard severe sepsis according to consensus criteria. Logistic regression was performed to determine whether addition of codes for sepsis therapies improved case identification. A total of 130 out of 432 subjects met the reference standard for severe sepsis. Sepsis codes had sensitivity 73% (95% CI 70-86), specificity 92% (95% CI 87-95), and positive predictive value 79% (95% CI 70-86). Combination codes had sensitivity 15% (95% CI 9-22), specificity 71% (95% CI 65-76), and positive predictive value 18% (95% CI 11-27). Slight improvements in model characteristics were observed when codes for vasoactive medications and endotracheal intubation were added to sepsis codes (c-statistic 0.83 vs 0.87, P = .008). Sepsis-specific International Classification of Diseases, Ninth Revision, Clinical Modification codes identify pediatric patients with severe sepsis in administrative data more accurately than a combination of codes for infection plus organ dysfunction. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Genome-wide identification of long non-coding RNA and mRNA profiling using RNA sequencing in subjects with sensitive skin

    PubMed Central

    Tu, Ying; Xu, Dan; Feng, Jiaqi; He, Li

    2017-01-01

    Sensitive skin (SS) is a condition of subjective cutaneous hyper-reactivity. The role of long non-coding RNAs (lncRNAs) in subjects with SS is unclear. Therefore, the aim of the present study was to provide a comprehensive profile of the mRNAs and lncRNAs in subjects with SS. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis presented the characteristics of associated protein-coding genes. In addition, a co-expression network of lncRNA and mRNA was constructed to identify potential underlying regulation targets; the results were verified by quantitative real-time PCR (qRT-PCR) and RNA-seq analyses in patients with SS and normal samples. Compared with the normal skin group, 266 novel lncRNAs and 6750 annotated lncRNAs were identified in the SS group. A total of 71 lncRNA transcripts and 2615 mRNA transcripts were differentially expressed (P < 0.05). The heat map signature of the SS samples could be distinguished from that of the normal skin samples, whereas the majority of the genes present in enriched pathways were those that participated in focal adhesion, PI3K-Akt signaling, and cancer-related pathways. Five transcripts were selected for qRT-PCR analysis and the results were consistent with RNA-seq. The results suggested that LNC_000265 may play a role in the epidermal barrier structure of patients with SS. The data suggest novel genes and pathways that may be involved in the pathogenesis of SS and highlight potential targets that could be used for individualized treatment applications. PMID:29383128

  19. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

    Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, "Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory," describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application.

  20. Supersensitive detection and discrimination of enantiomers by dorsal olfactory receptors: evidence for hierarchical odour coding.

    PubMed

    Sato, Takaaki; Kobayakawa, Reiko; Kobayakawa, Ko; Emura, Makoto; Itohara, Shigeyoshi; Kizumi, Miwako; Hamana, Hiroshi; Tsuboi, Akio; Hirono, Junzo

    2015-09-11

    Enantiomeric pairs of mirror-image molecular structures are difficult to resolve by instrumental analyses. The human olfactory system, however, discriminates (-)-wine lactone from its (+)-form rapidly within seconds. To gain insight into receptor coding of enantiomers, we compared behavioural detection and discrimination thresholds of wild-type mice with those of ΔD mice in which all dorsal olfactory receptors are genetically ablated. Surprisingly, wild-type mice displayed an exquisite "supersensitivity" to enantiomeric pairs of wine lactones and carvones. They were capable of supersensitive discrimination of enantiomers, consistent with their high detection sensitivity. In contrast, ΔD mice showed selective major loss of sensitivity to the (+)-enantiomers. The resulting 10(8)-fold differential sensitivity of ΔD mice to (-)- vs. (+)-wine lactone matched that observed in humans. This suggests that humans lack highly sensitive orthologous dorsal receptors for the (+)-enantiomer, similarly to ΔD mice. Moreover, ΔD mice showed >10(10)-fold reductions in enantiomer discrimination sensitivity compared to wild-type mice. ΔD mice detected one or both of the (-)- and (+)-enantiomers over a wide concentration range, but were unable to discriminate them. This "enantiomer odour discrimination paradox" indicates that the most sensitive dorsal receptors play a critical role in hierarchical odour coding for enantiomer identification.

  1. Supersensitive detection and discrimination of enantiomers by dorsal olfactory receptors: evidence for hierarchical odour coding

    PubMed Central

    Sato, Takaaki; Kobayakawa, Reiko; Kobayakawa, Ko; Emura, Makoto; Itohara, Shigeyoshi; Kizumi, Miwako; Hamana, Hiroshi; Tsuboi, Akio; Hirono, Junzo

    2015-01-01

    Enantiomeric pairs of mirror-image molecular structures are difficult to resolve by instrumental analyses. The human olfactory system, however, discriminates (−)-wine lactone from its (+)-form rapidly within seconds. To gain insight into receptor coding of enantiomers, we compared behavioural detection and discrimination thresholds of wild-type mice with those of ΔD mice in which all dorsal olfactory receptors are genetically ablated. Surprisingly, wild-type mice displayed an exquisite “supersensitivity” to enantiomeric pairs of wine lactones and carvones. They were capable of supersensitive discrimination of enantiomers, consistent with their high detection sensitivity. In contrast, ΔD mice showed selective major loss of sensitivity to the (+)-enantiomers. The resulting 10^8-fold differential sensitivity of ΔD mice to (−)- vs. (+)-wine lactone matched that observed in humans. This suggests that humans lack highly sensitive orthologous dorsal receptors for the (+)-enantiomer, similarly to ΔD mice. Moreover, ΔD mice showed >10^10-fold reductions in enantiomer discrimination sensitivity compared to wild-type mice. ΔD mice detected one or both of the (−)- and (+)-enantiomers over a wide concentration range, but were unable to discriminate them. This “enantiomer odour discrimination paradox” indicates that the most sensitive dorsal receptors play a critical role in hierarchical odour coding for enantiomer identification. PMID:26361056

  2. Benchmarking Exercises To Validate The Updated ELLWF GoldSim Slit Trench Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor, G. A.; Hiergesell, R. A.

    2013-11-12

    The Savannah River National Laboratory (SRNL) results of the 2008 Performance Assessment (PA) (WSRC, 2008) sensitivity/uncertainty analyses conducted for the trenches located in the E-Area Low-Level Waste Facility (ELLWF) were subject to review by the United States Department of Energy (U.S. DOE) Low-Level Waste Disposal Facility Federal Review Group (LFRG) (LFRG, 2008). LFRG comments were generally approving of the use of probabilistic modeling in GoldSim to support the quantitative sensitivity analysis. A recommendation was made, however, that the probabilistic models be revised and updated to bolster their defensibility. SRS committed to addressing those comments and, in response, contracted with Neptune and Company to rewrite the three GoldSim models. The initial portion of this work, development of Slit Trench (ST), Engineered Trench (ET) and Components-in-Grout (CIG) trench GoldSim models, has been completed. The work described in this report utilizes these revised models to test and evaluate the results against the 2008 PORFLOW model results. This was accomplished by first performing a rigorous code-to-code comparison of the PORFLOW and GoldSim codes and then performing a deterministic comparison of the two-dimensional (2D) unsaturated zone and three-dimensional (3D) saturated zone PORFLOW Slit Trench models against results from the one-dimensional (1D) GoldSim Slit Trench model. The results of the code-to-code comparison indicate that when the mechanisms of radioactive decay, partitioning of contaminants between solid and fluid, implementation of specific boundary conditions and the imposition of solubility controls were all tested using identical flow fields, GoldSim and PORFLOW produce nearly identical results. It is also noted that GoldSim has an advantage over PORFLOW in that it simulates all radionuclides simultaneously - thus avoiding a potential problem as demonstrated in the Case Study (see Section 2.6). Hence, it was concluded that the follow-on work using GoldSim to develop 1D equivalent models of the PORFLOW multi-dimensional models was justified. The comparison of GoldSim 1D equivalent models to PORFLOW multi-dimensional models was made at two locations in the model domains - at the unsaturated-saturated zone interface and at the 100m point of compliance. PORFLOW model results from the 2008 PA were utilized to investigate the comparison. By making iterative adjustments to certain water flux terms in the GoldSim models it was possible to produce contaminant mass fluxes and water concentrations that were highly similar to the PORFLOW model results at the two locations where comparisons were made. Based on the ability of the GoldSim 1D trench models to produce mass flux and concentration curves that are sufficiently similar to multi-dimensional PORFLOW models for all of the evaluated radionuclides and their progeny, it is concluded that the use of the GoldSim 1D equivalent Slit and Engineered trench models for further probabilistic sensitivity and uncertainty analysis of ELLWF trench units is justified.
A revision to the original report was undertaken to correct mislabeling on the y-axes of the compliance point concentration graphs, to modify the terminology used to define the "blended" source term Case for the saturated zone to make it consistent with terminology used in the 2008 PA, and to make a more definitive statement regarding the justification of the use of the GoldSim 1D equivalent trench models for follow-on probabilistic sensitivity and uncertainty analysis.

  3. Buckling Load Calculations of the Isotropic Shell A-8 Using a High-Fidelity Hierarchical Approach

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.

    2002-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a test series of 7 isotropic shells carried out by Arbocz and Babcock at Caltech is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called 'high fidelity analysis', where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  4. On a High-Fidelity Hierarchical Approach to Buckling Load Calculations

    NASA Technical Reports Server (NTRS)

    Arbocz, Johann; Starnes, James H.; Nemeth, Michael P.

    2001-01-01

    As a step towards developing a new design philosophy, one that moves away from the traditional empirical approach used today in design towards a science-based design technology approach, a recent test series of 5 composite shells carried out by Waters at NASA Langley Research Center is used. It is shown how the hierarchical approach to buckling load calculations proposed by Arbocz et al can be used to perform an approach often called "high fidelity analysis", where the uncertainties involved in a design are simulated by refined and accurate numerical methods. The Delft Interactive Shell DEsign COde (short, DISDECO) is employed for this hierarchical analysis to provide an accurate prediction of the critical buckling load of the given shell structure. This value is used later as a reference to establish the accuracy of the Level-3 buckling load predictions. As a final step in the hierarchical analysis approach, the critical buckling load and the estimated imperfection sensitivity of the shell are verified by conducting an analysis using a sufficiently refined finite element model with one of the current generation two-dimensional shell analysis codes with the advanced capabilities needed to represent both geometric and material nonlinearities.

  5. Digitally Enhanced Heterodyne Interferometry

    NASA Technical Reports Server (NTRS)

    Shaddock, Daniel; Ware, Brent; Lay, Oliver; Dubovitsky, Serge

    2010-01-01

    Spurious interference limits the performance of many interferometric measurements. Digitally enhanced interferometry (DEI) improves measurement sensitivity by augmenting conventional heterodyne interferometry with pseudo-random noise (PRN) code phase modulation. DEI effectively changes the measurement problem from one of hardware (optics, electronics), which may deteriorate over time, to one of software (modulation, digital signal processing), which does not. DEI isolates interferometric signals based on their delay. Interferometric signals are effectively time-tagged by phase-modulating the laser source with a PRN code. DEI improves measurement sensitivity by exploiting the autocorrelation properties of the PRN to isolate only the signal of interest and reject spurious interference. The properties of the PRN code determine the degree of isolation.
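
    A toy numerical sketch of the delay-based isolation idea (illustrative assumptions only: a real DEI system phase-modulates a laser and demodulates heterodyne beat notes, whereas this example simply amplitude-tags two returns with the same ±1 PRN code at different delays and recovers one of them by correlation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 1023                                   # PRN code length in chips (illustrative)
    prn = rng.choice([-1.0, 1.0], size=N)      # pseudo-random +/-1 code

    def tagged(amplitude, delay):
        """A return tagged by the PRN code, circularly delayed by `delay` chips."""
        return amplitude * np.roll(prn, delay)

    # Signal of interest at delay 5, spurious reflection at delay 120, plus noise.
    measurement = tagged(1.0, 5) + tagged(0.7, 120) + 0.05 * rng.standard_normal(N)

    for probe_delay in (5, 120, 300):
        # Correlate with the code shifted to the probed delay; only a matching
        # delay decodes coherently, everything else averages toward zero.
        estimate = np.dot(measurement, np.roll(prn, probe_delay)) / N
        print(f"delay {probe_delay:3d}: recovered amplitude ~ {estimate:+.3f}")
    ```

    Only the component whose delay matches the probed code shift decodes coherently; mismatched components average toward zero at a level set by the code length, which is the autocorrelation property the abstract refers to.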

  6. A Quantitative Study of Oxygen as a Metabolic Regulator

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; LaManna, Joseph C.; Cabrera, Marco E.

    1999-01-01

    An acute reduction in oxygen (O2) delivery to a tissue is generally associated with a decrease in phosphocreatine, increases in ADP, NADH/NAD, and inorganic phosphate, increased rates of glycolysis and lactate production, and reduced rates of pyruvate and fatty acid oxidation. However, given the complexity of the human bioenergetic system and its components, it is difficult to determine quantitatively how cellular metabolic processes interact to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). Of special interest is the determination of mechanisms relating tissue oxygenation to observed metabolic responses at the tissue, organ, and whole body levels and the quantification of how changes in tissue O2 availability affect the pathways of ATP synthesis and the metabolites that control these pathways. In this study, we extend a previously developed mathematical model of human bioenergetics to provide a physicochemical framework that permits quantitative understanding of O2 as a metabolic regulator. Specifically, the enhancement permits studying the effects of variations in tissue oxygenation and in parameters controlling the rate of cellular respiration on glycolysis, lactate production, and pyruvate oxidation. The whole body is described as a bioenergetic system consisting of metabolically distinct tissue/organ subsystems that exchange materials with the blood. In order to study the dynamic response of each subsystem to stimuli, we solve the ordinary differential equations describing the temporal evolution of metabolite levels, given the initial concentrations. The solver used in the present study is the packaged code LSODE, as implemented in the NASA Lewis kinetics and sensitivity analysis code, LSENS. A major advantage of LSENS is the efficient procedures supporting systematic sensitivity analysis, which provides the basic methods for studying parameter sensitivities (i.e., changes in model behavior due to parameter variation). Sensitivity analysis establishes relationships between model predictions and problem parameters (i.e., initial concentrations, rate coefficients, etc). It helps determine the effects of uncertainties or changes in these input parameters on the predictions, which ultimately are compared with experimental observations in order to validate the model. Sensitivity analysis can identify parameters that must be determined accurately because of their large effect on the model predictions and parameters that need not be known with great precision because they have little or no effect on the solution. This capability may prove to be important in optimizing the design of experiments, thereby reducing the use of animals. This approach can be applied to study the metabolic effects of reduced oxygen delivery to cardiac muscle due to local myocardial ischemia and the effects of acute hypoxia on brain metabolism. Other important applications of sensitivity analysis include identification of quantitatively relevant pathways and biochemical species within an overall mechanism, when examining the effects of a genetic anomaly or pathological state on energetic system components and whole system behavior.
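
    A minimal sketch of this kind of parameter-sensitivity experiment, using SciPy instead of LSODE/LSENS and a generic two-pool toy model (the equations and rate constants are placeholders, not the bioenergetics model; LSENS integrates dedicated sensitivity equations alongside the ODEs, whereas this sketch simply perturbs a parameter and re-solves):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, y, k1, k2):
        """Toy two-pool kinetics: A -> B at rate k1, B removed at rate k2."""
        a, b = y
        return [-k1 * a, k1 * a - k2 * b]

    def final_b(k1, k2, y0=(1.0, 0.0), t_end=10.0):
        sol = solve_ivp(rhs, (0.0, t_end), y0, args=(k1, k2), rtol=1e-8, atol=1e-10)
        return sol.y[1, -1]

    k1, k2 = 0.8, 0.3
    base = final_b(k1, k2)

    # Finite-difference sensitivity of the final B concentration to each rate.
    eps = 1e-6
    dB_dk1 = (final_b(k1 + eps, k2) - base) / eps
    dB_dk2 = (final_b(k1, k2 + eps) - base) / eps
    print(f"B(t_end) = {base:.4f}, dB/dk1 = {dB_dk1:+.4f}, dB/dk2 = {dB_dk2:+.4f}")
    ```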

  7. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications namely, gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulation such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
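
    For reference, the Kreisselmeier-Steinhauser function mentioned above aggregates several objectives or constraints f_i(x) into a single smooth envelope function (generic form; the paper's exact scaling may differ):

    ```latex
    \mathrm{KS}(\rho)
      = f_{\max} + \frac{1}{\rho}
        \ln \sum_{i=1}^{m} \exp\!\bigl[\rho\,\bigl(f_i(\mathbf{x}) - f_{\max}\bigr)\bigr],
    \qquad f_{\max} = \max_i f_i(\mathbf{x}).
    ```

    Because KS(ρ) bounds max_i f_i from above and approaches it as the draw-down factor ρ increases, a single differentiable function can stand in for the worst-case criterion inside a gradient-based optimizer.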

  8. Comparison of ICD code-based diagnosis of obesity with measured obesity in children and the implications for health care cost estimates.

    PubMed

    Kuhle, Stefan; Kirk, Sara F L; Ohinmaa, Arto; Veugelers, Paul J

    2011-12-21

    Administrative health databases are a valuable research tool to assess health care utilization at the population level. However, their use in obesity research is limited due to the lack of data on body weight. A potential workaround is to use the ICD code of obesity to identify obese individuals. The objective of the current study was to investigate the sensitivity and specificity of an ICD code-based diagnosis of obesity from administrative health data relative to the gold standard measured BMI. Linkage of a population-based survey with anthropometric measures in elementary school children in 2003 with longitudinal administrative health data (physician visits and hospital discharges 1992-2006) from the Canadian province of Nova Scotia. Measured obesity was defined based on the CDC cut-offs applied to the measured BMI. An ICD code-based diagnosis of obesity was defined as one or more ICD-9 (278) or ICD-10 codes (E66-E68) of obesity from a physician visit or a hospital stay. Sensitivity and specificity were calculated and health care cost estimates based on measured obesity and ICD-based obesity were compared. The sensitivity of an ICD code-based obesity diagnosis was 7.4% using ICD codes between 2002 and 2004. Those correctly identified had a higher BMI and had higher health care utilization and costs. An ICD diagnosis of obesity in Canadian administrative health data grossly underestimates the true prevalence of childhood obesity and overestimates the health care cost differential between obese and non-obese children.

  9. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. The original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables; the new FORTRAN program calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; it produces derivatives to machine accuracy at a cost that is comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact to machine accuracy and, unlike the derivatives obtained with finite-differencing techniques, do not depend on the selection of a step size.
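
    To illustrate the chain-rule mechanism that AD tools automate (a minimal forward-mode sketch in Python; ADIFOR itself works by source-to-source transformation of Fortran, not by operator overloading), each value carries its derivative and the arithmetic operators propagate both exactly:

    ```python
    from dataclasses import dataclass
    import math

    @dataclass
    class Dual:
        """Value plus derivative; arithmetic propagates the chain rule exactly."""
        val: float
        der: float = 0.0

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val + other.val, self.der + other.der)

        __radd__ = __add__

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

        __rmul__ = __mul__

    def sin(x: Dual) -> Dual:
        # d/dx sin(x) = cos(x), propagated via the chain rule.
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # f(x) = x*sin(x) + 3x; seed der=1.0 to differentiate with respect to x.
    x = Dual(2.0, 1.0)
    f = x * sin(x) + 3 * x
    print(f"f(2) = {f.val:.6f}, f'(2) = {f.der:.6f}")  # exact: sin(2) + 2*cos(2) + 3
    ```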

  10. Molecular Genetic Characterization of Mutagenesis Using a Highly Sensitive Single-Stranded DNA Reporter System in Budding Yeast.

    PubMed

    Chan, Kin

    2018-01-01

    Mutations are permanent alterations to the coding content of DNA. They are starting material for the Darwinian evolution of species by natural selection, which has yielded an amazing diversity of life on Earth. Mutations can also be the fundamental basis of serious human maladies, most notably cancers. In this chapter, I describe a highly sensitive reporter system for the molecular genetic analysis of mutagenesis, featuring controlled generation of long stretches of single-stranded DNA in budding yeast cells. This system is ~100- to ~1000-fold more susceptible to mutation than conventional double-stranded DNA reporters, and is well suited for generating large mutational datasets to investigate the properties of mutagens.

  11. Multiplexed Detection of Cytokines Based on Dual Bar-Code Strategy and Single-Molecule Counting.

    PubMed

    Li, Wei; Jiang, Wei; Dai, Shuang; Wang, Lei

    2016-02-02

    Cytokines play important roles in the immune system and have been regarded as biomarkers. Because a single cytokine is not specific and accurate enough to meet strict diagnostic requirements in practice, in this work we constructed a multiplexed detection method for cytokines based on a dual bar-code strategy and single-molecule counting. Taking interferon-γ (IFN-γ) and tumor necrosis factor-α (TNF-α) as model analytes, first, the magnetic nanobead was functionalized with the second antibody and primary bar-code strands, forming a magnetic nanoprobe. Then, through the specific reaction of the second antibody and the antigen fixed by the primary antibody, a sandwich-type immunocomplex was formed on the substrate. Next, the primary bar-code strands as amplification units triggered multibranched hybridization chain reaction (mHCR), producing nicked double-stranded polymers with multiple branched arms, which served as secondary bar-code strands. Finally, the secondary bar-code strands hybridized with the multimolecule labeled fluorescence probes, generating enhanced fluorescence signals. The numbers of fluorescence dots were counted one by one for quantification with an epi-fluorescence microscope. By integrating the primary and secondary bar-code-based amplification strategy and the multimolecule labeled fluorescence probes, this method displayed excellent sensitivity, with detection limits of 5 fM for both targets. Unlike the typical bar-code assay, in which the bar-code strands must be released and identified on a microarray, this method is more direct. Moreover, because of the selective immune reaction and the dual bar-code mechanism, the resulting method could detect the two targets simultaneously. Multiplexed analysis in human serum was also performed, suggesting that our strategy was reliable and had great potential for application in early clinical diagnosis.

  12. An overview of the major changes in the 2002 APA Ethics Code.

    PubMed

    Knapp, Samuel; VandeCreek, Leon

    2003-06-01

    This article summarizes the major changes that were made to the 2002 Ethical Principles and Code of Conduct of the American Psychological Association. The 2002 Ethics Code retains the general format of the 1992 Ethics Code and does not radically alter the obligations of psychologists. One goal of the Ethics Committee Task Force was to reduce the potential of the Ethics Code to be used to unnecessarily punish psychologists. In addition, the revised Ethics Code expresses greater sensitivity to the needs of cultural and linguistic minorities and students. Shortcomings of the 2002 Ethics Code are discussed.

  13. Python package for model STructure ANalysis (pySTAN)

    NASA Astrophysics Data System (ADS)

    Van Hoey, Stijn; van der Kwast, Johannes; Nopens, Ingmar; Seuntjens, Piet

    2013-04-01

    The selection and identification of a suitable hydrological model structure is more than fitting parameters of a model structure to reproduce a measured hydrograph. The procedure is highly dependent on various criteria, i.e. the modelling objective, the characteristics and the scale of the system under investigation as well as the available data. Rigorous analysis of the candidate model structures is needed to support and objectify the selection of the most appropriate structure for a specific case (or eventually justify the use of a proposed ensemble of structures). This holds both in the situation of choosing between a limited set of different structures as well as in the framework of flexible model structures with interchangeable components. Many different methods to evaluate and analyse model structures exist. This leads to a sprawl of available methods, all characterized by different assumptions, changing conditions of application and various code implementations. Methods typically focus on optimization, sensitivity analysis or uncertainty analysis, with backgrounds from optimization, machine-learning or statistics amongst others. These methods also need an evaluation metric (objective function) to compare the model outcome with some observed data. However, for current methods described in literature, implementations are not always transparent and reproducible (if available at all). No standard procedures exist to share code and the popularity (and amount of applications) of the methods is sometimes more dependent on the availability than the merits of the method. Moreover, new implementations of existing methods are difficult to verify and the different theoretical backgrounds make it difficult for environmental scientists to decide about the usefulness of a specific method. A common and open framework with a large set of methods can support users in deciding about the most appropriate method. Hence, it enables different methods to be applied and compared simultaneously on a fair basis. We developed and present pySTAN (python framework for STructure Analysis), a python package containing a set of functions for model structure evaluation to provide the analysis of (hydrological) model structures. A selected set of algorithms for optimization, uncertainty and sensitivity analysis is currently available, together with a set of evaluation (objective) functions and input distributions to sample from. The methods are implemented in a model-independent way, and the Python language provides the wrapper functions needed to administer external model codes. Different objective functions can be considered simultaneously, with both statistical metrics and more hydrology-specific metrics. By using so-called reStructuredText (sphinx documentation generator) and Python documentation strings (docstrings), the generation of manual pages is semi-automated and a specific environment is available to enhance both the readability and transparency of the code. It thereby enables a larger group of users to apply and compare these methods and to extend the functionalities.
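
    As an example of the kind of model-independent evaluation metric such a framework collects (an illustrative sketch; the function name and signature are assumptions, not pySTAN's actual API), the Nash-Sutcliffe efficiency compares a simulated hydrograph with observations:

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values <= 0 mean the
        model is no better than predicting the mean of the observations."""
        observed = np.asarray(observed, dtype=float)
        simulated = np.asarray(simulated, dtype=float)
        residual = np.sum((observed - simulated) ** 2)
        variance = np.sum((observed - observed.mean()) ** 2)
        return 1.0 - residual / variance

    # Illustrative discharge series (m^3/s).
    obs = [2.1, 3.4, 6.8, 5.2, 3.0, 2.2]
    sim = [2.0, 3.9, 6.1, 5.5, 3.3, 2.1]
    print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
    ```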

  14. Gradient-Based Aerodynamic Shape Optimization Using ADI Method for Large-Scale Problems

    NASA Technical Reports Server (NTRS)

    Pandya, Mohagna J.; Baysal, Oktay

    1997-01-01

    A gradient-based shape optimization methodology, intended for practical three-dimensional aerodynamic applications, has been developed. It is based on quasi-analytical sensitivities. The flow analysis is rendered by a fully implicit, finite volume formulation of the Euler equations. The aerodynamic sensitivity equation is solved using the alternating-direction-implicit (ADI) algorithm for memory efficiency. A flexible wing geometry model, based on surface parameterization and planform schedules, is utilized. The present methodology and its components have been tested via several comparisons. Initially, the flow analysis for a wing is compared with results obtained using an unfactored, preconditioned conjugate gradient approach (PCG) and an extensively validated CFD code. Then, the sensitivities computed with the present method have been compared with those obtained using the finite-difference and the PCG approaches. Effects of grid refinement and convergence tolerance on the analysis and shape optimization have been explored. Finally, the new procedure has been demonstrated in the design of a cranked arrow wing at Mach 2.4. Despite the expected increase in the computational time, the results indicate that shape optimization, which requires large numbers of grid points, can be resolved with a gradient-based approach.
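
    In generic terms (standard discrete quasi-analytical sensitivity notation, not transcribed from the paper), with flow residual R(Q, X(β), β) = 0, flow state Q, grid X, and design variable β, the gradient of an aerodynamic objective F follows from one large linear solve per design variable:

    ```latex
    \frac{\partial R}{\partial Q}\,\frac{dQ}{d\beta}
      = -\left(\frac{\partial R}{\partial X}\frac{dX}{d\beta}
               + \frac{\partial R}{\partial \beta}\right),
    \qquad
    \frac{dF}{d\beta}
      = \frac{\partial F}{\partial \beta}
      + \frac{\partial F}{\partial X}\frac{dX}{d\beta}
      + \frac{\partial F}{\partial Q}\,\frac{dQ}{d\beta}.
    ```

    The ADI algorithm mentioned above is one way to solve the first (large, sparse) linear system approximately while keeping the memory footprint practical for three-dimensional grids.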

  15. 75 FR 50884 - 2-(2'-hydroxy-3', 5'-di-tert-amylphenyl) benzotriazole and Phenol, 2-(2H-benzotriazole-2-yl)-6...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-18

    ... 2-(2'-hydroxy-5'-methylphenyl) benzotriazole in guinea pigs showed skin sensitization; however... pesticide manufacturer. Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...

  16. 76 FR 6342 - n-Octyl Alcohol and n-Decyl Alcohol; Exemption From the Requirement of a Tolerance

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-02-04

    ... produce sensitization in guinea pigs. A 90-day dermal toxicity study in rats with fatty alcohol blend (56... pesticide manufacturer. Potentially affected entities may include, but are not limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide...

  17. Bragg x-ray survey spectrometer for ITER.

    PubMed

    Varshney, S K; Barnsley, R; O'Mullane, M G; Jakhar, S

    2012-10-01

    Several potential impurity ions in the ITER plasmas will lead to loss of confined energy through line and continuum emission. For real time monitoring of impurities, a seven channel Bragg x-ray spectrometer (XRCS survey) is considered. This paper presents design and analysis of the spectrometer, including x-ray tracing by the Shadow-XOP code, sensitivity calculations for reference H-mode plasma and neutronics assessment. The XRCS survey performance analysis shows that the ITER measurement requirements of impurity monitoring in 10 ms integration time at the minimum levels for low-Z to high-Z impurity ions can largely be met.

  18. SPS market analysis. [small solar thermal power systems

    NASA Technical Reports Server (NTRS)

    Goff, H. C.

    1980-01-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  19. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC/NASTRAN, Version 67.5.

  20. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, A. R.; Nagpal, V. K.; Chamis, Christos C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping are included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first order analysis is used to compute CDF. The sensitivities of different random variables are identified. Effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have significant effect on the response distribution of the bladed rotor.

  1. Probabilistic analysis of bladed turbine disks and the effect of mistuning

    NASA Technical Reports Server (NTRS)

    Shah, Ashwin; Nagpal, V. K.; Chamis, C. C.

    1990-01-01

    Probabilistic assessment of the maximum blade response on a mistuned rotor disk is performed using the computer code NESSUS. The uncertainties in natural frequency, excitation frequency, amplitude of excitation and damping have been included to obtain the cumulative distribution function (CDF) of blade responses. Advanced mean value first order analysis is used to compute CDF. The sensitivities of different random variables are identified. Effect of the number of blades on a rotor on mistuning is evaluated. It is shown that the uncertainties associated with the forcing function parameters have significant effect on the response distribution of the bladed rotor.

  2. Improving Public Reporting and Data Validation for Complex Surgical Site Infections After Coronary Artery Bypass Graft Surgery and Hip Arthroplasty

    PubMed Central

    Calderwood, Michael S.; Kleinman, Ken; Murphy, Michael V.; Platt, Richard; Huang, Susan S.

    2014-01-01

    Background  Deep and organ/space surgical site infections (D/OS SSI) cause significant morbidity, mortality, and costs. Rates are publicly reported and increasingly used as quality metrics affecting hospital payment. Lack of standardized surveillance methods threatens the accuracy of reported data and decreases confidence in comparisons based upon these data. Methods  We analyzed data from national validation studies that used Medicare claims to trigger chart review for SSI confirmation after coronary artery bypass graft surgery (CABG) and hip arthroplasty. We evaluated code performance (sensitivity and positive predictive value) to select diagnosis codes that best identified D/OS SSI. Codes were analyzed individually and in combination. Results  Analysis included 143 patients with D/OS SSI after CABG and 175 patients with D/OS SSI after hip arthroplasty. For CABG, 9 International Classification of Diseases, 9th Revision (ICD-9) diagnosis codes identified 92% of D/OS SSI, with 1 D/OS SSI identified for every 4 cases with a diagnosis code. For hip arthroplasty, 6 ICD-9 diagnosis codes identified 99% of D/OS SSI, with 1 D/OS SSI identified for every 2 cases with a diagnosis code. Conclusions  This standardized and efficient approach for identifying D/OS SSI can be used by hospitals to improve case detection and public reporting. This method can also be used to identify potential D/OS SSI cases for review during hospital audits for data validation. PMID:25734174
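    For readers unfamiliar with the two metrics used here, the sketch below shows how sensitivity and positive predictive value are computed from chart-review counts. The counts are illustrative only, chosen to roughly reproduce the reported "92% of D/OS SSI identified, 1 confirmed case per 4 flagged" pattern for CABG; they are not the study's data.

    ```python
    def code_performance(true_positives, false_positives, false_negatives):
        """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
        sensitivity = true_positives / (true_positives + false_negatives)
        ppv = true_positives / (true_positives + false_positives)
        return sensitivity, ppv

    # Illustrative counts: the code set flags 525 admissions, 132 of which are
    # chart-confirmed D/OS SSI, while 11 confirmed cases carry none of the codes.
    sens, ppv = code_performance(true_positives=132, false_positives=393, false_negatives=11)
    print(f"sensitivity = {sens:.2f}, PPV = {ppv:.2f}")   # roughly 0.92 and 0.25 (about 1 in 4)
    ```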

  3. Euler technology assessment for preliminary aircraft design employing OVERFLOW code with multiblock structured-grid method

    NASA Technical Reports Server (NTRS)

    Treiber, David A.; Muilenburg, Dennis A.

    1995-01-01

    The viability of applying a state-of-the-art Euler code to calculate the aerodynamic forces and moments through maximum lift coefficient for a generic sharp-edge configuration is assessed. The OVERFLOW code, a method employing overset (Chimera) grids, was used to conduct mesh refinement studies, a wind-tunnel wall sensitivity study, and a 22-run computational matrix of flow conditions, including sideslip runs and geometry variations. The subject configuration was a generic wing-body-tail geometry with chined forebody, swept wing leading-edge, and deflected part-span leading-edge flap. The analysis showed that the Euler method is adequate for capturing some of the non-linear aerodynamic effects resulting from leading-edge and forebody vortices produced at high angle-of-attack through C(sub Lmax). Computed forces and moments, as well as surface pressures, match well enough that useful preliminary design information can be extracted. Vortex burst effects and vortex interactions with the configuration are also investigated.

  4. Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments.

    PubMed

    Jones, B E; South, B R; Shao, Y; Lu, C C; Leng, J; Sauer, B C; Gundlapalli, A V; Samore, M H; Zeng, Q

    2018-01-01

    Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes. This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia. Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in training and validation sets. Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80). System-wide application of NLP to clinical text can increase capture of initial diagnostic hypotheses, an important inclusion when studying diagnosis and clinical decision-making under uncertainty. Schattauer GmbH Stuttgart.
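    As a rough illustration of the kind of pipeline described (a support vector machine over clinical text), the sketch below uses scikit-learn with a handful of invented toy notes; it is not the authors' VA system, feature set, or annotation scheme.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.pipeline import make_pipeline
    from sklearn.metrics import precision_score, recall_score

    # Toy training notes (invented), labeled 1 if they assert pneumonia
    train_notes = ["right lower lobe infiltrate, treating for pneumonia",
                   "no infiltrate on chest x-ray, pneumonia unlikely",
                   "community acquired pneumonia, started ceftriaxone",
                   "cough with clear lungs, viral bronchitis suspected"]
    train_labels = [1, 0, 1, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
    clf.fit(train_notes, train_labels)

    test_notes = ["new infiltrate, will treat empirically for pneumonia",
                  "chest x-ray clear, no evidence of pneumonia"]
    test_labels = [1, 0]
    pred = clf.predict(test_notes)
    print("recall/sensitivity:", recall_score(test_labels, pred),
          "precision/PPV:", precision_score(test_labels, pred))
    ```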

  5. High-resolution imaging gamma-ray spectroscopy with externally segmented germanium detectors

    NASA Technical Reports Server (NTRS)

    Callas, J. L.; Mahoney, W. A.; Varnell, L. S.; Wheaton, W. A.

    1993-01-01

    Externally segmented germanium detectors promise a breakthrough in gamma-ray imaging capabilities while retaining the superb energy resolution of germanium spectrometers. An angular resolution of 0.2 deg becomes practical by combining position-sensitive germanium detectors having a segment thickness of a few millimeters with a one-dimensional coded aperture located about a meter from the detectors. Correspondingly higher angular resolutions are possible with larger separations between the detectors and the coded aperture. Two-dimensional images can be obtained by rotating the instrument. Although the basic concept is similar to optical or X-ray coded-aperture imaging techniques, several complicating effects arise because of the penetrating nature of gamma rays. The complications include partial transmission through the coded aperture elements, Compton scattering in the germanium detectors, and high background count rates. Extensive electron-photon Monte Carlo modeling of a realistic detector/coded-aperture/collimator system has been performed. Results show that these complicating effects can be characterized and accounted for with no significant loss in instrument sensitivity.

  6. Sensory Afferents Use Different Coding Strategies for Heat and Cold.

    PubMed

    Wang, Feng; Bélanger, Erik; Côté, Sylvain L; Desrosiers, Patrick; Prescott, Steven A; Côté, Daniel C; De Koninck, Yves

    2018-05-15

    Primary afferents transduce environmental stimuli into electrical activity that is transmitted centrally to be decoded into corresponding sensations. However, it remains unknown how afferent populations encode different somatosensory inputs. To address this, we performed two-photon Ca2+ imaging from thousands of dorsal root ganglion (DRG) neurons in anesthetized mice while applying mechanical and thermal stimuli to hind paws. We found that approximately half of all neurons are polymodal and that heat and cold are encoded very differently. As temperature increases, more heating-sensitive neurons are activated, and most individual neurons respond more strongly, consistent with graded coding at population and single-neuron levels, respectively. In contrast, most cooling-sensitive neurons respond in an ungraded fashion, inconsistent with graded coding and suggesting combinatorial coding, based on which neurons are co-activated. Although individual neurons may respond to multiple stimuli, our results show that different stimuli activate distinct combinations of diversely tuned neurons, enabling rich population-level coding. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  7. Use of a Respondent-Generated Personal Code for Matching Anonymous Adolescent Surveys in Longitudinal Studies.

    PubMed

    Ripper, Lisa; Ciaravino, Samantha; Jones, Kelley; Jaime, Maria Catrina D; Miller, Elizabeth

    2017-06-01

    Research on sensitive and private topics relies heavily on self-reported responses. Social desirability bias may reduce the accuracy and reliability of self-reported responses. Anonymous surveys appear to improve the likelihood of honest responses. A challenge with prospective research is maintaining anonymity while linking individual surveys over time. We have tested a secret code method in which participants create their own code based on eight questions that are not expected to change. In an ongoing middle school trial, 95.7% of follow-up surveys are matched to a baseline survey after changing up to two code variables. The percentage matched improves by allowing up to four changes (99.7%). The use of a secret code as an anonymous identifier for linking baseline and follow-up surveys is feasible for use with adolescents. While developed for violence prevention research, this method may be useful with other sensitive health behavior research. Copyright © 2017 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
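    The matching rule described (link two anonymous surveys when at most a few of the eight self-generated code elements differ) amounts to a Hamming-distance threshold. The sketch below is a generic illustration with made-up code elements, not the instrument's actual eight questions.

    ```python
    def codes_match(code_a, code_b, max_changes=2):
        """Link two respondent-generated codes if at most `max_changes` elements differ."""
        if len(code_a) != len(code_b):
            return False
        mismatches = sum(a != b for a, b in zip(code_a, code_b))
        return mismatches <= max_changes

    baseline  = ("J", "A", "03", "B", "4", "M", "R", "2")   # invented 8-element code
    follow_up = ("J", "A", "03", "B", "4", "M", "S", "2")   # one element changed
    print(codes_match(baseline, follow_up))                   # True under the 2-change rule
    print(codes_match(baseline, follow_up, max_changes=0))    # False if no changes allowed
    ```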

  8. Correlation of SA349/2 helicopter flight-test data with a comprehensive rotorcraft model

    NASA Technical Reports Server (NTRS)

    Yamauchi, Gloria K.; Heffernan, Ruth M.; Gaubert, Michel

    1986-01-01

    A comprehensive rotorcraft analysis model was used to predict blade aerodynamic and structural loads for comparison with flight test data. The data were obtained from an SA349/2 helicopter with an advanced geometry rotor. Sensitivity of the correlation to wake geometry, blade dynamics, and blade aerodynamic effects was investigated. Blade chordwise pressure coefficients were predicted for the blade transonic regimes using the model coupled with two finite-difference codes.

  9. Summary of comparison and analysis of results from exercises 1 and 2 of the OECD PBMR coupled neutronics/thermal hydraulics transient benchmark

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkhabela, P.; Han, J.; Tyobeka, B.

    2006-07-01

    The Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) has accepted, through the Nuclear Science Committee (NSC), the inclusion of the Pebble-Bed Modular Reactor 400 MW design (PBMR-400) coupled neutronics/thermal hydraulics transient benchmark problem as part of their official activities. The scope of the benchmark is to establish a well-defined problem, based on a common given library of cross sections, to compare methods and tools in core simulation and thermal hydraulics analysis with a specific focus on transient events through a set of multi-dimensional computational test problems. The benchmark includes three steady state exercises and six transient exercises. This paper describes the first two steady state exercises, their objectives and the international participation in terms of organization, country and computer code utilized. This description is followed by a comparison and analysis of the participants' results submitted for these two exercises. The comparison of results from different codes allows for an assessment of the sensitivity of a result to the method employed and can thus help to focus the development efforts on the most critical areas. The two first exercises also allow for removing of user-related modeling errors and prepare core neutronics and thermal-hydraulics models of the different codes for the rest of the exercises in the benchmark. (authors)

  10. Space-time adaptive solution of inverse problems with the discrete adjoint method

    NASA Astrophysics Data System (ADS)

    Alexe, Mihai; Sandu, Adrian

    2014-08-01

    This paper develops a framework for the construction and analysis of discrete adjoint sensitivities in the context of time dependent, adaptive grid, adaptive step models. Discrete adjoints are attractive in practice since they can be generated with low effort using automatic differentiation. However, this approach brings several important challenges. The space-time adjoint of the forward numerical scheme may be inconsistent with the continuous adjoint equations. A reduction in accuracy of the discrete adjoint sensitivities may appear due to the inter-grid transfer operators. Moreover, the optimization algorithm may need to accommodate state and gradient vectors whose dimensions change between iterations. This work shows that several of these potential issues can be avoided through a multi-level optimization strategy using discontinuous Galerkin (DG) hp-adaptive discretizations paired with Runge-Kutta (RK) time integration. We extend the concept of dual (adjoint) consistency to space-time RK-DG discretizations, which are then shown to be well suited for the adaptive solution of time-dependent inverse problems. Furthermore, we prove that DG mesh transfer operators on general meshes are also dual consistent. This allows the simultaneous derivation of the discrete adjoint for both the numerical solver and the mesh transfer logic with an automatic code generation mechanism such as algorithmic differentiation (AD), potentially speeding up development of large-scale simulation codes. The theoretical analysis is supported by numerical results reported for a two-dimensional non-stationary inverse problem.
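    The core idea of a discrete adjoint (differentiate the numerical scheme itself, so the gradient is consistent with the discrete forward solution) can be shown on a scalar toy problem. The sketch below hand-derives the adjoint of an explicit Euler scheme and checks it against finite differences; it is only an illustration of discrete-adjoint consistency, not the paper's DG/Runge-Kutta machinery or automatic differentiation.

    ```python
    import numpy as np

    def forward(x0, h, n):
        """Explicit Euler for x' = -x**2; returns the full trajectory."""
        xs = [x0]
        for _ in range(n):
            xs.append(xs[-1] + h * (-xs[-1] ** 2))
        return np.array(xs)

    def discrete_adjoint_gradient(x0, h, n):
        """dJ/dx0 for J = 0.5 * x_N**2, via the adjoint of the same Euler scheme."""
        xs = forward(x0, h, n)
        lam = xs[-1]                             # dJ/dx_N
        for k in range(n - 1, -1, -1):
            lam *= (1.0 - 2.0 * h * xs[k])       # transpose of the step Jacobian dx_{k+1}/dx_k
        return lam

    x0, h, n = 1.3, 0.01, 200
    g_adj = discrete_adjoint_gradient(x0, h, n)
    eps = 1e-6
    g_fd = (0.5 * forward(x0 + eps, h, n)[-1] ** 2
            - 0.5 * forward(x0 - eps, h, n)[-1] ** 2) / (2 * eps)
    print(g_adj, g_fd)    # the adjoint and finite-difference gradients agree closely
    ```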

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmony, S.C.; Steiner, J.L.; Stumpf, H.J.

    The PIUS advanced reactor is a 640-MWe pressurized water reactor developed by Asea Brown Boveri (ABB). A unique feature of the PIUS concept is the absence of mechanical control and shutdown rods. Reactivity is controlled by coolant boron concentration and the temperature of the moderator coolant. As part of the preapplication and eventual design certification process, advanced reactor applicants are required to submit neutronic and thermal-hydraulic safety analyses over a sufficient range of normal operation, transient conditions, and specified accident sequences. Los Alamos is supporting the US Nuclear Regulatory Commission's preapplication review of the PIUS reactor. A fully one-dimensional model of the PIUS reactor has been developed for the Transient Reactor Analysis Code, TRACPF1/MOD2. Early in 1992, ABB submitted a Supplemental Information Package describing recent design modifications. An important feature of the PIUS Supplement design was the addition of an active scram system that will function for most transient and accident conditions. A one-dimensional Transient Reactor Analysis Code baseline calculation of the PIUS Supplement design was performed for a break in the main steam line at the outlet nozzle of the loop 3 steam generator. Sensitivity studies were performed to explore the robustness of the PIUS concept to severe off-normal conditions following a main steam line break. The sensitivity study results provide insights into the robustness of the design.

  12. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues to efficiently perform such analyses on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
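    The Gray-code idea is that consecutive grid points should differ in exactly one input, so each solve can be warm-started from its neighbour. The generator below is a standard reflected (boustrophedon) mixed-radix ordering that has this property; it illustrates the concept and is not necessarily the authors' exact path or their quasi-spiral heuristic.

    ```python
    def gray_path(radices):
        """Yield every point of a mixed-radix grid so that consecutive points
        differ in exactly one coordinate, and only by +/- 1."""
        if not radices:
            yield ()
            return
        forward = True
        for rest in gray_path(radices[1:]):
            idx = range(radices[0]) if forward else range(radices[0] - 1, -1, -1)
            for i in idx:
                yield (i,) + rest
            forward = not forward

    # A 3 x 2 grid of input levels: the model is re-solved along the path,
    # warm-started from the previous point, which changes in a single input only.
    for point in gray_path([3, 2]):
        print(point)    # (0, 0) (1, 0) (2, 0) (2, 1) (1, 1) (0, 1)
    ```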

  13. Staying theoretically sensitive when conducting grounded theory research.

    PubMed

    Reay, Gudrun; Bouchal, Shelley Raffin; A Rankin, James

    2016-09-01

    Background Grounded theory (GT) is founded on the premise that underlying social patterns can be discovered and conceptualised into theories. The method and need for theoretical sensitivity are best understood in the historical context in which GT was developed. Theoretical sensitivity entails entering the field with no preconceptions, so as to remain open to the data and the emerging theory. Investigators also read literature from other fields to understand various ways to construct theories. Aim To explore the concept of theoretical sensitivity from a classical GT perspective, and discuss the ontological and epistemological foundations of GT. Discussion Difficulties in remaining theoretically sensitive throughout research are discussed and illustrated with examples. Emergence - the idea that theory and substance will emerge from the process of comparing data - and staying open to the data are emphasised. Conclusion Understanding theoretical sensitivity as an underlying guiding principle of GT helps the researcher make sense of important concepts, such as delaying the literature review, emergence and the constant comparative method (simultaneous collection, coding and analysis of data). Implications for practice Theoretical sensitivity and adherence to the GT research method allow researchers to discover theories that can bridge the gap between theory and practice.

  14. Hard X-ray imaging from Explorer

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Murray, S. S.

    1981-01-01

    Coded aperture X-ray detectors were applied to obtain large increases in sensitivity as well as angular resolution. A hard X-ray coded aperture detector concept is described which enables very high sensitivity studies of persistent hard X-ray sources and gamma-ray bursts. Coded aperture imaging is employed so that approx. 2 min source locations can be derived within a 3 deg field of view. Gamma-ray bursts were located initially to within approx. 2 deg and X-ray/hard X-ray spectra and timing, as well as precise locations, derived for possible burst afterglow emission. It is suggested that hard X-ray imaging should be conducted from an Explorer mission where long exposure times are possible.

  15. Sensitivity analysis in practice: providing an uncertainty budget when applying supplement 1 to the GUM

    NASA Astrophysics Data System (ADS)

    Allard, Alexandre; Fischer, Nicolas

    2018-06-01

    Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
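    To make the discussion concrete, the sketch below propagates two toy input quantities through a measurement model by Monte Carlo and then builds the one-at-a-time budget that supplement 1 to the GUM suggests. The model and distributions are invented (they are not the mass-calibration example), and the last line illustrates that the one-at-a-time contributions need not add up to the total variance when interactions are present.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    def model(x1, x2):
        return x1 * x2            # toy measurement model with an interaction

    # Toy input quantities (invented distributions)
    x1 = rng.normal(10.0, 0.5, N)
    x2 = rng.normal(2.0, 0.2, N)

    total_var = model(x1, x2).var()

    # One-at-a-time budget: vary a single input, hold the other at its best estimate
    var_x1_only = model(x1, np.full(N, 2.0)).var()
    var_x2_only = model(np.full(N, 10.0), x2).var()

    print(f"total variance           : {total_var:.3f}")
    print(f"OAT contribution of x1   : {var_x1_only:.3f}")
    print(f"OAT contribution of x2   : {var_x2_only:.3f}")
    print(f"sum of OAT contributions : {var_x1_only + var_x2_only:.3f}")  # any gap is due to interactions
    ```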

  16. Genome-wide identification of wheat (Triticum aestivum) expansins and expansin expression analysis in cold-tolerant and cold-sensitive wheat cultivars

    PubMed Central

    Zhang, Jun-Feng; Xu, Yong-Qing; Dong, Jia-Min; Peng, Li-Na; Feng, Xu; Wang, Xu; Li, Fei; Miao, Yu; Yao, Shu-Kuan; Zhao, Qiao-Qin; Feng, Shan-Shan; Hu, Bao-Zhong

    2018-01-01

    Plant expansins are proteins involved in cell wall loosening, plant growth, and development, as well as in response to plant diseases and other stresses. In this study, we identified 128 expansin coding sequences from the wheat (Triticum aestivum) genome. These sequences belong to 45 homoeologous copies of TaEXPs, including 26 TaEXPAs, 15 TaEXPBs and four TaEXLAs. No TaEXLB was identified. Gene expression and sub-expression profiles revealed that most of the TaEXPs were expressed either only in root tissues or in multiple organs. Real-time qPCR analysis showed that many TaEXPs were differentially expressed in four different tissues of the two wheat cultivars—the cold-sensitive ‘Chinese Spring (CS)’ and the cold-tolerant ‘Dongnongdongmai 1 (D1)’ cultivars. Our results suggest that the differential expression of TaEXPs could be related to low-temperature tolerance or sensitivity of different wheat cultivars. Our study expands our knowledge on wheat expansins and sheds new light on the functions of expansins in plant development and stress response. PMID:29596529

  17. Elastic critical moment for bisymmetric steel profiles and its sensitivity by the finite difference method

    NASA Astrophysics Data System (ADS)

    Kamiński, M.; Supeł, Ł.

    2016-02-01

    It is widely known that lateral-torsional buckling of a member under bending, and the warping restraints of its cross-sections, are crucial in steel structures for the estimation of their safety and durability. Although engineering codes for steel and aluminum structures support the designer with additional analytical expressions that depend on the boundary conditions and internal force diagrams, one may alternatively apply the traditional Finite Element or Finite Difference Methods (FEM, FDM) to determine the so-called critical moment representing this phenomenon. The principal purpose of this work is to compare three different ways of determining the critical moment, also in the context of structural sensitivity analysis with respect to the structural element length. Sensitivity gradients are determined using both an analytical approach and the central finite difference scheme, and are contrasted for the analytical, FEM and FDM approaches. The computational study covers the entire family of steel I- and H-beams available to practitioners in this area, and provides a basis for further stochastic reliability analysis as well as durability prediction, including possible corrosion progress.

  18. Proper coding of the Abbreviated Injury Scale: can clinical parameters help as surrogates in estimating blood loss?

    PubMed

    Burkhardt, M; Holstein, J H; Moersdorf, P; Kristen, A; Lefering, R; Pohlemann, T; Pizanis, A

    2014-08-01

    The Abbreviated Injury Scale (AIS) requires the estimation of the lost blood volume for some severity assignments. This study aimed to develop a rule of thumb for facilitating AIS coding by using objective clinical parameters as surrogate markers of blood loss. Using the example of pelvic ring fractures, a retrospective analysis of TraumaRegister DGU(®) data from 2002 to 2011 was performed. As potential surrogate markers of blood loss, we recorded the hemoglobin (Hb) level, systolic blood pressure (SBP), base excess (BE), Quick's value, units of packed red blood cells (PRBCs) transfused before intensive care unit (ICU) admission, and mortality within 24 h. We identified 11,574 patients with pelvic ring fractures (Tile/OTA classification: 39 % type A, 40 % type B, 21 % type C). Type C fractures were 73.1 % AISpelvis 4 and 26.9 % AISpelvis 5. Type B fractures were 47 % AISpelvis 3, 47 % AISpelvis 4, and 6 % AISpelvis 5. In type C fractures, cut-off values of <7 g/dL Hb, <90 mmHg SBP, <-9 mmol/L BE, <35 % Quick's value, >15 units PRBCs, and death within 24 h had a positive predictive value of 47 % and a sensitivity of 62 % for AISpelvis 5. In type B fractures, these cut-off values had poor sensitivity (48 %) and positive predictive value (11 %) for AISpelvis 5. We failed to develop a rule of thumb for facilitating a proper future AIS coding using the example of pelvic ring fractures. The estimation of blood loss for severity assignment still remains a noteworthy weakness in the AIS coding of traumatic injuries.

  19. Design of the superconducting magnet for 9.4 Tesla whole-body magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Li, Y.; Wang, Q.; Dai, Y.; Ni, Z.; Zhu, X.; Li, L.; Zhao, B.; Chen, S.

    2017-02-01

    A superconducting magnet for 9.4 Tesla whole-body magnetic resonance imaging has been designed and fabricated at the Institute of Electrical Engineering, Chinese Academy of Sciences. In this paper, the electromagnetic design methods of the main coils and compensating coils are presented. Sensitivity analysis is performed for all superconducting coils. The design of the superconducting shimming coils is also presented, and the electromagnetic decoupling of the Z2 coils from the main coils is introduced. Stress and strain analysis with both averaged and detailed models is performed with the finite element method. A quench simulation code with an anisotropic continuum model and a control-volume method has been developed and verified by experimental study. By means of the quench simulation code, the quench protection system for the 9.4 T magnet is designed for the main coils, the compensating coils and the shimming coils. The magnet cryostat design with zero helium boil-off technology is also introduced.

  20. Indications for spine surgery: validation of an administrative coding algorithm to classify degenerative diagnoses

    PubMed Central

    Lurie, Jon D.; Tosteson, Anna N.A.; Deyo, Richard A.; Tosteson, Tor; Weinstein, James; Mirza, Sohail K.

    2014-01-01

    Study Design Retrospective analysis of Medicare claims linked to a multi-center clinical trial. Objective The Spine Patient Outcomes Research Trial (SPORT) provided a unique opportunity to examine the validity of a claims-based algorithm for grouping patients by surgical indication. SPORT enrolled patients for lumbar disc herniation, spinal stenosis, and degenerative spondylolisthesis. We compared the surgical indication derived from Medicare claims to that provided by SPORT surgeons, the “gold standard”. Summary of Background Data Administrative data are frequently used to report procedure rates, surgical safety outcomes, and costs in the management of spinal surgery. However, the accuracy of using diagnosis codes to classify patients by surgical indication has not been examined. Methods Medicare claims were linked to beneficiaries enrolled in SPORT. The sensitivity and specificity of three claims-based approaches to group patients based on surgical indications were examined: 1) using the first listed diagnosis; 2) using all diagnoses independently; and 3) using a diagnosis hierarchy based on the support for fusion surgery. Results Medicare claims were obtained from 376 SPORT participants, including 21 with disc herniation, 183 with spinal stenosis, and 172 with degenerative spondylolisthesis. The hierarchical coding algorithm was the most accurate approach for classifying patients by surgical indication, with sensitivities of 76.2%, 88.1%, and 84.3% for disc herniation, spinal stenosis, and degenerative spondylolisthesis cohorts, respectively. The specificity was 98.3% for disc herniation, 83.2% for spinal stenosis, and 90.7% for degenerative spondylolisthesis. Misclassifications were primarily due to codes attributing more complex pathology to the case. Conclusion Standardized approaches for using claims data to accurately group patients by surgical indication are of widespread interest. We found that a hierarchical coding approach correctly classified over 90% of spine patients into their respective SPORT cohorts. Therefore, claims data appear to be a reasonably valid basis for classifying patients by surgical indication. PMID:24525995

  1. Quality of data regarding diagnoses of spinal disorders in administrative databases. A multicenter study.

    PubMed

    Faciszewski, T; Broste, S K; Fardon, D

    1997-10-01

    The purpose of the present study was to evaluate the accuracy of data regarding diagnoses of spinal disorders in administrative databases at eight different institutions. The records of 189 patients who had been managed for a disorder of the lumbar spine were independently reviewed by a physician who assigned the appropriate diagnostic codes according to the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM). The age range of the 189 patients was seventeen to eighty-four years. The six major diagnostic categories studied were herniation of a lumbar disc, a previous operation on the lumbar spine, spinal stenosis, cauda equina syndrome, acquired spondylolisthesis, and congenital spondylolisthesis. The diagnostic codes assigned by the physician were compared with the codes that had been assigned during the ordinary course of events by personnel in the medical records department of each of the eight hospitals. The accuracy of coding was also compared among the eight hospitals, and it was found to vary depending on the diagnosis. Although there were both false-negative and false-positive codes at each institution, most errors were related to the low sensitivity of coding for previous spinal operations: only seventeen (28 per cent) of sixty-one such diagnoses were coded correctly. Other errors in coding were less frequent, but their implications for conclusions drawn from the information in administrative databases depend on the frequency of a diagnosis and its importance in an analysis. This study demonstrated that the accuracy of a diagnosis of a spinal disorder recorded in an administrative database varies according to the specific condition being evaluated. It is necessary to document the relative accuracy of specific ICD-9-CM diagnostic codes in order to improve the ability to validate the conclusions derived from investigations based on administrative databases.

  2. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
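    Although the abstract above mainly reviews finite differences and ADIFOR, the title's complex-variable technique (the complex-step derivative) is easy to illustrate: perturb the input along the imaginary axis and read the derivative off the imaginary part, with no subtractive cancellation. The sketch below is a generic illustration, not the flow-solver implementation of the report.

    ```python
    import numpy as np

    def complex_step_derivative(f, x, h=1e-30):
        """f'(x) ~= Im(f(x + i*h)) / h; h can be tiny because nothing is subtracted."""
        return np.imag(f(x + 1j * h)) / h

    def f(x):
        # any expression written with complex-safe operations
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x0 = 1.5
    print(complex_step_derivative(f, x0))               # accurate to machine precision
    print((f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6)         # central difference, for comparison
    ```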

  3. Validation of an International Classification of Diseases, Ninth Revision Code Algorithm for Identifying Chiari Malformation Type 1 Surgery in Adults.

    PubMed

    Greenberg, Jacob K; Ladner, Travis R; Olsen, Margaret A; Shannon, Chevis N; Liu, Jingxia; Yarbrough, Chester K; Piccirillo, Jay F; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2015-08-01

    The use of administrative billing data may enable large-scale assessments of treatment outcomes for Chiari Malformation type I (CM-1). However, to utilize such data sets, validated International Classification of Diseases, Ninth Revision (ICD-9-CM) code algorithms for identifying CM-1 surgery are needed. To validate 2 ICD-9-CM code algorithms identifying patients undergoing CM-1 decompression surgery. We retrospectively analyzed the validity of 2 ICD-9-CM code algorithms for identifying adult CM-1 decompression surgery performed at 2 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-1), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression, or laminectomy). Algorithm 2 restricted this group to patients with a primary diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. Among 340 first-time admissions identified by Algorithm 1, the overall PPV for CM-1 decompression was 65%. Among the 214 admissions identified by Algorithm 2, the overall PPV was 99.5%. The PPV for Algorithm 1 was lower in the Vanderbilt (59%) cohort, males (40%), and patients treated between 2009 and 2013 (57%), whereas the PPV of Algorithm 2 remained high (≥99%) across subgroups. The sensitivity of Algorithms 1 (86%) and 2 (83%) were above 75% in all subgroups. ICD-9-CM code Algorithm 2 has excellent PPV and good sensitivity to identify adult CM-1 decompression surgery. These results lay the foundation for studying CM-1 treatment outcomes by using large administrative databases.

  4. Assessing the Role of Place and Timing Cues in Coding Frequency and Amplitude Modulation as a Function of Age.

    PubMed

    Whiteford, Kelly L; Kreft, Heather A; Oxenham, Andrew J

    2017-08-01

    Natural sounds can be characterized by their fluctuations in amplitude and frequency. Ageing may affect sensitivity to some forms of fluctuations more than others. The present study used individual differences across a wide age range (20-79 years) to test the hypothesis that slow-rate, low-carrier frequency modulation (FM) is coded by phase-locked auditory-nerve responses to temporal fine structure (TFS), whereas fast-rate FM is coded via rate-place (tonotopic) cues, based on amplitude modulation (AM) of the temporal envelope after cochlear filtering. Using a low (500 Hz) carrier frequency, diotic FM and AM detection thresholds were measured at slow (1 Hz) and fast (20 Hz) rates in 85 listeners. Frequency selectivity and TFS coding were assessed using forward masking patterns and interaural phase disparity tasks (slow dichotic FM), respectively. Comparable interaural level disparity tasks (slow and fast dichotic AM and fast dichotic FM) were measured to control for effects of binaural processing not specifically related to TFS coding. Thresholds in FM and AM tasks were correlated, even across tasks thought to use separate peripheral codes. Age was correlated with slow and fast FM thresholds in both diotic and dichotic conditions. The relationship between age and AM thresholds was generally not significant. Once accounting for AM sensitivity, only diotic slow-rate FM thresholds remained significantly correlated with age. Overall, results indicate stronger effects of age on FM than AM. However, because of similar effects for both slow and fast FM when not accounting for AM sensitivity, the effects cannot be unambiguously ascribed to TFS coding.

  5. Actinic Flux Calculations: A Model Sensitivity Study

    NASA Technical Reports Server (NTRS)

    Krotkov, Nickolay A.; Flittner, D.; Ahmad, Z.; Herman, J. R.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    calculate direct and diffuse surface irradiance and actinic flux (downwelling (2p) and total (4p)) for the reference model. Sensitivity analysis has shown that the accuracy of the radiative transfer flux calculations for a unit ETS (i.e. atmospheric transmittance), together with a numerical interpolation technique for the constituents' vertical profiles, is better than 1% for SZA less than 70° and wavelengths longer than 310 nm. The differences increase for shorter wavelengths and larger SZA, due to the differences in pseudo-spherical correction techniques and vertical discretization among the codes. Our sensitivity study includes variation of ozone cross-sections, ETS spectra and the effects of wavelength shifts between vacuum and air scales. We also investigate the effects of aerosols on the spectral flux components in the UV and visible spectral regions. The "aerosol correction factors" (ACFs) were calculated at discrete wavelengths and different SZAs for each flux component (direct, diffuse, reflected) and prescribed IPMMI aerosol parameters. Finally, the sensitivity study was extended to the calculation of selected photolysis rate coefficients.

  6. Development of probabilistic internal dosimetry computer code

    NASA Astrophysics Data System (ADS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-02-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated in the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we developed a probabilistic internal-dose-assessment code by using MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
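    The sketch below illustrates the general Monte Carlo part of such a workflow with a deliberately simplistic, invented dose model (three uncertain factors multiplied and divided); it is not the developed MATLAB code or its biokinetic models. It produces the percentiles mentioned in the text and a crude distribution-based sensitivity ranking via rank correlation (requires NumPy and SciPy).

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    N = 50_000

    # Invented uncertain components of an intake/dose estimate
    measured_Bq = rng.lognormal(mean=np.log(500.0), sigma=0.25, size=N)
    retention   = rng.uniform(0.05, 0.15, size=N)                        # biokinetic retention fraction
    dose_per_Bq = rng.lognormal(mean=np.log(1e-6), sigma=0.4, size=N)    # Sv per Bq intake

    dose_Sv = measured_Bq / retention * dose_per_Bq                      # simplistic dose model

    print(np.percentile(dose_Sv, [2.5, 5, 50, 95, 97.5]))                # the percentiles quoted above

    # Distribution-based sensitivity: rank correlation of each input with the dose
    for name, x in [("measured activity", measured_Bq),
                    ("retention fraction", retention),
                    ("dose coefficient", dose_per_Bq)]:
        rho, _ = spearmanr(x, dose_Sv)
        print(f"{name:20s} Spearman rho = {rho:+.2f}")
    ```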

  7. The Generation of Field Sensitive Interface States in Commercial CMOS Devices.

    DTIC Science & Technology

    1984-05-31

  8. Parameter estimation and sensitivity analysis in an agent-based model of Leishmania major infection

    PubMed Central

    Jones, Douglas E.; Dorman, Karin S.

    2009-01-01

    Computer models of disease take a systems biology approach toward understanding host-pathogen interactions. In particular, data driven computer model calibration is the basis for inference of immunological and pathogen parameters, assessment of model validity, and comparison between alternative models of immune or pathogen behavior. In this paper we describe the calibration and analysis of an agent-based model of Leishmania major infection. A model of macrophage loss following uptake of necrotic tissue is proposed to explain macrophage depletion following peak infection. Using Gaussian processes to approximate the computer code, we perform a sensitivity analysis to identify important parameters and to characterize their influence on the simulated infection. The analysis indicates that increasing growth rate can favor or suppress pathogen loads, depending on the infection stage and the pathogen’s ability to avoid detection. Subsequent calibration of the model against previously published biological observations suggests that L. major has a relatively slow growth rate and can replicate for an extended period of time before damaging the host cell. PMID:19837088
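    The emulator idea (fit a Gaussian process to a modest number of simulator runs, then interrogate the cheap surrogate instead of the expensive code) can be sketched as follows with scikit-learn. The two-parameter "simulator" and the one-at-a-time effect curves are invented stand-ins, not the authors' agent-based Leishmania model or their calibration procedure.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(7)

    def expensive_simulator(theta):
        """Stand-in for an agent-based run: pathogen load after growth and clearance."""
        growth, clearance = theta
        return np.exp(3.0 * growth) / (1.0 + 5.0 * clearance)

    # "Training runs": a small design over the two normalized parameters
    X_train = rng.uniform(0.0, 1.0, size=(40, 2))
    y_train = np.array([expensive_simulator(t) for t in X_train])

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.3, 0.3]),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    # Cheap emulator evaluations: crude one-at-a-time effect curves
    grid = np.linspace(0.0, 1.0, 25)
    growth_effect    = gp.predict(np.column_stack([grid, np.full_like(grid, 0.5)]))
    clearance_effect = gp.predict(np.column_stack([np.full_like(grid, 0.5), grid]))
    print("output range due to growth rate:", growth_effect.max() - growth_effect.min())
    print("output range due to clearance  :", clearance_effect.max() - clearance_effect.min())
    ```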

  9. Case studies in Bayesian microbial risk assessments.

    PubMed

    Kennedy, Marc C; Clough, Helen E; Turner, Joanne

    2009-12-21

    The quantification of uncertainty and variability is a key component of quantitative risk analysis. Recent advances in Bayesian statistics make it ideal for integrating multiple sources of information, of different types and quality, and providing a realistic estimate of the combined uncertainty in the final risk estimates. We present two case studies related to foodborne microbial risks. In the first, we combine models to describe the sequence of events resulting in illness from consumption of milk contaminated with VTEC O157. We used Monte Carlo simulation to propagate uncertainty in some of the inputs to computer models describing the farm and pasteurisation process. Resulting simulated contamination levels were then assigned to consumption events from a dietary survey. Finally we accounted for uncertainty in the dose-response relationship and uncertainty due to limited incidence data to derive uncertainty about yearly incidences of illness in young children. Options for altering the risk were considered by running the model with different hypothetical policy-driven exposure scenarios. In the second case study we illustrate an efficient Bayesian sensitivity analysis for identifying the most important parameters of a complex computer code that simulated VTEC O157 prevalence within a managed dairy herd. This was carried out in 2 stages, first to screen out the unimportant inputs, then to perform a more detailed analysis on the remaining inputs. The method works by building a Bayesian statistical approximation to the computer code using a number of known code input/output pairs (training runs). We estimated that the expected total number of children aged 1.5-4.5 who become ill due to VTEC O157 in milk is 8.6 per year, with 95% uncertainty interval (0,11.5). The most extreme policy we considered was banning on-farm pasteurisation of milk, which reduced the estimate to 6.4 with 95% interval (0,11). In the second case study the effective number of inputs was reduced from 30 to 7 in the screening stage, and just 2 inputs were found to explain 82.8% of the output variance. A combined total of 500 runs of the computer code were used. These case studies illustrate the use of Bayesian statistics to perform detailed uncertainty and sensitivity analyses, integrating multiple information sources in a way that is both rigorous and efficient.

  10. An RNA tool kit to study the status of mouse ES cells: sex determination and stemness.

    PubMed

    Jay, F; Ciaudo, C

    2013-09-01

    Mouse embryonic stem cells (mESCs) are pluripotent stem cells derived from the inner cell mass of the blastocyst. They can be maintained under controlled culture conditions in a pluripotent state, or be induced to differentiate into all derivatives of the three primary germ layers: ectoderm, endoderm and mesoderm. Several studies have characterised the coding and non-coding (nc) RNA repertoires of mESCs, uncovering highly dynamic variations during the process of differentiation, but also qualitative differences pertaining to sex. For example, up-regulation of the long non-coding RNA Xist on the X chromosome induces gene silencing and X inactivation exclusively during female mESC differentiation. In contrast, specific small RNAs have been shown to be up-regulated during male mESC differentiation. Here, we illustrate how a small set of key coding and ncRNAs can be exploited as dynamic and sensitive markers of the stemness and/or the differentiation status of male or female mESC lines. We describe adapted techniques for the extended characterization and analysis of mESCs from as little material as that cultured in a single 75 cm² flask. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Simulated Raman Spectral Analysis of Organic Molecules

    NASA Astrophysics Data System (ADS)

    Lu, Lu

    The advent of laser technology in the 1960s solved the main difficulty of Raman spectroscopy, resulting in simplified instruments and greatly boosting the sensitivity of the technique. Raman spectroscopy is now commonly used in chemistry and biology. Because vibrational information is specific to the chemical bonds, Raman spectroscopy provides fingerprints that identify the types of molecules in a sample. In this thesis, we simulate the Raman spectra of organic and inorganic materials with the General Atomic and Molecular Electronic Structure System (GAMESS) and Gaussian, two computational codes that perform a range of general chemistry calculations. We run these codes on our CPU-based high-performance cluster (HPC). Through the message passing interface (MPI), a standardized and portable message-passing system that lets the codes run in parallel, we are able to decrease the computation time and increase the sizes of the systems simulated by the codes. From our simulations, we will build a database that allows a search algorithm to quickly identify N-H and O-H bonds in different materials. Our ultimate goal is to analyze and identify the spectra of organic matter from meteorites and to compare these spectra with terrestrial, biologically produced amino acids and residues.

  12. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for the characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure-mode level.

  13. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.

  14. The effect of alternative seismotectonic models on PSHA results - a sensitivity study for two sites in Israel

    NASA Astrophysics Data System (ADS)

    Avital, Matan; Kamai, Ronnie; Davis, Michael; Dor, Ory

    2018-02-01

    We present a full probabilistic seismic hazard analysis (PSHA) sensitivity analysis for two sites in southern Israel - one in the near field of a major fault system and one farther away. The PSHA analysis is conducted for alternative source representations, using alternative model parameters for the main seismic sources, such as slip rate and Mmax, among others. The analysis also considers the effect of the ground motion prediction equation (GMPE) on the hazard results. In this way, the two types of epistemic uncertainty - modelling uncertainty and parametric uncertainty - are treated and addressed. We quantify the uncertainty propagation by testing its influence on the final calculated hazard, such that the controlling knowledge gaps are identified and can be treated in future studies. We find that current practice in Israel, as represented by the current version of the building code, grossly underestimates the hazard, by approximately 40 % in short return periods (e.g. 10 % in 50 years) and by as much as 150 % in long return periods (e.g. 10E-5). The analysis shows that this underestimation is most probably due to a combination of factors, including source definitions as well as the GMPE used for analysis.

  15. Validation of Intensive Care and Mechanical Ventilation Codes in Medicare Data.

    PubMed

    Wunsch, Hannah; Kramer, Andrew; Gershengorn, Hayley B

    2017-07-01

    To assess the reliability of codes relevant to critically ill patients in administrative data. Retrospective cohort study linking data from Acute Physiology and Chronic Health Evaluation Outcomes, a clinical database of ICU patients with data from Medicare Provider Analysis and Review. We linked data based on matching for sex, date of birth, hospital, and date of admission to hospital. Forty-six hospitals in the United States participating in Acute Physiology and Chronic Health Evaluation Outcomes. All patients in Acute Physiology and Chronic Health Evaluation Outcomes greater than or equal to 65 years old who could be linked with hospitalization records in Medicare Provider Analysis and Review from January 1, 2009, through September 30, 2012. Of 62,451 patients in the Acute Physiology and Chronic Health Evaluation Outcomes dataset, 80.1% were matched with data in Medicare Provider Analysis and Review. All but 2.7% of Acute Physiology and Chronic Health Evaluation Outcomes ICU patients had either an ICU or coronary care unit charge in Medicare Provider Analysis and Review. In Acute Physiology and Chronic Health Evaluation Outcomes, 37.0% received mechanical ventilation during the ICU stay versus 24.1% in Medicare Provider Analysis and Review. The Medicare Provider Analysis and Review procedure codes for mechanical ventilation had high specificity (96.0%; 95% CI, 95.8-96.2), but only moderate sensitivity (58.4%; 95% CI, 57.7-59.1), with a positive predictive value of 89.6% (95% CI, 89.1-90.1) and negative predictive value of 79.7% (95% CI, 79.4-80.1). For patients with mechanical ventilation codes, Medicare Provider Analysis and Review overestimated the percentage with a duration greater than 96 hours (36.6% vs 27.3% in Acute Physiology and Chronic Health Evaluation Outcomes). There was discordance in the hospital discharge status (alive or dead) for only 0.47% of all linked records (κ = 1.00). Medicare Provider Analysis and Review data contain robust information on hospital mortality for patients admitted to the ICU but have limited ability to identify all patients who received mechanical ventilation during a critical illness. Estimates of use of mechanical ventilation in the United States should likely be revised upward.
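
    The agreement statistics reported above (sensitivity, specificity, positive and negative predictive values) all derive from a single 2x2 cross-tabulation of the administrative code against the clinical reference standard. A minimal, generic Python sketch of that calculation follows; the example arrays are hypothetical, not the linked study data.

        import numpy as np

        def diagnostic_accuracy(code_positive, reference_positive):
            """Sensitivity, specificity, PPV and NPV of a binary code
            against a binary reference standard (boolean arrays)."""
            code = np.asarray(code_positive, dtype=bool)
            ref = np.asarray(reference_positive, dtype=bool)
            tp = np.sum(code & ref)
            fp = np.sum(code & ~ref)
            fn = np.sum(~code & ref)
            tn = np.sum(~code & ~ref)
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Hypothetical example: 10 linked patients
        claims_code = [1, 0, 1, 0, 0, 1, 0, 0, 1, 0]   # procedure code present
        clinical_db = [1, 1, 1, 0, 0, 1, 1, 0, 1, 0]   # ventilation documented
        print(diagnostic_accuracy(claims_code, clinical_db))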

  16. Validity of the International Classification of Diseases, Tenth Revision code for acute kidney injury in elderly patients at presentation to the emergency department and at hospital admission

    PubMed Central

    Hwang, Y Joseph; Shariff, Salimah Z; Gandhi, Sonja; Wald, Ron; Clark, Edward; Fleet, Jamie L; Garg, Amit X

    2012-01-01

    Objective To evaluate the validity of the International Classification of Diseases, Tenth Revision (ICD-10) code N17x for acute kidney injury (AKI) in elderly patients in two settings: at presentation to the emergency department and at hospital admission. Design A population-based retrospective validation study. Setting Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum creatinine measurements at presentation to the emergency department (n=36 049) or hospital admission (n=38 566). The baseline serum creatinine measurement was a median of 102 and 39 days prior to presentation to the emergency department and hospital admission, respectively. Main outcome measures Sensitivity, specificity and positive and negative predictive values of ICD-10 diagnostic coding algorithms for AKI using a reference standard based on changes in serum creatinine from the baseline value. Median changes in serum creatinine of patients who were code positive and code negative for AKI. Results The sensitivity of the best-performing coding algorithm for AKI (defined as a ≥2-fold increase in serum creatinine concentration) was 37.4% (95% CI 32.1% to 43.1%) at presentation to the emergency department and 61.6% (95% CI 57.5% to 65.5%) at hospital admission. The specificity was greater than 95% in both settings. In patients who were code positive for AKI, the median (IQR) increase in serum creatinine from the baseline was 133 (62 to 288) µmol/l at presentation to the emergency department and 98 (43 to 200) µmol/l at hospital admission. In those who were code negative, the increase in serum creatinine was 2 (−8 to 14) and 6 (−4 to 20) µmol/l, respectively. Conclusions The presence or absence of ICD-10 code N17× differentiates two groups of patients with distinct changes in serum creatinine at the time of a hospital encounter. However, the code underestimates the true incidence of AKI due to a limited sensitivity. PMID:23204077

  17. Validity of the International Classification of Diseases 10th revision code for hospitalisation with hyponatraemia in elderly patients

    PubMed Central

    Gandhi, Sonja; Shariff, Salimah Z; Fleet, Jamie L; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objective To evaluate the validity of the International Classification of Diseases, 10th Revision (ICD-10) diagnosis code for hyponatraemia (E87.1) in two settings: at presentation to the emergency department and at hospital admission. Design Population-based retrospective validation study. Setting Twelve hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Patients aged 66 years and older with serum sodium laboratory measurements at presentation to the emergency department (n=64 581) and at hospital admission (n=64 499). Main outcome measures Sensitivity, specificity, positive predictive value and negative predictive value comparing various ICD-10 diagnostic coding algorithms for hyponatraemia to serum sodium laboratory measurements (reference standard). Median serum sodium values comparing patients who were code positive and code negative for hyponatraemia. Results The sensitivity of hyponatraemia (defined by a serum sodium ≤132 mmol/l) for the best-performing ICD-10 coding algorithm was 7.5% at presentation to the emergency department (95% CI 7.0% to 8.2%) and 10.6% at hospital admission (95% CI 9.9% to 11.2%). Both specificities were greater than 99%. In the two settings, the positive predictive values were 96.4% (95% CI 94.6% to 97.6%) and 82.3% (95% CI 80.0% to 84.4%), while the negative predictive values were 89.2% (95% CI 89.0% to 89.5%) and 87.1% (95% CI 86.8% to 87.4%). In patients who were code positive for hyponatraemia, the median (IQR) serum sodium measurements were 123 (119–126) mmol/l and 125 (120–130) mmol/l in the two settings. In code negative patients, the measurements were 138 (136–140) mmol/l and 137 (135–139) mmol/l. Conclusions The ICD-10 diagnostic code for hyponatraemia differentiates between two groups of patients with distinct serum sodium measurements at both presentation to the emergency department and at hospital admission. However, these codes underestimate the true incidence of hyponatraemia due to low sensitivity. PMID:23274673

  18. Application of upconversion luminescent-magnetic microbeads with weak background noise and facile separation in ochratoxin A detection

    NASA Astrophysics Data System (ADS)

    Liao, Zhenyu; Zhang, Ying; Su, Lin; Chang, Jin; Wang, Hanjie

    2017-02-01

    Ochratoxin A (OTA), the most harmful and abundant ochratoxin, is chemically stable and commonly found in foodstuffs. In this work, an upconversion luminescent-magnetic microbead (UCLMM)-based cytometric bead array for OTA detection with low reagent consumption and high sensitivity has been established and optimized. In the UCLMMs, the upconversion nanocrystals (UCNs) used for optical encoding present a weak background noise and no spectral cross talk between the encoding signals and the target labels under the two excitation conditions, which improves detection sensitivity, while the superparamagnetic Fe3O4 nanoparticles (Fe3O4 NPs) enable rapid separation and analysis. The results show that the developed method achieves a sensitivity of 9.553 ppt, below that of HPLC, with a 50-μL sample, and can be completed in <2 h with good accuracy and high reproducibility. Therefore, UCLMMs of different colors could become a promising assay platform for multiple mycotoxins after further improvement.
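
    As background on how figures of merit of this kind are commonly reported, the sketch below applies the usual 3-sigma criterion, estimating the limit of detection as three times the standard deviation of blank readings divided by the calibration slope. The calibration points and blank values are hypothetical illustrations, not data from this work.

        import numpy as np

        # Hypothetical calibration: assay signal vs. OTA concentration (ppt)
        conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0, 200.0])
        signal = np.array([0.021, 0.118, 0.262, 0.509, 1.004, 1.982])

        slope, intercept = np.polyfit(conc, signal, 1)   # linear calibration fit

        # Hypothetical replicate blank measurements
        blanks = np.array([0.020, 0.023, 0.019, 0.022, 0.021])
        lod = 3.0 * np.std(blanks, ddof=1) / slope       # 3-sigma detection limit

        print(f"calibration slope: {slope:.4f} per ppt, LOD ~ {lod:.2f} ppt")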

  19. Validation of an International Statistical Classification of Diseases and Related Health Problems 10th Revision Coding Algorithm for Hospital Encounters with Hypoglycemia.

    PubMed

    Hodge, Meryl C; Dixon, Stephanie; Garg, Amit X; Clemens, Kristin K

    2017-06-01

    To determine the positive predictive value and sensitivity of an International Statistical Classification of Diseases and Related Health Problems, 10th Revision, coding algorithm for hospital encounters concerning hypoglycemia. We carried out 2 retrospective studies in Ontario, Canada. We examined medical records from 2002 through 2014, in which older adults (mean age, 76) were assigned at least 1 code for hypoglycemia (E15, E160, E161, E162, E1063, E1163, E1363, E1463). The positive predictive value of the algorithm was calculated using a gold-standard definition (blood glucose value <4 mmol/L or physician diagnosis of hypoglycemia). To determine the algorithm's sensitivity, we used linked healthcare databases to identify older adults (mean age, 77) with laboratory plasma glucose values <4 mmol/L during a hospital encounter that took place between 2003 and 2011. We assessed how frequently a code for hypoglycemia was present. We also examined the algorithm's performance in differing clinical settings (e.g. inpatient vs. emergency department, by hypoglycemia severity). The positive predictive value of the algorithm was 94.0% (95% confidence interval 89.3% to 97.0%), and its sensitivity was 12.7% (95% confidence interval 11.9% to 13.5%). It performed better in the emergency department and in cases of more severe hypoglycemia (plasma glucose values <3.5 mmol/L compared with ≥3.5 mmol/L). Our hypoglycemia algorithm has a high positive predictive value but is limited in sensitivity. Although we can be confident that older adults who are assigned 1 of these codes truly had a hypoglycemia event, many episodes will not be captured by studies using administrative databases. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.

  20. Parallel-vector computation for structural analysis and nonlinear unconstrained optimization problems

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.

    1990-01-01

    Practical engineering applications can often be formulated as constrained optimization problems. There are several solution algorithms for solving a constrained optimization problem. One approach is to convert a constrained problem into a series of unconstrained problems; furthermore, unconstrained solution algorithms can be used as components of constrained solution algorithms. Structural optimization is an iterative process: starting from an initial design, a finite element structural analysis is performed to calculate the response of the system (such as displacements, stresses, eigenvalues, etc.). Based upon the sensitivity information on the objective and constraint functions, an optimizer such as ADS or IDESIGN can then be used to find a new, improved design. For the structural analysis phase, the equation solver for the system of simultaneous linear equations plays a key role since it is needed for static, eigenvalue, and dynamic analysis alike. For practical, large-scale structural analysis-synthesis applications, computational time can be excessively large. Thus, it is necessary to have a new structural analysis-synthesis code which employs new solution algorithms to exploit both the parallel and vector capabilities offered by modern, high-performance computers such as the Convex, Cray-2 and Cray Y-MP computers. The objective of this research project is, therefore, to incorporate the latest developments in the parallel-vector equation solver PVSOLVE into a widely used finite-element production code, such as SAP-4. Furthermore, several nonlinear unconstrained optimization subroutines have also been developed and tested in a parallel computing environment. The unconstrained optimization subroutines are not only useful in their own right, but they can also be incorporated into a popular constrained optimization code, such as ADS.
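
    The conversion of a constrained problem into a series of unconstrained problems mentioned above is commonly done with a penalty formulation. A minimal quadratic exterior-penalty sketch is shown below; the objective, constraint and solver choice are illustrative assumptions, not the structural problems or the ADS/IDESIGN optimizers of this project.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative problem: minimize f(x) subject to g(x) <= 0
        f = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
        g = lambda x: x[0] + x[1] - 2.0            # feasible when x0 + x1 <= 2

        def penalized(x, r):
            """Quadratic exterior penalty: violated constraint is squared and scaled."""
            return f(x) + r * max(0.0, g(x)) ** 2

        # Solve a sequence of unconstrained problems with increasing penalty weight.
        x = np.array([0.0, 0.0])
        for r in [1.0, 10.0, 100.0, 1000.0]:
            res = minimize(penalized, x, args=(r,), method="BFGS")
            x = res.x
        print("approximate constrained optimum:", x, "constraint value:", g(x))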

  1. Single neuron firing properties impact correlation-based population coding

    PubMed Central

    Hong, Sungho; Ratté, Stéphanie; Prescott, Steven A.; De Schutter, Erik

    2012-01-01

    Correlated spiking has been widely observed but its impact on neural coding remains controversial. Correlation arising from co-modulation of rates across neurons has been shown to vary with the firing rates of individual neurons. This translates into rate and correlation being equivalently tuned to the stimulus; under those conditions, correlated spiking does not provide information beyond that already available from individual neuron firing rates. Such correlations are irrelevant and can reduce coding efficiency by introducing redundancy. Using simulations and experiments in rat hippocampal neurons, we show here that pairs of neurons receiving correlated input also exhibit correlations arising from precise spike-time synchronization. Contrary to rate co-modulation, spike-time synchronization is unaffected by firing rate, thus enabling synchrony- and rate-based coding to operate independently. The type of output correlation depends on whether intrinsic neuron properties promote integration or coincidence detection: “ideal” integrators (with spike generation sensitive to stimulus mean) exhibit rate co-modulation whereas “ideal” coincidence detectors (with spike generation sensitive to stimulus variance) exhibit precise spike-time synchronization. Pyramidal neurons are sensitive to both stimulus mean and variance, and thus exhibit both types of output correlation proportioned according to which operating mode is dominant. Our results explain how different types of correlations arise based on how individual neurons generate spikes, and why spike-time synchronization and rate co-modulation can encode different stimulus properties. Our results also highlight the importance of neuronal properties for population-level coding insofar as neural networks can employ different coding schemes depending on the dominant operating mode of their constituent neurons. PMID:22279226

  2. SEE Sensitivity Analysis of 180 nm NAND CMOS Logic Cell for Space Applications

    NASA Astrophysics Data System (ADS)

    Sajid, Muhammad

    2016-07-01

    This paper focuses on Single Event Effects caused by energetic particle strikes on sensitive locations in a CMOS NAND logic cell designed in the 180 nm technology node to be operated in the space radiation environment. The generation of SE transients as well as upsets as a function of the LET of the incident particle has been determined for logic devices onboard LEO and GEO satellites. The minimum pulse magnitude and pulse width at the threshold LET were determined to estimate the vulnerability/susceptibility of the device to heavy ion strikes. The impact of temperature, strike location and logic state of the NAND circuit on the total SEU/SET rate was estimated with physical mechanism simulations using Visual TCAD, Genius, the runSEU program and the Crad computer code.
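
    For context, single event upset rates of this kind are usually derived from a device cross-section versus LET curve, frequently parameterized with a Weibull fit above the threshold LET. The sketch below shows that common parameterization with made-up parameters; it is not the device data or the TCAD results of this paper.

        import numpy as np

        def weibull_cross_section(let, sigma_sat, let_th, w, s):
            """SEU cross-section vs. LET using the common Weibull parameterization:
            sigma(LET) = sigma_sat * (1 - exp(-((LET - LET_th)/W)**s)) above threshold."""
            let = np.asarray(let, dtype=float)
            x = np.clip((let - let_th) / w, 0.0, None)   # zero below threshold LET
            return sigma_sat * (1.0 - np.exp(-x ** s))

        # Hypothetical parameters (sigma_sat in cm^2/bit, LET in MeV*cm^2/mg)
        lets = np.array([1.0, 5.0, 10.0, 20.0, 40.0, 60.0])
        print(weibull_cross_section(lets, sigma_sat=1e-8, let_th=2.0, w=15.0, s=1.5))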

  3. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser

    PubMed Central

    Almeida, Jonas S.; Iriabho, Egiebade E.; Gorrepati, Vijaya L.; Wilkinson, Sean R.; Grüneberg, Alexander; Robbins, David E.; Hackney, James R.

    2012-01-01

    Background: Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. Materials and Methods: ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Results: Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. Conclusions: The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local “download and installation”. PMID:22934238

  4. Copper benchmark experiment for the testing of JEFF-3.2 nuclear data for fusion applications

    NASA Astrophysics Data System (ADS)

    Angelone, M.; Flammini, D.; Loreti, S.; Moro, F.; Pillon, M.; Villar, R.; Klix, A.; Fischer, U.; Kodeli, I.; Perel, R. L.; Pohorecky, W.

    2017-09-01

    A neutronics benchmark experiment on a pure Copper block (dimensions 60 × 70 × 70 cm3) aimed at testing and validating the recent nuclear data libraries for fusion applications was performed in the frame of the European Fusion Program at the 14 MeV ENEA Frascati Neutron Generator (FNG). Reaction rates, neutron flux spectra and doses were measured using different experimental techniques (e.g. activation foil techniques, NE213 scintillator and thermoluminescent detectors). This paper first summarizes the analyses of the experiment carried out using the MCNP5 Monte Carlo code and the European JEFF-3.2 library. Large discrepancies between calculation (C) and experiment (E) were found for the reaction rates in both the high and low neutron energy ranges. The analysis was complemented by sensitivity/uncertainty (S/U) analyses using the deterministic SUSD3D and Monte Carlo MCSEN codes, respectively. The S/U analyses made it possible to identify the cross sections and energy ranges that most affect the calculated responses. The largest discrepancy among the C/E values was observed for the thermal (capture) reactions, indicating severe deficiencies in the 63,65Cu capture and elastic cross sections at low rather than at high energies. Deterministic and MC codes produced similar results. The 14 MeV copper experiment and its analysis thus call for a revision of the JEFF-3.2 copper cross section and covariance data evaluation. A new analysis of the experiment was performed with the MCNP5 code using the revised JEFF-3.3-T2 library released by NEA and a new, not yet distributed, revised JEFF-3.2 Cu evaluation produced by KIT. A noticeable improvement of the C/E results was obtained with both new libraries.

  5. ImageJS: Personalized, participated, pervasive, and reproducible image bioinformatics in the web browser.

    PubMed

    Almeida, Jonas S; Iriabho, Egiebade E; Gorrepati, Vijaya L; Wilkinson, Sean R; Grüneberg, Alexander; Robbins, David E; Hackney, James R

    2012-01-01

    Image bioinformatics infrastructure typically relies on a combination of server-side high-performance computing and client desktop applications tailored for graphic rendering. On the server side, matrix manipulation environments are often used as the back-end where deployment of specialized analytical workflows takes place. However, neither the server-side nor the client-side desktop solution, by themselves or combined, is conducive to the emergence of open, collaborative, computational ecosystems for image analysis that are both self-sustained and user driven. ImageJS was developed as a browser-based webApp, untethered from a server-side backend, by making use of recent advances in the modern web browser such as a very efficient compiler, high-end graphical rendering capabilities, and I/O tailored for code migration. Multiple versioned code hosting services were used to develop distinct ImageJS modules to illustrate its amenability to collaborative deployment without compromise of reproducibility or provenance. The illustrative examples include modules for image segmentation, feature extraction, and filtering. The deployment of image analysis by code migration is in sharp contrast with the more conventional, heavier, and less safe reliance on data transfer. Accordingly, code and data are loaded into the browser by exactly the same script tag loading mechanism, which offers a number of interesting applications that would be hard to attain with more conventional platforms, such as NIH's popular ImageJ application. The modern web browser was found to be advantageous for image bioinformatics in both the research and clinical environments. This conclusion reflects advantages in deployment scalability and analysis reproducibility, as well as the critical ability to deliver advanced computational statistical procedures to machines where access to sensitive data is controlled, that is, without local "download and installation".

  6. Total-dose radiation effects data for semiconductor devices, volume 3

    NASA Technical Reports Server (NTRS)

    Price, W. E.; Martin, K. E.; Nichols, D. K.; Gauthier, M. K.; Brown, S. F.

    1982-01-01

    Volume 3 of this three-volume set provides a detailed analysis of the data in Volumes 1 and 2, most of which was generated for the Galileo Orbiter Program in support of NASA space programs. Volume 1 includes total ionizing dose radiation test data on diodes, bipolar transistors, field effect transistors, and miscellaneous discrete solid-state devices. Volume 2 includes similar data on integrated circuits and a few large-scale integrated circuits. The data of Volumes 1 and 2 are combined in graphic format in Volume 3 to provide a comparison of radiation sensitivities of devices of a given type and different manufacturer, a comparison of multiple tests for a single data code, a comparison of multiple tests for a single lot, and a comparison of radiation sensitivities vs time (date codes). All data were generated using a steady-state 2.5-MeV electron source (Dynamitron) or a Cobalt-60 gamma ray source. The data that compose Volume 3 represent 26 different device types, 224 tests, and a total of 1040 devices. A comparison of the effects of steady-state electrons and Cobalt-60 gamma rays is also presented.

  7. A collision history-based approach to Sensitivity/Perturbation calculations in the continuous energy Monte Carlo code SERPENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giuseppe Palmiotti

    In this work, the implementation of a collision history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
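
    For readers unfamiliar with the terminology, the sensitivity coefficients computed by such tools are conventionally defined as relative changes of a response (here the effective multiplication factor) with respect to relative changes of a nuclear data parameter, and are combined with covariance data in the first-order "sandwich rule". A generic statement of both, not specific to SERPENT, is:

        S_{k,\sigma_i} = \frac{\sigma_i}{k_{\mathrm{eff}}}\,\frac{\partial k_{\mathrm{eff}}}{\partial \sigma_i},
        \qquad
        \left(\frac{\Delta k_{\mathrm{eff}}}{k_{\mathrm{eff}}}\right)^{2} \approx \mathbf{S}^{\mathsf{T}}\,\mathbf{C}_{\sigma\sigma}\,\mathbf{S},

    where S collects the relative sensitivities and C_{\sigma\sigma} is the relative covariance matrix of the nuclear data.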

  8. Adaptive Correlation Space Adjusted Open-Loop Tracking Approach for Vehicle Positioning with Global Navigation Satellite System in Urban Areas

    PubMed Central

    Ruan, Hang; Li, Jian; Zhang, Lei; Long, Teng

    2015-01-01

    For vehicle positioning with Global Navigation Satellite System (GNSS) in urban areas, open-loop tracking shows better performance because of its high sensitivity and superior robustness against multipath. However, no previous study has focused on the effects of the code search grid size on the code phase measurement accuracy of open-loop tracking. Traditional open-loop tracking methods are performed by batch correlators with a fixed correlation space. The code search grid size, which is the correlation space, is a constant empirical value, and the code phase measuring accuracy is largely degraded by an improper grid size, especially when the signal carrier-to-noise density ratio (C/N0) varies. In this study, the Adaptive Correlation Space Adjusted Open-Loop Tracking Approach (ACSA-OLTA) is proposed to improve the code phase measurement dependent pseudo range accuracy. In ACSA-OLTA, the correlation space is adjusted according to the signal C/N0. The novel Equivalent Weighted Pseudo Range Error (EWPRE) is introduced to obtain the optimal code search grid sizes for different C/N0. The code phase measuring errors of different measurement calculation methods are analyzed for the first time. The measurement calculation strategy of ACSA-OLTA is derived from this analysis to further improve accuracy while reducing correlator consumption. Performance simulations and real tests confirm that the pseudo range and positioning accuracy of ACSA-OLTA are better than those of traditional open-loop tracking methods in typical urban scenarios. PMID:26343683

  9. SNL/JAEA Collaborations on Sodium Fire Benchmarking.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Andrew Jordan; Denman, Matthew R; Takata, Takashi

    Two sodium spray fire experiments performed by Sandia National Laboratories (SNL) were used for a code-to-code comparison between CONTAIN-LMR and SPHINCS. Both computer codes are used for modeling sodium accidents in sodium fast reactors. The comparison between the two codes provides insights into the ability of both codes to model sodium spray fires. The SNL T3 and T4 experiments are 20 kg sodium spray fires with sodium spray temperatures of 200 deg C and 500 deg C, respectively. Given the relatively low sodium temperature in the SNL T3 experiment, the sodium spray experienced a period of non-combustion. The vessel in the SNL T4 experiment experienced a rapid pressurization that caused one of the instrumentation ports to fail during the sodium spray. Despite these unforeseen difficulties, both codes were shown to be in good agreement with the experiments. The subsequent pool fire that develops from the unburned sodium spray is a significant characteristic of the T3 experiment. SPHINCS showed better long-term agreement with the SNL T3 experiment than CONTAIN-LMR. The unexpected port failure during the SNL T4 experiment presented modelling challenges. The time at which the port failure occurred is unknown, but is believed to have occurred at about 11 seconds into the sodium spray fire. The sensitivity analysis for the SNL T4 experiment shows that, with a port failure, the sodium spray fire can still maintain elevated pressures during the spray.

  10. Cost-Sensitive Local Binary Feature Learning for Facial Age Estimation.

    PubMed

    Lu, Jiwen; Liong, Venice Erin; Zhou, Jie

    2015-12-01

    In this paper, we propose a cost-sensitive local binary feature learning (CS-LBFL) method for facial age estimation. Unlike the conventional facial age estimation methods that employ hand-crafted descriptors or holistically learned descriptors for feature representation, our CS-LBFL method learns discriminative local features directly from raw pixels for face representation. Motivated by the fact that facial age estimation is a cost-sensitive computer vision problem and local binary features are more robust to illumination and expression variations than holistic features, we learn a series of hashing functions to project raw pixel values extracted from face patches into low-dimensional binary codes, where binary codes with similar chronological ages are projected as close as possible, and those with dissimilar chronological ages are projected as far as possible. Then, we pool and encode these local binary codes within each face image as a real-valued histogram feature for face representation. Moreover, we propose a cost-sensitive local binary multi-feature learning method to jointly learn multiple sets of hashing functions using face patches extracted from different scales to exploit complementary information. Our methods achieve competitive performance on four widely used face aging data sets.

  11. Management of a CFD organization in support of space hardware development

    NASA Technical Reports Server (NTRS)

    Schutzenhofer, L. A.; Mcconnaughey, P. K.; Mcconnaughey, H. V.; Wang, T. S.

    1991-01-01

    The management strategy of NASA-Marshall's CFD branch in support of space hardware development and code validation implements various elements of total quality management. The strategy encompasses (1) a teaming strategy which focuses on the most pertinent problem, (2) quick-turnaround analysis, (3) the evaluation of retrofittable design options through sensitivity analysis, and (4) coordination between the chief engineer and the hardware contractors. Advanced-technology concepts are being addressed via the definition of technology-development projects whose products are transferable to hardware programs and the integration of research activities with industry, government agencies, and universities, on the basis of the 'consortium' concept.

  12. Reduced-Order Models for the Aeroelastic Analysis of Ares Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.

    2010-01-01

    This document presents the development and application of unsteady aerodynamic, structural dynamic, and aeroelastic reduced-order models (ROMs) for the ascent aeroelastic analysis of the Ares I-X flight test and Ares I crew launch vehicles using the unstructured-grid, aeroelastic FUN3D computational fluid dynamics (CFD) code. The purpose of this work is to perform computationally-efficient aeroelastic response calculations that would be prohibitively expensive via computation of multiple full-order aeroelastic FUN3D solutions. These efficient aeroelastic ROM solutions provide valuable insight regarding the aeroelastic sensitivity of the vehicles to various parameters over a range of dynamic pressures.

  13. Timing group delay and differential code bias corrections for BeiDou positioning

    NASA Astrophysics Data System (ADS)

    Guo, Fei; Zhang, Xiaohong; Wang, Jinling

    2015-05-01

    This article first clarifies the relationship between the timing group delay (TGD) and differential code bias (DCB) parameters for BDS, and demonstrates the equivalence of the TGD and DCB correction models, combining theory with practice. The TGD/DCB correction models have been extended to various scenarios of BDS positioning, and such models have been evaluated with real triple-frequency datasets. To test the effectiveness of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both standard point positioning (SPP) and precise point positioning (PPP) tests are carried out for BDS signals with different schemes. Furthermore, the influence of differential code biases on BDS positioning estimates such as coordinates, receiver clock biases, tropospheric delays and carrier phase ambiguities is investigated comprehensively. Comparative analyses show that unmodeled differential code biases degrade the performance of BDS SPP by a factor of two or more, whereas the PPP estimates are affected to varying degrees. For SPP, the accuracy of dual-frequency combinations is slightly worse than that of single-frequency, and they are much more sensitive to the differential code biases, particularly for the B2B3 combination. For PPP, the uncorrected differential code biases are mostly absorbed into the receiver clock bias and carrier phase ambiguities, resulting in a much longer convergence time. Even though the influence of the differential code biases can be mitigated over time and comparable positioning accuracy can be achieved after convergence, it is suggested that the differential code biases be handled properly, since they are vital for PPP convergence and integer ambiguity resolution.
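
    As background, and under the usual assumption that the BDS broadcast satellite clock is referenced to the B3 signal, the single-frequency corrections discussed here are commonly written as

        \mathrm{d}t^{s}_{B1I} = \mathrm{d}t^{s} - TGD_{1},
        \qquad
        \mathrm{d}t^{s}_{B2I} = \mathrm{d}t^{s} - TGD_{2},

    with the broadcast TGD parameters corresponding, to first order, to the differential code biases of B1 and B2 against B3. This generic form is offered for orientation only; the article itself should be consulted for the exact correction models and sign conventions.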

  14. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628

  15. PharmacoGx: an R package for analysis of large pharmacogenomic datasets.

    PubMed

    Smirnov, Petr; Safikhani, Zhaleh; El-Hachem, Nehme; Wang, Dong; She, Adrian; Olsen, Catharina; Freeman, Mark; Selby, Heather; Gendoo, Deena M A; Grossmann, Patrick; Beck, Andrew H; Aerts, Hugo J W L; Lupien, Mathieu; Goldenberg, Anna; Haibe-Kains, Benjamin

    2016-04-15

    Pharmacogenomics holds great promise for the development of biomarkers of drug response and the design of new therapeutic options, which are key challenges in precision medicine. However, such data are scattered and lack standards for efficient access and analysis, consequently preventing the realization of the full potential of pharmacogenomics. To address these issues, we implemented PharmacoGx, an easy-to-use, open source package for integrative analysis of multiple pharmacogenomic datasets. We demonstrate the utility of our package in comparing large drug sensitivity datasets, such as the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia. Moreover, we show how to use our package to easily perform Connectivity Map analysis. With increasing availability of drug-related data, our package will open new avenues of research for meta-analysis of pharmacogenomic data. PharmacoGx is implemented in R and can be easily installed on any system. The package is available from CRAN and its source code is available from GitHub. bhaibeka@uhnresearch.ca or benjamin.haibe.kains@utoronto.ca Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. High-Speed Digital Interferometry

    NASA Technical Reports Server (NTRS)

    De Vine, Glenn; Shaddock, Daniel A.; Ware, Brent; Spero, Robert E.; Wuchenich, Danielle M.; Klipstein, William M.; McKenzie, Kirk

    2012-01-01

    Digitally enhanced heterodyne interferometry (DI) is a laser metrology technique employing pseudo-random noise (PRN) codes phase-modulated onto an optical carrier. Combined with heterodyne interferometry, the PRN code is used to select individual signals, returning the inherent interferometric sensitivity determined by the optical wavelength. The signal isolation arises from the autocorrelation properties of the PRN code, enabling both rejection of spurious signals (e.g., from scattered light) and multiplexing capability using a single metrology system. The minimum separation of optical components is determined by the wavelength of the PRN code.
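
    The signal isolation mentioned above rests on the sharply peaked autocorrelation of a PRN sequence: the correlation is large only near zero delay, so each delay-shifted copy of the code can be selected independently. A toy numpy illustration with a random +/-1 code (not an actual metrology code) follows.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1024
        prn = rng.choice([-1.0, 1.0], size=n)            # pseudo-random +/-1 code

        def circular_autocorrelation(code):
            """Normalized circular autocorrelation of a +/-1 code via the FFT."""
            spectrum = np.fft.fft(code)
            acf = np.fft.ifft(spectrum * np.conj(spectrum)).real / len(code)
            return acf

        acf = circular_autocorrelation(prn)
        print("peak at zero delay:", acf[0])                      # equals 1.0
        print("max off-peak magnitude:", np.abs(acf[1:]).max())   # ~ 1/sqrt(n), small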

  17. Performance Enhancement by Threshold Level Control of a Receiver in WDM-PON System with Manchester Coded Downstream and NRZ Upstream Re-Modulation

    NASA Astrophysics Data System (ADS)

    Kim, Bong Kyu; Chung, Hwan Seok; Chang, Sun Hyok; Park, Sangjo

    We propose and demonstrate a scheme enhancing the performance of optical access networks with Manchester coded downstream and re-modulated NRZ coded upstream. It is achieved by threshold level control of a limiting amplifier at a receiver, and the minimum sensitivity of upstream is significantly improved for the re-modulation scheme with 5Gb/s Manchester coded downstream and 2.488Gb/s NRZ upstream data rates.
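
    Manchester coding, used here for the downstream, represents each bit with a mid-bit transition, keeping the line DC-balanced. A toy encoder using one common convention (1 encoded as high-then-low, 0 as low-then-high; the opposite convention is equally common) is sketched below purely for illustration.

        def manchester_encode(bits):
            """Encode a bit sequence as half-bit chips.
            Convention used here: 1 -> (1, 0), 0 -> (0, 1)."""
            chips = []
            for b in bits:
                chips.extend((1, 0) if b else (0, 1))
            return chips

        print(manchester_encode([1, 0, 1, 1, 0]))
        # -> [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]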

  18. A CyberCIEGE Scenario Illustrating Secrecy Issues in an Internal Corporate Network Connected to the Internet

    DTIC Science & Technology

    2004-09-01

    to provide access to and protect are the NG Game Code, Employee Files, E-MAIL, Marketing Plans and Legacy Code. The NG Game Code is the MOST...major hit. E-MAIL is classified NON-SENSITIVE. The Marketing Plans are for the NG Game Code. They contain information concerning what the new...simulation games currently on the market except that, rather than allowing players to choose rides, refreshments and facilities, CyberCIEGE will

  19. Exploration of association rule mining for coding consistency and completeness assessment in inpatient administrative health data.

    PubMed

    Peng, Mingkai; Sundararajan, Vijaya; Williamson, Tyler; Minty, Evan P; Smith, Tony C; Doktorchik, Chelsea T A; Quan, Hude

    2018-03-01

    Data quality assessment is a challenging facet for research using coded administrative health data. Current assessment approaches are time and resource intensive. We explored whether association rule mining (ARM) can be used to develop rules for assessing data quality. We extracted 2013 and 2014 records from the hospital discharge abstract database (DAD) for patients between the ages of 55 and 65 from five acute care hospitals in Alberta, Canada. The ARM was conducted using the 2013 DAD to extract rules with support ≥0.0019 and confidence ≥0.5 using the bootstrap technique, and tested in the 2014 DAD. The rules were compared against the method of coding frequency and assessed for their ability to detect error introduced by two kinds of data manipulation: random permutation and random deletion. The association rules generally had clear clinical meanings. Comparing 2014 data to 2013 data (both original), there were 3 rules with a confidence difference >0.1, while coding frequency difference of codes in the right hand of rules was less than 0.004. After random permutation of 50% of codes in the 2014 data, average rule confidence dropped from 0.72 to 0.27 while coding frequency remained unchanged. Rule confidence decreased with the increase of coding deletion, as expected. Rule confidence was more sensitive to code deletion compared to coding frequency, with slope of change ranging from 1.7 to 184.9 with a median of 9.1. The ARM is a promising technique to assess data quality. It offers a systematic way to derive coding association rules hidden in data, and potentially provides a sensitive and efficient method of assessing data quality compared to standard methods. Copyright © 2018 Elsevier Inc. All rights reserved.
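
    The support and confidence thresholds quoted above have simple definitions over a set of coded records: the support of a rule A -> B is the fraction of records containing both A and B, and the confidence is that count divided by the count of records containing A. A minimal from-scratch Python sketch follows, using hypothetical diagnosis codes rather than DAD data.

        def rule_support_confidence(records, antecedent, consequent):
            """Support and confidence of the rule antecedent -> consequent,
            where each record is a set of codes."""
            antecedent, consequent = set(antecedent), set(consequent)
            n = len(records)
            n_ant = sum(1 for r in records if antecedent <= r)
            n_both = sum(1 for r in records if (antecedent | consequent) <= r)
            support = n_both / n
            confidence = n_both / n_ant if n_ant else 0.0
            return support, confidence

        # Hypothetical coded discharge records
        records = [
            {"E11", "I10", "N18"},   # diabetes, hypertension, CKD
            {"E11", "N18"},
            {"I10"},
            {"E11", "I10"},
        ]
        print(rule_support_confidence(records, {"E11"}, {"N18"}))  # (0.5, 0.666...)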

  20. Fault trees for decision making in systems analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lambert, Howard E.

    1975-10-09

    The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the most optimal course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
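
    The ranking described above can be illustrated, in a generic way, with the rare-event approximation: the top-event probability is approximated by the sum of the minimal cut set probabilities, and the Fussell-Vesely importance of a basic event is the fraction of that sum contributed by cut sets containing the event. The sketch below uses hypothetical cut sets and failure probabilities, not output of the IMPORTANCE code, and shows only one of several importance measures in common use.

        from math import prod

        # Hypothetical minimal cut sets (as sets of basic-event names) and
        # basic-event failure probabilities.
        cut_sets = [{"A", "B"}, {"A", "C"}, {"D"}]
        p = {"A": 1e-2, "B": 5e-2, "C": 2e-2, "D": 1e-4}

        cut_probs = [prod(p[e] for e in cs) for cs in cut_sets]
        p_top = sum(cut_probs)                      # rare-event approximation

        def fussell_vesely(event):
            """Fraction of the top-event probability due to cut sets containing event."""
            return sum(q for cs, q in zip(cut_sets, cut_probs) if event in cs) / p_top

        for event in sorted(p):
            print(event, round(fussell_vesely(event), 3))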

  1. Commercial turbofan engine exhaust nozzle flow analyses using PAB3D

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Uenishi, K.; Carlson, John R.; Keith, B. D.

    1992-01-01

    Recent developments of a three-dimensional (PAB3D) code have paved the way for a computational investigation of complex aircraft aerodynamic components. The PAB3D code was developed for solving the simplified Reynolds Averaged Navier-Stokes equations in a three-dimensional multiblock/multizone structured mesh domain. The present analysis was applied to commercial turbofan exhaust flow systems. Solution sensitivity to grid density is presented. Laminar flow solutions were developed for all grids and two-equation k-epsilon solutions were developed for selected grids. Static pressure distributions, mass flow and thrust quantities were calculated for on-design engine operating conditions. Good agreement between predicted surface static pressures and experimental data was observed at different locations. Mass flow was predicted within 0.2 percent of experimental data. Thrust forces were typically within 0.4 percent of experimental data.

  2. Are implicit motives revealed in mere words? Testing the marker-word hypothesis with computer-based text analysis

    PubMed Central

    Schultheiss, Oliver C.

    2013-01-01

    Traditionally, implicit motives (i.e., non-conscious preferences for specific classes of incentives) are assessed through semantic coding of imaginative stories. The present research tested the marker-word hypothesis, which states that implicit motives are reflected in the frequencies of specific words. Using Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2001), Study 1 identified word categories that converged with a content-coding measure of the implicit motives for power, achievement, and affiliation in picture stories collected in German and US student samples, showed discriminant validity with self-reported motives, and predicted well-validated criteria of implicit motives (gender difference for the affiliation motive; in interaction with personal-goal progress: emotional well-being). Study 2 demonstrated LIWC-based motive scores' causal validity by documenting their sensitivity to motive arousal. PMID:24137149

  3. Aeras: A next generation global atmosphere model

    DOE PAGES

    Spotz, William F.; Smith, Thomas M.; Demeshko, Irina P.; ...

    2015-06-01

    Sandia National Laboratories is developing a new global atmosphere model named Aeras that is performance portable and supports the quantification of uncertainties. These next-generation capabilities are enabled by building Aeras on top of Albany, a code base that supports the rapid development of scientific application codes while leveraging Sandia's foundational mathematics and computer science packages in Trilinos and Dakota. Embedded uncertainty quantification (UQ) is an original design capability of Albany, and performance portability is a recent upgrade. Other required features, such as shell-type elements, spectral elements, efficient explicit and semi-implicit time-stepping, transient sensitivity analysis, and concurrent ensembles, were not components of Albany as the project began, and have been (or are being) added by the Aeras team. We present early UQ and performance portability results for the shallow water equations.

  4. Four-dimensional key design in amplitude, phase, polarization and distance for optical encryption based on polarization digital holography and QR code.

    PubMed

    Lin, Chao; Shen, Xueju; Li, Baochen

    2014-08-25

    We demonstrate that all parameters of an optical lightwave can be simultaneously designed as keys in a security system. This multi-dimensional property of the key can significantly enlarge the key space and further enhance the security level of the system. Single-shot off-axis digital holography with orthogonally polarized reference waves is employed to record the polarization state of the object wave. Two polarization holograms are calculated and fabricated to be arranged in the reference arms to generate random amplitude and phase distributions, respectively. Upon reconstruction, the original information, represented with a QR code, can be retrieved using Fresnel diffraction with the decryption keys and read out noise-free. Numerical simulation results for this cryptosystem are presented. An analysis of the key sensitivity and fault tolerance properties is also provided.

  5. Identification of novel diagnostic biomarkers for thyroid carcinoma.

    PubMed

    Wang, Xiliang; Zhang, Qing; Cai, Zhiming; Dai, Yifan; Mou, Lisha

    2017-12-19

    Thyroid carcinoma (THCA) is the most common endocrine malignancy worldwide. Unfortunately, only a limited number of large-scale analyses have been performed to identify biomarkers for THCA. Here, we conducted a meta-analysis using 505 THCA patients and 59 normal controls from The Cancer Genome Atlas. After identifying differentially expressed long non-coding RNAs (lncRNA) and protein coding genes (PCG), we found large differences among various lncRNA-PCG co-expressed pairs in THCA. A dysregulation network with scale-free topology was constructed. Four molecules (LA16c-380H5.2, RP11-203J24.8, MLF1 and SDC4) could potentially serve as diagnostic biomarkers of THCA with high sensitivity and specificity. We further present a diagnostic panel with expression cutoff values. Our results demonstrate the potential application of these four molecules as novel independent biomarkers for THCA diagnosis.

  6. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.
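
    As background on the method, system-level sensitivity derivatives of the kind SSA produces are classically obtained from the Global Sensitivity Equations. In a generic two-discipline form (shown only for orientation, not as the specific propulsion/performance/weights-and-sizing system of this study), with coupled outputs Y1(X, Y2) and Y2(X, Y1), they read

        \begin{bmatrix} I & -\dfrac{\partial Y_1}{\partial Y_2} \\[4pt] -\dfrac{\partial Y_2}{\partial Y_1} & I \end{bmatrix}
        \begin{bmatrix} \dfrac{\mathrm{d} Y_1}{\mathrm{d} X} \\[4pt] \dfrac{\mathrm{d} Y_2}{\mathrm{d} X} \end{bmatrix}
        =
        \begin{bmatrix} \dfrac{\partial Y_1}{\partial X} \\[4pt] \dfrac{\partial Y_2}{\partial X} \end{bmatrix},

    where the partial derivatives come from the individual discipline analyses and the solution gives the total system derivatives used by the optimizer.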

  7. Validity of the International Classification of Diseases 10th revision code for hyperkalaemia in elderly patients at presentation to an emergency department and at hospital admission

    PubMed Central

    Fleet, Jamie L; Shariff, Salimah Z; Gandhi, Sonja; Weir, Matthew A; Jain, Arsh K; Garg, Amit X

    2012-01-01

    Objectives Evaluate the validity of the International Classification of Diseases, 10th revision (ICD-10) code for hyperkalaemia (E87.5) in two settings: at presentation to an emergency department and at hospital admission. Design Population-based validation study. Setting 12 hospitals in Southwestern Ontario, Canada, from 2003 to 2010. Participants Elderly patients with serum potassium values at presentation to an emergency department (n=64 579) and at hospital admission (n=64 497). Primary outcome Sensitivity, specificity, positive-predictive value and negative-predictive value. Serum potassium values in patients with and without a hyperkalaemia code (code positive and code negative, respectively). Results The sensitivity of the best-performing ICD-10 coding algorithm for hyperkalaemia (defined by serum potassium >5.5 mmol/l) was 14.1% (95% CI 12.5% to 15.9%) at presentation to an emergency department and 14.6% (95% CI 13.3% to 16.1%) at hospital admission. Both specificities were greater than 99%. In the two settings, the positive-predictive values were 83.2% (95% CI 78.4% to 87.1%) and 62.0% (95% CI 57.9% to 66.0%), while the negative-predictive values were 97.8% (95% CI 97.6% to 97.9%) and 96.9% (95% CI 96.8% to 97.1%). In patients who were code positive for hyperkalaemia, median (IQR) serum potassium values were 6.1 (5.7 to 6.8) mmol/l at presentation to an emergency department and 6.0 (5.1 to 6.7) mmol/l at hospital admission. For code-negative patients median (IQR) serum potassium values were 4.0 (3.7 to 4.4) mmol/l and 4.1 (3.8 to 4.5) mmol/l in each of the two settings, respectively. Conclusions Patients with hospital encounters who were ICD-10 E87.5 hyperkalaemia code positive and negative had distinct higher and lower serum potassium values, respectively. However, due to very low sensitivity, the incidence of hyperkalaemia is underestimated. PMID:23274674

  8. Advances in Geologic Disposal System Modeling and Application to Crystalline Rock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mariner, Paul E.; Stein, Emily R.; Frederick, Jennifer M.

    The Used Fuel Disposition Campaign (UFDC) of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (OFCT) is conducting research and development (R&D) on geologic disposal of used nuclear fuel (UNF) and high-level nuclear waste (HLW). Two of the high priorities for UFDC disposal R&D are design concept development and disposal system modeling (DOE 2011). These priorities are directly addressed in the UFDC Generic Disposal Systems Analysis (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media (e.g., salt, granite, clay, and deep borehole disposal). This report describes specific GDSA activities in fiscal year 2016 (FY 2016) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code and the Dakota uncertainty sampling and propagation code. Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through engineered barriers and natural geologic barriers to the biosphere. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.

  9. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons with Tram Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  10. Tiltrotor Aeroacoustic Code (TRAC) Prediction Assessment and Initial Comparisons With TRAM Test Data

    NASA Technical Reports Server (NTRS)

    Burley, Casey L.; Brooks, Thomas F.; Charles, Bruce D.; McCluer, Megan

    1999-01-01

    A prediction sensitivity assessment to inputs and blade modeling is presented for the TiltRotor Aeroacoustic Code (TRAC). For this study, the non-CFD prediction system option in TRAC is used. Here, the comprehensive rotorcraft code, CAMRAD.Mod 1, coupled with the high-resolution sectional loads code HIRES, predicts unsteady blade loads to be used in the noise prediction code WOPWOP. The sensitivity of the predicted blade motions, blade airloads, wake geometry, and acoustics is examined with respect to rotor rpm, blade twist and chord, and to blade dynamic modeling. To accomplish this assessment, an interim input-deck for the TRAM test model and an input-deck for a reference test model are utilized in both rigid and elastic modes. Both of these test models are regarded as near scale models of the V-22 proprotor (tiltrotor). With basic TRAC sensitivities established, initial TRAC predictions are compared to results of an extensive test of an isolated model proprotor. The test was that of the TiltRotor Aeroacoustic Model (TRAM) conducted in the Duits-Nederlandse Windtunnel (DNW). Predictions are compared to measured noise for the proprotor operating over an extensive range of conditions. The variation of predictions demonstrates the great care that must be taken in defining the blade motion. However, even with this variability, the predictions using the different blade modeling successfully capture (bracket) the levels and trends of the noise for conditions ranging from descent to ascent.

  11. DESIGN CHARACTERISTICS OF THE IDAHO NATIONAL LABORATORY HIGH-TEMPERATURE GAS-COOLED TEST REACTOR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sterbentz, James; Bayless, Paul; Strydom, Gerhard

    2016-11-01

    Uncertainty and sensitivity analysis is an indispensable element of any substantial attempt at reactor simulation validation. The quantification of uncertainties in nuclear engineering has grown more important, and the IAEA Coordinated Research Program (CRP) on High-Temperature Gas Cooled Reactors (HTGR), initiated in 2012, aims to investigate the various uncertainty quantification methodologies for this type of reactor. The first phase of the CRP is dedicated to the estimation of cell and lattice model uncertainties due to neutron cross-section covariances. Phase II is oriented towards the investigation of propagated uncertainties from the lattice to the coupled neutronics/thermal hydraulics core calculations. Nominal results for the prismatic single block (Ex.I-2a) and super cell models (Ex.I-2c) have been obtained using the SCALE 6.1.3 two-dimensional lattice code NEWT coupled to the TRITON sequence for cross section generation. In this work, the TRITON/NEWT-flux-weighted cross sections obtained for Ex.I-2a and various models of Ex.I-2c are utilized to perform a sensitivity analysis of the MHTGR-350 core power densities and eigenvalues. The core solutions are obtained with the INL coupled code PHISICS/RELAP5-3D, utilizing a fixed-temperature feedback for Ex. II-1a. It is observed that the core power density does not vary significantly in shape, but the magnitude of these variations increases as the moderator-to-fuel ratio increases in the super cell lattice models.

  12. Mitochondrial sequence analysis for forensic identification using pyrosequencing technology.

    PubMed

    Andréasson, H; Asp, A; Alderborn, A; Gyllensten, U; Allen, M

    2002-01-01

    Over recent years, requests for mtDNA analysis in the field of forensic medicine have notably increased, and the results of such analyses have proved to be very useful in forensic cases where nuclear DNA analysis cannot be performed. Traditionally, mtDNA has been analyzed by DNA sequencing of the two hypervariable regions, HVI and HVII, in the D-loop. DNA sequence analysis using conventional Sanger sequencing is very robust but time consuming and labor intensive. By contrast, mtDNA analysis based on the pyrosequencing technology provides fast and accurate results from the human mtDNA present in many types of evidence materials in forensic casework. The assay has been developed to determine polymorphic sites in the mitochondrial D-loop as well as the coding region to further increase the discrimination power of mtDNA analysis. The pyrosequencing technology for analysis of mtDNA polymorphisms has been tested with regard to sensitivity, reproducibility, and success rate when applied to control samples and actual casework materials. The results show that the method is very accurate and sensitive; the results are easily interpreted and provide a high success rate on casework samples. The panel of pyrosequencing reactions for the mtDNA polymorphisms was chosen to result in an optimal discrimination power in relation to the number of bases determined.

  13. Predicting Regulatory Compliance in Beer Advertising on Facebook.

    PubMed

    Noel, Jonathan K; Babor, Thomas F

    2017-11-01

    The prevalence of alcohol advertising has been growing on social media platforms. The purpose of this study was to evaluate alcohol advertising on Facebook for regulatory compliance and thematic content. A total of 50 Budweiser and Bud Light ads posted on Facebook within 1 month of the 2015 NFL Super Bowl were evaluated for compliance with a self-regulated alcohol advertising code and for thematic content. An exploratory sensitivity/specificity analysis was conducted to determine if thematic content could predict code violations. The code violation rate was 82%, with violations prevalent in guidelines prohibiting the association of alcohol with success (Guideline 5) and health benefits (Guideline 3). Overall, 21 thematic content areas were identified. Displaying the product (62%) and adventure/sensation seeking (52%) were the most prevalent. There was perfect specificity (100%) for 10 content areas for detecting any code violation (animals, negative emotions, positive emotions, games/contests/promotions, female characters, minorities, party, sexuality, night-time, sunrise) and high specificity (>80%) for 10 content areas for detecting violations of guidelines intended to protect minors (animals, negative emotions, famous people, friendship, games/contests/promotions, minorities, responsibility messages, sexuality, sunrise, video games). The high prevalence of code violations indicates a failure of self-regulation to prevent potentially harmful content from appearing in alcohol advertising, including explicit code violations (e.g. sexuality). Routine violations indicate an unwillingness to restrict advertising content for public health purposes, and statutory restrictions may be necessary to sufficiently deter alcohol producers from repeatedly violating marketing codes. Violations of a self-regulated alcohol advertising code are prevalent in a sample of beer ads published on Facebook near the US National Football League's Super Bowl. Overall, 16 thematic content areas demonstrated high specificity for code violations. Alcohol advertising codes should be updated to expressly prohibit the use of such content. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  14. Extracting information from the text of electronic medical records to improve case detection: a systematic review

    PubMed Central

    Carroll, John A; Smith, Helen E; Scott, Donia; Cassell, Jackie A

    2016-01-01

    Background Electronic medical records (EMRs) are revolutionizing health-related research. One key issue for study quality is the accurate identification of patients with the condition of interest. Information in EMRs can be entered as structured codes or unstructured free text. The majority of research studies have used only coded parts of EMRs for case-detection, which may bias findings, miss cases, and reduce study quality. This review examines whether incorporating information from text into case-detection algorithms can improve research quality. Methods A systematic search returned 9659 papers, 67 of which reported on the extraction of information from free text of EMRs with the stated purpose of detecting cases of a named clinical condition. Methods for extracting information from text and the technical accuracy of case-detection algorithms were reviewed. Results Studies mainly used US hospital-based EMRs, and extracted information from text for 41 conditions using keyword searches, rule-based algorithms, and machine learning methods. There was no clear difference in case-detection algorithm accuracy between rule-based and machine learning methods of extraction. Inclusion of information from text resulted in a significant improvement in algorithm sensitivity and area under the receiver operating characteristic in comparison to codes alone (median sensitivity 78% (codes + text) vs 62% (codes), P = .03; median area under the receiver operating characteristic 95% (codes + text) vs 88% (codes), P = .025). Conclusions Text in EMRs is accessible, especially with open source information extraction algorithms, and significantly improves case detection when combined with codes. More harmonization of reporting within EMR studies is needed, particularly standardized reporting of algorithm accuracy metrics like positive predictive value (precision) and sensitivity (recall). PMID:26911811
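
    As a rough illustration of the comparison reported above (codes alone versus codes plus free text), the sketch below computes case-detection sensitivity for two simple detectors against a gold-standard label. It is written in Python; the record structure, keyword list, and diagnosis codes are invented for illustration and are not taken from any of the reviewed studies.

      # Minimal sketch: compare sensitivity of code-only vs code-plus-text case detection.
      # Records, keywords, and code lists are hypothetical.
      records = [
          {"codes": ["J45"], "text": "patient reports wheeze and asthma", "gold": True},
          {"codes": [],      "text": "known asthmatic, on salbutamol",    "gold": True},
          {"codes": ["I10"], "text": "hypertension follow-up",            "gold": False},
      ]

      ASTHMA_CODES = {"J45", "J46"}
      ASTHMA_KEYWORDS = ("asthma", "asthmatic", "wheeze")

      def detect_codes_only(rec):
          return bool(ASTHMA_CODES & set(rec["codes"]))

      def detect_codes_plus_text(rec):
          in_text = any(kw in rec["text"].lower() for kw in ASTHMA_KEYWORDS)
          return detect_codes_only(rec) or in_text

      def sensitivity(detector):
          cases = [r for r in records if r["gold"]]
          found = sum(detector(r) for r in cases)
          return found / len(cases)

      print("codes only:      ", sensitivity(detect_codes_only))       # 0.5
      print("codes plus text: ", sensitivity(detect_codes_plus_text))  # 1.0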

  15. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  16. Gene expression analysis upon lncRNA DDSR1 knockdown in human fibroblasts

    PubMed Central

    Jia, Li; Sun, Zhonghe; Wu, Xiaolin; Misteli, Tom; Sharma, Vivek

    2015-01-01

    Long non-coding RNAs (lncRNAs) play important roles in regulating diverse biological processes including DNA damage and repair. We have recently reported that the DNA damage inducible lncRNA DNA damage-sensitive RNA1 (DDSR1) regulates DNA repair by homologous recombination (HR). Since lncRNAs also modulate gene expression, we identified gene expression changes upon DDSR1 knockdown in human fibroblast cells. Gene expression analysis after RNAi treatment targeted against DDSR1 revealed 119 genes that show differential expression. Here we provide a detailed description of the microarray data (NCBI GEO accession number GSE67048) and the data analysis procedure associated with the publication by Sharma et al., 2015 in EMBO Reports [1]. PMID:26697398

  17. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacuci, Dan G.; Favorite, Jeffrey A.

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  18. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE PAGES

    Cacuci, Dan G.; Favorite, Jeffrey A.

    2018-04-06

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  19. Modeling carbon production and transport during ELMs in DIII-D

    NASA Astrophysics Data System (ADS)

    Hogan, J.; Wade, M.; Coster, D.; Lasnier, C.

    2004-11-01

    Large-scale Type I ELM events could provide a significant C source in ITER, and C production rates depend on incident D flux density and surface temperature, quantities which can vary significantly during an ELM event. Recent progress on DIII-D has improved opportunities for code comparison. Fast time-scale measurements of divertor CIII evolution [1] and fast edge CER measurements of C profile evolution during low-density DIII-D LSN ELMy H-modes (type I) [2] have been modeled using the solps5.0/Eirene99 coupled edge code and time dependent thermal analysis codes. An ELM model based on characteristics of MHD peeling-ballooning modes reproduces the pedestal evolution. Qualitative agreement for the CIII evolution during an ELM event is found using the Roth et al annealing model for chemical sputtering and the sensitivity to other models is described. Significant ELM-to-ELM variations in observed maximum divertor target IR temperature during nominally identical ELMs are investigated with models for C emission from micron-scale dust particles. [1] M Groth, M Fenstermacher et al J Nucl Mater 2003, [2] M Wade, K Burrell et al PSI-16

  20. Directed Hidden-Code Extractor for Environment-Sensitive Malwares

    NASA Astrophysics Data System (ADS)

    Jia, Chunfu; Wang, Zhi; Lu, Kai; Liu, Xinhai; Liu, Xin

    Malware writers often use packing techniques to hide malicious payloads. A number of dynamic unpacking tools are designed to identify and extract the hidden code in packed malware. However, such unpacking methods are all based on a highly controlled environment that is vulnerable to various anti-unpacking techniques. If the execution environment is suspicious, malware may stay inactive for a long time or stop execution immediately to evade detection. In this paper, we propose a novel approach that automatically reasons about the environment requirements imposed by malware and then directs an unpacking tool to change the controlled environment so that the hidden code can be extracted in the new environment. The experimental results show that our approach significantly increases the resilience of traditional unpacking tools to environment-sensitive malware.

  1. Comparison Of The Performance Of Hybrid Coders Under Different Configurations

    NASA Astrophysics Data System (ADS)

    Gunasekaran, S.; Raina J., P.

    1983-10-01

    Picture bandwidth reduction employing DPCM and Orthogonal Transform (OT) coding for TV transmission has been widely discussed in the literature; both techniques have their own advantages and limitations in terms of compression ratio, implementation, sensitivity to picture statistics, and sensitivity to channel noise. Hybrid coding, introduced by Habibi as a cascade of the two techniques, offers excellent performance and proves attractive by retaining the special advantages of both. Interest has recently shifted to hybrid coding, and in the absence of a report on the relative performance of hybrid coders at different configurations, an attempt has been made to collate this information. Fourier, Hadamard, Slant, Sine, Cosine, and Haar transforms have been considered for the present work.

  2. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  3. Administrative database concerns: accuracy of International Classification of Diseases, Ninth Revision coding is poor for preoperative anemia in patients undergoing spinal fusion.

    PubMed

    Golinvaux, Nicholas S; Bohl, Daniel D; Basques, Bryce A; Grauer, Jonathan N

    2014-11-15

    Cross-sectional study. To objectively evaluate the ability of International Classification of Diseases, Ninth Revision (ICD-9) codes, which are used as the foundation for administratively coded national databases, to identify preoperative anemia in patients undergoing spinal fusion. National database research in spine surgery continues to rise. However, the validity of studies based on administratively coded data, such as the Nationwide Inpatient Sample, is dependent on the accuracy of ICD-9 coding. Such coding has previously been found to have poor sensitivity to conditions such as obesity and infection. A cross-sectional study was performed at an academic medical center. Hospital-reported anemia ICD-9 codes (those used for administratively coded databases) were directly compared with the chart-documented preoperative hematocrits (true laboratory values). A patient was deemed to have preoperative anemia if the preoperative hematocrit was less than the lower end of the normal range (36.0% for females and 41.0% for males). The study included 260 patients. Of these, 37 patients (14.2%) were anemic; however, only 10 patients (3.8%) received an "anemia" ICD-9 code. Of the 10 patients coded as anemic, 7 were anemic by definition, whereas 3 were not, and thus were miscoded. This equates to an ICD-9 code sensitivity of 0.19, with a specificity of 0.99, and positive and negative predictive values of 0.70 and 0.88, respectively. This study uses preoperative anemia to demonstrate the potential inaccuracies of ICD-9 coding. These results have implications for publications using databases that are compiled from ICD-9 coding data. Furthermore, the findings of the current investigation raise concerns regarding the accuracy of additional comorbidities. Although administrative databases are powerful resources that provide large sample sizes, it is crucial that we further consider the quality of the data source relative to its intended purpose.
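
    The validation statistics quoted above follow directly from the reported counts (260 patients, 37 truly anemic, 10 coded as anemic, of whom 7 were truly anemic). A short Python check of that arithmetic:

      # Recompute the reported ICD-9 validation statistics from the counts in the abstract.
      n_total = 260
      tp = 7                         # coded anemic and truly anemic
      fp = 3                         # coded anemic but not anemic
      fn = 37 - tp                   # anemic but not coded (30)
      tn = n_total - tp - fp - fn    # 220

      sensitivity = tp / (tp + fn)   # 0.19
      specificity = tn / (tn + fp)   # 0.99
      ppv = tp / (tp + fp)           # 0.70
      npv = tn / (tn + fn)           # 0.88
      print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
            f"ppv={ppv:.2f} npv={npv:.2f}")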

  4. Testing and modeling of PBX-9501 shock initiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, Kim; Foley, Timothy; Novak, Alan

    2010-01-01

    This paper describes an ongoing effort to develop a detonation sensitivity test for PBX-9501 that is suitable for studying pristine and damaged HE. The approach involves testing and comparing the sensitivities of HE pressed to various densities and those of pre-damaged samples with similar porosities. The ultimate objectives are to understand the response of pre-damaged HE to shock impacts and to develop practical computational models for use in system analysis codes for HE safety studies. Computer simulation with the CTH shock physics code is used to aid the experimental design and analyze the test results. In the calculations, initiation and growth or failure of detonation are modeled with the empirical HVRB model. The historical LANL SSGT and LSGT were reviewed, and it was determined that a new, modified gap test should be developed to satisfy the current requirements. In the new test, the donor/spacer/acceptor assembly is placed in a holder that is designed to work with fixtures for pre-damaging the acceptor sample. CTH simulations were made of the gap test with PBX-9501 samples pressed to three different densities. The calculated sensitivities were validated by test observations. The agreement between the computed and experimental critical gap thicknesses, ranging from 9 to 21 mm under various test conditions, is well within 1 mm. These results show that the numerical modeling is a valuable complement to the experimental efforts in studying and understanding shock initiation of PBX-9501.

  5. [Optimal Operational Definition of Patient with Peptic Ulcer Bleeding for Big Data Analysis Using Combination of Clinical Characteristics in a Secondary General Hospital].

    PubMed

    Lee, Jae Won; Kim, Hyun Ki; Woo, Yong Sik; Jahng, Jaehoon; Jin, Young Ran; Park, Jong Heon; Kim, Yong Sung; Jung, Hwoon Yong

    2016-08-25

    Peptic ulcer bleeding (PUB) is the most common cause of upper gastrointestinal bleeding in Korea, but there has been no research done using big data. This study evaluates the optimal operational definition (OD) for big data research by analyzing the clinical characteristics of PUB. We reviewed the clinical characteristics of 92 patients with PUB confirmed on endoscopy in Wonkwang University Sanbon Hospital (January 2013 to December 2014). We calculated sensitivity and positive predictive value (PPV) to detect confirmed PUB patients using ODs developed by combining clinical features of patients with PUB. The mean patient age was 63 years. Men had a higher prevalence of PUB than women. Bleeding gastric ulcer was proportionately common in men in their 40s to 60s, while a significantly higher rate of bleeding occurred in women in their 70s and older. The rate of drug-induced ulcer was 28.2%, whereas the prevalence of Helicobacter pylori was 47.8%. Among the hospitalized patients with a diagnostic code of PUB, we ruled out patients with endoscopic removal of gastric adenoma or peritonitis, and selected patients who had been administered an intravenous proton pump inhibitor. The sensitivity in this setting was 82.6%, and the PPV was 88.4%. PUB was more common in older patients, and there was a clear gender difference in gastric ulcer bleeding by age. With a proper OD using PUB diagnostic codes, we can identify true patients with sufficiently high sensitivity and PPV.

  6. Accuracy of the new ICD-9-CM code for "drip-and-ship" thrombolytic treatment in patients with ischemic stroke.

    PubMed

    Tonarelli, Silvina B; Tibbs, Michael; Vazquez, Gabriela; Lakshminarayan, Kamakshi; Rodriguez, Gustavo J; Qureshi, Adnan I

    2012-02-01

    A new International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnosis code, V45.88, was approved by the Centers for Medicare and Medicaid Services (CMS) on October 1, 2008. This code identifies patients in whom intravenous (IV) recombinant tissue plasminogen activator (rt-PA) is initiated in one hospital's emergency department, followed by transfer within 24 hours to a comprehensive stroke center, a paradigm commonly referred to as "drip-and-ship." This study assessed the use and accuracy of the new V45.88 code for identifying ischemic stroke patients who meet the criteria for drip-and-ship at 2 advanced certified primary stroke centers. Consecutive patients over a 12-month period were identified by primary ICD-9-CM diagnosis codes related to ischemic stroke. The accuracy of V45.88 code utilization using administrative data provided by Health Information Management Services was assessed through a comparison with data collected in prospective stroke registries maintained at each hospital by a trained abstractor. Out of a total of 428 patients discharged from both hospitals with a diagnosis of ischemic stroke, 37 patients were given ICD-9-CM code V45.88. The internally validated data from the prospective stroke database demonstrated that a total of 40 patients met the criteria for drip-and-ship. A concurrent comparison found that 92% (sensitivity) of the patients treated with drip-and-ship were coded with V45.88. None of the non-drip-and-ship stroke cases received the V45.88 code (100% specificity). The new ICD-9-CM code for drip-and-ship appears to have high specificity and sensitivity, allowing effective data collection by the CMS. Copyright © 2012 National Stroke Association. Published by Elsevier Inc. All rights reserved.

  7. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes which have minimum distance that grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. In this paper, the derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  8. Population-based drug-related anaphylaxis in children and adolescents captured by South Carolina Emergency Room Hospital Discharge Database (SCERHDD) (2000-2002).

    PubMed

    West, Suzanne L; D'Aloisio, Aimee A; Ringel-Kulka, Tamar; Waller, Anna E; Clayton Bordley, W

    2007-12-01

    Anaphylaxis is a life-threatening condition; drug-related anaphylaxis represents approximately 10% of all cases. We assessed the utility of a statewide emergency department (ED) database for identifying drug-related anaphylaxis in children by developing and validating an algorithm composed of ICD-9-CM codes. There were 1,314,760 visits to South Carolina (SC) emergency departments (EDs) for patients <19 years in 2000-2002. We used ICD-9-CM disease or external cause of injury codes (E-codes) that suggested drug-related anaphylaxis or a severe drug-related allergic reaction. We found 50 cases classifiable as probable or possible drug-related anaphylaxis and 13 as drug-related allergic reactions. We used clinical evaluation by two pediatricians as the 'alloyed gold standard' for estimating sensitivity, specificity, and positive predictive value (PPV) of our algorithm. ED-treated drug-related anaphylaxis in the SC pediatric population was 1.56/100,000 person-years based on the algorithm and 0.50/100,000 person-years based on clinical evaluation. Assuming the disease codes we used identified all potential anaphylaxis cases in the database, the sensitivity was 1.00 (95%CI: 0.79, 1.00), specificity was 0.28 (95%CI: 0.16, 0.43), and the PPV was 0.32 (0.20, 0.47) for the algorithm. Sensitivity analyses improved the measurement properties of the algorithm. E-codes were invaluable for developing an anaphylaxis algorithm, although the frequently used code of E947.9 was often incorrectly applied. We believe that our algorithm may have over-ascertained drug-related anaphylaxis patients seen in an ED, but the clinical evaluation may have under-represented this diagnosis due to limited information on the offending agent in the abstracted ED records. Post-marketing drug surveillance using ED records may be viable if clinicians were to document drug-related anaphylaxis in the charts so that billing codes could be assigned properly. Copyright 2007 John Wiley & Sons, Ltd.

  9. Bayesian decision support for coding occupational injury data.

    PubMed

    Nanda, Gaurav; Grattan, Kathleen M; Chu, MyDzung T; Davis, Letitia K; Lehto, Mark R

    2016-06-01

    Studies on autocoding injury data have found that machine learning algorithms perform well for categories that occur frequently but often struggle with rare categories. Therefore, manual coding, although resource-intensive, cannot be eliminated. We propose a Bayesian decision support system to autocode a large portion of the data, filter cases for manual review, and assist human coders by presenting them with the top k prediction choices and a confusion matrix of predictions from Bayesian models. We studied the prediction performance of Single-Word (SW) and Two-Word-Sequence (TW) Naïve Bayes models on a sample of data from the 2011 Survey of Occupational Injury and Illness (SOII). We used the agreement in prediction results of SW and TW models, and various prediction strength thresholds for autocoding and filtering cases for manual review. We also studied the sensitivity of the top k predictions of the SW model, TW model, and SW-TW combination, and then compared the accuracy of the manually assigned codes to SOII data with that of the proposed system. The accuracy of the proposed system, assuming well-trained coders reviewing a subset of only 26% of cases flagged for review, was estimated to be comparable (86.5%) to the accuracy of the original coding of the data set (range: 73%-86.8%). Overall, the TW model had higher sensitivity than the SW model, and the accuracy of the prediction results increased when the two models agreed, and for higher prediction strength thresholds. The sensitivity of the top five predictions was 93%. The proposed system seems promising for coding injury data as it offers comparable accuracy and less manual coding. Accurate and timely coded occupational injury data is useful for surveillance as well as prevention activities that aim to make workplaces safer. Copyright © 2016 Elsevier Ltd and National Safety Council. All rights reserved.
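
    The top-k prediction idea described above can be sketched with an off-the-shelf Naïve Bayes text classifier. The snippet below is a minimal Python illustration using scikit-learn; the injury narratives and category labels are invented placeholders, and it does not reproduce the authors' SW/TW models or the SOII coding scheme.

      # Minimal sketch of top-k Naive Bayes predictions for injury narratives.
      # Training texts and category codes are hypothetical.
      import numpy as np
      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB

      texts = ["fell from ladder while painting",
               "cut finger on box cutter",
               "slipped on wet floor in kitchen",
               "strained back lifting boxes"]
      codes = ["FALL", "CUT", "FALL", "OVEREXERTION"]

      vec = CountVectorizer()
      X = vec.fit_transform(texts)
      clf = MultinomialNB().fit(X, codes)

      def top_k(narrative, k=2):
          # Return the k most probable codes with their posterior probabilities.
          probs = clf.predict_proba(vec.transform([narrative]))[0]
          order = np.argsort(probs)[::-1][:k]
          return [(clf.classes_[i], round(float(probs[i]), 3)) for i in order]

      print(top_k("worker slipped and fell on icy walkway"))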

  10. Identification of mutant phenotypes associated with loss of individual microRNAs in sensitized genetic backgrounds in Caenorhabditis elegans

    PubMed Central

    Brenner, John L.; Jasiewicz, Kristen L.; Fahley, Alisha F.; Kemp, Benedict J.; Abbott, Allison L.

    2010-01-01

    Summary MicroRNAs (miRNAs) are small, non-coding RNAs that regulate the translation and/or the stability of their mRNA targets. Previous work showed that for most miRNA genes of C. elegans, single gene knockouts did not result in detectable mutant phenotypes [1]. This may be due, in part, to functional redundancy between miRNAs. However, in most cases, worms carrying deletions of all members of a miRNA family do not display strong mutant phenotypes [2]. They may function together with unrelated miRNAs or with non-miRNA genes in regulatory networks, possibly to ensure the robustness of developmental mechanisms. To test this, we examined worms lacking individual miRNAs in genetically sensitized backgrounds. These include genetic backgrounds with reduced processing and activity of all miRNAs or with reduced activity of a wide array of regulatory pathways [3]. Using these two approaches, mutant phenotypes were identified for 25 out of 31 miRNAs included in this analysis. Our findings describe biological roles for individual miRNAs and suggest that use of sensitized genetic backgrounds provides an efficient approach for miRNA functional analysis. PMID:20579881

  11. Dermal Sensitization Potential of DIGL-RP Solid Propellant in Guinea Pigs

    DTIC Science & Technology

    1989-10-01

    y ’,c. ADM$$S (ft, SWOt , &Wd ZIP Cod 7b. ADDRESS (City, State, arid ZIP Code) Letterman Army Institute of Research Fort Detrick Presidio of San...for contact sensitization. Toxicol Appl Pharmacol 1969; Suppl 3:90-102. 7. Buehler EV, Griffith JF. Experimental skin sensitization in the guinea pig

  12. Sequence-dependent modelling of local DNA bending phenomena: curvature prediction and vibrational analysis.

    PubMed

    Vlahovicek, K; Munteanu, M G; Pongor, S

    1999-01-01

    Bending is a local conformational micropolymorphism of DNA in which the original B-DNA structure is only distorted but not extensively modified. Bending can be predicted by simple static geometry models as well as by a recently developed elastic model that incorporates sequence-dependent anisotropic bendability (SDAB). The SDAB model qualitatively explains phenomena including affinity of protein binding, kinking, as well as sequence-dependent vibrational properties of DNA. The vibrational properties of DNA segments can be studied by finite element analysis of a model subjected to an initial bending moment. The frequency spectrum is obtained by applying Fourier analysis to the displacement values in the time domain. This analysis shows that the spectrum of the bending vibrations depends quite sensitively on the sequence; for example, the spectrum of a curved sequence is characteristically different from the spectrum of straight sequence motifs of identical basepair composition. Curvature distributions are genome-specific, and pronounced differences are found between protein-coding and regulatory regions: sites of extreme curvature and/or bendability are less frequent in protein-coding regions. A WWW server has been set up for the prediction of curvature and generation of 3D models from DNA sequences (http://www.icgeb.trieste.it/dna).

  13. Monitoring the use and outcomes of new devices and procedures: how does coding affect what Hospital Episode Statistics contribute? Lessons from 12 emerging procedures 2006-10.

    PubMed

    Patrick, Hannah; Sims, Andrew; Burn, Julie; Bousfield, Derek; Colechin, Elaine; Reay, Christopher; Alderson, Neil; Goode, Stephen; Cunningham, David; Campbell, Bruce

    2013-03-01

    New devices and procedures are often introduced into health services when the evidence base for their efficacy and safety is limited. The authors sought to assess the availability and accuracy of routinely collected Hospital Episode Statistics (HES) data in the UK and their potential contribution to the monitoring of new procedures. Four years of HES data (April 2006-March 2010) were analysed to identify episodes of hospital care involving a sample of 12 new interventional procedures. HES data were cross checked against other relevant sources including national or local registers and manufacturers' information. HES records were available for all 12 procedures during the entire study period. Comparative data sources were available from national (5), local (2) and manufacturer (2) registers. Factors found to affect comparisons were miscoding, alternative coding and inconsistent use of subsidiary codes. The analysis of provider coverage showed that HES is sensitive at detecting centres which carry out procedures, but specificity is poor in some cases. Routinely collected HES data have the potential to support quality improvements and evidence-based commissioning of devices and procedures in health services but achievement of this potential depends upon the accurate coding of procedures.

  14. Effective gene prediction by high resolution frequency estimator based on least-norm solution technique

    PubMed Central

    2014-01-01

    The linear algebraic concept of subspace plays a significant role in recent spectrum estimation techniques. In this article, the authors have utilized the noise subspace concept for finding hidden periodicities in DNA sequences. With the vast growth of genomic sequences, the demand to accurately identify protein-coding regions in DNA is rising rapidly. Several DNA feature-extraction techniques spanning various fields have emerged in the recent past, among which the application of digital signal processing tools is of prime importance. It is known that coding segments have a 3-base periodicity, while non-coding regions do not have this unique feature. One of the most important spectrum analysis techniques based on the concept of subspace is the least-norm method. The least-norm estimator developed in this paper shows sharp period-3 peaks in coding regions, completely eliminating background noise. A comparison of the proposed method with the existing sliding discrete Fourier transform (SDFT) method, popularly known as the modified periodogram method, has been drawn on several genes from various organisms, and the results show that the proposed method provides a better and more effective approach to gene prediction. Resolution, quality factor, sensitivity, specificity, miss rate, and wrong rate are used to establish the superiority of the least-norm gene prediction method over the existing method. PMID:24386895
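
    The period-3 property exploited above can be illustrated with the standard indicator-sequence DFT measure (essentially the modified periodogram baseline the paper compares against). The Python sketch below is generic; the example sequences and window length are arbitrary, and the least-norm estimator itself is not reproduced here.

      # Sketch of the classic period-3 measure for a DNA window: sum over the four
      # base-indicator sequences of the squared DFT magnitude at bin k = N/3.
      import numpy as np

      def period3_power(seq):
          seq = seq.upper()
          n = len(seq)
          k = n // 3                      # DFT bin corresponding to period 3
          total = 0.0
          for base in "ACGT":
              x = np.array([1.0 if c == base else 0.0 for c in seq])
              X = np.fft.fft(x)
              total += abs(X[k]) ** 2
          return total

      # Toy comparison: a repetitive codon-like window vs. a random window of equal length.
      coding_like = "ATGGCC" * 30
      rng = np.random.default_rng(0)
      random_seq = "".join(rng.choice(list("ACGT"), size=len(coding_like)))
      print(period3_power(coding_like), period3_power(random_seq))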

  15. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by factors of several hundred. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.
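
    As a generic illustration of the sparse-recovery step that underlies compressive sensing (not the specific multiplexing hardware or reconstruction algorithms of this dissertation), the Python sketch below recovers a sparse vector from random linear measurements with a basic iterative soft-thresholding loop; all dimensions and the regularization weight are arbitrary.

      # Generic compressed-sensing sketch: recover a sparse vector x from y = A @ x
      # using iterative soft-thresholding (ISTA). Sizes and lambda are arbitrary.
      import numpy as np

      rng = np.random.default_rng(1)
      n, m, k = 200, 80, 5                 # signal length, measurements, nonzeros
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

      A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
      y = A @ x_true                             # compressive measurements

      lam = 0.01
      step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
      x = np.zeros(n)
      for _ in range(500):
          grad = A.T @ (A @ x - y)
          z = x - step * grad
          x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)   # soft threshold

      print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))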

  16. Validity of Diagnostic Codes for Acute Stroke in Administrative Databases: A Systematic Review

    PubMed Central

    McCormick, Natalie; Bhole, Vidula; Lacaille, Diane; Avina-Zubieta, J. Antonio

    2015-01-01

    Objective To conduct a systematic review of studies reporting on the validity of International Classification of Diseases (ICD) codes for identifying stroke in administrative data. Methods MEDLINE and EMBASE were searched (inception to February 2015) for studies: (a) Using administrative data to identify stroke; or (b) Evaluating the validity of stroke codes in administrative data; and (c) Reporting validation statistics (sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), or Kappa scores) for stroke, or data sufficient for their calculation. Additional articles were located by hand search (up to February 2015) of original papers. Studies solely evaluating codes for transient ischaemic attack were excluded. Data were extracted by two independent reviewers; article quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool. Results Seventy-seven studies published from 1976–2015 were included. The sensitivity of ICD-9 430-438/ICD-10 I60-I69 for any cerebrovascular disease was ≥ 82% in most (≥ 50% of) studies, and specificity and NPV were both ≥ 95%. The PPV of these codes for any cerebrovascular disease was ≥ 81% in most studies, while the PPV specifically for acute stroke was ≤ 68%. In at least 50% of studies, PPVs were ≥ 93% for subarachnoid haemorrhage (ICD-9 430/ICD-10 I60), 89% for intracerebral haemorrhage (ICD-9 431/ICD-10 I61), and 82% for ischaemic stroke (ICD-9 434/ICD-10 I63 or ICD-9 434&436). For in-hospital deaths, sensitivity was 55%. For cerebrovascular disease or acute stroke as a cause-of-death on death certificates, sensitivity was ≤ 71% in most studies while PPV was ≥ 87%. Conclusions While most cases of prevalent cerebrovascular disease can be detected using 430-438/I60-I69 collectively, acute stroke must be defined using more specific codes. Most in-hospital deaths and death certificates with stroke as a cause-of-death correspond to true stroke deaths. Linking vital statistics and hospitalization data may improve the ascertainment of fatal stroke. PMID:26292280

  17. An Examination of the Reliability of the Organizational Assessment Package (OAP).

    DTIC Science & Technology

    1981-07-01

    ...reactivity or pretest sensitization (Bracht and Glass, 1968) may occur. In this case, the change from pretest to posttest can be caused just by the ... content items. The blocks for supervisor's code were left blank, work group code was coded as all ones, and each person's seminar number was coded in ... [Remainder is a fragment of a reliability table covering OAP factors such as Work Group Effectiveness and Job Related Satisfaction.]

  18. Solving iTOUGH2 simulation and optimization problems using the PEST protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.; Zhang, Y.

    2011-02-01

    The PEST protocol has been implemented into the iTOUGH2 code, allowing the user to link any simulation program (with ASCII-based inputs and outputs) to iTOUGH2's sensitivity analysis, inverse modeling, and uncertainty quantification capabilities. These application models can be pre- or post-processors of the TOUGH2 non-isothermal multiphase flow and transport simulator, or programs that are unrelated to the TOUGH suite of codes. PEST-style template and instruction files are used, respectively, to pass input parameters updated by the iTOUGH2 optimization routines to the model, and to retrieve the model-calculated values that correspond to observable variables. We summarize the iTOUGH2 capabilities and demonstrate the flexibility added by the PEST protocol for the solution of a variety of simulation-optimization problems. In particular, the combination of loosely coupled and tightly integrated simulation and optimization routines provides both the flexibility and control needed to solve challenging inversion problems for the analysis of multiphase subsurface flow and transport systems.
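
    The essence of the protocol described above, an optimizer communicating with an arbitrary ASCII-based simulator through template input files and parsed output files, can be sketched generically in Python. The file names, the @name@ placeholder syntax, the output format, and the executable name below are invented for illustration and are not the actual PEST template/instruction syntax or the iTOUGH2 interface.

      # Generic sketch of the template/instruction pattern: write updated parameters
      # into a model input file, run the model, and read back an observable.
      # File names, the @name@ placeholder syntax, and the output format are hypothetical.
      import re
      import subprocess

      TEMPLATE = "model_input.tpl"     # contains placeholders like @permeability@
      INPUT = "model_input.txt"
      OUTPUT = "model_output.txt"      # assumed to contain lines like "head_obs1 = 12.3"

      def write_input(params):
          text = open(TEMPLATE).read()
          for name, value in params.items():
              text = text.replace(f"@{name}@", f"{value:.6e}")
          open(INPUT, "w").write(text)

      def read_observation(name):
          for line in open(OUTPUT):
              m = re.match(rf"\s*{name}\s*=\s*([-+0-9.eE]+)", line)
              if m:
                  return float(m.group(1))
          raise KeyError(name)

      def run_model(params):
          write_input(params)
          subprocess.run(["./my_simulator", INPUT], check=True)   # hypothetical executable
          return read_observation("head_obs1")

      # An optimizer (e.g., iTOUGH2's routines) would call run_model repeatedly with
      # updated parameter sets and compare the result against measured data.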

  19. FEL Trajectory Analysis for the VISA Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter

    1998-10-06

    The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.

  20. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    NASA Astrophysics Data System (ADS)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.; Heidbrink, W. W.; Stagner, L.

    2016-02-01

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a ‘beam-in-a-box’ model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  1. An Assessment of Some Design Constraints on Heat Production of a 3D Conceptual EGS Model Using an Open-Source Geothermal Reservoir Simulation Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yidong Xia; Mitch Plummer; Robert Podgorney

    2016-02-01

    Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle for the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan have a close dependence on the water mass flow rate. The results also indicate that the heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes that are either closed-source or commercially available in this area, this new open-source code has demonstrated a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.

  2. Implementation of a 3D halo neutral model in the TRANSP code and application to projected NSTX-U plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Medley, S. S.; Liu, D.; Gorelenkova, M. V.

    2016-01-12

    A 3D halo neutral code developed at the Princeton Plasma Physics Laboratory and implemented for analysis using the TRANSP code is applied to projected National Spherical Torus eXperiment-Upgrade (NSTX-U) plasmas. The legacy TRANSP code did not handle halo neutrals properly since they were distributed over the plasma volume rather than remaining in the vicinity of the neutral beam footprint as is actually the case. The 3D halo neutral code uses a 'beam-in-a-box' model that encompasses both injected beam neutrals and resulting halo neutrals. Upon deposition by charge exchange, a subset of the full, one-half and one-third beam energy components produce first generation halo neutrals that are tracked through successive generations until an ionization event occurs or the descendant halos exit the box. The 3D halo neutral model and neutral particle analyzer (NPA) simulator in the TRANSP code have been benchmarked with the Fast-Ion D-Alpha simulation (FIDAsim) code, which provides Monte Carlo simulations of beam neutral injection, attenuation, halo generation, halo spatial diffusion, and photoemission processes. When using the same atomic physics database, TRANSP and FIDAsim simulations achieve excellent agreement on the spatial profile and magnitude of beam and halo neutral densities and the NPA energy spectrum. The simulations show that the halo neutral density can be comparable to the beam neutral density. These halo neutrals can double the NPA flux, but they have minor effects on the NPA energy spectrum shape. The TRANSP and FIDAsim simulations also suggest that the magnitudes of beam and halo neutral densities are relatively sensitive to the choice of the atomic physics databases.

  3. Oblique shock structures formed during the ablation phase of aluminium wire array z-pinches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swadling, G. F.; Lebedev, S. V.; Niasse, N.

    A series of experiments has been conducted in order to investigate the azimuthal structures formed by the interactions of cylindrically converging plasma flows during the ablation phase of aluminium wire array Z pinch implosions. These experiments were carried out using the 1.4 MA, 240 ns MAGPIE generator at Imperial College London. The main diagnostic used in this study was a two-colour, end-on, Mach-Zehnder imaging interferometer, sensitive to the axially integrated electron density of the plasma. The data collected in these experiments reveal the strongly collisional dynamics of the aluminium ablation streams. The structure of the flows is dominated by a dense network of oblique shock fronts, formed by supersonic collisions between adjacent ablation streams. An estimate for the range of the flow Mach number (M = 6.2-9.2) has been made based on an analysis of the observed shock geometry. Combining this measurement with previously published Thomson Scattering measurements of the plasma flow velocity by Harvey-Thompson et al. [Physics of Plasmas 19, 056303 (2012)] allowed us to place limits on the range of the ZT_e of the plasma. The detailed and quantitative nature of the dataset lends itself well as a source for model validation and code verification exercises, as the exact shock geometry is sensitive to many of the plasma parameters. Comparison of electron density data produced through numerical modelling with the Gorgon 3D MHD code demonstrates that the code is able to reproduce the collisional dynamics observed in aluminium arrays reasonably well.

  4. Joint source-channel coding for motion-compensated DCT-based SNR scalable video.

    PubMed

    Kondi, Lisimachos P; Ishtiaq, Faisal; Katsaggelos, Aggelos K

    2002-01-01

    In this paper, we develop an approach toward joint source-channel coding for motion-compensated DCT-based scalable video coding and transmission. A framework for the optimal selection of the source and channel coding rates over all scalable layers is presented such that the overall distortion is minimized. The algorithm utilizes universal rate distortion characteristics which are obtained experimentally and show the sensitivity of the source encoder and decoder to channel errors. The proposed algorithm allocates the available bit rate between scalable layers and, within each layer, between source and channel coding. We present the results of this rate allocation algorithm for video transmission over a wireless channel using the H.263 Version 2 signal-to-noise ratio (SNR) scalable codec for source coding and rate-compatible punctured convolutional (RCPC) codes for channel coding. We discuss the performance of the algorithm with respect to the channel conditions, coding methodologies, layer rates, and number of layers.
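
    A minimal sketch of the rate-allocation idea (choose source and channel coding rates per layer so that expected distortion is minimized under a total-rate budget) is given below in Python; the layer structure, rate options, and distortion values are hypothetical stand-ins for the experimentally measured universal rate-distortion characteristics used in the paper.

      # Toy exhaustive search for joint source/channel rate allocation over two layers.
      # The distortion entries are hypothetical stand-ins for measured R-D characteristics.
      from itertools import product

      # (source_kbps, channel_code_rate) -> expected distortion contribution per layer,
      # already accounting for channel-error sensitivity at the assumed channel condition.
      options = {
          "base":        {(32, 1/2): 40.0, (48, 2/3): 34.0, (64, 3/4): 31.0},
          "enhancement": {(32, 1/2): 12.0, (48, 2/3):  9.0, (64, 3/4):  8.5},
      }
      TOTAL_BUDGET_KBPS = 160   # total transmitted rate = sum of source_kbps / code_rate

      best = None
      for (b_opt, b_dist), (e_opt, e_dist) in product(options["base"].items(),
                                                      options["enhancement"].items()):
          total_rate = b_opt[0] / b_opt[1] + e_opt[0] / e_opt[1]
          if total_rate <= TOTAL_BUDGET_KBPS:
              distortion = b_dist + e_dist
              if best is None or distortion < best[0]:
                  best = (distortion, b_opt, e_opt, total_rate)

      print("best allocation (distortion, base, enhancement, total rate):", best)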

  5. A MATLAB based 3D modeling and inversion code for MT data

    NASA Astrophysics Data System (ADS)

    Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.

    2017-07-01

    The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
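
    The modular structure described above (forward modeling, data functionals, sensitivities, regularization) is typical of regularized Gauss-Newton inversion. The sketch below shows one such model-update loop in generic Python form, with a made-up linear forward operator standing in for the 3D MT forward problem; AP3DMT itself is written in MATLAB and its internals are not reproduced here.

      # Generic regularized Gauss-Newton model update, with a linear toy forward operator
      # standing in for a 3D MT forward solver. Sizes and the regularization weight
      # are arbitrary.
      import numpy as np

      rng = np.random.default_rng(2)
      n_data, n_model = 30, 50
      G = rng.normal(size=(n_data, n_model))   # toy forward operator / sensitivity matrix
      m_true = rng.normal(size=n_model)
      d_obs = G @ m_true + 0.01 * rng.normal(size=n_data)

      m = np.zeros(n_model)                    # starting model
      lam = 1.0                                # regularization weight
      R = np.eye(n_model)                      # simple (identity) regularization operator

      for _ in range(5):
          r = d_obs - G @ m                    # data residual
          J = G                                # sensitivities (constant for a linear problem)
          # Solve (J^T J + lam R^T R) dm = J^T r for the model update dm.
          lhs = J.T @ J + lam * R.T @ R
          dm = np.linalg.solve(lhs, J.T @ r)
          m = m + dm
          print("rms misfit:", np.sqrt(np.mean((d_obs - G @ m) ** 2)))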

  6. Drug overdose surveillance using hospital discharge data.

    PubMed

    Slavova, Svetla; Bunn, Terry L; Talbert, Jeffery

    2014-01-01

    We compared three methods for identifying drug overdose cases in inpatient hospital discharge data on their ability to classify drug overdoses by intent and drug type(s) involved. We compared three International Classification of Diseases, Ninth Revision, Clinical Modification code-based case definitions using Kentucky hospital discharge data for 2000-2011. The first definition (Definition 1) was based on the external-cause-of-injury (E-code) matrix. The other two definitions were based on the Injury Surveillance Workgroup on Poisoning (ISW7) consensus recommendations for national and state poisoning surveillance using the principal diagnosis or first E-code (Definition 2) or any diagnosis/E-code (Definition 3). Definition 3 identified almost 50% more drug overdose cases than did Definition 1. The increase was largely due to cases with a first-listed E-code describing a drug overdose but a principal diagnosis that was different from drug overdose (e.g., mental disorders, or respiratory or circulatory system failure). Regardless of the definition, more than 53% of the hospitalizations were self-inflicted drug overdoses; benzodiazepines were involved in about 30% of the hospitalizations. The 2011 age-adjusted drug overdose hospitalization rate in Kentucky was 146/100,000 population using Definition 3 and 107/100,000 population using Definition 1. The ISW7 drug overdose definition using any drug poisoning diagnosis/E-code (Definition 3) is potentially the highest sensitivity definition for counting drug overdose hospitalizations, including by intent and drug type(s) involved. As the states enact policies and plan for adequate treatment resources, standardized drug overdose definitions are critical for accurate reporting, trend analysis, policy evaluation, and state-to-state comparison.
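
    For readers who work with discharge data, the sketch below shows mechanically how case definitions of increasing breadth can disagree on the same record. The code prefixes are simplified placeholders and only rough approximations of the three definitions, not the full E-code matrix or the ISW7 specification.

    ```python
    # Rough approximation of three drug-overdose case definitions applied to a
    # hospital discharge record; the ICD-9-CM prefixes below are simplified
    # placeholders, not the complete code lists used in the study.
    POISONING_DX = tuple(str(c) for c in range(960, 980))      # 960-979 drug poisoning
    OVERDOSE_ECODES = ("E850", "E851", "E852", "E853", "E854", "E855",
                       "E856", "E857", "E858", "E950", "E962", "E980")

    def is_overdose_code(code):
        return code.startswith(POISONING_DX) or code.startswith(OVERDOSE_ECODES)

    def definition_1(record):   # narrow: first-listed E-code indicates overdose
        ecodes = record["ecodes"]
        return bool(ecodes) and ecodes[0].startswith(OVERDOSE_ECODES)

    def definition_2(record):   # principal diagnosis or first E-code
        return record["diagnoses"][0].startswith(POISONING_DX) or definition_1(record)

    def definition_3(record):   # broad: any diagnosis or any E-code
        return any(is_overdose_code(c) for c in record["diagnoses"] + record["ecodes"])

    # overdose appears only as a secondary diagnosis, with no E-code recorded
    record = {"diagnoses": ["2962", "9690"], "ecodes": []}
    print(definition_1(record), definition_2(record), definition_3(record))  # False False True
    ```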

  7. Pressure Sensitive Tape and Label Surface Coating Industry: New Source Performance Standards (NSPS)

    EPA Pesticide Factsheets

    Learn about the New Source Performance Standards (NSPS) for pressure sensitive tape and label surface coating. Read the rule summary and history, and find the code of federal regulations and federal register citations.

  8. Administrative Algorithms to identify Avascular necrosis of bone among patients undergoing upper or lower extremity magnetic resonance imaging: a validation study.

    PubMed

    Barbhaiya, Medha; Dong, Yan; Sparks, Jeffrey A; Losina, Elena; Costenbader, Karen H; Katz, Jeffrey N

    2017-06-19

    Studies of the epidemiology and outcomes of avascular necrosis (AVN) require accurate case-finding methods. The aim of this study was to evaluate performance characteristics of a claims-based algorithm designed to identify AVN cases in administrative data. Using a centralized patient registry from a US academic medical center, we identified all adults aged ≥18 years who underwent magnetic resonance imaging (MRI) of an upper/lower extremity joint during the 1.5 year study period. A radiologist report confirming AVN on MRI served as the gold standard. We examined the sensitivity, specificity, positive predictive value (PPV) and positive likelihood ratio (LR+) of four algorithms (A-D) using International Classification of Diseases, 9th edition (ICD-9) codes for AVN. The algorithms ranged from least stringent (Algorithm A, requiring ≥1 ICD-9 code for AVN [733.4X]) to most stringent (Algorithm D, requiring ≥3 ICD-9 codes, each at least 30 days apart). Among 8200 patients who underwent MRI, 83 (1.0% [95% CI 0.78-1.22]) had AVN by gold standard. Algorithm A yielded the highest sensitivity (81.9%, 95% CI 72.0-89.5), with PPV of 66.0% (95% CI 56.0-75.1). The PPV of algorithm D increased to 82.2% (95% CI 67.9-92.0), although sensitivity decreased to 44.6% (95% CI 33.7-55.9). All four algorithms had specificities >99%. An algorithm that uses a single billing code to screen for AVN among those who had MRI has the highest sensitivity and is best suited for studies in which further medical record review confirming AVN is feasible. Algorithms using multiple billing codes are recommended for use in administrative databases when further AVN validation is not feasible.
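
    The validation arithmetic behind figures like these is compact enough to show directly; the sketch below computes sensitivity, specificity, PPV, and LR+ from a 2x2 table, with counts chosen only for illustration rather than taken from the study.

    ```python
    # Sensitivity, specificity, PPV and positive likelihood ratio from a 2x2
    # table of algorithm flag vs. MRI gold standard. The counts are hypothetical.
    def diagnostic_metrics(tp, fp, fn, tn):
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv = tp / (tp + fp)
        lr_pos = sens / (1 - spec)
        return {"sensitivity": sens, "specificity": spec, "PPV": ppv, "LR+": lr_pos}

    # e.g. an algorithm that flags 103 of 8200 MRI patients, catching 68 of 83 true cases
    print(diagnostic_metrics(tp=68, fp=35, fn=15, tn=8082))
    ```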

  9. A novel class sensitive hashing technique for large-scale content-based remote sensing image retrieval

    NASA Astrophysics Data System (ADS)

    Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo

    2017-10-01

    This paper presents a novel class sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to the specific primitive class with a probability higher than a given threshold are highly representative of that class. Thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive class sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
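
    A minimal sketch of the final matching step is given below: each image is represented by several binary codes (one per primitive class), and a query is ranked against archive images by its best-matching code pairs. The codes here are random placeholders, and the paper's kernel-based supervised hashing is not reproduced.

    ```python
    # Generic multi-hash-code matching: score an archive image by how closely its
    # per-primitive codes match the query's codes. Codes are random placeholders.
    import numpy as np

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    def multi_code_distance(query_codes, image_codes):
        """For each query primitive code take the closest code of the archive
        image, then average; smaller means more similar."""
        return np.mean([min(hamming(q, c) for c in image_codes) for q in query_codes])

    rng = np.random.default_rng(0)
    query = [rng.integers(0, 2, 32) for _ in range(3)]          # 3 primitives, 32-bit codes
    archive = {f"img{i}": [rng.integers(0, 2, 32) for _ in range(rng.integers(1, 4))]
               for i in range(5)}
    ranked = sorted(archive, key=lambda k: multi_code_distance(query, archive[k]))
    print("archive images ranked by similarity to the query:", ranked)
    ```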

  10. Predicting Constraints on Ultra-Light Axion Parameters due to LSST Observations

    NASA Astrophysics Data System (ADS)

    Given, Gabriel; Grin, Daniel

    2018-01-01

    Ultra-light axions (ULAs) are a type of dark matter or dark energy candidate (depending on the mass) that are predicted to have a mass between $10^{-33}$ and $10^{-18}$ eV. The Large Synoptic Survey Telescope (LSST) is expected to provide a large number of weak lensing observations, which will lower the statistical uncertainty on the convergence power spectrum. I began work with Daniel Grin to predict how accurately the data from the LSST will be able to constrain ULA properties. I wrote Python code that takes a matter power spectrum calculated by axionCAMB and converts it to a convergence power spectrum. My code then takes derivatives of the convergence power spectrum with respect to several cosmological parameters; these derivatives will be used in Fisher matrix analysis to determine the sensitivity of LSST observations to axion parameters.
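
    The Fisher-matrix step can be illustrated with a toy model, as in the sketch below: numerical derivatives of a convergence power spectrum with respect to parameters are combined with assumed band-power errors. The power-spectrum model, the two stand-in parameters, and the error bars are placeholders, not axionCAMB output or LSST survey specifications.

    ```python
    # Toy Fisher forecast: F_ij = sum_l dC_l/dp_i * dC_l/dp_j / sigma_l^2,
    # with derivatives taken by central finite differences. The spectrum model
    # and error bars are placeholders.
    import numpy as np

    ells = np.arange(100, 2000, 50)

    def c_ell(params):
        amp, tilt = params                     # stand-ins for cosmological parameters
        return 1e-9 * amp * (ells / 1000.0) ** (tilt - 2.0)

    def fisher(params, sigma_cl, step=1e-3):
        p = np.asarray(params, dtype=float)
        derivs = []
        for i in range(len(p)):
            dp = np.zeros_like(p)
            dp[i] = step * max(abs(p[i]), 1.0)
            derivs.append((c_ell(p + dp) - c_ell(p - dp)) / (2 * dp[i]))
        F = np.empty((len(p), len(p)))
        for i in range(len(p)):
            for j in range(len(p)):
                F[i, j] = np.sum(derivs[i] * derivs[j] / sigma_cl**2)
        return F

    sigma_cl = 0.05 * c_ell([1.0, 1.0])        # crude 5% errors per band power
    F = fisher([1.0, 1.0], sigma_cl)
    print("1-sigma parameter forecasts:", np.sqrt(np.diag(np.linalg.inv(F))))
    ```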

  11. Design criteria for small coded aperture masks in gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Sembay, S.; Gehrels, Neil

    1990-01-01

    Most theoretical work on coded aperture masks in X-ray and low-energy gamma-ray astronomy has concentrated on masks with large numbers of elements. For gamma-ray spectrometers in the MeV range, the detector plane usually has only a few discrete elements, so that masks with small numbers of elements are called for. For this case it is feasible to analyze by computer all the possible mask patterns of given dimension to find the ones that best satisfy the desired performance criteria. A particular set of performance criteria for comparing the flux sensitivities, source positioning accuracies and transparencies of different mask patterns is developed. The results of such a computer analysis for masks up to dimension 5 x 5 unit cell are presented and it is concluded that there is a great deal of flexibility in the choice of mask pattern for each dimension.
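
    The exhaustive-search idea can be sketched for a tiny case, as below: all 3 x 3 binary masks are enumerated and scored with a stand-in criterion (a transparency window plus cyclic autocorrelation sidelobes). The paper's actual performance criteria and its search up to 5 x 5 are not reproduced here.

    ```python
    # Exhaustive enumeration of small coded-aperture masks with a stand-in score:
    # keep masks in a useful transparency range and prefer flat autocorrelation
    # sidelobes. 3x3 (512 patterns) keeps the search instantaneous.
    import itertools
    import numpy as np

    N = 3
    best = None
    for bits in itertools.product([0, 1], repeat=N * N):
        mask = np.array(bits).reshape(N, N)
        open_frac = mask.mean()
        if not 0.3 <= open_frac <= 0.7:            # transparency constraint
            continue
        # cyclic autocorrelation via FFT; element 0 is the zero-shift peak
        acf = np.real(np.fft.ifft2(np.abs(np.fft.fft2(mask)) ** 2))
        sidelobe = np.max(np.delete(acf.flatten(), 0))
        if best is None or sidelobe < best[0]:
            best = (sidelobe, mask)

    print("smallest worst-case sidelobe:", best[0])
    print(best[1])
    ```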

  12. Conjunctive Coding of Complex Object Features

    PubMed Central

    Erez, Jonathan; Cusack, Rhodri; Kendall, William; Barense, Morgan D.

    2016-01-01

    Critical to perceiving an object is the ability to bind its constituent features into a cohesive representation, yet the manner by which the visual system integrates object features to yield a unified percept remains unknown. Here, we present a novel application of multivoxel pattern analysis of neuroimaging data that allows a direct investigation of whether neural representations integrate object features into a whole that is different from the sum of its parts. We found that patterns of activity throughout the ventral visual stream (VVS), extending anteriorly into the perirhinal cortex (PRC), discriminated between the same features combined into different objects. Despite this sensitivity to the unique conjunctions of features comprising objects, activity in regions of the VVS, again extending into the PRC, was invariant to the viewpoints from which the conjunctions were presented. These results suggest that the manner in which our visual system processes complex objects depends on the explicit coding of the conjunctions of features comprising them. PMID:25921583

  13. Performance Analysis of Direct-Sequence Code-Division Multiple-Access Communications with Asymmetric Quadrature Phase-Shift-Keying Modulation

    NASA Technical Reports Server (NTRS)

    Wang, C.-W.; Stark, W.

    2005-01-01

    This article considers a quaternary direct-sequence code-division multiple-access (DS-CDMA) communication system with asymmetric quadrature phase-shift-keying (AQPSK) modulation for unequal error protection (UEP) capability. Both time synchronous and asynchronous cases are investigated. An expression for the probability distribution of the multiple-access interference is derived. The exact bit-error performance and the approximate performance using a Gaussian approximation and random signature sequences are evaluated by extending the techniques used for uniform quadrature phase-shift-keying (QPSK) and binary phase-shift-keying (BPSK) DS-CDMA systems. Finally, a general system model with unequal user power and the near-far problem is considered and analyzed. The results show that, for a system with UEP capability, the less protected data bits are more sensitive to the near-far effect that occurs in a multiple-access environment than are the more protected bits.

  14. Identification of novel diagnostic biomarkers for thyroid carcinoma

    PubMed Central

    Wang, Xiliang; Zhang, Qing; Cai, Zhiming; Dai, Yifan; Mou, Lisha

    2017-01-01

    Thyroid carcinoma (THCA) is the most universal endocrine malignancy worldwide. Unfortunately, a limited number of large-scale analyses have been performed to identify biomarkers for THCA. Here, we conducted a meta-analysis using 505 THCA patients and 59 normal controls from The Cancer Genome Atlas. After identifying differentially expressed long non-coding RNA (lncRNA) and protein coding genes (PCG), we found vast differences in various lncRNA-PCG co-expressed pairs in THCA. A dysregulation network with scale-free topology was constructed. Four molecules (LA16c-380H5.2, RP11-203J24.8, MLF1 and SDC4) could potentially serve as diagnostic biomarkers of THCA with high sensitivity and specificity. We further present a diagnostic panel with expression cutoff values. Our results demonstrate the potential application of those four molecules as novel independent biomarkers for THCA diagnosis. PMID:29340074

  15. Strategies and tools for whole genome alignments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Couronne, Olivier; Poliakov, Alexander; Bray, Nicolas

    2002-11-25

    The availability of the assembled mouse genome makes possible, for the first time, an alignment and comparison of two large vertebrate genomes. We have investigated different strategies of alignment for the subsequent analysis of conservation of genomes that are effective for different quality assemblies. These strategies were applied to the comparison of the working draft of the human genome with the Mouse Genome Sequencing Consortium assembly, as well as other intermediate mouse assemblies. Our methods are fast and the resulting alignments exhibit a high degree of sensitivity, covering more than 90 percent of known coding exons in the human genome. We have obtained such coverage while preserving specificity. With a view towards the end user, we have developed a suite of tools and websites for automatically aligning, and subsequently browsing and working with, whole genome comparisons. We describe the use of these tools to identify conserved non-coding regions between the human and mouse genomes, some of which have not been identified by other methods.

  16. A crystallographic model for nickel base single crystal alloys

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Stouffer, D. C.

    1988-01-01

    The purpose of this research is to develop a tool for the mechanical analysis of nickel-base single-crystal superalloys, specifically Rene N4, used in gas turbine engine components. This objective is achieved by developing a rate-dependent anisotropic constitutive model and implementing it in a nonlinear three-dimensional finite-element code. The constitutive model is developed from metallurgical concepts utilizing a crystallographic approach. An extension of Schmid's law is combined with the Bodner-Partom equations to model the inelastic tension/compression asymmetry and orientation-dependence in octahedral slip. Schmid's law is used to approximate the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response and strain-rate sensitivity of the single-crystal superalloys. Methods for deriving the material constants from standard tests are also discussed. The model is implemented in a finite-element code, and the computed and experimental results are compared for several orientations and loading conditions.

  17. Comparison of SAND-II and FERRET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wootan, D.W.; Schmittroth, F.

    1981-01-01

    A comparison was made of the advantages and disadvantages of two codes, SAND-II and FERRET, for determining the neutron flux spectrum and uncertainty from experimental dosimeter measurements as anticipated in the FFTF Reactor Characterization Program. This comparison involved an examination of the methodology and the operational performance of each code. The merits of each code were identified with respect to theoretical basis, directness of method, solution uniqueness, subjective influences, and sensitivity to various input parameters.

  18. Mission Analysis for High Specific Impulse Deep Space Exploration

    NASA Technical Reports Server (NTRS)

    Adams, Robert B.; Polsgrove, Tara; Brady, Hugh J. (Technical Monitor)

    2002-01-01

    This paper describes trajectory calculations for high specific impulse engines. Specific impulses on the order of 10,000 to 100,000 sec are predicted in a variety of fusion powered propulsion systems. This paper and its companion paper seek to build on analyses in the literature to yield an analytical routine for determining time of flight and payload fraction to a predetermined destination. The companion paper will compare the results of this analysis to the trajectories determined by several trajectory codes. The major parameters that affect time of flight and payload fraction will be identified and their sensitivities quantified. A review of existing fusion propulsion concepts and their capabilities will also be tabulated.
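
    A back-of-the-envelope version of the payload-fraction trend with specific impulse follows from the ideal rocket equation, as in the sketch below. The delta-v, structural fraction, and single-stage assumption are illustrative only, and the power and thrust trades treated in the paper are ignored.

    ```python
    # Ideal-rocket-equation sketch of payload fraction vs. specific impulse for a
    # fixed mission delta-v. All numbers are illustrative assumptions.
    import math

    G0 = 9.80665  # m/s^2

    def payload_fraction(delta_v, isp, structural_frac=0.10):
        """m_payload / m_initial for a single stage: mass ratio from the rocket
        equation, minus an assumed tankage/structure fraction."""
        mass_ratio = math.exp(delta_v / (isp * G0))        # m_initial / m_final
        propellant_frac = 1.0 - 1.0 / mass_ratio
        return max(0.0, 1.0 - propellant_frac - structural_frac)

    delta_v = 200e3  # m/s, a notional fast deep-space transfer
    for isp in (10_000, 30_000, 100_000):                  # seconds
        print(f"Isp = {isp:>7} s -> payload fraction = {payload_fraction(delta_v, isp):.2f}")
    ```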

  19. Validation of key behaviourally based mental health diagnoses in administrative data: suicide attempt, alcohol abuse, illicit drug abuse and tobacco use.

    PubMed

    Kim, Hyungjin Myra; Smith, Eric G; Stano, Claire M; Ganoczy, Dara; Zivin, Kara; Walters, Heather; Valenstein, Marcia

    2012-01-23

    Observational research frequently uses administrative codes for mental health or substance use diagnoses and for important behaviours such as suicide attempts. We sought to validate codes (International Classification of Diseases, 9th edition, clinical modification diagnostic and E-codes) entered in Veterans Health Administration administrative data for patients with depression versus a gold standard of electronic medical record text ("chart notation"). Three random samples of patients were selected, each stratified by geographic region, gender, and year of cohort entry, from a VHA depression treatment cohort from April 1, 1999 to September 30, 2004. The first sample was selected from patients who died by suicide, the second from patients who remained alive on the date of death of suicide cases, and the third from patients with a new start of a commonly used antidepressant medication. Four variables were assessed using administrative codes in the year prior to the index date: suicide attempt, alcohol abuse/dependence, drug abuse/dependence and tobacco use. Specificity was high (≥ 90%) for all four administrative codes, regardless of the sample. Sensitivity was ≤75% and was particularly low for suicide attempt (≤ 17%). Positive predictive values for alcohol dependence/abuse and tobacco use were high, but barely better than flipping a coin for illicit drug abuse/dependence. Sensitivity differed across the three samples, but was highest in the suicide death sample. Administrative data-based diagnoses among VHA records have high specificity, but low sensitivity. The accuracy level varies by different diagnosis and by different patient subgroup.

  20. Deep Constrained Siamese Hash Coding Network and Load-Balanced Locality-Sensitive Hashing for Near Duplicate Image Detection.

    PubMed

    Hu, Weiming; Fan, Yabo; Xing, Junliang; Sun, Liang; Cai, Zhaoquan; Maybank, Stephen

    2018-09-01

    We construct a new efficient near duplicate image detection method using a hierarchical hash code learning neural network and load-balanced locality-sensitive hashing (LSH) indexing. We propose a deep constrained siamese hash coding neural network combined with deep feature learning. Our neural network is able to extract effective features for near duplicate image detection. The extracted features are used to construct an LSH-based index. We propose a load-balanced LSH method to produce load-balanced buckets in the hashing process. The load-balanced LSH significantly reduces the query time. Based on the proposed load-balanced LSH, we design an effective and feasible algorithm for near duplicate image detection. Extensive experiments on three benchmark data sets demonstrate the effectiveness of our deep siamese hash encoding network and load-balanced LSH.
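
    The sketch below is a generic random-hyperplane LSH index on clustered synthetic features, included only to show how bucket imbalance drives query cost and hence why load balancing matters; it is not the authors' load-balanced LSH or their siamese hash network.

    ```python
    # Generic random-hyperplane LSH index on clustered synthetic data, showing
    # the uneven bucket loads that load-balanced LSH is designed to remove.
    from collections import defaultdict
    import numpy as np

    rng = np.random.default_rng(1)
    dim, n_bits, n_items = 64, 12, 5000
    planes = rng.normal(size=(n_bits, dim))

    def hash_code(x):
        # one bit per random hyperplane: which side of the plane x falls on
        return tuple((planes @ x > 0).astype(int))

    # clustered features (real image descriptors are clustered too, which is
    # what produces the uneven bucket loads)
    centers = 3.0 * rng.normal(size=(10, dim))
    features = centers[rng.integers(0, 10, n_items)] + rng.normal(size=(n_items, dim))

    index = defaultdict(list)
    for i, x in enumerate(features):
        index[hash_code(x)].append(i)

    sizes = sorted(len(v) for v in index.values())
    print(f"buckets used: {len(index)}, largest: {sizes[-1]}, median: {sizes[len(sizes) // 2]}")

    # a query scans only its own bucket, so oversized buckets dominate query time
    query = features[0] + 0.1 * rng.normal(size=dim)
    print("candidates scanned for this query:", len(index[hash_code(query)]))
    ```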

  1. Absorption of CO2 on Carbon-based Sensors: First-Principle Analysis

    NASA Astrophysics Data System (ADS)

    Tit, Nacir; Elezzi, Mohammed; Abdullah, Hasan; Bahlouli, Hocine; Yamani, Zain

    We present a first-principles investigation of the adsorption properties of CO and CO2 molecules on both graphene and carbon nanotubes (CNTs) in the presence of metal catalysts, mainly iron (Fe). The relaxations were carried out using the self-consistent-charge density-functional tight-binding (SCC-DFTB) code, neglecting heat effects. The results show the following: (1) Defected graphene is found to have high sensitivity and high selectivity towards chemisorption of CO molecules and weak physisorption with CO2 molecules. (2) In the case of CNTs, the iron (Fe) catalyst plays an essential role in capturing CO2 molecules. The Fe ad-atoms on the surface of the CNT introduce a huge density of states at the Fermi level, but the capture of CO2 molecules reduces that density and consequently reduces conductivity and increases sensitivity. Concerning selectivity, we have studied the sensitivity towards various gas molecules (such as O2, N2, H2, H2O, and CO). Furthermore, to assess the effect of the catalyst on sensitivity, we have studied other metal catalysts (such as Ni, Co, Ti, and Sc). We found that CNT-Fe is highly sensitive and selective towards detection of CO and CO2 molecules. Whether the CNT is conducting or semiconducting has little effect on the adsorption properties.

  2. Multidisciplinary optimization of controlled space structures with global sensitivity equations

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; James, Benjamin B.; Graves, Philip C.; Woodard, Stanley E.

    1991-01-01

    A new method for the preliminary design of controlled space structures is presented. The method coordinates standard finite element structural analysis, multivariable controls, and nonlinear programming codes and allows simultaneous optimization of the structures and control systems of a spacecraft. Global sensitivity equations are a key feature of this method. The preliminary design of a generic geostationary platform is used to demonstrate the multidisciplinary optimization method. Fifteen design variables are used to optimize truss member sizes and feedback gain values. The goal is to reduce the total mass of the structure and the vibration control system while satisfying constraints on vibration decay rate. Incorporating the nonnegligible mass of actuators causes an essential coupling between structural design variables and control design variables. The solution of the demonstration problem is an important step toward a comprehensive preliminary design capability for structures and control systems. Use of global sensitivity equations helps solve optimization problems that have a large number of design variables and a high degree of coupling between disciplines.

  3. Minerva exoplanet detection sensitivity from simulated observations

    NASA Astrophysics Data System (ADS)

    McCrady, Nate; Nava, C.

    2014-01-01

    Small rocky planets induce radial velocity signals that are difficult to detect in the presence of stellar noise sources of comparable or larger amplitude. Minerva is a dedicated, robotic observatory that will attain 1 meter per second precision to detect these rocky planets in the habitable zone around nearby stars. We present results of an ongoing project investigating Minerva’s planet detection sensitivity as a function of observational cadence, planet mass, and orbital parameters (period, eccentricity, and argument of periastron). Radial velocity data is simulated with realistic observing cadence, accounting for weather patterns at Mt. Hopkins, Arizona. Instrumental and stellar noise are added to the simulated observations, including effects of oscillation, jitter, starspots and rotation. We extract orbital parameters from the simulated RV data using the RVLIN code. A Monte Carlo analysis is used to explore the parameter space and evaluate planet detection completeness. Our results will inform the Minerva observing strategy by providing a quantitative measure of planet detection sensitivity as a function of orbital parameters and cadence.
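
    A much-simplified version of such a completeness estimate is sketched below for circular orbits: a sinusoidal signal is injected at an irregular cadence with Gaussian noise, recovered by linear least squares, and counted as detected above a crude amplitude threshold. The cadence, noise model, and detection criterion are assumptions, not the Minerva/RVLIN setup described above.

    ```python
    # Monte Carlo sketch of radial-velocity detection completeness for circular
    # orbits. Cadence, noise, and the 5-sigma amplitude criterion are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)

    def detected(period_d, k_amp, n_obs=60, baseline_d=180, sigma=1.0):
        t = np.sort(rng.uniform(0, baseline_d, n_obs))            # irregular cadence
        phase = rng.uniform(0, 2 * np.pi)
        rv = k_amp * np.sin(2 * np.pi * t / period_d + phase) + rng.normal(0, sigma, n_obs)
        # linear least squares for A*sin + B*cos at the known period
        X = np.column_stack([np.sin(2 * np.pi * t / period_d),
                             np.cos(2 * np.pi * t / period_d)])
        coef, *_ = np.linalg.lstsq(X, rv, rcond=None)
        k_fit = np.hypot(*coef)
        k_err = sigma * np.sqrt(2.0 / n_obs)                       # rough amplitude error
        return k_fit > 5 * k_err

    for k in (0.5, 1.0, 2.0, 4.0):                                 # m/s semi-amplitudes
        rate = np.mean([detected(period_d=12.0, k_amp=k) for _ in range(200)])
        print(f"K = {k:.1f} m/s -> completeness ~ {rate:.2f}")
    ```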

  4. Novel Energetic Compounds Based on 3-Methyl-1,2,5-Oxadiazole 2-Oxide

    NASA Astrophysics Data System (ADS)

    Xu, Zhen; Yang, Hongwei; Cheng, Guangbin

    2018-01-01

    Two derivatives of 3-methyl-1,2,5-oxadiazole 2-oxide, (E) 4-methyl-1,2,5-oxadiazole-3-carboxaldehyde 5-oxide (2,4,6-trinitrophenyl)hydrazone (1) and 2,2,2-trinitroethyl 4-methyl-1,2,5-oxadiazole-3-carboxylate 5-oxide (2), were designed, synthesized, and fully characterized. The structures of the new compounds were confirmed by single-crystal X-ray analysis. Physicochemical and energetic properties including density, thermal stability, and sensitivity were investigated, and energetic properties (e.g., detonation velocities and detonation pressures) were calculated using the EXPLO5 code. The results indicated that compound 1 exhibits a positive heat of formation of 448.0 kJ/mol and acceptable sensitivities (IS: 20 J, FS: 280 N). In addition, compound 2 possesses a low melting point (99.92°C), moderate decomposition temperature (183.67°C), good detonation performance (D: 8430 m/s; P: 31.5 GPa), and lower sensitivities (IS: 18 J; FS: 220 N), which suggests that compound 2 has the potential to be a melt-cast explosive.

  5. Solar Variability and the Near-Earth Environment: Mining Enhanced Low Dose Rate Sensitivity Data From the Microelectronics and Photonics Test Bed Space Experiment

    NASA Technical Reports Server (NTRS)

    Turflinger, T.; Schmeichel, W.; Krieg, J.; Titus, J.; Campbell, A.; Reeves, M.; Marshall, P.; Hardage, Donna (Technical Monitor)

    2004-01-01

    This effort is a detailed analysis of existing microelectronics and photonics test bed satellite data from one experiment, the bipolar test board, looking to improve our understanding of the enhanced low dose rate sensitivity (ELDRS) phenomenon. Over the past several years, extensive total dose irradiations of bipolar devices have demonstrated that many of these devices exhibited ELDRS. In sensitive bipolar transistors, ELDRS produced enhanced degradation of base current, resulting in enhanced gain degradation at dose rates <0.1 rd(Si)/s compared to similar transistors irradiated at dose rates >1 rd(Si)/s. This Technical Publication provides updated information about the test devices, the in-flight experiment, and both flight-and ground-based observations. Flight data are presented for the past 5 yr of the mission. These data are compared to ground-based data taken on devices from the same date code lots. Information about temperature fluctuations, power shutdowns, and other variables encountered during the space flight are documented.

  6. Chemical and physical characterization of the first stages of protoplanetary disk formation

    NASA Astrophysics Data System (ADS)

    Hincelin, Ugo

    2012-12-01

    Low mass stars, like our Sun, are born from the collapse of a molecular cloud. The matter falls in the center of the cloud, creating a protoplanetary disk surrounding a protostar. Planets and other Solar System bodies will be formed in the disk. The chemical composition of the interstellar matter and its evolution during the formation of the disk are important to better understand the formation process of these objects. I studied the chemical and physical evolution of this matter, from the cloud to the disk, using the chemical gas-grain code Nautilus. A sensitivity study of the code to some of its parameters (such as elemental abundances and grain-surface chemistry parameters) has been carried out. In particular, updates to the rate coefficients and branching ratios of the reactions in our chemical network proved important, affecting the abundances of some chemical species and the code's sensitivity to other parameters. Several physical models of a collapsing dense core have also been considered. The most complete approach was to interface our chemical code with the radiation-magneto-hydrodynamic stellar formation model RAMSES, in order to model in three dimensions the physical and chemical evolution of a forming young disk. Our study showed that the disk keeps imprints of the past history of the matter, and so its chemical composition is sensitive to the initial conditions.

  7. Unsteady Analysis of Inlet-Compressor Acoustic Interactions Using Coupled 3-D and 1-D CFD Codes

    NASA Technical Reports Server (NTRS)

    Suresh, A.; Cole, G. L.

    2000-01-01

    It is well known that the dynamic response of a mixed compression supersonic inlet is very sensitive to the boundary condition imposed at the subsonic exit (engine face) of the inlet. In previous work, a 3-D computational fluid dynamics (CFD) inlet code (NPARC) was coupled at the engine face to a 3-D turbomachinery code (ADPAC) simulating an isolated rotor and the coupled simulation used to study the unsteady response of the inlet. The main problem with this approach is that the high fidelity turbomachinery simulation becomes prohibitively expensive as more stages are included in the simulation. In this paper, an alternative approach is explored, wherein the inlet code is coupled to a lesser fidelity 1-D transient compressor code (DYNTECC) which simulates the whole compressor. The specific application chosen for this evaluation is the collapsing bump experiment performed at the University of Cincinnati, wherein reflections of a large-amplitude acoustic pulse from a compressor were measured. The metrics for comparison are the pulse strength (time integral of the pulse amplitude) and wave form (shape). When the compressor is modeled by stage characteristics the computed strength is about ten percent greater than that for the experiment, but the wave shapes are in poor agreement. An alternate approach that uses a fixed rise in duct total pressure and temperature (so-called 'lossy' duct) to simulate a compressor gives good pulse shapes but the strength is about 30 percent low.

  8. Just in time? Using QR codes for multi-professional learning in clinical practice.

    PubMed

    Jamu, Joseph Tawanda; Lowi-Jones, Hannah; Mitchell, Colin

    2016-07-01

    Clinical guidelines and policies are widely available on the hospital intranet or from the internet, but can be difficult to access at the required time and place. Clinical staff with smartphones could use Quick Response (QR) codes for contemporaneous access to relevant information to support the Just in Time Learning (JIT-L) paradigm. There are several studies that advocate the use of smartphones to enhance learning amongst medical students and junior doctors in UK. However, these participants are already technologically orientated. There are limited studies that explore the use of smartphones in nursing practice. QR Codes were generated for each topic and positioned at relevant locations on a medical ward. Support and training were provided for staff. Website analytics and semi-structured interviews were performed to evaluate the efficacy, acceptability and feasibility of using QR codes to facilitate Just in Time learning. Use was intermittently high but not sustained. Thematic analysis of interviews revealed a positive assessment of the Just in Time learning paradigm and context-sensitive clinical information. However, there were notable barriers to acceptance, including usability of QR codes and appropriateness of smartphone use in a clinical environment. The use of Just in Time learning for education and reference may be beneficial to healthcare professionals. However, alternative methods of access for less technologically literate users and a change in culture of mobile device use in clinical areas may be needed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. [Footwear according to the "business dress code", and the health condition of women's feet--computer-assisted holistic evaluation].

    PubMed

    Lorkowski, Jacek; Mrzygłód, Mirosław; Kotela, Ireneusz; Kiełbasiewicz-Lorkowska, Ewa; Teul, Iwona

    2013-01-01

    According to the verdict of the Supreme Court in 2005, an employer may dismiss an employee if their conduct (including dress) exposes the employer to losses or threatens his interests. The aim of the study was a holistic assessment of the pleiotropic effects of high-heeled, pointed shoes on the health condition of the feet of women who wear them at work in accordance with the existing rules of the "business dress code". A holistic multidisciplinary analysis was performed. It took into account: 1) women employees of banks and other large corporations (82 persons); 2) a 2D FEM computer model, developed by the authors, of a foot deformed by pointed high-heeled shoes; 3) web sites found after entering the phrase "business dress code". Over 60% of women in the office wore high-heeled shoes. The following was found among women walking to work in high heels: 1) a reduction in quality of life in about 70% of cases, through periodic pain and reduced functional capacity of the feet; 2) an at least twofold increase in pressure on the plantar side of the forefoot; 3) continued exposure to the forces deforming the forefoot. 1. An evolutionary change in "dress code" footwear is necessary in order to reduce the non-physiological overload of the feet and the disability that results from it. 2. These changes are particularly urgent in patients with a so-called "sensitive foot".

  10. Sensitivity analysis of the Gupta and Park chemical models on the heat flux by DSMC and CFD codes

    NASA Astrophysics Data System (ADS)

    Morsa, Luigi; Festa, Giandomenico; Zuppardi, Gennaro

    2012-11-01

    The present study is the logical continuation of a former paper by the first author in which the influence of the chemical models by Gupta and by Park on the computation of heat flux on the Orion and EXPERT capsules was evaluated. Tests were carried out by the direct simulation Monte Carlo code DS2V and by the computational fluid dynamics (CFD) code H3NS. DS2V implements the Gupta model, while H3NS implements the Park model. In order to compare the effects of the chemical models, the Park model was implemented also in DS2V. The results showed that DS2V and H3NS compute a different composition both in the flow field and on the surface, even using the same chemical model (Park). Furthermore DS2V computes, by the two chemical models, different compositions in the flow field but the same composition on the surface, therefore the same heat flux. In the present study, in order to evaluate the influence of these chemical models also in a CFD code, the Gupta and the Park models have been implemented in FLUENT. Tests by DS2V and by FLUENT have been carried out for the EXPERT capsule at an altitude of 70 km and with a velocity of 5000 m/s. The capsule experiences a hypersonic, continuum low density regime. Due to the energy level of the flow, the vibration equation, lacking in the original version of FLUENT, has been implemented. The results of the heat flux computation verify that FLUENT is quite sensitive to the Gupta and to the Park chemical models. In fact, at the stagnation point, the percentage difference between the models is about 13%. By contrast, the DS2V results from the two models are practically equivalent.

  11. Unique proteomic signature for radiation sensitive patients; a comparative study between normo-sensitive and radiation sensitive breast cancer patients.

    PubMed

    Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak

    2015-06-01

    Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects and yet to prevent local cancer reoccurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity a pilot study was made where eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37°C. The leukocytes of the two groups were isolated, pooled and protein expression profiles were investigated using isotope-coded protein labeling method (ICPL). First, leukocytes from the in vitro irradiated whole blood from normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties and acute phase response are hallmarks of radiation sensitivity supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Experience of implementing a National pre-hospital Code Red bleeding protocol in Scotland.

    PubMed

    Reed, Matthew J; Glover, Alison; Byrne, Lauren; Donald, Michael; McMahon, Niall; Hughes, Neil; Littlewood, Nicola K; Garrett, Justin; Innes, Catherine; McGarvey, Margaret; Hazra, Eleanor; Rawlinson, P Sam M

    2017-01-01

    The Scottish Transfusion and Laboratory Support in Trauma Group (TLSTG) have introduced a unified National pre-hospital Code Red protocol. This paper reports the results of a study aiming to establish whether current pre-hospital Code Red activation criteria for trauma patients successfully predict need for in hospital transfusion or haemorrhagic death, the current admission coagulation profile and Concentrated Red Cell (CRC): Fresh Frozen Plasma (FFP) ratio being used, and whether use of the protocol leads to increased blood component discards? Prospective cohort study. Clinical and transfusion leads for each of Scotland's pre-hospital services and their receiving hospitals agreed to enter data into the study for all trauma patients for whom a pre-hospital Code Red was activated. Outcome data collected included survival 24h after Code Red activation, survival to hospital discharge, death in the Emergency Department and death in hospital. Between June 1st 2013 and October 31st 2015 there were 53 pre-hospital Code Red activations. Median Injury Severity Score (ISS) was 24 (IQR 14-37) and mortality 38%. 16 patients received pre-hospital blood. The pre-hospital Code Red protocol was sensitive for predicting transfusion or haemorrhagic death (89%). Sensitivity, specificity, positive and negative predictive values of the pre-hospital SBP <90mmHg component were 63%, 33%, 86% and 12%. 19% had an admission prothrombin time >14s and 27% had a fibrinogen <1.5g/L. CRC: FFP ratios did not drop to below 2:1 until 150min after arrival in the ED. 16 red cell units, 33 FFP and 6 platelets were discarded. This was not significantly increased compared to historical data. A National pre-hospital Code Red protocol is sensitive for predicting transfusion requirement in bleeding trauma patients and does not lead to increased blood component discards. A significant number of patients are coagulopathic and there is a need to improve CRC: FFP ratios and time to transfusion support especially FFP provision. Training clinicians to activate pre-hospital Code Red earlier during the pre-hospital phase may give blood bank more time to thaw and prepare FFP and may improve FFP administration times and ratios so long as components are used upon their availability. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Validity of registration of ICD codes and prescriptions in a research database in Swedish primary care: a cross-sectional study in Skaraborg primary care database

    PubMed Central

    2010-01-01

    Background In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) to investigate the sources of variation in this registration. Methods SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). Results For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered more seldom (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3). Conclusions Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians where the greatest amount of variation was found. PMID:20416069
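
    The median odds ratio quoted above follows from the cluster-level variance of a multilevel logistic model via MOR = exp(sqrt(2 * variance) * Phi^(-1)(0.75)). The sketch below back-solves illustrative variance values from the reported MORs purely to show the arithmetic.

    ```python
    # Median odds ratio (MOR) arithmetic for a multilevel logistic model:
    # MOR = exp(sqrt(2*var) * Phi^{-1}(0.75)). Variances here are back-solved
    # from the reported MORs only to illustrate the formula.
    from math import exp, log, sqrt
    from scipy.stats import norm

    def mor(variance):
        return exp(sqrt(2.0 * variance) * norm.ppf(0.75))

    def variance_from_mor(m):
        return (log(m) / norm.ppf(0.75)) ** 2 / 2.0

    for level, reported in (("physician", 4.2), ("HCC", 2.3)):
        var = variance_from_mor(reported)
        print(f"{level}: implied variance ~ {var:.2f} -> MOR = {mor(var):.1f}")
    ```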

  14. Validity of registration of ICD codes and prescriptions in a research database in Swedish primary care: a cross-sectional study in Skaraborg primary care database.

    PubMed

    Hjerpe, Per; Merlo, Juan; Ohlsson, Henrik; Bengtsson Boström, Kristina; Lindblad, Ulf

    2010-04-23

    In recent years, several primary care databases recording information from computerized medical records have been established and used for quality assessment of medical care and research. However, to be useful for research purposes, the data generated routinely from everyday practice require registration of high quality. In this study we aimed to investigate (i) the frequency and validity of ICD code and drug prescription registration in the new Skaraborg primary care database (SPCD) and (ii) to investigate the sources of variation in this registration. SPCD contains anonymous electronic medical records (ProfDoc III) automatically retrieved from all 24 public health care centres (HCC) in Skaraborg, Sweden. The frequencies of ICD code registration for the selected diagnoses diabetes mellitus, hypertension and chronic cardiovascular disease and the relevant drug prescriptions in the time period between May 2002 and October 2003 were analysed. The validity of data registration in the SPCD was assessed in a random sample of 50 medical records from each HCC (n = 1200 records) using the medical record text as gold standard. The variance of ICD code registration was studied with multi-level logistic regression analysis and expressed as median odds ratio (MOR). For diabetes mellitus and hypertension ICD codes were registered in 80-90% of cases, while for congestive heart failure and ischemic heart disease ICD codes were registered more seldom (60-70%). Drug prescription registration was overall high (88%). A correlation between the frequency of ICD coded visits and the sensitivity of the ICD code registration was found for hypertension and congestive heart failure but not for diabetes or ischemic heart disease. The frequency of ICD code registration varied from 42 to 90% between HCCs, and the greatest variation was found at the physician level (MOR_physician = 4.2 and MOR_HCC = 2.3). Since the frequency of ICD code registration varies between different diagnoses, each diagnosis must be separately validated. Improved frequency and quality of ICD code registration might be achieved by interventions directed towards the physicians where the greatest amount of variation was found.

  15. Validity of the coding for herpes simplex encephalitis in the Danish National Patient Registry.

    PubMed

    Jørgensen, Laura Krogh; Dalgaard, Lars Skov; Østergaard, Lars Jørgen; Andersen, Nanna Skaarup; Nørgaard, Mette; Mogensen, Trine Hyrup

    2016-01-01

    Large health care databases are a valuable source of infectious disease epidemiology if diagnoses are valid. The aim of this study was to investigate the accuracy of the recorded diagnosis coding of herpes simplex encephalitis (HSE) in the Danish National Patient Registry (DNPR). The DNPR was used to identify all hospitalized patients, aged ≥15 years, with a first-time diagnosis of HSE according to the International Classification of Diseases, tenth revision (ICD-10), from 2004 to 2014. To validate the coding of HSE, we collected data from the Danish Microbiology Database, from departments of clinical microbiology, and from patient medical records. Cases were classified as confirmed, probable, or no evidence of HSE. We estimated the positive predictive value (PPV) of the HSE diagnosis coding stratified by diagnosis type, study period, and department type. Furthermore, we estimated the proportion of HSE cases coded with nonspecific ICD-10 codes of viral encephalitis and also the sensitivity of the HSE diagnosis coding. We were able to validate 398 (94.3%) of the 422 HSE diagnoses identified via the DNPR. Hereof, 202 (50.8%) were classified as confirmed cases and 29 (7.3%) as probable cases providing an overall PPV of 58.0% (95% confidence interval [CI]: 53.0-62.9). For "Encephalitis due to herpes simplex virus" (ICD-10 code B00.4), the PPV was 56.6% (95% CI: 51.1-62.0). Similarly, the PPV for "Meningoencephalitis due to herpes simplex virus" (ICD-10 code B00.4A) was 56.8% (95% CI: 39.5-72.9). "Herpes viral encephalitis" (ICD-10 code G05.1E) had a PPV of 75.9% (95% CI: 56.5-89.7), thereby representing the highest PPV. The estimated sensitivity was 95.5%. The PPVs of the ICD-10 diagnosis coding for adult HSE in the DNPR were relatively low. Hence, the DNPR should be used with caution when studying patients with encephalitis caused by herpes simplex virus.

  16. Final Report A Multi-Language Environment For Programmable Code Optimization and Empirical Tuning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yi, Qing; Whaley, Richard Clint; Qasem, Apan

    This report summarizes our effort and results of building an integrated optimization environment to effectively combine the programmable control and the empirical tuning of source-to-source compiler optimizations within the framework of multiple existing languages, specifically C, C++, and Fortran. The environment contains two main components: the ROSE analysis engine, which is based on the ROSE C/C++/Fortran2003 source-to-source compiler developed by Co-PI Dr. Quinlan et al. at DOE/LLNL, and the POET transformation engine, which is based on an interpreted program transformation language developed by Dr. Yi at the University of Texas at San Antonio (UTSA). The ROSE analysis engine performs advanced compiler analysis, identifies profitable code transformations, and then produces output in POET, a language designed to provide programmable control of compiler optimizations to application developers and to support the parameterization of architecture-sensitive optimizations so that their configurations can be empirically tuned later. This POET output can then be ported to different machines together with the user application, where a POET-based search engine empirically reconfigures the parameterized optimizations until satisfactory performance is found. Computational specialists can write POET scripts to directly control the optimization of their code. Application developers can interact with ROSE to obtain optimization feedback as well as provide domain-specific knowledge and high-level optimization strategies. The optimization environment is expected to support different levels of automation and programmer intervention, from fully-automated tuning to semi-automated development and to manual programmable control.

  17. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
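
    The objective that such inverse-modeling tools minimize can be illustrated generically, as in the sketch below: a weighted sum of squared residuals between a forward model and observations, minimized with a derivative-free method. The forward model here is a toy exponential decay standing in for TOUGH2 or any PEST-coupled external code, and the file-based coupling itself is omitted.

    ```python
    # Generic inverse-modeling sketch: minimize a weighted least-squares objective
    # between a toy forward model and noisy observations. The forward model is a
    # placeholder, not TOUGH2.
    import numpy as np
    from scipy.optimize import minimize

    t_obs = np.linspace(0.0, 10.0, 25)
    true_params = np.array([2.0, 0.4])                     # amplitude, decay rate

    def forward(params, t):
        a, k = params
        return a * np.exp(-k * t)

    rng = np.random.default_rng(3)
    sigma = 0.05
    y_obs = forward(true_params, t_obs) + rng.normal(0.0, sigma, t_obs.size)

    def objective(params):
        resid = (forward(params, t_obs) - y_obs) / sigma   # weighted residuals
        return 0.5 * np.sum(resid**2)

    result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
    print("estimated parameters:", result.x)
    ```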

  18. Uncertainty and sensitivity analysis of fission gas behavior in engineering-scale fuel modeling

    DOE PAGES

    Pastore, Giovanni; Swiler, L. P.; Hales, Jason D.; ...

    2014-10-12

    The role of uncertainties in fission gas behavior calculations as part of engineering-scale nuclear fuel modeling is investigated using the BISON fuel performance code and a recently implemented physics-based model for the coupled fission gas release and swelling. Through the integration of BISON with the DAKOTA software, a sensitivity analysis of the results to selected model parameters is carried out based on UO2 single-pellet simulations covering different power regimes. The parameters are varied within ranges representative of the relative uncertainties and consistent with the information from the open literature. The study leads to an initial quantitative assessment of the uncertainty in fission gas behavior modeling with the parameter characterization presently available. Also, the relative importance of the individual parameters is evaluated. Moreover, a sensitivity analysis is carried out based on simulations of a fuel rod irradiation experiment, pointing out a significant impact of the considered uncertainties on the calculated fission gas release and cladding diametral strain. The results of the study indicate that the commonly accepted deviation between calculated and measured fission gas release by a factor of 2 approximately corresponds to the inherent modeling uncertainty at high fission gas release. Nevertheless, higher deviations may be expected for values around 10% and lower. Implications are discussed in terms of directions of research for the improved modeling of fission gas behavior for engineering purposes.

  19. Methodology for Sensitivity Analysis, Approximate Analysis, and Design Optimization in CFD for Multidisciplinary Applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1996-01-01

    An incremental iterative formulation together with the well-known spatially split approximate-factorization algorithm, is presented for solving the large, sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. This formulation is also known as the 'delta' or 'correction' form. For the smaller two dimensional problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. However, iterative methods are needed for larger two-dimensional and three dimensional applications because direct methods require more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioned coefficient matrix; this problem is overcome when these equations are cast in the incremental form. The methodology is successfully implemented and tested using an upwind cell-centered finite-volume formulation applied in two dimensions to the thin-layer Navier-Stokes equations for external flow over an airfoil. In three dimensions this methodology is demonstrated with a marching-solution algorithm for the Euler equations to calculate supersonic flow over the High-Speed Civil Transport configuration (HSCT 24E). The sensitivity derivatives obtained with the incremental iterative method from a marching Euler code are used in a design-improvement study of the HSCT configuration that involves thickness. camber, and planform design variables.
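
    The incremental ("delta" or "correction") form can be shown on a small linear system, as in the sketch below: rather than solving A x = b directly, one repeatedly solves M dx = r with the residual r = b - A x and an approximate operator M, then updates x by dx. Here M is simply the diagonal of A, whereas the paper's operator is the spatially split, approximately factored flow-solver matrix.

    ```python
    # Incremental ("delta"/"correction") form on a toy linear system:
    # solve M*dx = b - A*x repeatedly with an approximate operator M (here the
    # diagonal of A) and update x += dx until the residual is small.
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50
    B = 0.1 * rng.standard_normal((n, n))
    A = B + np.diag(np.abs(B).sum(axis=1) + 1.0)   # strictly diagonally dominant
    b = rng.standard_normal(n)
    M = np.diag(np.diag(A))                        # crude approximation to A

    x = np.zeros(n)
    for it in range(200):
        residual = b - A @ x                       # right-hand side of the delta form
        if np.linalg.norm(residual) < 1e-10:
            break
        dx = np.linalg.solve(M, residual)
        x += dx

    print("iterations:", it, "final error:", np.linalg.norm(A @ x - b))
    ```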

  20. Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) processing speed scores as measures of noncredible responding: The third generation of embedded performance validity indicators.

    PubMed

    Erdodi, Laszlo A; Abeare, Christopher A; Lichtenstein, Jonathan D; Tyson, Bradley T; Kucharski, Brittany; Zuccato, Brandon G; Roth, Robert M

    2017-02-01

    Research suggests that select processing speed measures can also serve as embedded validity indicators (EVIs). The present study examined the diagnostic utility of Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) subtests as EVIs in a mixed clinical sample of 205 patients medically referred for neuropsychological assessment (53.3% female, mean age = 45.1). Classification accuracy was calculated against 3 composite measures of performance validity as criterion variables. A PSI ≤79 produced a good combination of sensitivity (.23-.56) and specificity (.92-.98). A Coding scaled score ≤5 resulted in good specificity (.94-1.00), but low and variable sensitivity (.04-.28). A Symbol Search scaled score ≤6 achieved a good balance between sensitivity (.38-.64) and specificity (.88-.93). A Coding-Symbol Search scaled score difference ≥5 produced adequate specificity (.89-.91) but consistently low sensitivity (.08-.12). A 2-tailed cutoff on the Coding/Symbol Search raw score ratio (≤1.41 or ≥3.57) produced acceptable specificity (.87-.93), but low sensitivity (.15-.24). Failing ≥2 of these EVIs produced variable specificity (.81-.93) and sensitivity (.31-.59). Failing ≥3 of these EVIs stabilized specificity (.89-.94) at a small cost to sensitivity (.23-.53). Results suggest that processing speed based EVIs have the potential to provide a cost-effective and expedient method for evaluating the validity of cognitive data. Given their generally low and variable sensitivity, however, they should not be used in isolation to determine the credibility of a given response set. They also produced unacceptably high rates of false positive errors in patients with moderate-to-severe head injury. Combining evidence from multiple EVIs has the potential to improve overall classification accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
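
    The cutoffs reported above can be applied mechanically, as in the sketch below, which counts how many embedded validity indicators a hypothetical score profile fails. The example values are invented, the Coding-Symbol Search difference is taken as an absolute value here, and, as the authors stress, no single indicator should be used in isolation.

    ```python
    # Apply the cutoffs reported in the abstract as embedded validity indicators
    # (EVIs) and count failures; score values are invented, and the scaled-score
    # difference is treated as an absolute difference for illustration.
    def evi_checks(psi, coding_ss, symbol_search_ss, coding_raw, symbol_raw):
        ratio = coding_raw / symbol_raw
        return {
            "PSI <= 79": psi <= 79,
            "Coding scaled score <= 5": coding_ss <= 5,
            "Symbol Search scaled score <= 6": symbol_search_ss <= 6,
            "Coding-Symbol Search difference >= 5": abs(coding_ss - symbol_search_ss) >= 5,
            "Coding/Symbol Search raw ratio <= 1.41 or >= 3.57": ratio <= 1.41 or ratio >= 3.57,
        }

    checks = evi_checks(psi=76, coding_ss=5, symbol_search_ss=7,
                        coding_raw=40, symbol_raw=26)
    n_failed = sum(checks.values())
    print(checks)
    print("EVIs failed:", n_failed, "-> flag if >= 2 (or >= 3 for higher specificity)")
    ```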

  1. Toward a Neuroscientific Understanding of Play: A Dimensional Coding Framework for Analyzing Infant–Adult Play Patterns

    PubMed Central

    Neale, Dave; Clackson, Kaili; Georgieva, Stanimira; Dedetas, Hatice; Scarpate, Melissa; Wass, Sam; Leong, Victoria

    2018-01-01

    Play during early life is a ubiquitous activity, and an individual’s propensity for play is positively related to cognitive development and emotional well-being. Play behavior (which may be solitary or shared with a social partner) is diverse and multi-faceted. A challenge for current research is to converge on a common definition and measurement system for play – whether examined at a behavioral, cognitive or neurological level. Combining these different approaches in a multimodal analysis could yield significant advances in understanding the neurocognitive mechanisms of play, and provide the basis for developing biologically grounded play models. However, there is currently no integrated framework for conducting a multimodal analysis of play that spans brain, cognition and behavior. The proposed coding framework uses grounded and observable behaviors along three dimensions (sensorimotor, cognitive and socio-emotional), to compute inferences about playful behavior in a social context, and related social interactional states. Here, we illustrate the sensitivity and utility of the proposed coding framework using two contrasting dyadic corpora (N = 5) of mother-infant object-oriented interactions during experimental conditions that were either non-conducive (Condition 1) or conducive (Condition 2) to the emergence of playful behavior. We find that the framework accurately identifies the modal form of social interaction as being either non-playful (Condition 1) or playful (Condition 2), and further provides useful insights about differences in the quality of social interaction and temporal synchronicity within the dyad. It is intended that this fine-grained coding of play behavior will be easily assimilated with, and inform, future analysis of neural data that is also collected during adult–infant play. In conclusion, here, we present a novel framework for analyzing the continuous time-evolution of adult–infant play patterns, underpinned by biologically informed state coding along sensorimotor, cognitive and socio-emotional dimensions. We expect that the proposed framework will have wide utility amongst researchers wishing to employ an integrated, multimodal approach to the study of play, and lead toward a greater understanding of the neuroscientific basis of play. It may also yield insights into a new biologically grounded taxonomy of play interactions. PMID:29618994

  2. Toward a Neuroscientific Understanding of Play: A Dimensional Coding Framework for Analyzing Infant-Adult Play Patterns.

    PubMed

    Neale, Dave; Clackson, Kaili; Georgieva, Stanimira; Dedetas, Hatice; Scarpate, Melissa; Wass, Sam; Leong, Victoria

    2018-01-01

    Play during early life is a ubiquitous activity, and an individual's propensity for play is positively related to cognitive development and emotional well-being. Play behavior (which may be solitary or shared with a social partner) is diverse and multi-faceted. A challenge for current research is to converge on a common definition and measurement system for play - whether examined at a behavioral, cognitive or neurological level. Combining these different approaches in a multimodal analysis could yield significant advances in understanding the neurocognitive mechanisms of play, and provide the basis for developing biologically grounded play models. However, there is currently no integrated framework for conducting a multimodal analysis of play that spans brain, cognition and behavior. The proposed coding framework uses grounded and observable behaviors along three dimensions (sensorimotor, cognitive and socio-emotional), to compute inferences about playful behavior in a social context, and related social interactional states. Here, we illustrate the sensitivity and utility of the proposed coding framework using two contrasting dyadic corpora ( N = 5) of mother-infant object-oriented interactions during experimental conditions that were either non-conducive (Condition 1) or conducive (Condition 2) to the emergence of playful behavior. We find that the framework accurately identifies the modal form of social interaction as being either non-playful (Condition 1) or playful (Condition 2), and further provides useful insights about differences in the quality of social interaction and temporal synchronicity within the dyad. It is intended that this fine-grained coding of play behavior will be easily assimilated with, and inform, future analysis of neural data that is also collected during adult-infant play. In conclusion, here, we present a novel framework for analyzing the continuous time-evolution of adult-infant play patterns, underpinned by biologically informed state coding along sensorimotor, cognitive and socio-emotional dimensions. We expect that the proposed framework will have wide utility amongst researchers wishing to employ an integrated, multimodal approach to the study of play, and lead toward a greater understanding of the neuroscientific basis of play. It may also yield insights into a new biologically grounded taxonomy of play interactions.

  3. Quantitative profiling of O-glycans by electrospray ionization- and matrix-assisted laser desorption ionization-time-of-flight-mass spectrometry after in-gel derivatization with isotope-coded 1-phenyl-3-methyl-5-pyrazolone.

    PubMed

    Sić, Siniša; Maier, Norbert M; Rizzi, Andreas M

    2016-09-07

    The potential and benefits of isotope-coded labeling in the context of MS-based glycan profiling are evaluated, focusing on the analysis of O-glycans. For this purpose, a derivatization strategy using d0/d5-1-phenyl-3-methyl-5-pyrazolone (PMP) is employed, allowing O-glycan release and derivatization to be achieved in one single step. The paper demonstrates that this release and derivatization reaction can also be carried out in-gel with only marginal loss in sensitivity compared to in-solution derivatization. Such an effective in-gel reaction allows one to extend this release/labeling method also to glycoprotein/glycoform samples pre-separated by gel-electrophoresis without the need of extracting the proteins/digested peptides from the gel. With highly O-glycosylated proteins (e.g. mucins), LODs in the range of 0.4 μg glycoprotein (100 fmol) loaded onto the electrophoresis gel can be attained; with less heavily glycosylated proteins (like IgAs, FVII, FIX), the LODs were in the range of 80-100 μg (250 pmol-1.5 nmol) glycoprotein loaded onto the gel. As a second aspect, the potential of isotope-coded labeling as an internal standardization strategy for the reliable determination of quantitative glycan profiles via MALDI-MS is investigated. Towards this goal, a number of established and emerging MALDI matrices were tested for PMP-glycan quantitation, and their performance is compared with that of ESI-based measurements. The crystalline matrix 2,6-dihydroxyacetophenone (DHAP) and the ionic liquid matrix N,N-diisopropyl-ethyl-ammonium 2,4,6-trihydroxyacetophenone (DIEA-THAP) showed potential for MALDI-based quantitation of PMP-labeled O-glycans. We also provide a comprehensive overview on the performance of MS-based glycan quantitation approaches by comparing sensitivity, LOD, accuracy and repeatability data obtained with RP-HPLC-ESI-MS, stand-alone nano-ESI-MS with a spray-nozzle chip, and MALDI-MS. Finally, the suitability of the isotope-coded PMP labeling strategy for O-glycan profiling of biologically important proteins is demonstrated by comparative analysis of IgA immunoglobulins and two coagulation factors. Copyright © 2016 Elsevier B.V. All rights reserved.
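
    In an isotope-coded design of this kind, each glycan appears as a pair of peaks from the light and heavy labels, and relative quantities follow from the intensity ratio within a pair. The minimal Python sketch below illustrates only that pairing/ratio step; the 10 Da mass shift (assuming a bis-PMP label and singly charged ions), the m/z tolerance, and the peak lists are assumptions, not values from the paper.

        # Pair d0- and d5-PMP labeled peaks and compute relative quantities.
        # MASS_SHIFT and TOL are assumptions, not values from the paper.
        MASS_SHIFT = 10.0   # Da between d0- and d5-labeled forms (assumed bis-PMP label)
        TOL = 0.05          # m/z matching tolerance (assumed)

        def pair_and_quantify(d0_peaks, d5_peaks, shift=MASS_SHIFT, tol=TOL):
            """d0_peaks, d5_peaks: lists of (mz, intensity). Returns (mz, ratio) pairs."""
            ratios = []
            for mz0, i0 in d0_peaks:
                for mz5, i5 in d5_peaks:
                    if abs((mz5 - mz0) - shift) <= tol:
                        ratios.append((mz0, i0 / i5))   # sample (d0) vs. internal standard (d5)
                        break
            return ratios

        sample   = [(861.3, 4.2e5), (1065.4, 1.1e5)]   # hypothetical peak lists
        standard = [(871.3, 2.0e5), (1075.4, 2.0e5)]
        print(pair_and_quantify(sample, standard))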

  4. Performance and Limitations of Administrative Data in the Identification of AKI

    PubMed Central

    Waikar, Sushrut S.; MacMahon, Blaithin; Whelton, Seamus; Ballew, Shoshana H.; Coresh, Josef

    2014-01-01

    Background and objectives Billing codes are frequently used to identify AKI events in epidemiologic research. The goals of this study were to validate billing code–identified AKI against the current AKI consensus definition and to ascertain whether sensitivity and specificity vary by patient characteristic or over time. Design, setting, participants, & measurements The study population included 10,056 Atherosclerosis Risk in Communities study participants hospitalized between 1996 and 2008. Billing code–identified AKI was compared with the 2012 Kidney Disease Improving Global Outcomes (KDIGO) creatinine-based criteria (AKIcr) and an approximation of the 2012 KDIGO creatinine- and urine output–based criteria (AKIcr_uop) in a subset with available outpatient data. Sensitivity and specificity of billing code–identified AKI were evaluated over time and according to patient age, race, sex, diabetes status, and CKD status in 546 charts selected for review, with estimates adjusted for sampling technique. Results A total of 34,179 hospitalizations were identified; 1353 had a billing code for AKI. The sensitivity of billing code–identified AKI was 17.2% (95% confidence interval [95% CI], 13.2% to 21.2%) compared with AKIcr (n=1970 hospitalizations) and 11.7% (95% CI, 8.8% to 14.5%) compared with AKIcr_uop (n=1839 hospitalizations). Specificity was >98% in both cases. Sensitivity was significantly higher in the more recent time period (2002–2008) and among participants aged 65 years and older. Billing code–identified AKI captured a more severe spectrum of disease than did AKIcr and AKIcr_uop, with a larger proportion of patients with stage 3 AKI (34.9%, 19.7%, and 11.5%, respectively) and higher in-hospital mortality (41.2%, 18.7%, and 12.8%, respectively). Conclusions The use of billing codes to identify AKI has low sensitivity compared with the current KDIGO consensus definition, especially when the urine output criterion is included, and results in the identification of a more severe phenotype. Epidemiologic studies using billing codes may benefit from a high specificity, but the variation in sensitivity may result in bias, particularly when trends over time are the outcome of interest. PMID:24458075
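
    The operating characteristics discussed above come from a standard 2x2 comparison of billing-code status against the reference standard. A short Python sketch of that calculation, using made-up counts chosen only to loosely mirror the reported low sensitivity and high specificity:

        # Sensitivity/specificity of code-identified AKI vs. a reference standard.
        # The counts are hypothetical, not reconstructed from the study.
        def sens_spec(tp, fp, fn, tn):
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return sensitivity, specificity

        tp, fp, fn, tn = 17, 2, 83, 98   # code+/ref+, code+/ref-, code-/ref+, code-/ref-
        sens, spec = sens_spec(tp, fp, fn, tn)
        print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")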

  5. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. But, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm3 CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.
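
    Dakota drives external simulation codes as black boxes: it proposes parameter values, runs the code, and post-processes the responses. The Python sketch below illustrates only that coupling pattern with a centered parameter study and finite-difference sensitivities; it is not Dakota's input syntax or API, and the toy simulation function is an assumption.

        # Generic black-box parameter study (NOT Dakota syntax or API): perturb
        # each input of a simulation wrapper and report central-difference
        # sensitivities of the response.
        def simulation(x):
            """Stand-in for an external simulation code invoked per evaluation."""
            a, b = x
            return (a - 1.0) ** 2 + 3.0 * b ** 2

        def centered_parameter_study(f, x0, rel_step=0.01):
            base = f(x0)
            sens = []
            for i, xi in enumerate(x0):
                h = rel_step * (abs(xi) if xi != 0 else 1.0)
                up = list(x0); up[i] = xi + h
                dn = list(x0); dn[i] = xi - h
                sens.append((f(up) - f(dn)) / (2.0 * h))   # d f / d x_i
            return base, sens

        print(centered_parameter_study(simulation, [2.0, 0.5]))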

  7. Genome-wide identification of conserved intronic non-coding sequences using a Bayesian segmentation approach.

    PubMed

    Algama, Manjula; Tasker, Edward; Williams, Caitlin; Parslow, Adam C; Bryson-Richardson, Robert J; Keith, Jonathan M

    2017-03-27

    Computational identification of non-coding RNAs (ncRNAs) is a challenging problem. We describe a genome-wide analysis using Bayesian segmentation to identify intronic elements highly conserved between three evolutionarily distant vertebrate species: human, mouse and zebrafish. We investigate the extent to which these elements include ncRNAs (or conserved domains of ncRNAs) and regulatory sequences. We identified 655 deeply conserved intronic sequences in a genome-wide analysis. We also performed a pathway-focussed analysis on genes involved in muscle development, detecting 27 intronic elements, of which 22 were not detected in the genome-wide analysis. At least 87% of the genome-wide and 70% of the pathway-focussed elements have existing annotations indicative of conserved RNA secondary structure. The expression of 26 of the pathway-focused elements was examined using RT-PCR, providing confirmation that they include expressed ncRNAs. Consistent with previous studies, these elements are significantly over-represented in the introns of transcription factors. This study demonstrates a novel, highly effective, Bayesian approach to identifying conserved non-coding sequences. Our results complement previous findings that these sequences are enriched in transcription factors. However, in contrast to previous studies which suggest the majority of conserved sequences are regulatory factor binding sites, the majority of conserved sequences identified using our approach contain evidence of conserved RNA secondary structures, and our laboratory results suggest most are expressed. Functional roles at DNA and RNA levels are not mutually exclusive, and many of our elements possess evidence of both. Moreover, ncRNAs play roles in transcriptional and post-transcriptional regulation, and this may contribute to the over-representation of these elements in introns of transcription factors. We attribute the higher sensitivity of the pathway-focussed analysis compared to the genome-wide analysis to improved alignment quality, suggesting that enhanced genomic alignments may reveal many more conserved intronic sequences.

  8. First status report on regional ground-water flow modeling for the Paradox Basin, Utah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, R.W.

    1984-05-01

    Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study is summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined scale model which incorporated only the post Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.
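
    The "limited parametric study" style of sensitivity analysis mentioned above amounts to perturbing a hydrologic property and observing the response of a simulated head. A toy Python sketch of that idea for a two-zone, one-dimensional confined aquifer; the geometry, conductivities, and boundary heads are hypothetical, and this is unrelated to the report's flow code.

        # Steady 1D confined flow between fixed heads through two equal-length
        # zones: flux continuity gives h_m = (k1*h_left + k2*h_right)/(k1 + k2).
        def interface_head(k1, k2, h_left=100.0, h_right=90.0):
            """Head at the interface of two equal-length zones (Darcy, 1D)."""
            return (k1 * h_left + k2 * h_right) / (k1 + k2)

        def head_sensitivity(k1, k2, which="k1", rel=0.05):
            """Central-difference sensitivity of the interface head to one conductivity."""
            if which == "k1":
                up, dn, dk = interface_head(k1 * (1 + rel), k2), interface_head(k1 * (1 - rel), k2), 2 * rel * k1
            else:
                up, dn, dk = interface_head(k1, k2 * (1 + rel)), interface_head(k1, k2 * (1 - rel)), 2 * rel * k2
            return (up - dn) / dk

        k1, k2 = 1e-5, 1e-6    # hypothetical hydraulic conductivities, m/s
        print(interface_head(k1, k2))                                  # simulated head, m
        print(head_sensitivity(k1, k2, "k1"), head_sensitivity(k1, k2, "k2"))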

  9. Sensitivity and specificity of obesity diagnosis in pediatric ambulatory care in the United States.

    PubMed

    Walsh, Carolyn O; Milliren, Carly E; Feldman, Henry A; Taveras, Elsie M

    2013-09-01

    We examined the sensitivity and specificity of an obesity diagnosis in a nationally representative sample of pediatric outpatient visits. We used the 2005 to 2009 National Ambulatory Medical Care and National Hospital Ambulatory Medical Care surveys. We included visits with children 2 to 18 years, yielding a sample of 48,145 database visits. We determined 3 methods of identifying obesity: documented body mass index (BMI) ≥95th percentile; International Classification of Diseases, Ninth Revision (ICD-9) code; and positive answer to the question, "Does the patient now have obesity?" Using BMI as the gold standard, we calculated the sensitivity and specificity of a clinical obesity diagnosis. Among the 19.5% of children who were obese by BMI, 7.0% had an ICD-9 code and 15.2% had a positive response to questioning. The sensitivity of an obesity diagnosis was 15.4%, and the specificity was 99.2%. The sensitivity of the obesity diagnosis in pediatric ambulatory visits is low. Efforts are needed to increase identification of obese children.

  10. Temperamental precursors of infant attachment with mothers and fathers☆

    PubMed Central

    Planalp, Elizabeth M.; Braungart-Rieker, Julia M.

    2013-01-01

    The degree to which parent sensitivity and infant temperament distinguish attachment classification was examined. Multilevel modeling was used to assess the effect of parent sensitivity and infant temperament on infant–mother and infant–father attachment. Data were collected from mothers, fathers, and their infants (N = 135) when the infant was 3-, 5-, 7-, 12-, and 14-months old. Temperament was measured using the Infant Behavior Questionnaire-Revised (Gartstein & Rothbart, 2003); parent sensitivity was coded during the Still Face Paradigm (Tronick, Als, Adamson, Wise, & Brazelton, 1978); attachment was coded using the Strange Situation (Ainsworth, Blehar, Waters, & Wall, 1978). Results indicate that mothers and fathers were less sensitive with insecure-avoidant infants. Whereas only one difference was found for infant–mother attachment groups and temperament, five significant differences emerged for infant–father attachment groups, with the majority involving insecure-ambivalent attachment. Infants classified as ambivalent with fathers were higher in perceptual sensitivity and cuddliness and these infants also showed a greater increase in low-intensity pleasure over time compared with other infants. Results indicate the importance of both parent sensitivity and infant temperament, though operating in somewhat different ways, in the development of the infant–mother and infant–father attachment relationship. PMID:24103401

  11. Moderate sensitivity and high specificity of emergency department administrative data for transient ischemic attacks.

    PubMed

    Yu, Amy Y X; Quan, Hude; McRae, Andrew; Wagner, Gabrielle O; Hill, Michael D; Coutts, Shelagh B

    2017-09-18

    Validation of administrative data case definitions is key for accurate passive surveillance of disease. Transient ischemic attack (TIA) is a condition primarily managed in the emergency department. However, prior validation studies have focused on data after inpatient hospitalization. We aimed to determine the validity of Canadian International Classification of Diseases, 10th Revision (ICD-10-CA) codes for TIA in the national ambulatory administrative database. We performed a diagnostic accuracy study of four ICD-10-CA case definition algorithms for TIA in the emergency department setting. The study population was obtained from two ongoing studies on the diagnosis of TIA and minor stroke versus stroke mimic using serum biomarkers and neuroimaging. Two reference standards, both obtained by stroke neurologists, were used to calculate the sensitivity, specificity, and positive and negative predictive values (PPV and NPV) of the ICD-10-CA algorithms for TIA: 1) the emergency department clinical diagnosis determined by chart abstractors and 2) the 90-day final diagnosis. Among 417 patients, emergency department adjudication showed 163 (39.1%) TIA, 155 (37.2%) ischemic strokes, and 99 (23.7%) stroke mimics. The most restrictive algorithm, defined as a TIA code in the main position, had the lowest sensitivity (36.8%), but the highest specificity (92.5%) and PPV (76.0%). The most inclusive algorithm, defined as a TIA code in any position with or without a query prefix, had the highest sensitivity (63.8%), but the lowest specificity (81.5%) and PPV (68.9%). Sensitivity, specificity, PPV, and NPV were overall lower when using the 90-day diagnosis as the reference standard. Emergency department administrative data reflect diagnosis of suspected TIA with high specificity, but underestimate the burden of disease. Future studies are necessary to understand the reasons for the low to moderate sensitivity.

  12. Method and apparatus for ultra-high-sensitivity, incremental and absolute optical encoding

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B. (Inventor)

    1999-01-01

    An absolute optical linear or rotary encoder which encodes the motion of an object (3) with increased resolution and encoding range and decreased sensitivity to damage to the scale includes a scale (5), which moves with the object and is illuminated by a light source (11). The scale carries a pattern (9) which is imaged by a microscope optical system (13) on a CCD array (17) in a camera head (15). The pattern includes both fiducial markings (31) which are identical for each period of the pattern and code areas (33) which include binary codings of numbers identifying the individual periods of the pattern. The image of the pattern formed on the CCD array is analyzed by an image processor (23) to locate the fiducial marking, decode the information encoded in the code area, and thereby determine the position of the object.
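
    The decoding idea described in the abstract separates coarse and fine position: the binary code area identifies which scale period is in view, while the fiducial's measured location within the image gives the fractional position inside that period. A minimal Python sketch of that arithmetic; the period length, magnification, and bit ordering are hypothetical, and this is not the patented image-processing pipeline.

        # Absolute position = (period index from code bits + fraction from the
        # fiducial's pixel location) * physical period length. Constants are
        # hypothetical.
        PERIOD_UM = 100.0          # physical length of one scale period (assumed)
        PIXELS_PER_PERIOD = 512.0  # magnified period width on the CCD (assumed)

        def decode_position(code_bits, fiducial_pixel):
            """code_bits: 0/1 values read from the code area (MSB first).
            fiducial_pixel: measured pixel location of the fiducial in the image."""
            period_index = int("".join(str(b) for b in code_bits), 2)
            fraction = fiducial_pixel / PIXELS_PER_PERIOD      # 0..1 within a period
            return (period_index + fraction) * PERIOD_UM       # absolute position, um

        print(decode_position([0, 1, 0, 1, 1], fiducial_pixel=128.0))  # period 11 + 1/4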

  13. A Simple Secure Hash Function Scheme Using Multiple Chaotic Maps

    NASA Astrophysics Data System (ADS)

    Ahmad, Musheer; Khurana, Shruti; Singh, Sushmita; AlSharari, Hamed D.

    2017-06-01

    Chaotic maps possess high parameter sensitivity, random-like behavior and one-way computations, which favor the construction of cryptographic hash functions. In this paper, we present a novel hash function scheme that uses multiple chaotic maps to generate efficient variable-sized hash functions. The message is divided into four parts; each part is processed by a different 1D chaotic map unit, yielding an intermediate hash code. The four codes are concatenated into two blocks, and each block is then processed separately through a 2D chaotic map unit. The final hash value is generated by combining the two partial hash codes. Simulation analyses such as distribution of hashes, statistical properties of confusion and diffusion, message and key sensitivity, collision resistance and flexibility are performed. The results reveal that the proposed hash scheme is simple, efficient and holds capabilities comparable to some recent chaos-based hash algorithms.
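
    A toy Python sketch of the general construction described above (split the message into parts, drive each through a chaotic map, combine the resulting states). It uses the logistic map, is not the authors' scheme, is not cryptographically secure, and all constants are illustrative.

        # Toy chaos-based hash: four message parts each perturb a logistic-map
        # trajectory; the final states are packed into a 128-bit value.
        def logistic_unit(data, x=0.3141592653, r=3.99, rounds=8):
            """Iterate the logistic map, perturbing the state with each message byte."""
            for byte in data:
                x = (x + byte / 256.0) % 1.0 or 0.123   # keep the state inside (0, 1)
                for _ in range(rounds):
                    x = r * x * (1.0 - x)
            return x

        def chaotic_hash(message: bytes, bits=128):
            parts = [message[i::4] for i in range(4)]            # split into 4 parts
            states = [logistic_unit(p, x=0.1 + 0.2 * i) for i, p in enumerate(parts)]
            h = 0
            for s in states:                                     # combine partial codes
                h = (h << 32) ^ int(s * (2 ** 32))
            return h & ((1 << bits) - 1)

        print(hex(chaotic_hash(b"sensitivity analysis code")))
        print(hex(chaotic_hash(b"sensitivity analysis codf")))   # one-byte change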

  14. Highly-sensitive microRNA detection based on bio-bar-code assay and catalytic hairpin assembly two-stage amplification.

    PubMed

    Tang, Songsong; Gu, Yuan; Lu, Huiting; Dong, Haifeng; Zhang, Kai; Dai, Wenhao; Meng, Xiangdan; Yang, Fan; Zhang, Xueji

    2018-04-03

    Herein, a highly-sensitive microRNA (miRNA) detection strategy was developed by combining bio-bar-code assay (BBA) with catalytic hairpin assembly (CHA). In the proposed system, two nanoprobes of magnetic nanoparticles functionalized with DNA probes (MNPs-DNA) and gold nanoparticles with numerous barcode DNA (AuNPs-DNA) were designed. In the presence of target miRNA, the MNP-DNA and AuNP-DNA hybridized with target miRNA to form a "sandwich" structure. After the "sandwich" structures were separated from the solution by the magnetic field and dehybridized by high temperature, the barcode DNA sequences were released by dissolving the AuNPs. The released barcode DNA sequences triggered the toehold strand displacement assembly of two hairpin probes, leading to recycling of the barcode DNA sequences and producing numerous fluorescent CHA products for miRNA detection. Under the optimal experimental conditions, the proposed two-stage amplification system could sensitively detect target miRNA ranging from 10 pM to 10 aM with a limit of detection (LOD) down to 97.9 zM. It displayed a good capability to discriminate single-base and three-base mismatches due to the unique sandwich structure. Notably, it presented good feasibility for selective multiplexed detection of various combinations of synthetic miRNA sequences and miRNAs extracted from different cell lysates, with results in agreement with traditional polymerase chain reaction analysis. The two-stage amplification strategy may have significant implications for biological detection and clinical diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Development of a dynamic coupled hydro-geomechanical code and its application to induced seismicity

    NASA Astrophysics Data System (ADS)

    Miah, Md Mamun

    This research describes the importance of hydro-geomechanical coupling in the geologic subsurface environment arising from fluid injection at geothermal plants, large-scale geological CO2 sequestration for climate mitigation, enhanced oil recovery, and hydraulic fracturing during well construction in the oil and gas industries. A sequential computational code is developed to capture the multiphysics interaction behavior by linking the flow simulation code TOUGH2 and the geomechanics modeling code PyLith. The numerical formulation of each code is discussed to demonstrate its modeling capabilities. The computational framework involves sequential coupling and the solution of two sub-problems: fluid flow through fractured and porous media, and reservoir geomechanics. For each time step of the flow calculation, the pressure field is passed to the geomechanics code to compute the effective stress field and fault slips. A simplified permeability model is implemented in the code that accounts for the permeability of porous and saturated rocks subject to confining stresses. The accuracy of the TOUGH-PyLith coupled simulator is tested by simulating Terzaghi's 1D consolidation problem. The modeling capability of coupled poroelasticity is validated by benchmarking it against Mandel's problem. The code is used to simulate both quasi-static and dynamic earthquake nucleation and slip distribution on a fault from the combined effect of far-field tectonic loading and fluid injection, using an appropriate fault constitutive friction model. Results from the quasi-static induced earthquake simulations show a delayed response in earthquake nucleation. This is attributed to the increased total stress in the domain and to not accounting for pressure on the fault. However, this issue is resolved in the final chapter by simulating a single-event earthquake dynamic rupture. Simulation results show that fluid pressure has a positive effect on slip nucleation and subsequent crack propagation. This is confirmed by a sensitivity analysis showing that an increase in injection-well distance results in delayed slip nucleation and rupture propagation on the fault.
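
    The sequential (operator-split) coupling described above alternates a flow solve with a geomechanics solve, passing the pressure field across and feeding stress-dependent properties back. The Python sketch below shows only that loop structure; flow_step, mechanics_step and update_permeability are placeholders, not the TOUGH2 or PyLith APIs, and every number is hypothetical.

        # Sequential flow-geomechanics coupling loop with placeholder physics.
        def flow_step(state, dt):
            """Placeholder flow solve: pressure rises where fluid is injected."""
            state["pressure"] = [p + dt * q for p, q in zip(state["pressure"], state["injection"])]
            return state["pressure"]

        def mechanics_step(pressure, total_stress, biot=0.8):
            """Placeholder mechanics solve: effective stress per cell (Biot-type)."""
            return [s - biot * p for s, p in zip(total_stress, pressure)]

        def update_permeability(eff_stress, k_ref=1e-15, sigma_ref=30.0, c=0.02):
            """Simple stress-dependent permeability model (illustrative only)."""
            return [k_ref * (1.0 + c * (sigma_ref - s)) for s in eff_stress]

        state = {"pressure": [10.0, 10.0], "injection": [0.5, 0.0]}
        total_stress = [30.0, 30.0]

        for step in range(3):                            # operator-split time loop
            p = flow_step(state, dt=1.0)                 # 1) advance the flow problem
            sigma_eff = mechanics_step(p, total_stress)  # 2) pass pressure to mechanics
            k = update_permeability(sigma_eff)           # 3) feed stress back into flow properties
        print(p, sigma_eff, k)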

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Epiney, A.; Canepa, S.; Zerkak, O.

    The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role to achieve a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of specific analysis aspects, including e.g. code version, transient-specific simulation methodology and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here: First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension. In this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
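
    The multidimensional validation space described above can be pictured as a record of each quantity of interest (QoI) at every combination of code version, nodalisation and methodology, so that drift along any one dimension can be tracked. A minimal Python sketch of such bookkeeping; the field names, version labels and numerical values are assumptions, not the STARS tooling.

        # Record QoI values per (code version, nodalisation, methodology) point
        # and list how a QoI drifts along one dimension with the others fixed.
        from collections import namedtuple

        Case = namedtuple("Case", "code_version nodalisation methodology")
        qoi_db = {}   # (Case, qoi_name) -> value

        def record(case, qoi_name, value):
            qoi_db[(case, qoi_name)] = value

        def drift(qoi_name, dimension, fixed):
            """Values of a QoI as one dimension varies, with the given fields held fixed."""
            return sorted((getattr(c, dimension), v)
                          for (c, name), v in qoi_db.items()
                          if name == qoi_name
                          and all(getattr(c, d) == val for d, val in fixed.items()))

        record(Case("v1", "base", "imposed-power"), "peak_pressure_MPa", 7.41)  # made-up values
        record(Case("v2", "base", "imposed-power"), "peak_pressure_MPa", 7.38)
        print(drift("peak_pressure_MPa", "code_version", {"nodalisation": "base"}))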

  17. SWIFT Code Assessment for Two Similar Transonic Compressors

    NASA Technical Reports Server (NTRS)

    Chima, Rodrick V.

    2009-01-01

    One goal of the NASA Fundamental Aeronautics Program is the assessment of computational fluid dynamics (CFD) codes used for the design and analysis of many aerospace systems. This paper describes the assessment of the SWIFT turbomachinery analysis code for two similar transonic compressors, NASA rotor 37 and stage 35. The two rotors have identical blade profiles on the front, transonic half of the blade but rotor 37 has more camber aft of the shock. Thus the two rotors have the same shock structure and choking flow but rotor 37 produces a higher pressure ratio. The two compressors and experimental data are described here briefly. Rotor 37 was also used for test cases organized by ASME, IGTI, and AGARD in 1994-1998. Most of the participating codes overpredicted pressure and temperature ratios, and failed to predict certain features of the downstream flowfield. Since then, the AUSM+ upwind scheme and the k-ω turbulence model have been added to SWIFT. In this work the new capabilities were assessed for the two compressors. Comparisons were made with overall performance maps and spanwise profiles of several aerodynamic parameters. The results for rotor 37 were in much better agreement with the experimental data than the original blind test case results although there were still some discrepancies. The results for stage 35 were in very good agreement with the data. The results for rotor 37 were very sensitive to turbulence model parameters but the results for stage 35 were not. Comparison of the rotor solutions showed that the main difference between the two rotors was not blade camber as expected, but shock/boundary layer interaction on the casing.

  18. Supplementing Public Health Inspection via Social Media

    PubMed Central

    Schomberg, John P.; Haimson, Oliver L.; Hayes, Gillian R.; Anton-Culver, Hoda

    2016-01-01

    Foodborne illness is prevented by inspection and surveillance conducted by health departments across America. Appropriate restaurant behavior is enforced and monitored via public health inspections. However, surveillance coverage provided by state and local health departments is insufficient in preventing the rising number of foodborne illness outbreaks. To address this need for improved surveillance coverage, we conducted a supplementary form of public health surveillance using social media data: Yelp.com restaurant reviews in the city of San Francisco. Yelp is a social media site where users post reviews and rate restaurants they have personally visited. Presence of keywords related to health code regulations and foodborne illness symptoms, number of restaurant reviews, number of Yelp stars, and restaurant price range were included in a model predicting a restaurant’s likelihood of health code violation measured by the assigned San Francisco public health code rating. For a list of major health code violations, see S1 Table. We built the predictive model using 71,360 Yelp reviews of restaurants in the San Francisco Bay Area. The predictive model was able to predict health code violations in 78% of the restaurants receiving serious citations in our pilot study of 440 restaurants. Training and validation data sets each pulled data from 220 restaurants in San Francisco. Keyword analysis of free text within Yelp not only improved detection of high-risk restaurants, but it also served to identify specific risk factors related to health code violation. To further validate our model, we applied the model generated in our pilot study to Yelp data from 1,542 restaurants in San Francisco. The model achieved 91% sensitivity, 74% specificity, area under the receiver operator curve of 98%, and positive predictive value of 29% (given a substandard health code rating prevalence of 10%). When our model was applied to restaurant reviews in New York City we achieved 74% sensitivity, 54% specificity, area under the receiver operator curve of 77%, and positive predictive value of 25% (given a prevalence of 12%). Model accuracy improved when reviews ranked highest by Yelp were utilized. Our results indicate that public health surveillance can be improved by using social media data to identify restaurants at high risk for health code violation. Additionally, using highly ranked Yelp reviews improves predictive power and limits the number of reviews needed to generate prediction. Use of this approach as an adjunct to current risk ranking of restaurants prior to inspection may enhance detection of those restaurants participating in high risk practices that may have gone previously undetected. This model represents a step forward in the integration of social media into meaningful public health interventions. PMID:27023681
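
    A minimal Python sketch of the modeling idea above (keyword hits, review volume, star rating and price range as predictors of a serious violation). The keyword list, the feature encoding, the synthetic reviews and labels, and the use of scikit-learn logistic regression are assumptions for illustration, not the authors' model or data.

        # Keyword/metadata features feeding a simple classifier (synthetic data).
        from sklearn.linear_model import LogisticRegression

        KEYWORDS = ("sick", "vomit", "dirty", "cockroach", "food poisoning")  # assumed list

        def features(review_text, n_reviews, stars, price_level):
            kw_hits = sum(kw in review_text.lower() for kw in KEYWORDS)
            return [kw_hits, n_reviews, stars, price_level]

        X = [features("Great tacos, spotless kitchen", 210, 4.5, 2),
             features("I got food poisoning and the floor was dirty", 35, 2.0, 1),
             features("Slow service but clean", 90, 3.5, 2),
             features("Saw a cockroach near the counter, felt sick after", 12, 1.5, 1)]
        y = [0, 1, 0, 1]   # 1 = serious violation on inspection (synthetic labels)

        model = LogisticRegression().fit(X, y)
        new = features("smelled odd and my friend was sick the next day", 48, 2.5, 1)
        print(model.predict_proba([new])[0][1])   # estimated violation probability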

  19. Supplementing Public Health Inspection via Social Media.

    PubMed

    Schomberg, John P; Haimson, Oliver L; Hayes, Gillian R; Anton-Culver, Hoda

    2016-01-01

    Foodborne illness is prevented by inspection and surveillance conducted by health departments across America. Appropriate restaurant behavior is enforced and monitored via public health inspections. However, surveillance coverage provided by state and local health departments is insufficient in preventing the rising number of foodborne illness outbreaks. To address this need for improved surveillance coverage, we conducted a supplementary form of public health surveillance using social media data: Yelp.com restaurant reviews in the city of San Francisco. Yelp is a social media site where users post reviews and rate restaurants they have personally visited. Presence of keywords related to health code regulations and foodborne illness symptoms, number of restaurant reviews, number of Yelp stars, and restaurant price range were included in a model predicting a restaurant's likelihood of health code violation measured by the assigned San Francisco public health code rating. For a list of major health code violations, see S1 Table. We built the predictive model using 71,360 Yelp reviews of restaurants in the San Francisco Bay Area. The predictive model was able to predict health code violations in 78% of the restaurants receiving serious citations in our pilot study of 440 restaurants. Training and validation data sets each pulled data from 220 restaurants in San Francisco. Keyword analysis of free text within Yelp not only improved detection of high-risk restaurants, but it also served to identify specific risk factors related to health code violation. To further validate our model, we applied the model generated in our pilot study to Yelp data from 1,542 restaurants in San Francisco. The model achieved 91% sensitivity, 74% specificity, area under the receiver operator curve of 98%, and positive predictive value of 29% (given a substandard health code rating prevalence of 10%). When our model was applied to restaurant reviews in New York City we achieved 74% sensitivity, 54% specificity, area under the receiver operator curve of 77%, and positive predictive value of 25% (given a prevalence of 12%). Model accuracy improved when reviews ranked highest by Yelp were utilized. Our results indicate that public health surveillance can be improved by using social media data to identify restaurants at high risk for health code violation. Additionally, using highly ranked Yelp reviews improves predictive power and limits the number of reviews needed to generate prediction. Use of this approach as an adjunct to current risk ranking of restaurants prior to inspection may enhance detection of those restaurants participating in high risk practices that may have gone previously undetected. This model represents a step forward in the integration of social media into meaningful public health interventions.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. But, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm3 CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  1. X-Antenna: A graphical interface for antenna analysis codes

    NASA Technical Reports Server (NTRS)

    Goldstein, B. L.; Newman, E. H.; Shamansky, H. T.

    1995-01-01

    This report serves as the user's manual for the X-Antenna code. X-Antenna is intended to simplify the analysis of antennas by giving the user graphical interfaces in which to enter all relevant antenna and analysis code data. Essentially, X-Antenna creates a Motif interface to the user's antenna analysis codes. A command-file allows new antennas and codes to be added to the application. The menu system and graphical interface screens are created dynamically to conform to the data in the command-file. Antenna data can be saved and retrieved from disk. X-Antenna checks all antenna and code values to ensure they are of the correct type, writes an output file, and runs the appropriate antenna analysis code. Volumetric pattern data may be viewed in 3D space with an external viewer run directly from the application. Currently, X-Antenna includes analysis codes for thin wire antennas (dipoles, loops, and helices), rectangular microstrip antennas, and thin slot antennas.

  2. Enhanced Sensitivity to Rapid Input Fluctuations by Nonlinear Threshold Dynamics in Neocortical Pyramidal Neurons.

    PubMed

    Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian

    2016-02-01

    The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
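
    A much-simplified Python sketch of an integrate-and-fire neuron with a moving firing threshold, to make the threshold-dynamics idea concrete. It keeps only a spike-triggered threshold jump with exponential relaxation; the paper's Generalized Integrate-and-Fire model additionally includes voltage-coupled (nonlinear) threshold movement and conductance-based adaptation, and every parameter value below is illustrative.

        # Leaky integrate-and-fire with an adaptive threshold driven by a
        # fluctuating input current. Units: mV, ms, MOhm, nA (illustrative).
        import random

        def simulate(I, dt=0.1, tau_m=20.0, R=100.0, E_L=-70.0,
                     VT0=-50.0, tau_T=30.0, dVT=8.0, V_reset=-65.0):
            V, VT, spikes = E_L, VT0, []
            for step, i_ext in enumerate(I):
                V += dt * (-(V - E_L) + R * i_ext) / tau_m   # membrane integration
                VT += dt * (VT0 - VT) / tau_T                # threshold relaxes back
                if V >= VT:                                  # spike
                    spikes.append(step * dt)
                    V, VT = V_reset, VT + dVT                # reset + threshold jump
            return spikes

        current = [0.25 + 0.15 * random.gauss(0, 1) for _ in range(5000)]  # nA
        print(len(simulate(current)), "spikes in", 5000 * 0.1, "ms")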

  3. Validating malignant melanoma ICD-9-CM codes in Umbria, ASL Napoli 3 Sud and Friuli Venezia Giulia administrative healthcare databases: a diagnostic accuracy study.

    PubMed

    Orso, Massimiliano; Serraino, Diego; Abraha, Iosief; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Eusebi, Paolo; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore

    2018-04-20

    To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments. The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
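
    Point estimates and 95% confidence intervals like those quoted above can be computed from the raw counts. A short Python sketch using the Wilson score interval; the paper may have used a different interval (e.g. exact binomial), so bounds can differ by about a percentage point. The 112/112 count is implied by the Umbria sample described above (112 cases, 100% sensitivity).

        # Wilson score interval for a proportion such as sensitivity.
        from math import sqrt

        def wilson_ci(successes, n, z=1.96):
            p = successes / n
            denom = 1 + z * z / n
            centre = (p + z * z / (2 * n)) / denom
            half = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
            return p, centre - half, centre + half

        sens, lo, hi = wilson_ci(112, 112)      # Umbria: all 112 cases code-positive
        print(f"sensitivity {sens:.0%} (95% CI {lo:.0%} to {hi:.0%})")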

  4. Validating malignant melanoma ICD-9-CM codes in Umbria, ASL Napoli 3 Sud and Friuli Venezia Giulia administrative healthcare databases: a diagnostic accuracy study

    PubMed Central

    Orso, Massimiliano; Serraino, Diego; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore

    2018-01-01

    Objectives To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. Design A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Setting Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. Participants 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Outcome measures Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. Results The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments. The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. Conclusions The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. PMID:29678984

  5. Chiari malformation Type I surgery in pediatric patients. Part 1: validation of an ICD-9-CM code search algorithm.

    PubMed

    Ladner, Travis R; Greenberg, Jacob K; Guerrero, Nicole; Olsen, Margaret A; Shannon, Chevis N; Yarbrough, Chester K; Piccirillo, Jay F; Anderson, Richard C E; Feldstein, Neil A; Wellons, John C; Smyth, Matthew D; Park, Tae Sung; Limbrick, David D

    2016-05-01

    OBJECTIVE Administrative billing data may facilitate large-scale assessments of treatment outcomes for pediatric Chiari malformation Type I (CM-I). Validated International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) code algorithms for identifying CM-I surgery are critical prerequisites for such studies but are currently only available for adults. The objective of this study was to validate two ICD-9-CM code algorithms using hospital billing data to identify pediatric patients undergoing CM-I decompression surgery. METHODS The authors retrospectively analyzed the validity of two ICD-9-CM code algorithms for identifying pediatric CM-I decompression surgery performed at 3 academic medical centers between 2001 and 2013. Algorithm 1 included any discharge diagnosis code of 348.4 (CM-I), as well as a procedure code of 01.24 (cranial decompression) or 03.09 (spinal decompression or laminectomy). Algorithm 2 restricted this group to the subset of patients with a primary discharge diagnosis of 348.4. The positive predictive value (PPV) and sensitivity of each algorithm were calculated. RESULTS Among 625 first-time admissions identified by Algorithm 1, the overall PPV for CM-I decompression was 92%. Among the 581 admissions identified by Algorithm 2, the PPV was 97%. The PPV for Algorithm 1 was lower in one center (84%) compared with the other centers (93%-94%), whereas the PPV of Algorithm 2 remained high (96%-98%) across all subgroups. The sensitivity of Algorithms 1 (91%) and 2 (89%) was very good and remained so across subgroups (82%-97%). CONCLUSIONS An ICD-9-CM algorithm requiring a primary diagnosis of CM-I has excellent PPV and very good sensitivity for identifying CM-I decompression surgery in pediatric patients. These results establish a basis for utilizing administrative billing data to assess pediatric CM-I treatment outcomes.
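
    A minimal Python sketch of the two case-finding algorithms described above, applied to hypothetical admission records and scored for positive predictive value against chart review. The record structure, any codes other than 348.4/01.24/03.09, and the truth labels are assumptions, not the study's data or code.

        # Algorithm 1: CM-I code anywhere + decompression procedure.
        # Algorithm 2: CM-I code as the primary diagnosis + decompression procedure.
        CM1_DX, DECOMP_PROCS = "348.4", {"01.24", "03.09"}

        def algorithm1(adm):
            return CM1_DX in adm["dx_codes"] and bool(DECOMP_PROCS & set(adm["proc_codes"]))

        def algorithm2(adm):
            return adm["primary_dx"] == CM1_DX and bool(DECOMP_PROCS & set(adm["proc_codes"]))

        def ppv(flagged, truth):
            """truth: admission id -> True if chart review confirms CM-I decompression."""
            hits = [a for a in flagged if truth[a["id"]]]
            return len(hits) / len(flagged) if flagged else float("nan")

        admissions = [   # hypothetical records
            {"id": 1, "primary_dx": "348.4", "dx_codes": ["348.4"], "proc_codes": ["01.24"]},
            {"id": 2, "primary_dx": "723.1", "dx_codes": ["348.4", "723.1"], "proc_codes": ["03.09"]},
        ]
        truth = {1: True, 2: False}
        print(ppv([a for a in admissions if algorithm1(a)], truth))   # 0.5
        print(ppv([a for a in admissions if algorithm2(a)], truth))   # 1.0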

  6. The reliability of diagnostic coding and laboratory data to identify tuberculosis and nontuberculous mycobacterial disease among rheumatoid arthritis patients using anti-tumor necrosis factor therapy.

    PubMed

    Winthrop, Kevin L; Baxter, Roger; Liu, Liyan; McFarland, Bentson; Austin, Donald; Varley, Cara; Radcliffe, LeAnn; Suhler, Eric; Choi, Dongsoek; Herrinton, Lisa J

    2011-03-01

    Anti-tumor necrosis factor-alpha (anti-TNF) therapies are associated with severe mycobacterial infections in rheumatoid arthritis patients. We developed and validated electronic record search algorithms for these serious infections. The study used electronic clinical, microbiologic, and pharmacy records from Kaiser Permanente Northern California (KPNC) and the Portland Veterans Affairs Medical Center (PVAMC). We identified suspect tuberculosis and nontuberculous mycobacteria (NTM) cases using inpatient and outpatient diagnostic codes, culture results, and anti-tuberculous medication dispensing. We manually reviewed records to validate our case-finding algorithms. We identified 64 tuberculosis and 367 NTM potential cases, respectively. For tuberculosis, diagnostic code positive predictive value (PPV) was 54% at KPNC and 9% at PVAMC. Adding medication dispensings improved these to 87% and 46%, respectively. Positive tuberculosis cultures had a PPV of 100% with sensitivities of 79% (KPNC) and 55% (PVAMC). For NTM, the PPV of diagnostic codes was 91% (KPNC) and 76% (PVAMC). At KPNC, ≥ 1 positive NTM culture was sensitive (100%) and specific (PPV, 74%) if non-pathogenic species were excluded; at PVAMC, ≥1 positive NTM culture identified 76% of cases with PPV of 41%. Application of the American Thoracic Society NTM microbiology criteria yielded the highest PPV (100% KPNC, 78% PVAMC). The sensitivity and predictive value of electronic microbiologic data for tuberculosis and NTM infections is generally high, but varies with different facilities or models of care. Unlike NTM, tuberculosis diagnostic codes have poor PPV, and in the absence of laboratory data, should be combined with anti-tuberculous therapy dispensings for pharmacoepidemiologic research. Copyright © 2010 John Wiley & Sons, Ltd.

  7. Validity of administrative database code algorithms to identify vascular access placement, surgical revisions, and secondary patency.

    PubMed

    Al-Jaishi, Ahmed A; Moist, Louise M; Oliver, Matthew J; Nash, Danielle M; Fleet, Jamie L; Garg, Amit X; Lok, Charmaine E

    2018-03-01

    We assessed the validity of physician billing codes and hospital admission using International Classification of Diseases 10th revision codes to identify vascular access placement, secondary patency, and surgical revisions in administrative data. We included adults (≥18 years) with a vascular access placed between 1 April 2004 and 31 March 2013 at the University Health Network, Toronto. Our reference standard was a prospective vascular access database (VASPRO) that contains information on vascular access type and dates of placement, dates for failure, and any revisions. We used VASPRO to assess the validity of different administrative coding algorithms by calculating the sensitivity, specificity, and positive predictive values of vascular access events. The sensitivity (95% confidence interval) of the best performing algorithm to identify arteriovenous access placement was 86% (83%, 89%) and specificity was 92% (89%, 93%). The corresponding numbers to identify catheter insertion were 84% (82%, 86%) and 84% (80%, 87%), respectively. The sensitivity of the best performing coding algorithm to identify arteriovenous access surgical revisions was 81% (67%, 90%) and specificity was 89% (87%, 90%). The algorithm capturing arteriovenous access placement and catheter insertion had a positive predictive value greater than 90% and arteriovenous access surgical revisions had a positive predictive value of 20%. The duration of arteriovenous access secondary patency was on average 578 (553, 603) days in VASPRO and 555 (530, 580) days in administrative databases. Administrative data algorithms have fair to good operating characteristics to identify vascular access placement and arteriovenous access secondary patency. Low positive predictive values for surgical revisions algorithm suggest that administrative data should only be used to rule out the occurrence of an event.

  8. Validity of Principal Diagnoses in Discharge Summaries and ICD-10 Coding Assessments Based on National Health Data of Thailand.

    PubMed

    Sukanya, Chongthawonsatid

    2017-10-01

    This study examined the validity of the principal diagnoses on discharge summaries and coding assessments. Data were collected from the National Health Security Office (NHSO) of Thailand in 2015. In total, 118,971 medical records were audited. The sample was drawn from government hospitals and private hospitals covered by the Universal Coverage Scheme in Thailand. Hospitals and cases were selected using NHSO criteria. The validity of the principal diagnoses listed in the "Summary and Coding Assessment" forms was established by comparing data from the discharge summaries with data obtained from medical record reviews, and additionally, by comparing data from the coding assessments with data in the computerized ICD (the data base used for reimbursement-purposes). The summary assessments had low sensitivities (7.3%-37.9%), high specificities (97.2%-99.8%), low positive predictive values (9.2%-60.7%), and high negative predictive values (95.9%-99.3%). The coding assessments had low sensitivities (31.1%-69.4%), high specificities (99.0%-99.9%), moderate positive predictive values (43.8%-89.0%), and high negative predictive values (97.3%-99.5%). The discharge summaries and codings often contained mistakes, particularly the categories "Endocrine, nutritional, and metabolic diseases", "Symptoms, signs, and abnormal clinical and laboratory findings not elsewhere classified", "Factors influencing health status and contact with health services", and "Injury, poisoning, and certain other consequences of external causes". The validity of the principal diagnoses on the summary and coding assessment forms was found to be low. The training of physicians and coders must be strengthened to improve the validity of discharge summaries and codings.

  9. Identification of Hospitalizations for Intentional Self-Harm when E-Codes are Incompletely Recorded

    PubMed Central

    Patrick, Amanda R.; Miller, Matthew; Barber, Catherine W.; Wang, Philip S.; Canning, Claire F.; Schneeweiss, Sebastian

    2010-01-01

    Context Suicidal behavior has gained attention as an adverse outcome of prescription drug use. Hospitalizations for intentional self-harm, including suicide, can be identified in administrative claims databases using external cause of injury codes (E-codes). However, rates of E-code completeness in US government and commercial claims databases are low due to issues with hospital billing software. Objective To develop an algorithm to identify intentional self-harm hospitalizations using recorded injury and psychiatric diagnosis codes in the absence of E-code reporting. Methods We sampled hospitalizations with an injury diagnosis (ICD-9 800–995) from 2 databases with high rates of E-coding completeness: 1999–2001 British Columbia, Canada data and the 2004 U.S. Nationwide Inpatient Sample. Our gold standard for intentional self-harm was a diagnosis of E950-E958. We constructed algorithms to identify these hospitalizations using information on type of injury and presence of specific psychiatric diagnoses. Results The algorithm that identified intentional self-harm hospitalizations with high sensitivity and specificity was a diagnosis of poisoning; toxic effects; open wound to elbow, wrist, or forearm; or asphyxiation; plus a diagnosis of depression, mania, personality disorder, psychotic disorder, or adjustment reaction. This had a sensitivity of 63%, specificity of 99% and positive predictive value (PPV) of 86% in the Canadian database. Values in the US data were 74%, 98%, and 73%. PPV was highest (80%) in patients under 25 and lowest those over 65 (44%). Conclusions The proposed algorithm may be useful for researchers attempting to study intentional self-harm in claims databases with incomplete E-code reporting, especially among younger populations. PMID:20922709

  10. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    PubMed

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that coping and deliberation are closely linked throughout decision making about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  11. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Technical Reports Server (NTRS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-01-01

    Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed the Sensitivity Based DMR (SBMR) method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.
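    The DMR and GNLMR formulations themselves are not reproduced here, but the core idea of minimal-residual acceleration, scaling a correction by the weight that minimizes the resulting residual norm, can be sketched for a simple Jacobi iteration. The test matrix and all names below are illustrative assumptions; the DMR method additionally assigns separate weights per solution component.

```python
import numpy as np

def mr_accelerated_jacobi(A, b, x0, iters=50):
    """Jacobi iteration in which each correction is scaled by the single weight
    that minimizes the 2-norm of the new residual (a one-weight analogue of
    minimal-residual acceleration)."""
    D_inv = 1.0 / np.diag(A)
    x = x0.copy()
    for _ in range(iters):
        r = b - A @ x                  # current residual
        dx = D_inv * r                 # Jacobi correction
        Adx = A @ dx
        w = (r @ Adx) / (Adx @ Adx)    # weight minimizing ||r - w*A*dx||_2
        x += w * dx
    return x

# Small diagonally dominant test system (illustrative only).
rng = np.random.default_rng(0)
A = np.diag(np.full(50, 5.0)) + 0.05 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
x = mr_accelerated_jacobi(A, b, np.zeros(50))
print(np.linalg.norm(b - A @ x))  # residual norm after acceleration
```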

  12. Reliability enhancement of Navier-Stokes codes through convergence enhancement

    NASA Astrophysics Data System (ADS)

    Choi, K.-Y.; Dulikravich, G. S.

    1993-11-01

    Reduction of the total computing time required by an iterative algorithm for solving Navier-Stokes equations is an important aspect of making existing and future analysis codes more cost effective. Several attempts have been made to accelerate the convergence of an explicit Runge-Kutta time-stepping algorithm. These acceleration methods are based on local time stepping, implicit residual smoothing, enthalpy damping, and multigrid techniques. Also, an extrapolation procedure based on the power method and the Minimal Residual Method (MRM) were applied to Jameson's multigrid algorithm. The MRM uses the same values of optimal weights for the corrections to every equation in a system and has not been shown to accelerate the scheme without multigriding. Our Distributed Minimal Residual (DMR) method, based on our General Nonlinear Minimal Residual (GNLMR) method, allows each component of the solution vector in a system of equations to have its own convergence speed. The DMR method was found capable of reducing the computation time by 10-75 percent, depending on the test case and grid used. Recently, we have developed and tested a new method, termed the Sensitivity Based DMR (SBMR) method, that is easier to implement in different codes and is even more robust and computationally efficient than our DMR method.

  13. Limitations to the use of two-dimensional thermal modeling of a nuclear waste repository

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, B.W.

    1979-01-04

    Thermal modeling of a nuclear waste repository is basic to most waste management predictive models. It is important that the modeling techniques accurately determine the time-dependent temperature distribution of the waste emplacement media. Recent modeling studies show that the time-dependent temperature distribution can be accurately modeled in the far-field using a 2-dimensional (2-D) planar numerical model; however, the near-field cannot be modeled accurately enough by either 2-D axisymmetric or 2-D planar numerical models for repositories in salt. The accuracy limits of 2-D modeling were defined by comparing results from 3-dimensional (3-D) TRUMP modeling with results from both 2-D axisymmetric and 2-D planar. Both TRUMP and ADINAT were employed as modeling tools. Two-dimensional results from the finite element code, ADINAT were compared with 2-D results from the finite difference code, TRUMP; they showed almost perfect correspondence in the far-field. This result adds substantially to confidence in future use of ADINAT and its companion stress code ADINA for thermal stress analysis. ADINAT was found to be somewhat sensitive to time step and mesh aspect ratio. 13 figures, 4 tables.

  14. Specific expression of novel long non-coding RNAs in high-hyperdiploid childhood acute lymphoblastic leukemia

    PubMed Central

    Drouin, Simon; Caron, Maxime; St-Onge, Pascal; Gioia, Romain; Richer, Chantal; Oualkacha, Karim; Droit, Arnaud; Sinnett, Daniel

    2017-01-01

    Pre-B cell childhood acute lymphoblastic leukemia (pre-B cALL) is a heterogeneous disease involving many subtypes typically stratified using a combination of cytogenetic and molecular-based assays. These methods, although widely used, rely on the presence of known chromosomal translocations, which is a limiting factor. There is therefore a need for robust, sensitive, and specific molecular biomarkers unaffected by such limitations that would allow better risk stratification and consequently better clinical outcome. In this study we performed a transcriptome analysis of 56 pre-B cALL patients to identify expression signatures in different subtypes. In both protein-coding and long non-coding RNAs (lncRNA), we identified subtype-specific gene signatures distinguishing pre-B cALL subtypes, particularly in t(12;21) and hyperdiploid cases. The genes up-regulated in pre-B cALL subtypes were enriched in bivalent chromatin marks in their promoters. LncRNAs are a new and under-studied class of transcripts. The subtype-specific nature of lncRNAs suggests they may be suitable clinical biomarkers to guide risk stratification and targeted therapies in pre-B cALL patients. PMID:28346506

  15. Improving the sensitivity of high-frequency subharmonic imaging with coded excitation: A feasibility study

    PubMed Central

    Shekhar, Himanshu; Doyley, Marvin M.

    2012-01-01

    Purpose: Subharmonic intravascular ultrasound imaging (S-IVUS) could visualize the adventitial vasa vasorum, but the high pressure threshold required to incite subharmonic behavior in an ultrasound contrast agent will compromise sensitivity—a trait that has hampered the clinical use of S-IVUS. The purpose of this study was to assess the feasibility of using coded-chirp excitations to improve the sensitivity and axial resolution of S-IVUS. Methods: The subharmonic response of Targestar-P™, a commercial microbubble ultrasound contrast agent (UCA), to coded-chirp (5%–20% fractional bandwidth) pulses and narrowband sine-burst (4% fractional bandwidth) pulses was assessed, first using computer simulations and then experimentally. Rectangular windowed excitation pulses with pulse durations ranging from 0.25 to 3 μs were used in all studies. All experimental studies were performed with a pair of transducers (20 MHz/10 MHz), both with a diameter of 6.35 mm and a focal length of 50 mm. The size distribution of the UCA was measured with a Casy™ cell counter. Results: The simulation predicted a pressure threshold that was an order of magnitude higher than that determined experimentally. However, all other predictions were consistent with the experimental observations. It was predicted that: (1) exciting the agent with chirps would produce stronger subharmonic response relative to those produced by sine-bursts; (2) increasing the fractional bandwidth of coded-chirp excitation would increase the sensitivity of subharmonic imaging; and (3) coded-chirp would increase axial resolution. The experimental results revealed that subharmonic-to-fundamental ratios obtained with chirps were 5.7 dB higher than those produced with sine-bursts of similar duration. The axial resolution achieved with 20% fractional bandwidth chirps was approximately twice that achieved with 4% fractional bandwidth sine-bursts. Conclusions: The coded-chirp method is a suitable excitation strategy for subharmonic IVUS imaging. At the 20 MHz transmission frequency and 20% fractional bandwidth, coded-chirp excitation appears to represent the ideal tradeoff between subharmonic strength and axial resolution. PMID:22482626
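    As a rough illustration of the excitation pulses described (a rectangular-windowed linear chirp versus a narrowband sine-burst around a 20 MHz transmit frequency), the sketch below generates both waveforms with NumPy/SciPy; the sampling rate and the exact pulse parameters are assumptions for illustration.

```python
import numpy as np
from scipy.signal import chirp

fs = 250e6          # sampling rate in Hz (assumed)
fc = 20e6           # transmit (carrier) frequency, as in the study
frac_bw = 0.20      # 20% fractional bandwidth
duration = 3e-6     # 3 microsecond pulse, one of the durations used

t = np.arange(0, duration, 1.0 / fs)
f0 = fc * (1 - frac_bw / 2)   # 18 MHz start frequency
f1 = fc * (1 + frac_bw / 2)   # 22 MHz end frequency

# Rectangular-windowed linear chirp (coded excitation pulse).
tx_chirp = chirp(t, f0=f0, t1=duration, f1=f1, method="linear")

# Narrowband sine-burst of the same duration, for comparison.
tx_sine = np.sin(2 * np.pi * fc * t)
```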

  16. Biobar-coded gold nanoparticles and DNAzyme-based dual signal amplification strategy for ultrasensitive detection of protein by electrochemiluminescence.

    PubMed

    Xia, Hui; Li, Lingling; Yin, Zhouyang; Hou, Xiandeng; Zhu, Jun-Jie

    2015-01-14

    A dual signal amplification strategy for electrochemiluminescence (ECL) aptasensor was designed based on biobar-coded gold nanoparticles (Au NPs) and DNAzyme. CdSeTe@ZnS quantum dots (QDs) were chosen as the ECL signal probes. To verify the proposed ultrasensitive ECL aptasensor for biomolecules, we detected thrombin (Tb) as a proof-of-principle analyte. The hairpin DNA designed for the recognition of protein consists of two parts: the sequences of catalytical 8-17 DNAzyme and thrombin aptamer. Only in the presence of thrombin could the hairpin DNA be opened, followed by a recycling cleavage of excess substrates by the catalytic core of the DNAzyme to induce the first-step amplification. One part of the fragments was captured to open the capture DNA modified on the Au electrode, which further connected with the prepared biobar-coded Au NPs-CdSeTe@ZnS QDs to get the final dual-amplified ECL signal. The limit of detection for Tb was 0.28 fM with excellent selectivity, and this proposed method possessed good performance in real sample analysis. This design introduces the new concept of dual-signal amplification by a biobar-coded system and DNAzyme recycling into ECL determination, and it could be extended to provide a highly sensitive platform for various target biomolecules.

  17. Development of an Efficient Entire-Capsid-Coding-Region Amplification Method for Direct Detection of Poliovirus from Stool Extracts

    PubMed Central

    Kilpatrick, David R.; Nakamura, Tomofumi; Burns, Cara C.; Bukbuk, David; Oderinde, Soji B.; Oberste, M. Steven; Kew, Olen M.; Pallansch, Mark A.; Shimizu, Hiroyuki

    2014-01-01

    Laboratory diagnosis has played a critical role in the Global Polio Eradication Initiative since 1988, by isolating and identifying poliovirus (PV) from stool specimens by using cell culture as a highly sensitive system to detect PV. In the present study, we aimed to develop a molecular method to detect PV directly from stool extracts, with a high efficiency comparable to that of cell culture. We developed a method to efficiently amplify the entire capsid coding region of human enteroviruses (EVs) including PV. cDNAs of the entire capsid coding region (3.9 kb) were obtained from as few as 50 copies of PV genomes. PV was detected from the cDNAs with an improved PV-specific real-time reverse transcription-PCR system and nucleotide sequence analysis of the VP1 coding region. For assay validation, we analyzed 84 stool extracts that were positive for PV in cell culture and detected PV genomes from 100% of the extracts (84/84 samples) with this method in combination with a PV-specific extraction method. PV could be detected in 2/4 stool extract samples that were negative for PV in cell culture. In PV-positive samples, EV species C viruses were also detected with high frequency (27% [23/86 samples]). This method would be useful for direct detection of PV from stool extracts without using cell culture. PMID:25339406

  18. An assessment of the CORCON-MOD3 code. Part 1: Thermal-hydraulic calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strizhov, V.; Kanukova, V.; Vinogradova, T.

    1996-09-01

    This report deals with the subject of CORCON-Mod3 code validation (thermal-hydraulic modeling capability only) based on MCCI (molten core concrete interaction) experiments conducted under different programs in the past decade. Thermal-hydraulic calculations (i.e., concrete ablation, melt temperature, melt energy, concrete temperature, and condensible and non-condensible gas generation) were performed with the code, and compared with the data from 15 experiments, conducted at different scales using both simulant (metallic and oxidic) and prototypic melt materials, using different concrete types, and with and without an overlying water pool. Sensitivity studies were performed in a few cases involving, for example, heat transfer from melt to concrete, condensed phase chemistry, etc. Further, special analysis was performed using the ACE L8 experimental data to illustrate the differences between the experimental and the reactor conditions, and to demonstrate that with proper corrections made to the code, the calculated results were in better agreement with the experimental data. Generally, in the case of dry cavity and metallic melts, CORCON-Mod3 thermal-hydraulic calculations were in good agreement with the test data. For oxidic melts in a dry cavity, uncertainties in heat transfer models played an important role for two melt configurations--a stratified geometry with segregated metal and oxide layers, and a heterogeneous mixture. Some discrepancies in the gas release data were noted in a few cases.

  19. Drug Overdose Surveillance Using Hospital Discharge Data

    PubMed Central

    Bunn, Terry L.; Talbert, Jeffery

    2014-01-01

    Objectives We compared three methods for identifying drug overdose cases in inpatient hospital discharge data on their ability to classify drug overdoses by intent and drug type(s) involved. Methods We compared three International Classification of Diseases, Ninth Revision, Clinical Modification code-based case definitions using Kentucky hospital discharge data for 2000–2011. The first definition (Definition 1) was based on the external-cause-of-injury (E-code) matrix. The other two definitions were based on the Injury Surveillance Workgroup on Poisoning (ISW7) consensus recommendations for national and state poisoning surveillance using the principal diagnosis or first E-code (Definition 2) or any diagnosis/E-code (Definition 3). Results Definition 3 identified almost 50% more drug overdose cases than did Definition 1. The increase was largely due to cases with a first-listed E-code describing a drug overdose but a principal diagnosis that was different from drug overdose (e.g., mental disorders, or respiratory or circulatory system failure). Regardless of the definition, more than 53% of the hospitalizations were self-inflicted drug overdoses; benzodiazepines were involved in about 30% of the hospitalizations. The 2011 age-adjusted drug overdose hospitalization rate in Kentucky was 146/100,000 population using Definition 3 and 107/100,000 population using Definition 1. Conclusion The ISW7 drug overdose definition using any drug poisoning diagnosis/E-code (Definition 3) is potentially the highest sensitivity definition for counting drug overdose hospitalizations, including by intent and drug type(s) involved. As the states enact policies and plan for adequate treatment resources, standardized drug overdose definitions are critical for accurate reporting, trend analysis, policy evaluation, and state-to-state comparison. PMID:25177055

  20. Methodology for fast detection of false sharing in threaded scientific codes

    DOEpatents

    Chung, I-Hsin; Cong, Guojing; Murata, Hiroki; Negishi, Yasushi; Wen, Hui-Fang

    2014-11-25

    A profiling tool identifies a code region with a false sharing potential. A static analysis tool classifies variables and arrays in the identified code region. A mapping detection library correlates memory access instructions in the identified code region with variables and arrays in the identified code region while a processor is running the identified code region. The mapping detection library identifies one or more instructions at risk, in the identified code region, which are subject to an analysis by a false sharing detection library. A false sharing detection library performs a run-time analysis of the one or more instructions at risk while the processor is re-running the identified code region. The false sharing detection library determines, based on the performed run-time analysis, whether two different portions of the cache memory line are accessed by the generated binary code.
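    The patented tool chain is not reproduced here, but the underlying mapping idea, grouping recorded accesses onto cache lines and flagging lines whose distinct portions are written by different threads, can be sketched in a few lines. The trace format, cache-line size, and names below are hypothetical.

```python
CACHE_LINE = 64  # bytes; a typical value, assumed for illustration

def lines_at_risk(accesses):
    """Group recorded memory accesses onto cache lines and report lines whose
    distinct portions are touched by more than one thread -- the situation a
    false-sharing run-time analysis checks for. `accesses` is a list of
    (thread_id, variable_name, byte_address) tuples (hypothetical trace format)."""
    by_line = {}
    for tid, var, addr in accesses:
        by_line.setdefault(addr // CACHE_LINE, set()).add((tid, var))
    return {line: touchers for line, touchers in by_line.items()
            if len({tid for tid, _ in touchers}) > 1       # more than one thread
            and len({var for _, var in touchers}) > 1}     # distinct variables

# Two per-thread counters laid out 8 bytes apart land on the same 64-byte line.
trace = [(0, "counter_a", 0x1000), (1, "counter_b", 0x1008)]
print(lines_at_risk(trace))  # e.g. {64: {(0, 'counter_a'), (1, 'counter_b')}}
```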

  1. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
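    A minimal numerical sketch of the incremental (delta) form is given below: an approximate operator is repeatedly inverted against the residual of the exact equations, so the converged solution still satisfies the exact system. The diagonal approximate operator and the test matrix are assumptions for illustration, not the spatially split approximate factorization used in the study.

```python
import numpy as np

def solve_incremental(A, b, M, iters=100):
    """Incremental ('delta' or correction) form: at each step solve an
    approximate system M * dx = r for a correction, where r = b - A x is the
    residual of the *exact* equations. Convergence yields A x = b even though
    only M is ever inverted."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x               # residual of the exact linear system
        dx = np.linalg.solve(M, r)  # correction from the approximate operator
        x += dx
    return x

rng = np.random.default_rng(1)
A = np.diag(np.full(40, 6.0)) + 0.1 * rng.standard_normal((40, 40))
M = np.diag(np.diag(A))             # crude approximate operator (assumption)
b = rng.standard_normal(40)
x = solve_incremental(A, b, M)
print(np.linalg.norm(b - A @ x))    # ~0: the delta form converges to A x = b
```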

  2. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding.

    PubMed

    Zhang, Xuncai; Han, Feng; Niu, Ying

    2017-01-01

    Exploiting the sensitivity of chaos to initial conditions and its pseudorandomness, combined with the spatial configuration of the DNA molecule and its inherent, unique information-processing ability, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value for a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and the butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation of the DNA sequence and the ciphertext feedback. The results of the experiment and security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attack operations such as statistical analysis and exhaustive analysis.
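    Only the chaotic pixel-position scrambling stage is sketched below (a logistic-map-driven permutation); the Keccak hashing, butterfly-network bit permutation, DNA coding, and ciphertext-feedback stages of the full algorithm are not reproduced, and the key value shown is a placeholder.

```python
import numpy as np

def logistic_sequence(x0, n, mu=3.99):
    """Iterate the logistic map x <- mu*x*(1-x); chaotic for mu near 4."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = mu * x * (1.0 - x)
        xs[i] = x
    return xs

def scramble(image, key=0.3141592653589793):
    """Permute pixel positions with an order derived from a chaotic sequence.
    The key (initial condition) stands in for the hash-derived value used in
    the paper."""
    flat = image.ravel()
    perm = np.argsort(logistic_sequence(key, flat.size))
    return flat[perm].reshape(image.shape), perm

def unscramble(scrambled, perm):
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.ravel()
    return flat.reshape(scrambled.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
enc, perm = scramble(img)
assert np.array_equal(unscramble(enc, perm), img)  # scrambling is invertible
```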

  3. Chaotic Image Encryption Algorithm Based on Bit Permutation and Dynamic DNA Encoding

    PubMed Central

    2017-01-01

    Exploiting the sensitivity of chaos to initial conditions and its pseudorandomness, combined with the spatial configuration of the DNA molecule and its inherent, unique information-processing ability, a novel image encryption algorithm based on bit permutation and dynamic DNA encoding is proposed here. The algorithm first uses Keccak to calculate the hash value for a given DNA sequence as the initial value of a chaotic map; second, it uses a chaotic sequence to scramble the image pixel locations, and the butterfly network is used to implement the bit permutation. Then, the image is dynamically coded into a DNA matrix, and an algebraic operation is performed with the DNA sequence to realize the substitution of the pixels, which further improves the security of the encryption. Finally, the confusion and diffusion properties of the algorithm are further enhanced by the operation of the DNA sequence and the ciphertext feedback. The results of the experiment and security analysis show that the algorithm not only has a large key space and strong sensitivity to the key but can also effectively resist attack operations such as statistical analysis and exhaustive analysis. PMID:28912802

  4. Prostate Cancer Information Available in Health-Care Provider Offices: An Analysis of Content, Readability, and Cultural Sensitivity.

    PubMed

    Choi, Seul Ki; Seel, Jessica S; Yelton, Brooks; Steck, Susan E; McCormick, Douglas P; Payne, Johnny; Minter, Anthony; Deutchki, Elizabeth K; Hébert, James R; Friedman, Daniela B

    2018-07-01

    Prostate cancer (PrCA) is the most common cancer affecting men in the United States, and African American men have the highest incidence among men in the United States. Little is known about the PrCA-related educational materials being provided to patients in health-care settings. Content, readability, and cultural sensitivity of materials available in providers' practices in South Carolina were examined. A total of 44 educational materials about PrCA and associated sexual dysfunction were collected from 16 general and specialty practices. The content of the materials was coded, and cultural sensitivity was assessed using the Cultural Sensitivity Assessment Tool. Flesch Reading Ease, Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook were used to assess readability. Communication with health-care providers (52.3%), side effects of PrCA treatment (40.9%), sexual dysfunction and its treatment (38.6%), and treatment options (34.1%) were frequently presented. All materials had acceptable cultural sensitivity scores; however, 2.3% and 15.9% of materials demonstrated unacceptable cultural sensitivity regarding format and visual messages, respectively. Readability of the materials varied. More than half of the materials were written above a high-school reading level. PrCA-related materials available in health-care practices may not meet patients' needs regarding content, cultural sensitivity, and readability. A wide range of educational materials that address various aspects of PrCA, including treatment options and side effects, should be presented in plain language and be culturally sensitive.

  5. Level of Agreement and Factors Associated With Discrepancies Between Nationwide Medical History Questionnaires and Hospital Claims Data.

    PubMed

    Kim, Yeon-Yong; Park, Jong Heon; Kang, Hee-Jin; Lee, Eun Joo; Ha, Seongjun; Shin, Soon-Ae

    2017-09-01

    The objectives of this study were to investigate the agreement between medical history questionnaire data and claims data and to identify the factors that were associated with discrepancies between these data types. Data from self-reported questionnaires that assessed an individual's history of hypertension, diabetes mellitus, dyslipidemia, stroke, heart disease, and pulmonary tuberculosis were collected from a general health screening database for 2014. Data for these diseases were collected from a healthcare utilization claims database between 2009 and 2014. Overall agreement, sensitivity, specificity, and kappa values were calculated. Multiple logistic regression analysis was performed to identify factors associated with discrepancies and was adjusted for age, gender, insurance type, insurance contribution, residential area, and comorbidities. Agreement was highest between questionnaire data and claims data based on primary codes up to 1 year before the completion of self-reported questionnaires and was lowest for claims data based on primary and secondary codes up to 5 years before the completion of self-reported questionnaires. When comparing data based on primary codes up to 1 year before the completion of self-reported questionnaires, the overall agreement, sensitivity, specificity, and kappa values ranged from 93.2 to 98.8%, 26.2 to 84.3%, 95.7 to 99.6%, and 0.09 to 0.78, respectively. Agreement was excellent for hypertension and diabetes, fair to good for stroke and heart disease, and poor for pulmonary tuberculosis and dyslipidemia. Women, younger individuals, and employed individuals were most likely to under-report disease. Detailed patient characteristics that had an impact on information bias were identified through the differing levels of agreement.
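    The agreement statistics reported here (overall agreement, sensitivity, specificity, and Cohen's kappa) can all be computed from a single 2x2 table of paired self-report and claims indicators, as in the sketch below; the counts are invented for illustration and do not come from this study.

```python
def agreement_stats(a, b, c, d):
    """2x2 agreement between self-report (rows) and claims (columns):
    a = both positive, b = self-report only, c = claims only, d = both negative."""
    n = a + b + c + d
    overall = (a + d) / n
    sensitivity = a / (a + c)   # self-report vs. claims as the reference
    specificity = d / (b + d)
    p_o = overall                                            # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2     # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    return overall, sensitivity, specificity, kappa

# Invented counts for illustration (e.g., hypertension self-report vs. claims).
print(agreement_stats(a=900, b=100, c=300, d=8700))
```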

  6. Sensorineural hearing loss amplifies neural coding of envelope information in the central auditory system of chinchillas

    PubMed Central

    Zhong, Ziwei; Henry, Kenneth S.; Heinz, Michael G.

    2014-01-01

    People with sensorineural hearing loss often have substantial difficulty understanding speech under challenging listening conditions. Behavioral studies suggest that reduced sensitivity to the temporal structure of sound may be responsible, but underlying neurophysiological pathologies are incompletely understood. Here, we investigate the effects of noise-induced hearing loss on coding of envelope (ENV) structure in the central auditory system of anesthetized chinchillas. ENV coding was evaluated noninvasively using auditory evoked potentials recorded from the scalp surface in response to sinusoidally amplitude modulated tones with carrier frequencies of 1, 2, 4, and 8 kHz and a modulation frequency of 140 Hz. Stimuli were presented in quiet and in three levels of white background noise. The latency of scalp-recorded ENV responses was consistent with generation in the auditory midbrain. Hearing loss amplified neural coding of ENV at carrier frequencies of 2 kHz and above. This result may reflect enhanced ENV coding from the periphery and/or an increase in the gain of central auditory neurons. In contrast to expectations, hearing loss was not associated with a stronger adverse effect of increasing masker intensity on ENV coding. The exaggerated neural representation of ENV information shown here at the level of the auditory midbrain helps to explain previous findings of enhanced sensitivity to amplitude modulation in people with hearing loss under some conditions. Furthermore, amplified ENV coding may potentially contribute to speech perception problems in people with cochlear hearing loss by acting as a distraction from more salient acoustic cues, particularly in fluctuating backgrounds. PMID:24315815
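    The stimuli described are sinusoidally amplitude-modulated (SAM) tones; a minimal NumPy sketch of one such stimulus (4 kHz carrier, 140 Hz modulation) is shown below, with the sampling rate, duration, and modulation depth as assumptions.

```python
import numpy as np

fs = 48_000   # sampling rate in Hz (assumed)
dur = 0.5     # stimulus duration in seconds (assumed)
fc = 4_000    # carrier frequency in Hz (one of the carriers used)
fm = 140      # modulation frequency in Hz, as in the study
m = 1.0       # modulation depth (assumed: 100%)

t = np.arange(int(fs * dur)) / fs
envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)    # the ENV component
sam_tone = envelope * np.sin(2 * np.pi * fc * t)   # SAM stimulus
sam_tone /= np.max(np.abs(sam_tone))               # normalize before playback
```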

  7. Detecting chronic kidney disease in population-based administrative databases using an algorithm of hospital encounter and physician claim codes.

    PubMed

    Fleet, Jamie L; Dixon, Stephanie N; Shariff, Salimah Z; Quinn, Robert R; Nash, Danielle M; Harel, Ziv; Garg, Amit X

    2013-04-05

    Large, population-based administrative healthcare databases can be used to identify patients with chronic kidney disease (CKD) when serum creatinine laboratory results are unavailable. We examined the validity of algorithms that used combined hospital encounter and physician claims database codes for the detection of CKD in Ontario, Canada. We accrued 123,499 patients over the age of 65 from 2007 to 2010. All patients had a baseline serum creatinine value to estimate glomerular filtration rate (eGFR). We developed an algorithm of physician claims and hospital encounter codes to search administrative databases for the presence of CKD. We determined the sensitivity, specificity, positive and negative predictive values of this algorithm to detect our primary threshold of CKD, an eGFR <45 mL/min per 1.73 m² (15.4% of patients). We also assessed serum creatinine and eGFR values in patients with and without CKD codes (algorithm positive and negative, respectively). Our algorithm required evidence of at least one of eleven CKD codes and 7.7% of patients were algorithm positive. The sensitivity was 32.7% [95% confidence interval: (95% CI): 32.0 to 33.3%]. Sensitivity was lower in women compared to men (25.7 vs. 43.7%; p <0.001) and in the oldest age category (over 80 vs. 66 to 80; 28.4 vs. 37.6 %; p < 0.001). All specificities were over 94%. The positive and negative predictive values were 65.4% (95% CI: 64.4 to 66.3%) and 88.8% (95% CI: 88.6 to 89.0%), respectively. In algorithm positive patients, the median [interquartile range (IQR)] baseline serum creatinine value was 135 μmol/L (106 to 179 μmol/L) compared to 82 μmol/L (69 to 98 μmol/L) for algorithm negative patients. Corresponding eGFR values were 38 mL/min per 1.73 m² (26 to 51 mL/min per 1.73 m²) vs. 69 mL/min per 1.73 m² (56 to 82 mL/min per 1.73 m²), respectively. Patients with CKD as identified by our database algorithm had distinctly higher baseline serum creatinine values and lower eGFR values than those without such codes. However, because of limited sensitivity, the prevalence of CKD was underestimated.

  8. Detecting chronic kidney disease in population-based administrative databases using an algorithm of hospital encounter and physician claim codes

    PubMed Central

    2013-01-01

    Background Large, population-based administrative healthcare databases can be used to identify patients with chronic kidney disease (CKD) when serum creatinine laboratory results are unavailable. We examined the validity of algorithms that used combined hospital encounter and physician claims database codes for the detection of CKD in Ontario, Canada. Methods We accrued 123,499 patients over the age of 65 from 2007 to 2010. All patients had a baseline serum creatinine value to estimate glomerular filtration rate (eGFR). We developed an algorithm of physician claims and hospital encounter codes to search administrative databases for the presence of CKD. We determined the sensitivity, specificity, positive and negative predictive values of this algorithm to detect our primary threshold of CKD, an eGFR <45 mL/min per 1.73 m2 (15.4% of patients). We also assessed serum creatinine and eGFR values in patients with and without CKD codes (algorithm positive and negative, respectively). Results Our algorithm required evidence of at least one of eleven CKD codes and 7.7% of patients were algorithm positive. The sensitivity was 32.7% [95% confidence interval: (95% CI): 32.0 to 33.3%]. Sensitivity was lower in women compared to men (25.7 vs. 43.7%; p <0.001) and in the oldest age category (over 80 vs. 66 to 80; 28.4 vs. 37.6 %; p < 0.001). All specificities were over 94%. The positive and negative predictive values were 65.4% (95% CI: 64.4 to 66.3%) and 88.8% (95% CI: 88.6 to 89.0%), respectively. In algorithm positive patients, the median [interquartile range (IQR)] baseline serum creatinine value was 135 μmol/L (106 to 179 μmol/L) compared to 82 μmol/L (69 to 98 μmol/L) for algorithm negative patients. Corresponding eGFR values were 38 mL/min per 1.73 m2 (26 to 51 mL/min per 1.73 m2) vs. 69 mL/min per 1.73 m2 (56 to 82 mL/min per 1.73 m2), respectively. Conclusions Patients with CKD as identified by our database algorithm had distinctly higher baseline serum creatinine values and lower eGFR values than those without such codes. However, because of limited sensitivity, the prevalence of CKD was underestimated. PMID:23560464

  9. msap: a tool for the statistical analysis of methylation-sensitive amplified polymorphism data.

    PubMed

    Pérez-Figueroa, A

    2013-05-01

    In this study msap, an R package which analyses methylation-sensitive amplified polymorphism (MSAP or MS-AFLP) data is presented. The program provides a deep analysis of epigenetic variation starting from a binary data matrix indicating the banding pattern between the isoschizomeric endonucleases HpaII and MspI, which have differential sensitivity to cytosine methylation. After comparing the restriction fragments, the program determines if each fragment is susceptible to methylation (representative of epigenetic variation) or if there is no evidence of methylation (representative of genetic variation). The package provides, in a user-friendly command line interface, a pipeline of different analyses of the variation (genetic and epigenetic) among user-defined groups of samples, as well as the classification of the methylation occurrences in those groups. Statistical testing provides support to the analyses. A comprehensive report of the analyses and several useful plots could help researchers to assess the epigenetic and genetic variation in their MSAP experiments. msap is downloadable from CRAN (http://cran.r-project.org/) and its own webpage (http://msap.r-forge.R-project.org/). The package is intended to be easy to use even for those people unfamiliar with the R command line environment. Advanced users may take advantage of the available source code to adapt msap to more complex analyses. © 2013 Blackwell Publishing Ltd.

  10. An incremental strategy for calculating consistent discrete CFD sensitivity derivatives

    NASA Technical Reports Server (NTRS)

    Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.

    1992-01-01

    In this preliminary study involving advanced computational fluid dynamic (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solver methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when these equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.

  11. 1 D analysis of Radiative Shock damping by lateral radiative losses

    NASA Astrophysics Data System (ADS)

    Busquet, Michel; Audit, Edouard

    2008-11-01

    We have demonstrated the effect of lateral radiative losses on radiative shocks propagating in layered, quasi-planar atmospheres.[1,2] The damping of the precursor is sensitive to the fraction of self-emitted radiation reflected by the walls (called the albedo). We have recently given an experimental determination of the wall albedo.[2] For a parametric analysis of this effect, we implement lateral losses in the 1D radiation-hydrodynamics code MULTI [3] and compare the results with 2D simulations. [1] S. Leygnac, et al., Phys. Plasmas 13, 113301 (2006) [2] M. Busquet, et al., High Energy Density Physics 3, 8-11 (2007); M. Gonzalez, et al., Laser Part. Beams 24, 1-6 (2006) [3] Ramis et al., Comp. Phys. Comm. 49, 475 (1988)

  12. Analysis of longwave radiation for the Earth-atmosphere system

    NASA Technical Reports Server (NTRS)

    Tiwari, S. N.; Venuru, C. S.; Subramanian, S. V.

    1983-01-01

    Accurate radiative transfer models are used to determine the upwelling atmospheric radiance and net radiative flux in the entire longwave spectral range. The validity of the quasi-random band model is established by comparing the results of this model with those of line-by-line formulations and with available theoretical and experimental results. Existing radiative transfer models and computer codes are modified to include various surface and atmospheric effects (surface reflection, nonequilibrium radiation, and cloud effects). The program is used to evaluate the radiative flux in clear atmosphere, provide sensitivity analysis of upwelling radiance in the presence of clouds, and determine the effects of various climatological parameters on the upwelling radiation and anisotropic function. Homogeneous and nonhomogeneous gas emissivities can also be evaluated under different conditions.

  13. Using information content and base frequencies to distinguish mutations from genetic polymorphisms in splice junction recognition sites.

    PubMed

    Rogan, P K; Schneider, T D

    1995-01-01

    Predicting the effects of nucleotide substitutions in human splice sites has been based on analysis of consensus sequences. We used a graphic representation of sequence conservation and base frequency, the sequence logo, to demonstrate that a change in a splice acceptor of hMSH2 (a gene associated with familial nonpolyposis colon cancer) probably does not reduce splicing efficiency. This confirms a population genetic study that suggested that this substitution is a genetic polymorphism. The information theory-based sequence logo is quantitative and more sensitive than the corresponding splice acceptor consensus sequence for detection of true mutations. Information analysis may potentially be used to distinguish polymorphisms from mutations in other types of transcriptional, translational, or protein-coding motifs.
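    The letter heights in a sequence logo are set by the per-position information content, R_i = 2 - H_i bits for DNA under a uniform background. The sketch below computes this quantity from a position frequency matrix; the frequencies are invented, and the small-sample correction used for logos built from few sequences is omitted.

```python
import numpy as np

def information_content(freqs):
    """Per-position information content in bits for DNA:
    R_i = 2 - H_i, with H_i = -sum_b p_ib * log2(p_ib).
    (Uniform background; small-sample correction omitted.)"""
    p = np.asarray(freqs, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log2(p), 0.0)
    entropy = -plogp.sum(axis=1)
    return 2.0 - entropy

# Invented base frequencies (rows = positions, columns = A, C, G, T),
# mimicking the nearly invariant AG of a splice acceptor followed by a
# weakly conserved position.
freqs = [[0.95, 0.02, 0.02, 0.01],   # A, highly conserved
         [0.02, 0.02, 0.94, 0.02],   # G, highly conserved
         [0.30, 0.25, 0.25, 0.20]]   # weakly conserved
print(information_content(freqs))    # roughly [1.6, 1.6, 0.0] bits
```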

  14. Evaluation of surveillance methods for staphylococcal toxic shock syndrome.

    PubMed

    Lesher, Lindsey; Devries, Aaron; Danila, Richard; Lynfield, Ruth

    2009-05-01

    We compared passive surveillance and International Classification of Diseases, 9th Revision, codes for completeness of staphylococcal toxic shock syndrome (TSS) surveillance in the Minneapolis-St. Paul area, Minnesota, USA. TSS-specific codes identified 55% of cases compared with 30% by passive surveillance and were more sensitive (p = 0.0005, McNemar chi2 12.25).
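    The McNemar test quoted above compares two paired detection methods using only the discordant cases, i.e., cases captured by one method but not the other; a minimal sketch follows, with invented counts rather than the Minnesota surveillance data.

```python
def mcnemar_chi2(b, c):
    """McNemar chi-square (without continuity correction) for paired methods:
    b = cases found only by method 1, c = cases found only by method 2."""
    return (b - c) ** 2 / (b + c)

# Invented discordant counts for illustration (not this study's data):
# 16 cases found only by ICD-9 codes, 2 found only by passive surveillance.
print(mcnemar_chi2(16, 2))  # 10.9; compare against chi-square with 1 df
```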

  15. Evaluation of Surveillance Methods for Staphylococcal Toxic Shock Syndrome

    PubMed Central

    DeVries, Aaron; Danila, Richard; Lynfield, Ruth

    2009-01-01

    We compared passive surveillance and International Classification of Diseases, 9th Revision, codes for completeness of staphylococcal toxic shock syndrome (TSS) surveillance in the Minneapolis–St. Paul area, Minnesota, USA. TSS-specific codes identified 55% of cases compared with 30% by passive surveillance and were more sensitive (p = 0.0005, McNemar χ2 12.25). PMID:19402965

  16. Effects of system factors on the economics of and demand for small solar thermal power systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Market penetration as a function of time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the database developed and projected energy price and consumption data to perform sensitivity analyses and estimate the potential market for SPS.

  17. Choroidal OCT

    NASA Astrophysics Data System (ADS)

    Esmaeelpour, Marieh; Drexler, Wolfgang

    Novel imaging devices, imaging strategies and automated image analysis with optical coherence tomography have improved our understanding of the choroid in health and pathology. Non-invasive in-vivo high resolution choroidal imaging has had its highest impact in the investigation of macular diseases such as diabetic macular edema and age-related macular degeneration. Choroidal thickness may provide a clinically feasible measure of disease stage and treatment success. It will even support disease diagnosis and phenotyping, as is demonstrated in this chapter. Utilizing color-coded thickness mapping of the choroid and of its Sattler's and Haller's layers may further strengthen the sensitivity of the findings.

  18. Effects of system factors on the economics of and demand for small solar thermal power systems

    NASA Astrophysics Data System (ADS)

    1981-09-01

    Market penetration as a function of time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the database developed and projected energy price and consumption data to perform sensitivity analyses and estimate the potential market for SPS.

  19. Hydrogen analysis depth calibration by CORTEO Monte-Carlo simulation

    NASA Astrophysics Data System (ADS)

    Moser, M.; Reichart, P.; Bergmaier, A.; Greubel, C.; Schiettekatte, F.; Dollinger, G.

    2016-03-01

    Hydrogen imaging with sub-μm lateral resolution and sub-ppm sensitivity has become possible with coincident proton-proton (pp) scattering analysis (Reichart et al., 2004). Depth information is evaluated from the energy sum signal with respect to energy loss of both protons on their path through the sample. In first order, there is no angular dependence due to elastic scattering. In second order, a path length effect due to different energy loss on the paths of the protons causes an angular dependence of the energy sum. Therefore, the energy sum signal has to be de-convoluted depending on the matrix composition, i.e. mainly the atomic number Z, in order to get a depth calibrated hydrogen profile. Although the path effect can be calculated analytically in first order, multiple scattering effects lead to significant deviations in the depth profile. Hence, in our new approach, we use the CORTEO Monte-Carlo code (Schiettekatte, 2008) in order to calculate the depth of a coincidence event depending on the scattering angle. The code takes individual detector geometry into account. In this paper we show that the code correctly reproduces measured pp-scattering energy spectra with roughness effects considered. With more than 100 μm thick Mylar-sandwich targets (Si, Fe, Ge) we demonstrate the deconvolution of the energy spectra on our current multistrip detector at the microprobe SNAKE at the Munich tandem accelerator lab. As a result, hydrogen profiles can be evaluated with an accuracy in depth of about 1% of the sample thickness.

  20. Transonic Drag Prediction on a DLR-F6 Transport Configuration Using Unstructured Grid Solvers

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Frink, N. T.; Mavriplis, D. J.; Rausch, R. D.; Milholen, W. E.

    2004-01-01

    A second international AIAA Drag Prediction Workshop (DPW-II) was organized and held in Orlando, Florida on June 21-22, 2003. The primary purpose was to investigate the code-to-code uncertainty, address the sensitivity of the drag prediction to grid size, and quantify the uncertainty in predicting nacelle/pylon drag increments at a transonic cruise condition. This paper presents an in-depth analysis of the DPW-II computational results from three state-of-the-art unstructured grid Navier-Stokes flow solvers exercised on similar families of tetrahedral grids. The flow solvers are USM3D, a tetrahedral cell-centered upwind solver; FUN3D, a tetrahedral node-centered upwind solver; and NSU3D, a general element node-centered central-differenced solver. For the wingbody, the total drag predicted for a constant-lift transonic cruise condition showed a decrease in code-to-code variation with grid refinement, as expected. For the same flight condition, the wing/body/nacelle/pylon total drag and the nacelle/pylon drag increment predicted showed an increase in code-to-code variation with grid refinement. Although the range in total drag for the wingbody fine grids was only 5 counts, a code-to-code comparison of surface pressures and surface restricted streamlines indicated that the three solvers were not all converging to the same flow solutions: different shock locations and separation patterns were evident. Similarly, the wing/body/nacelle/pylon solutions did not appear to be converging to the same flow solutions. Overall, grid refinement did not consistently improve the correlation with experimental data for either the wingbody or the wing/body/nacelle/pylon configuration. Although the absolute values of total drag predicted by two of the solvers for the medium and fine grids did not compare well with the experiment, the incremental drag predictions were within plus or minus 3 counts of the experimental data. The correlation with experimental incremental drag was not significantly changed by specifying transition. Although the sources of code-to-code variation in force and moment predictions for the three unstructured grid codes have not yet been identified, the current study reinforces the necessity of applying multiple codes to the same application to assess uncertainty.
