Sample records for sensitivity analysis

  1. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).
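
    A minimal numerical sketch of the likelihood ratio estimator the abstract relies on, reduced to the simplest possible stochastic "simulation": a single exponentially distributed event time with rate constant k. The observable and all values are illustrative stand-ins, not the paper's water-gas shift chemistry.

    ```python
    import numpy as np

    # Likelihood-ratio (LR) sensitivity: dE[f]/dk = E[f(T) * dlog p(T;k)/dk].
    # For T ~ Exp(k), log p = log k - k*T, so the score is 1/k - T.
    rng = np.random.default_rng(0)
    k = 2.0                    # rate constant (the uncertain parameter)
    n = 200_000                # number of sampled trajectories

    t = rng.exponential(1.0 / k, size=n)   # event waiting times
    f = t                                  # toy observable; E[f] = 1/k
    score = 1.0 / k - t                    # dlog p(T;k)/dk for the exponential

    print("LR estimate:", np.mean(f * score))   # should approach -1/k**2
    print("analytic   :", -1.0 / k**2)
    ```

    Because the same trajectories yield both E[f] and its derivative, no perturbed reruns are needed, which is the speedup over finite-difference sensitivity analysis that the abstract mentions.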

  2. Sensitivity analysis of water consumption in an office building

    NASA Astrophysics Data System (ADS)

    Suchacek, Tomas; Tuhovcak, Ladislav; Rucka, Jan

    2018-02-01

    This article deals with a sensitivity analysis of real water consumption in an office building. During a long-term field study, reductions of the pressure in the building's water connection were simulated. A sensitivity analysis of uneven water demand was conducted during working time at various provided pressures and at various time-step durations. Correlations between the maximal coefficients of water demand variation during working time and the provided pressure were suggested. The influence of the provided pressure in the water connection on the mean coefficients of water demand variation was pointed out, both for the working hours of all days together and separately for days with identical working hours.

  3. Sensitive and inexpensive digital DNA analysis by microfluidic enrichment of rolling circle amplified single-molecules

    PubMed Central

    Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A.M.

    2017-01-01

    Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, e.g. droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. PMID:28077562

  4. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations, over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed them to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations, and more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
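
    A minimal sketch of the rank correlation step described above: Spearman correlations between sampled parameter values and model predictions flag the dominant inputs. The three-parameter toy model is an illustrative stand-in for PCHEPM, not the actual transport and fate code.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n = 2_000
    params = rng.lognormal(0.0, 0.5, (n, 3))   # e.g., partition/settling/decay rates
    # Toy prediction: strongly driven by parameter 0, weakly by parameter 2.
    pcb = 2.0 * params[:, 0] + np.sqrt(params[:, 1]) + 0.01 * params[:, 2]

    for j in range(params.shape[1]):
        rho, _ = spearmanr(params[:, j], pcb)
        print(f"parameter {j}: Spearman rho = {rho:+.3f}")
    ```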

  5. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
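
    Once first-order sensitivities are in hand (however computed), the uncertainty analysis is a "sandwich rule" evaluation. Below is a minimal sketch with three uncertain densities, two of them correlated; the sensitivity vector and covariance entries are illustrative assumptions, not MCNP6 or SCALE output.

    ```python
    import numpy as np

    s = np.array([0.35, -0.12, 0.05])      # relative sensitivities (dk/k)/(dx/x)

    rel_sd = np.array([0.01, 0.02, 0.05])  # 1%, 2%, 5% relative std. deviations
    corr = np.array([[1.0, 0.8, 0.0],      # first two densities correlated
                     [0.8, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
    C = np.outer(rel_sd, rel_sd) * corr    # relative covariance matrix

    var_k = s @ C @ s                      # first-order ("sandwich") propagation
    print("relative uncertainty in k:", np.sqrt(var_k))
    ```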

  6. Sensitive and inexpensive digital DNA analysis by microfluidic enrichment of rolling circle amplified single-molecules.

    PubMed

    Kühnemund, Malte; Hernández-Neuta, Iván; Sharif, Mohd Istiaq; Cornaglia, Matteo; Gijs, Martin A M; Nilsson, Mats

    2017-05-05

    Single molecule quantification assays provide the ultimate sensitivity and precision for molecular analysis. However, most digital analysis techniques, e.g. droplet PCR, require sophisticated and expensive instrumentation for molecule compartmentalization, amplification and analysis. Rolling circle amplification (RCA) provides a simpler means for digital analysis. Nevertheless, the sensitivity of RCA assays has until now been limited by inefficient detection methods. We have developed a simple microfluidic strategy for enrichment of RCA products into a single field of view of a low magnification fluorescent sensor, enabling ultra-sensitive digital quantification of nucleic acids over a dynamic range from 1.2 aM to 190 fM. We prove the broad applicability of our analysis platform by demonstrating 5-plex detection of as little as ∼1 pg (∼300 genome copies) of pathogenic DNA with simultaneous antibiotic resistance marker detection, and the analysis of rare oncogene mutations. Our method is simpler, more cost-effective and faster than other digital analysis techniques and provides the means to implement digital analysis in any laboratory equipped with a standard fluorescent microscope. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Grid sensitivity for aerodynamic optimization and flow analysis

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1993-01-01

    After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors into the sensitivity module, thereby contaminating the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.

  8. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
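
    A minimal sketch, with synthetic patient-level data, of the approach described: bootstrap resampling supplies the input distributions for a Monte Carlo probabilistic sensitivity analysis, here summarized as the probability that one treatment has positive incremental net monetary benefit. All names and values are illustrative, not the published H. pylori model.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_pat = 120
    cost_a  = rng.gamma(4.0, 250.0, n_pat)      # observed costs, treatment A
    cost_b  = rng.gamma(4.0, 300.0, n_pat)      # observed costs, treatment B
    cured_a = rng.random(n_pat) < 0.85          # eradication outcomes, A
    cured_b = rng.random(n_pat) < 0.75          # eradication outcomes, B

    wtp = 5_000.0                               # willingness to pay per eradication
    n_boot = 5_000
    net_benefit = np.empty(n_boot)
    for i in range(n_boot):
        ia = rng.integers(0, n_pat, n_pat)      # bootstrap resample indices
        ib = rng.integers(0, n_pat, n_pat)
        d_cost   = cost_a[ia].mean() - cost_b[ib].mean()
        d_effect = cured_a[ia].mean() - cured_b[ib].mean()
        net_benefit[i] = wtp * d_effect - d_cost   # incremental net monetary benefit

    print("P(A cost-effective at WTP):", np.mean(net_benefit > 0))
    ```

    Resampling the observed data sidesteps the choice of theoretical input distributions, which is the difficulty with standard probabilistic sensitivity analysis that the abstract highlights.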

  9. Atmospheric correction of ocean color sensors: analysis of the effects of residual instrument polarization sensitivity.

    PubMed

    Gordon, H R; Du, T; Zhang, T

    1997-09-20

    We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity, namely replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere, is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.

  10. Lyapunov exponents, covariant vectors and shadowing sensitivity analysis of 3D wakes: from laminar to chaotic regimes

    NASA Astrophysics Data System (ADS)

    Wang, Qiqi; Rigas, Georgios; Esclapez, Lucas; Magri, Luca; Blonigan, Patrick

    2016-11-01

    Bluff body flows are of fundamental importance to many engineering applications involving massive flow separation and in particular the transport industry. Coherent flow structures emanating in the wake of three-dimensional bluff bodies, such as cars, trucks and lorries, are directly linked to increased aerodynamic drag, noise and structural fatigue. For low Reynolds laminar and transitional regimes, hydrodynamic stability theory has aided the understanding and prediction of the unstable dynamics. In the same framework, sensitivity analysis provides the means for efficient and optimal control, provided the unstable modes can be accurately predicted. However, these methodologies are limited to laminar regimes where only a few unstable modes manifest. Here we extend the stability analysis to low-dimensional chaotic regimes by computing the Lyapunov covariant vectors and their associated Lyapunov exponents. We compare them to eigenvectors and eigenvalues computed in traditional hydrodynamic stability analysis. Computing Lyapunov covariant vectors and Lyapunov exponents also enables the extension of sensitivity analysis to chaotic flows via the shadowing method. We compare the computed shadowing sensitivities to traditional sensitivity analysis. These Lyapunov based methodologies do not rely on mean flow assumptions, and are mathematically rigorous for calculating sensitivities of fully unsteady flow simulations.
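
    A minimal sketch of computing the largest Lyapunov exponent by repeated renormalization of a perturbed trajectory (Benettin's method), using the Lorenz system as a low-dimensional stand-in for a chaotic wake; the covariant-vector and shadowing machinery in the abstract builds on exponent calculations of this kind.

    ```python
    import numpy as np

    def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def step(x, dt):                        # 4th-order Runge-Kutta step
        k1 = lorenz(x); k2 = lorenz(x + 0.5 * dt * k1)
        k3 = lorenz(x + 0.5 * dt * k2); k4 = lorenz(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    dt, n_steps, d0 = 0.01, 50_000, 1e-8
    x  = np.array([1.0, 1.0, 1.0])
    xp = x + np.array([d0, 0.0, 0.0])       # perturbed copy
    log_sum = 0.0
    for _ in range(n_steps):
        x, xp = step(x, dt), step(xp, dt)
        d = np.linalg.norm(xp - x)
        log_sum += np.log(d / d0)
        xp = x + (xp - x) * (d0 / d)        # renormalize the separation
    print("largest Lyapunov exponent ~", log_sum / (n_steps * dt))  # ~0.9 for Lorenz
    ```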

  11. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS), for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.

  12. Cost-effectiveness of training rural providers to identify and treat patients at risk for fragility fractures.

    PubMed

    Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J

    2014-12-01

    This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost savings, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, being cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient-years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared to an untrained provider. Results remained consistent in one-way sensitivity analysis; in probabilistic sensitivity analyses, training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness. It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.
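
    A heavily simplified sketch of the probabilistic sensitivity analysis step: uncertain inputs are drawn from assumed distributions, and the fraction of draws with positive net monetary benefit at a willingness-to-pay threshold is reported. This one-step calculation stands in for the full Markov microsimulation; every parameter value below is an illustrative placeholder, not a published model input.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n_sim, wtp = 10_000, 50_000.0

    # One draw of every uncertain input per PSA iteration (hypothetical priors).
    p_fx_usual   = rng.beta(20, 80, n_sim)        # fracture probability, usual care
    rel_risk     = rng.beta(60, 40, n_sim)        # relative risk with trained provider
    cost_train   = rng.gamma(100, 25.0, n_sim)    # training cost per patient
    cost_fx      = rng.gamma(100, 300.0, n_sim)   # cost per fracture
    qaly_loss_fx = rng.beta(20, 80, n_sim)        # QALYs lost per fracture

    p_fx_trained = p_fx_usual * rel_risk
    d_cost = cost_train - (p_fx_usual - p_fx_trained) * cost_fx
    d_qaly = (p_fx_usual - p_fx_trained) * qaly_loss_fx
    net_benefit = wtp * d_qaly - d_cost           # incremental net monetary benefit

    print("P(training cost-effective at $50k/QALY):", np.mean(net_benefit > 0))
    ```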

  13. DGSA: A Matlab toolbox for distance-based generalized sensitivity analysis of geoscientific computer experiments

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Yang, Guang; Satija, Addy; Scheidt, Céline; Caers, Jef

    2016-12-01

    Sensitivity analysis plays an important role in geoscientific computer experiments, whether for forecasting, data assimilation or model calibration. In this paper we focus on an extension of a method of regionalized sensitivity analysis (RSA) to applications typical in the Earth Sciences. Such applications involve the building of large complex spatial models, the application of computationally extensive forward modeling codes and the integration of heterogeneous sources of model uncertainty. The aim of this paper is to be practical: 1) provide a Matlab code, 2) provide novel visualization methods to aid users in better understanding the sensitivities, 3) provide a method based on kernel principal component analysis (KPCA) and self-organizing maps (SOM) to account for spatial uncertainty typical in Earth Science applications, and 4) provide an illustration on a real field case where the above-mentioned complexities present themselves. We present methods that extend the original RSA method in several ways. First we present the calculation of conditional effects, defined as the sensitivity of a parameter given a level of another parameter. Second, we show how this conditional effect can be used to choose nominal values or ranges to fix insensitive parameters, aiming to minimally affect uncertainty in the response. Third, we develop a method based on KPCA and SOM to assign a rank to spatial models in order to calculate the sensitivity of spatial variability in the models. A large oil/gas reservoir case is used as illustration of these ideas.
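
    A minimal sketch of the distance-based idea behind this family of methods: classify model responses into clusters, then call a parameter sensitive if its distribution within a cluster departs from its distribution over the full sample. Quantile binning of a scalar response stands in for clustering of full spatial responses, and the toy model is illustrative, not a reservoir simulator.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 5_000
    params = rng.random((n, 3))                 # x0, x1, x2 in [0, 1]
    y = 5.0 * params[:, 0] + np.sin(6 * params[:, 1]) + 0.1 * rng.standard_normal(n)

    classes = np.digitize(y, np.quantile(y, [1 / 3, 2 / 3]))   # 3 response clusters

    def cdf_distance(x_all, x_cls, grid):
        """L1 distance between empirical CDFs, a proxy for the DGSA measure."""
        F_all = np.searchsorted(np.sort(x_all), grid, side="right") / len(x_all)
        F_cls = np.searchsorted(np.sort(x_cls), grid, side="right") / len(x_cls)
        return np.mean(np.abs(F_all - F_cls))

    grid = np.linspace(0, 1, 101)
    for j in range(params.shape[1]):
        d = max(cdf_distance(params[:, j], params[classes == c, j], grid)
                for c in range(3))
        print(f"x{j}: max CDF distance = {d:.3f}")   # x0 should rank highest
    ```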

  14. A Bayesian Network Based Global Sensitivity Analysis Method for Identifying Dominant Processes in a Multi-physics Model

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2016-12-01

    Sensitivity analysis has been an important tool in groundwater modeling for identifying influential parameters. Among the various sensitivity analysis methods, variance-based global sensitivity analysis has gained popularity for its model independence and its capability of providing accurate sensitivity measurements. However, the conventional variance-based method only considers the uncertainty contribution of single model parameters. In this research, we extended the variance-based method to consider more uncertainty sources and developed a new framework that allows flexible combinations of different uncertainty components. We decompose the uncertainty sources into a hierarchical three-layer structure: scenario, model, and parametric. Furthermore, each layer of uncertainty sources can contain multiple components. An uncertainty and sensitivity analysis framework was then constructed following this three-layer structure using a Bayesian network. Different uncertainty components are represented as uncertain nodes in this network. Through the framework, variance-based sensitivity analysis can be implemented with great flexibility in how uncertainty components are grouped. Variance-based sensitivity analysis is thus extended to investigate the importance of a wider range of uncertainty sources: scenario, model, and other combinations of uncertainty components that can represent key model system processes (e.g., the groundwater recharge process or the flow and reactive transport process). For test and demonstration purposes, the developed methodology was applied to a test case of real-world groundwater reactive transport modeling with various uncertainty sources. The results demonstrate that the new sensitivity analysis method is able to estimate accurate importance measurements for any uncertainty sources formed by different combinations of uncertainty components. The new methodology can provide useful information for environmental management, helping decision-makers formulate policies and strategies.
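
    A minimal sketch of separating a scenario layer from a parametric layer with the law of total variance, Var(Y) = Var_s(E[Y|s]) + E_s(Var(Y|s)), which is the kind of decomposition the hierarchical framework generalizes. The three-scenario toy model is illustrative, not the reactive transport test case.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    scenarios = [0.5, 1.0, 2.0]        # e.g., three recharge scenarios
    n_per = 20_000

    cond_mean, cond_var, all_y = [], [], []
    for s in scenarios:
        theta = rng.lognormal(0.0, 0.3, n_per)   # parametric uncertainty
        y = s * theta                            # toy model response
        cond_mean.append(y.mean()); cond_var.append(y.var()); all_y.append(y)

    var_scenario  = np.var(cond_mean)            # between-scenario variance
    var_parameter = np.mean(cond_var)            # mean within-scenario variance
    total = np.concatenate(all_y).var()          # ~ var_scenario + var_parameter
    print("scenario share :", var_scenario / total)
    print("parameter share:", var_parameter / total)
    ```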

  15. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different response values of winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacture of winding products.

  16. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performance of the winding products. In this article, two different response values of winding products, a mechanical performance measure (tensile strength) and a physical property (void content), were calculated. Thereafter, the paper presents an integrated methodology that combines multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for the manufacture of winding products. PMID:29385048

  17. Ageing of Insensitive DNAN Based Melt-Cast Explosives

    DTIC Science & Technology

    2014-08-01

    ...diurnal cycle (representative of the MEAO climate). Analysis of the ingredient composition, sensitiveness, mechanical and thermal properties was... The first test condition was chosen to provide a worst-case scenario. Analysis of the ingredient composition, theoretical maximum density, sensitiveness... [Report sections include: 4.1.1 ARX-4027 Ingredient Analysis; 4.1.2 ARX-4028 Ingredient Analysis.]

  18. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background: It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results: This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and an intuitive graphical user interface. Conclusion: SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080

  19. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and an intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.
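
    A minimal sketch of one of the listed global SA measures, the partial rank correlation coefficient: rank-transform inputs and output, regress out the other inputs from both the input of interest and the output, and correlate the residuals. The test function is illustrative, not an SBML model.

    ```python
    import numpy as np
    from scipy.stats import rankdata, pearsonr

    rng = np.random.default_rng(6)
    n = 2_000
    X = rng.random((n, 3))
    y = 4.0 * X[:, 0] - 2.0 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(n)

    R = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])
    ry = rankdata(y)

    def residual(target, others):
        """Residual of a least-squares fit of `target` on `others` (+ intercept)."""
        A = np.column_stack([others, np.ones(len(target))])
        coef, *_ = np.linalg.lstsq(A, target, rcond=None)
        return target - A @ coef

    for j in range(R.shape[1]):
        others = np.delete(R, j, axis=1)
        prcc, _ = pearsonr(residual(R[:, j], others), residual(ry, others))
        print(f"PRCC(x{j}) = {prcc:+.3f}")   # |PRCC| large for x0, x1; ~0 for x2
    ```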

  20. High-sensitivity ESCA instrument

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davies, R.D.; Herglotz, H.K.; Lee, J.D.

    1973-01-01

    A new electron spectroscopy for chemical analysis (ESCA) instrument has been developed to provide high sensitivity and efficient operation for laboratory analysis of composition and chemical bonding in very thin surface layers of solid samples. High sensitivity is achieved by means of the high-intensity, efficient x-ray source described by Davies and Herglotz at the 1968 Denver X-Ray Conference, in combination with the new electron energy analyzer described by Lee at the 1972 Pittsburgh Conference on Analytical Chemistry and Applied Spectroscopy. A sample chamber designed for rapid introduction and replacement of samples has adequate facilities for various sample treatments and conditioning followed immediately by ESCA analysis of the sample. Examples of application are presented, demonstrating the sensitivity and resolution achievable with this instrument. Its usefulness in trace surface analysis is shown, and some "chemical shifts" measured by the instrument are compared with those obtained by x-ray spectroscopy.

  21. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique computes economical response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.
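
    A minimal sketch of the reanalysis idea: factor the unperturbed matrix A once, then solve the univariately perturbed system (A + dA)x = b by the fixed-point iteration x <- A^{-1}(b - dA x), avoiding a refactorization for each design perturbation. Random matrices stand in for BEA coefficient matrices.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    rng = np.random.default_rng(7)
    n = 50
    A  = np.eye(n) * 10.0 + rng.standard_normal((n, n))   # well-conditioned base
    b  = rng.standard_normal(n)
    dA = 1e-3 * rng.standard_normal((n, n))               # small design perturbation

    lu = lu_factor(A)                 # factor once, reuse many times
    x0 = lu_solve(lu, b)              # baseline response

    x = x0.copy()
    for _ in range(20):               # iterative reanalysis without refactoring
        x = lu_solve(lu, b - dA @ x)

    # The converged x feeds the finite-difference (UPFD) sensitivity step.
    print("perturbed-system residual:", np.linalg.norm((A + dA) @ x - b))
    print("response perturbation    :", np.linalg.norm(x - x0))
    ```

    The iteration converges whenever the spectral radius of A^{-1} dA is below one, which holds for the small univariate perturbations the UPFD step uses.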

  22. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
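
    A minimal sketch of the complex-variable (complex-step) derivative used for the DYMORE structural sensitivities: evaluating an analytic function at x + ih gives f'(x) ~ Im f(x + ih)/h with no subtractive cancellation, so h can be made tiny. The test function is a standard illustration, not a structural response.

    ```python
    import numpy as np

    def f(x):                      # any analytic response function
        return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

    x, h = 1.5, 1e-30
    deriv_cs = np.imag(f(x + 1j * h)) / h           # complex-step derivative
    deriv_fd = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6   # central difference, for comparison
    print("complex step    :", deriv_cs)
    print("finite difference:", deriv_fd)
    ```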

  23. Variogram Analysis of Response Surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.

  24. Integrating model behavior, optimization, and sensitivity/uncertainty analysis: overview and application of the MOUSE software toolbox

    USDA-ARS?s Scientific Manuscript database

    This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...

  25. Experiences on p-Version Time-Discontinuous Galerkin's Method for Nonlinear Heat Transfer Analysis and Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2004-01-01

    The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by a p-version, time-discontinuous finite element approximation. The resulting matrix equation of the state equation is simply of the form A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resulting finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
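
    A minimal one-unknown sketch of direct differentiation for a state equation of the form A(x)x = c: after the Newton-Raphson solve, the same Jacobian dR/dx is reused to obtain dx/dp from (dR/dx)(dx/dp) = -dR/dp. The temperature-dependent "conductivity" law below is an illustrative assumption, not the report's model.

    ```python
    # State equation: R(T, q) = (k0 + k1*T) * T - q = 0, with heat load q as
    # the design parameter. dR/dT = k0 + 2*k1*T is the Newton Jacobian.
    k0, k1, q = 2.0, 0.05, 30.0

    def solve(qv, T=1.0):
        for _ in range(30):                      # Newton-Raphson on R(T) = 0
            T -= ((k0 + k1 * T) * T - qv) / (k0 + 2.0 * k1 * T)
        return T

    T = solve(q)
    dT_dq = 1.0 / (k0 + 2.0 * k1 * T)            # (dR/dT) dT/dq = -dR/dq = 1
    dT_fd = (solve(q + 1e-6) - solve(q)) / 1e-6  # finite-difference check
    print("direct differentiation:", dT_dq)
    print("finite difference     :", dT_fd)
    ```

    This mirrors the "minimal coding effort" point: the sensitivity solve reuses the Jacobian the analysis code already assembles.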

  26. Design Optimization Method for Composite Components Based on Moment Reliability-Sensitivity Criteria

    NASA Astrophysics Data System (ADS)

    Sun, Zhigang; Wang, Changxi; Niu, Xuming; Song, Yingdong

    2017-08-01

    In this paper, a Reliability-Sensitivity Based Design Optimization (RSBDO) methodology for the design of ceramic matrix composite (CMC) components has been proposed. A practical and efficient method for reliability analysis and sensitivity analysis of complex components with arbitrary distribution parameters is investigated by using the perturbation method, the response surface method, the Edgeworth series and the sensitivity analysis approach. The RSBDO methodology is then established by incorporating the sensitivity calculation model into the RBDO methodology. Finally, the proposed RSBDO methodology is applied to the design of CMC components. By comparing with Monte Carlo simulation, the numerical results demonstrate that the proposed methodology provides an accurate, convergent and computationally efficient method for reliability analysis in finite-element-based engineering practice.

  27. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
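
    A minimal sketch of the MPP search at the heart of the method: minimize ||u|| in standard normal space subject to the limit-state constraint g(u) = 0; the minimum distance beta gives the first-order probability estimate Phi(-beta). The linear limit state is chosen so the exact answer is known; it is illustrative only.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def g(u):                      # failure when g(u) <= 0
        return 6.0 - (u[0] + 2.0 * u[1])

    # MPP search: minimize squared distance to the origin on the surface g = 0.
    res = minimize(lambda u: np.dot(u, u), x0=[1.0, 1.0],
                   constraints={"type": "eq", "fun": g})
    u_mpp = res.x
    beta = np.linalg.norm(u_mpp)                # reliability index
    print("beta:", beta, "(exact:", 6.0 / np.sqrt(5.0), ")")
    print("Pf  :", norm.cdf(-beta))             # FORM probability of failure
    # Derivatives of beta w.r.t. limit-state parameters at the MPP yield the
    # reliability sensitivities the abstract derives.
    ```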

  28. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE PAGES

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    2016-11-08

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.
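
    A minimal sketch of local sensitivity analysis for an accumulation time course: finite-difference derivatives of a simulated trajectory with respect to import and export rates. The one-compartment ODE dA/dt = k_in - k_out*A is a generic stand-in, not the published model.

    ```python
    import numpy as np

    def simulate(k_in, k_out, t_end=10.0, dt=0.01):
        a, out = 0.0, []
        for _ in range(int(t_end / dt)):
            a += dt * (k_in - k_out * a)       # forward-Euler integration
            out.append(a)
        return np.array(out)

    base = dict(k_in=1.0, k_out=0.5)
    y0 = simulate(**base)
    for name, val in base.items():
        pert = dict(base); pert[name] = val * 1.01            # +1% perturbation
        sens = (simulate(**pert) - y0) / (0.01 * val)         # dA/dp over time
        print(f"{name}: max |dA/d{name}| over the time course = "
              f"{np.abs(sens).max():.3f}")
    ```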

  29. Dynamical Model of Drug Accumulation in Bacteria: Sensitivity Analysis and Experimentally Testable Predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vesselinova, Neda; Alexandrov, Boian; Wall, Michael E.

    We present a dynamical model of drug accumulation in bacteria. The model captures key features in experimental time courses on ofloxacin accumulation: initial uptake; two-phase response; and long-term acclimation. In combination with experimental data, the model provides estimates of import and export rates in each phase, the time of entry into the second phase, and the decrease of internal drug during acclimation. Global sensitivity analysis, local sensitivity analysis, and Bayesian sensitivity analysis of the model provide information about the robustness of these estimates, and about the relative importance of different parameters in determining the features of the accumulation time courses in three different bacterial species: Escherichia coli, Staphylococcus aureus, and Pseudomonas aeruginosa. The results lead to experimentally testable predictions of the effects of membrane permeability, drug efflux and trapping (e.g., by DNA binding) on drug accumulation. A key prediction is that a sudden increase in ofloxacin accumulation in both E. coli and S. aureus is accompanied by a decrease in membrane permeability.

  30. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, which is especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
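
    A minimal sketch of the Morris method mentioned above: elementary effects from one-at-a-time steps at random base points, averaged in absolute value (mu*) to screen influential factors. The toy output function is illustrative, not a transmission model.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    k, r, delta = 3, 50, 0.1                  # factors, trajectories, step size

    def model(x):                             # stand-in for an epidemic model output
        return x[0] * np.exp(2.0 * x[1]) + 0.01 * x[2]

    mu_star = np.zeros(k)
    for _ in range(r):
        x = rng.random(k) * (1 - delta)       # base point, keeping x + delta in [0, 1]
        y0 = model(x)
        for i in range(k):
            xi = x.copy(); xi[i] += delta     # perturb one factor at a time
            mu_star[i] += abs(model(xi) - y0) / delta
    mu_star /= r
    print("mu* per factor:", mu_star)         # large for x1, moderate x0, tiny x2
    ```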

  31. Netlist Oriented Sensitivity Evaluation (NOSE)

    DTIC Science & Technology

    2017-03-01

    ...developing methodologies to assess sensitivities of alternative chip design netlist implementations. The research is somewhat foundational in that such... ...Netlist-Oriented Sensitivity Evaluation (NOSE) project was to develop methodologies to assess sensitivities of alternative chip design netlist... ...analysis to devise a methodology for scoring the sensitivity of circuit nodes in a netlist and thus providing the raw data for any meaningful...

  32. Analysis and design of optical systems by use of sensitivity analysis of skew ray tracing

    NASA Astrophysics Data System (ADS)

    Lin, Psang Dain; Lu, Chia-Hung

    2004-02-01

    Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by the application of a cat's eye retroreflector to the design and by the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.

  33. Analysis and Design of Optical Systems by Use of Sensitivity Analysis of Skew Ray Tracing

    NASA Astrophysics Data System (ADS)

    Lin, Psang Dain; Lu, Chia-Hung

    2004-02-01

    Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by the application of a cat's eye retroreflector to the design and by the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.

  34. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, the parameter ranges, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential, and 2) how coherently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
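
    A minimal sketch of estimating a first-order Sobol' index with a pick-freeze (Saltelli-type) estimator, the kind of computation the Sobol' side of such a comparison requires; FAST reaches the same indices through frequency analysis instead. The toy function stands in for SAC-SMA.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n, k = 100_000, 3
    A, B = rng.random((n, k)), rng.random((n, k))   # two independent sample blocks

    def model(X):
        return np.sin(2 * np.pi * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * X[:, 2]

    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    for i in range(k):
        Ci = B.copy(); Ci[:, i] = A[:, i]           # freeze column i from block A
        Si = np.mean(yA * (model(Ci) - yB)) / var_y  # first-order index estimate
        print(f"S{i} ~ {Si:.3f}")
    ```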

  35. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing data mechanism; we call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. We propose a novel approach in this paper that investigates the plausibility of each missing data mechanism model assumption by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, the analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
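
    A minimal sketch of the central device as described in the abstract: score candidate sensitivity-parameter values by a non-parametric K-nearest-neighbour distance between data simulated under each assumption and the observed data. The generator below is a hypothetical illustration, not an MNAR model from the paper.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(10)
    y_obs = rng.normal(1.0, 1.0, 300).reshape(-1, 1)    # "observed" data

    def simulate(delta, n=300):
        """Toy generator whose location depends on the sensitivity parameter
        delta (hypothetical, for illustration only)."""
        return rng.normal(delta, 1.0, n).reshape(-1, 1)

    for delta in [0.0, 0.5, 1.0, 1.5, 2.0]:
        sims = simulate(delta)
        d, _ = cKDTree(sims).query(y_obs, k=5)          # 5-NN distances
        print(f"delta={delta:.1f}  mean 5-NN distance={d.mean():.3f}")
    # The smallest distance flags the most plausible delta (~1.0 here);
    # implausible values would be rejected rather than carried through the
    # full sensitivity analysis.
    ```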

  36. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. Accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high-speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  37. Polarization sensitive spectroscopic optical coherence tomography for multimodal imaging

    NASA Astrophysics Data System (ADS)

    Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina; Trojanowski, Michał

    2015-03-01

    Optical coherence tomography (OCT) is a non-invasive method for 3D and cross-sectional imaging of biological and non-biological objects. OCT measurements are non-contact and completely safe for the tested sample. Nowadays, OCT is widely applied in medical diagnosis, especially in ophthalmology, as well as in dermatology, oncology and many more fields. Despite great progress in OCT measurements, a vast number of issues, like tissue recognition or imaging contrast enhancement, have not been solved yet. Here we present a polarization sensitive spectroscopic OCT system (PS-SOCT). The PS-SOCT combines polarization sensitive analysis with time-frequency analysis. Unlike standard polarization sensitive OCT, the PS-SOCT delivers spectral information about the measured quantities, e.g. changes in the tested object's birefringence over the light spectrum. This solution overcomes the limits of the polarization sensitive analysis applied in standard PS-OCT. Based on spectral data obtained from the PS-SOCT, the exact value of birefringence can be calculated even for objects that exhibit higher orders of retardation. In this contribution the benefits of combining time-frequency and polarization sensitive analysis are demonstrated, and the PS-SOCT system features, as well as OCT measurement examples, are presented.

  38. Thermodynamic modeling of transcription: sensitivity analysis differentiates biological mechanism from mathematical model-induced effects.

    PubMed

    Dresch, Jacqueline M; Liu, Xiaozhou; Arnosti, David N; Ay, Ahmet

    2010-10-24

    Quantitative models of gene expression generate parameter values that can shed light on biological features such as transcription factor activity, cooperativity, and local effects of repressors. An important element in such investigations is sensitivity analysis, which determines how strongly a model's output reacts to variations in parameter values. Parameters of low sensitivity may not be accurately estimated, leading to unwarranted conclusions. Low sensitivity may reflect the nature of the biological data, or it may be a result of the model structure. Here, we focus on the analysis of thermodynamic models, which have been used extensively to analyze gene transcription. Extracted parameter values have been interpreted biologically, but until now little attention has been given to parameter sensitivity in this context. We apply local and global sensitivity analyses to two recent transcriptional models to determine the sensitivity of individual parameters. We show that in one case, values for repressor efficiencies are very sensitive, while values for protein cooperativities are not, and provide insights into why these differential sensitivities stem from both biological effects and the structure of the applied models. In a second case, we demonstrate that parameters that were thought to prove the system's dependence on activator-activator cooperativity are relatively insensitive. We show that there are numerous parameter sets that do not satisfy the relationships proffered as the optimal solutions, indicating that structural differences between the two types of transcriptional enhancers analyzed may not be as simple as altered activator cooperativity. Our results emphasize the need for sensitivity analysis to examine model construction and forms of biological data used for modeling transcriptional processes, in order to determine the significance of estimated parameter values for thermodynamic models. Knowledge of parameter sensitivities can provide the necessary context to determine how modeling results should be interpreted in biological systems.

  39. Revisiting inconsistency in large pharmacogenomic studies

    PubMed Central

    Safikhani, Zhaleh; Smirnov, Petr; Freeman, Mark; El-Hachem, Nehme; She, Adrian; Quevedo, Rene; Goldenberg, Anna; Birkbak, Nicolai J.; Hatzis, Christos; Shi, Leming; Beck, Andrew H.; Aerts, Hugo J.W.L.; Quackenbush, John; Haibe-Kains, Benjamin

    2017-01-01

    In 2013, we published a comparative analysis of mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. We present a new analysis using these expanded data, where we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should be compared across cell lines, and that the software analysis tools provided should have been easier to run, particularly as the GDSC and CCLE released additional data. Our re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. Using new statistics to assess data consistency allowed identification of two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs. Overall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens. PMID:28928933

  40. Constructing a High-Sensitivity, Computer-Interfaced, Differential Thermal Analysis Device for Teaching and Research

    ERIC Educational Resources Information Center

    Martinez, L. M.; Videa, M.; Mederos, F.; Mesquita, J.

    2007-01-01

    The construction of a new, highly sensitive, computer-interfaced differential thermal analysis (DTA) device for gathering information about chemical reactions is described. The instrument helps students gain a better understanding of phase transitions, phase diagrams, and related concepts.

  1. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 1. Theory

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Computer simulation models are continually growing in complexity with increasingly more factors to be identified. Sensitivity Analysis (SA) provides an essential means for understanding the role and importance of these factors in producing model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to "variogram analysis," that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. Synthetic functions that resemble actual model response surfaces are used to illustrate the concepts, and show VARS to be as much as two orders of magnitude more computationally efficient than the state-of-the-art Sobol approach. In a companion paper, we propose a practical implementation strategy, and demonstrate the effectiveness, efficiency, and reliability (robustness) of the VARS framework on real-data case studies.

  2. Design component method for sensitivity analysis of built-up structures

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.; Seong, Hwai G.

    1986-01-01

    A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.

  3. Improving the sensitivity and accuracy of gamma activation analysis for the rapid determination of gold in mineral ores.

    PubMed

    Tickner, James; Ganly, Brianna; Lovric, Bojan; O'Dwyer, Joel

    2017-04-01

    Mining companies rely on chemical analysis methods to determine concentrations of gold in mineral ore samples. As gold is often mined commercially at concentrations around 1 part-per-million, it is necessary for any analysis method to provide good sensitivity as well as high absolute accuracy. We describe work to improve both the sensitivity and accuracy of the gamma activation analysis (GAA) method for gold. We present analysis results for several suites of ore samples and discuss the design of a GAA facility intended to replace conventional chemical assay in industrial applications.

  4. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
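
    To make the variance-based approach concrete, the following is a minimal numpy sketch of Sobol first-order and total-effect indices using the Saltelli and Jansen estimators; the Ishigami test function, sample size, and seed are illustrative stand-ins, not the SHEDS testbed itself.

        import numpy as np

        rng = np.random.default_rng(0)

        def ishigami(x, a=7.0, b=0.1):
            # Standard SA test function with known analytic Sobol indices.
            return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
                    + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

        n, d = 100_000, 3
        A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
        B = rng.uniform(-np.pi, np.pi, (n, d))
        fA, fB = ishigami(A), ishigami(B)
        var = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # swap one column to isolate input i
            fABi = ishigami(ABi)
            S_i = np.mean(fB * (fABi - fA)) / var          # first-order (Saltelli 2010)
            ST_i = 0.5 * np.mean((fA - fABi) ** 2) / var   # total effect (Jansen 1999)
            print(f"x{i + 1}: main effect {S_i:.2f}, total effect {ST_i:.2f}")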

  5. Factor structure and construct validity of the Anxiety Sensitivity Index among island Puerto Ricans.

    PubMed

    Cintrón, Jennifer A; Carter, Michele M; Suchday, Sonia; Sbrocco, Tracy; Gray, James

    2005-01-01

    The factor structure and convergent and discriminant validity of the Anxiety Sensitivity Index (ASI) were examined among a sample of 275 island Puerto Ricans. Results from a confirmatory factor analysis (CFA) comparing our data to factor solutions commonly reported as representative of European American and Spanish populations indicated a poor fit. A subsequent exploratory factor analysis (EFA) indicated that a two-factor solution (Factor 1, Anxiety Sensitivity; Factor 2, Emotional Concerns) provided the best fit. Correlations between the ASI and anxiety measures were moderately high, providing evidence of convergent validity, while correlations between the ASI and the BDI were significantly lower, providing evidence of discriminant validity. Scores on all measures were positively correlated with acculturation, suggesting that those who ascribe to more traditional Hispanic culture report elevated anxiety.

  6. Requirements for Minimum Sample Size for Sensitivity and Specificity Analysis

    PubMed Central

    Adnan, Tassha Hilda

    2016-01-01

    Sensitivity and specificity analysis is commonly used for screening and diagnostic tests. The main issue researchers face is determining the sample sizes that are sufficient for screening and diagnostic studies. Although formulas for sample size calculation are available, the majority of researchers are not mathematicians or statisticians, so the calculation might not be easy for them. This review paper provides sample size tables for sensitivity and specificity analysis. These tables were derived from the formulation of the sensitivity and specificity test using Power Analysis and Sample Size (PASS) software, based on the desired type I error, power and effect size. The approaches on how to use the tables are also discussed. PMID:27891446
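
    A minimal sketch of the calculation that such tables typically tabulate, following Buderer's formulas; the target sensitivity, specificity, precision, and prevalence below are illustrative values, not figures from the paper.

        from math import ceil

        Z = 1.96  # two-sided 95% confidence

        def n_for_sensitivity(se, precision, prevalence):
            # Diseased subjects needed to estimate sensitivity to +/- precision,
            # scaled up by prevalence to a total sample size (Buderer, 1996).
            n_diseased = Z ** 2 * se * (1 - se) / precision ** 2
            return ceil(n_diseased / prevalence)

        def n_for_specificity(sp, precision, prevalence):
            n_healthy = Z ** 2 * sp * (1 - sp) / precision ** 2
            return ceil(n_healthy / (1 - prevalence))

        # Expecting 90% sensitivity and 85% specificity, +/- 5% precision,
        # with 20% disease prevalence in the study population:
        print(n_for_sensitivity(0.90, 0.05, 0.20))   # -> 692
        print(n_for_specificity(0.85, 0.05, 0.20))   # -> 245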

  7. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  8. Sensitivity Analysis of Fatigue Crack Growth Model for API Steels in Gaseous Hydrogen.

    PubMed

    Amaro, Robert L; Rustagi, Neha; Drexler, Elizabeth S; Slifka, Andrew J

    2014-01-01

    A model to predict fatigue crack growth of API pipeline steels in high pressure gaseous hydrogen has been developed and is presented elsewhere. The model currently has several parameters that must be calibrated for each pipeline steel of interest. This work provides a sensitivity analysis of the model parameters in order to give (a) insight into the underlying mathematical and mechanistic aspects of the model, and (b) guidance for model calibration of other API steels.

  9. Quantitative mass spectrometry methods for pharmaceutical analysis

    PubMed Central

    Loos, Glenn; Van Schepdael, Ann

    2016-01-01

    Quantitative pharmaceutical analysis is nowadays frequently executed using mass spectrometry. Electrospray ionization coupled to a (hybrid) triple quadrupole mass spectrometer is generally used in combination with solid-phase extraction and liquid chromatography. Furthermore, isotopically labelled standards are often used to correct for ion suppression. The challenges in producing sensitive but reliable quantitative data depend on the instrumentation, sample preparation and hyphenated techniques. In this contribution, different approaches to enhance the ionization efficiencies using modified source geometries and improved ion guidance are provided. Furthermore, possibilities to minimize, assess and correct for matrix interferences caused by co-eluting substances are described. With the focus on pharmaceuticals in the environment and bioanalysis, different separation techniques, trends in liquid chromatography and sample preparation methods to minimize matrix effects and increase sensitivity are discussed. Although highly sensitive methods are generally sought to provide automated multi-residue analysis, (less sensitive) miniaturized set-ups have great potential due to their suitability for in-field usage. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644982

  10. Sensitivity Analysis in Sequential Decision Models.

    PubMed

    Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet

    2017-02-01

    Sequential decision problems, which are commonly solved using Markov decision processes (MDPs), are frequently encountered in medical decision making. Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that policy for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
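
    As one concrete illustration of the multivariate idea (a sketch of the concept, not the authors' algorithm), the code below samples uncertain parameters of a toy two-action decision problem and estimates the probability that the base case policy remains optimal under joint parameter uncertainty; the payoff models and distributions are invented for the example.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 10_000

        # Two uncertain parameters: response probability under action A,
        # relapse probability under action B (hypothetical distributions).
        p_resp = rng.beta(40, 60, n)       # mean ~0.40
        p_relapse = rng.beta(10, 90, n)    # mean ~0.10

        qaly_A = 2.0 * p_resp              # toy payoff models (QALYs gained)
        qaly_B = 1.0 - 3.0 * p_relapse

        base_case = "A" if qaly_A.mean() > qaly_B.mean() else "B"
        frac_A_optimal = np.mean(qaly_A > qaly_B)
        confidence = frac_A_optimal if base_case == "A" else 1.0 - frac_A_optimal
        print(f"base case policy: {base_case}; probability it is optimal "
              f"under joint uncertainty: {confidence:.2f}")
        # Sweeping a stakeholder's willingness-to-accept threshold over such
        # probabilities traces out a policy acceptability curve.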

  11. Manual versus Automated Rodent Behavioral Assessment: Comparing Efficacy and Ease of Bederson and Garcia Neurological Deficit Scores to an Open Field Video-Tracking System.

    PubMed

    Desland, Fiona A; Afzal, Aqeela; Warraich, Zuha; Mocco, J

    2014-01-01

    Animal models of stroke have been crucial in advancing our understanding of the pathophysiology of cerebral ischemia. Currently, the standards for determining neurological deficit in rodents are the Bederson and Garcia scales, manual assessments scoring animals based on parameters ranked on a narrow scale of severity. Automated open field analysis of a live-video tracking system that analyzes animal behavior may provide a more sensitive test. Results obtained from the manual Bederson and Garcia scales did not show significant differences between pre- and post-stroke animals in a small cohort. When using the same cohort, however, post-stroke data obtained from automated open field analysis showed significant differences in several parameters. Furthermore, large cohort analysis also demonstrated increased sensitivity with automated open field analysis versus the Bederson and Garcia scales. These early data indicate use of automated open field analysis software may provide a more sensitive assessment when compared to traditional Bederson and Garcia scales.

  12. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, promising delivery of much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure is described, including an example analysis of a data set illustrating this effect.

  13. The Volatility of Data Space: Topology Oriented Sensitivity Analysis

    PubMed Central

    Du, Jing; Ligmann-Zielinska, Arika

    2015-01-01

    Despite the difference among specific methods, existing Sensitivity Analysis (SA) technologies are all value-based, that is, the uncertainties in the model input and output are quantified as changes of values. This paradigm provides only limited insight into the nature of models and the modeled systems. In addition to the value of data, potentially richer information about the model lies in the topological difference between pre-model data space and post-model data space. This paper introduces an innovative SA method called Topology Oriented Sensitivity Analysis, which defines sensitivity as the volatility of data space. It extends SA into a deeper level that lies in the topology of data. PMID:26368929

  14. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    DOE PAGES

    Bennett, Katrina Eleanor; Urrego Blanco, Jorge Rolando; Jonko, Alexandra; ...

    2017-11-20

    The Colorado River basin is a fundamentally important river for society, ecology and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. Here, we combine global sensitivity analysis with a space-filling Latin Hypercube sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach.
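
    A minimal sketch of the space-filling sampling step, using scipy's quasi-Monte Carlo module; the three parameter names and ranges below are stand-ins rather than the actual 46 VIC parameters.

        import numpy as np
        from scipy.stats import qmc

        names = ["infilt", "Ds", "soil_depth2"]   # illustrative VIC-like parameters
        lower = [0.001, 0.001, 0.1]
        upper = [0.4, 1.0, 3.0]

        sampler = qmc.LatinHypercube(d=len(names), seed=42)
        unit = sampler.random(n=256)              # space-filling design on [0, 1)^d
        design = qmc.scale(unit, lower, upper)    # rescale to the parameter bounds

        # Each row is one parameter set to run through the model or its emulator.
        print(design[:3].round(3))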

  15. Analysis of the sensitivity properties of a model of vector-borne bubonic plague.

    PubMed

    Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald

    2008-09-06

    Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
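
    For contrast with the adjoint approach, the sketch below obtains the same kind of derivative information by brute-force central differences on a toy logistic ODE (the model, parameter, and quantity of interest are invented for illustration); finite differences cost two extra solves per parameter, which is the expense the adjoint method avoids.

        import numpy as np
        from scipy.integrate import solve_ivp

        def quantity_of_interest(r, K=100.0):
            # Final population of a logistic model dN/dt = r N (1 - N/K), N(0) = 5.
            sol = solve_ivp(lambda t, N: r * N * (1 - N / K), (0.0, 5.0), [5.0],
                            rtol=1e-8, atol=1e-10)
            return sol.y[0, -1]

        r0, h = 0.8, 1e-4
        dQ_dr = (quantity_of_interest(r0 + h) - quantity_of_interest(r0 - h)) / (2 * h)
        print(f"dQ/dr at r = {r0}: {dQ_dr:.3f}")
        # An adjoint (variational) analysis recovers derivatives with respect to
        # all parameters at roughly the cost of one additional solve.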

  16. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
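
    One standard way such derivatives reduce sampling variance is as a control variate: the gradient at the input mean gives a zero-mean linearized fluctuation that is highly correlated with the output. A minimal sketch on an assumed toy response (not the paper's structural examples) follows.

        import numpy as np

        rng = np.random.default_rng(2)

        def f(x):                                    # response of interest
            return np.exp(0.3 * x[:, 0]) + x[:, 1] ** 2

        mu = np.array([0.0, 1.0])
        grad = np.array([0.3, 2.0])                  # analytic df/dx at mu

        X = mu + 0.2 * rng.standard_normal((20_000, 2))
        fX = f(X)
        cv = (X - mu) @ grad                         # zero-mean control variate

        beta = np.cov(fX, cv)[0, 1] / np.var(cv)     # optimal coefficient
        se = lambda v: v.std(ddof=1) / np.sqrt(v.size)
        print(f"plain MC:        {fX.mean():.4f} +/- {se(fX):.4f}")
        adj = fX - beta * cv                         # same mean, smaller variance
        print(f"with derivative: {adj.mean():.4f} +/- {se(adj):.4f}")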

  17. Imaging modalities for characterising focal pancreatic lesions.

    PubMed

    Best, Lawrence Mj; Rawji, Vishal; Pereira, Stephen P; Davidson, Brian R; Gurusamy, Kurinchi Selvan

    2017-04-17

    Increasing numbers of incidental pancreatic lesions are being detected each year. Accurate characterisation of pancreatic lesions into benign, precancerous, and cancer masses is crucial in deciding whether to use treatment or surveillance. Distinguishing benign lesions from precancerous and cancerous lesions can prevent patients from undergoing unnecessary major surgery. Despite the importance of accurately classifying pancreatic lesions, there is no clear algorithm for management of focal pancreatic lesions. To determine and compare the diagnostic accuracy of various imaging modalities in detecting cancerous and precancerous lesions in people with focal pancreatic lesions. We searched the CENTRAL, MEDLINE, Embase, and Science Citation Index until 19 July 2016. We searched the references of included studies to identify further studies. We did not restrict studies based on language or publication status, or whether data were collected prospectively or retrospectively. We planned to include studies reporting cross-sectional information on the index test (CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), EUS (endoscopic ultrasound), EUS elastography, and EUS-guided biopsy or FNA (fine-needle aspiration)) and reference standard (confirmation of the nature of the lesion was obtained by histopathological examination of the entire lesion by surgical excision, or histopathological examination for confirmation of precancer or cancer by biopsy and clinical follow-up of at least six months in people with negative index tests) in people with pancreatic lesions irrespective of language or publication status or whether the data were collected prospectively or retrospectively. Two review authors independently searched the references to identify relevant studies and extracted the data. We planned to use the bivariate analysis to calculate the summary sensitivity and specificity with their 95% confidence intervals and the hierarchical summary receiver operating characteristic (HSROC) to compare the tests and assess heterogeneity, but used simpler models (such as univariate random-effects model and univariate fixed-effect model) for combining studies when appropriate because of the sparse data. We were unable to compare the diagnostic performance of the tests using formal statistical methods because of sparse data. We included 54 studies involving a total of 3,196 participants evaluating the diagnostic accuracy of various index tests. In these 54 studies, eight different target conditions were identified with different final diagnoses constituting benign, precancerous, and cancerous lesions. None of the studies was of high methodological quality. None of the comparisons in which single studies were included was of sufficiently high methodological quality to warrant highlighting of the results. For differentiation of cancerous lesions from benign or precancerous lesions, we identified only one study per index test. The second analysis, of studies differentiating cancerous versus benign lesions, provided three tests in which meta-analysis could be performed. The sensitivities and specificities for diagnosing cancer were: EUS-FNA: sensitivity 0.79 (95% confidence interval (CI) 0.07 to 1.00), specificity 1.00 (95% CI 0.91 to 1.00); EUS: sensitivity 0.95 (95% CI 0.84 to 0.99), specificity 0.53 (95% CI 0.31 to 0.74); PET: sensitivity 0.92 (95% CI 0.80 to 0.97), specificity 0.65 (95% CI 0.39 to 0.84). 
The third analysis, of studies differentiating precancerous or cancerous lesions from benign lesions, only provided one test (EUS-FNA) in which meta-analysis was performed. EUS-FNA had moderate sensitivity for diagnosing precancerous or cancerous lesions (sensitivity 0.73 (95% CI 0.01 to 1.00) and high specificity 0.94 (95% CI 0.15 to 1.00), the extremely wide confidence intervals reflecting the heterogeneity between the studies). The fourth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (dysplasia) provided three tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing invasive carcinoma were: CT: sensitivity 0.72 (95% CI 0.50 to 0.87), specificity 0.92 (95% CI 0.81 to 0.97); EUS: sensitivity 0.78 (95% CI 0.44 to 0.94), specificity 0.91 (95% CI 0.61 to 0.98); EUS-FNA: sensitivity 0.66 (95% CI 0.03 to 0.99), specificity 0.92 (95% CI 0.73 to 0.98). The fifth analysis, of studies differentiating cancerous (high-grade dysplasia or invasive carcinoma) versus precancerous (low- or intermediate-grade dysplasia) provided six tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing cancer (high-grade dysplasia or invasive carcinoma) were: CT: sensitivity 0.87 (95% CI 0.00 to 1.00), specificity 0.96 (95% CI 0.00 to 1.00); EUS: sensitivity 0.86 (95% CI 0.74 to 0.92), specificity 0.91 (95% CI 0.83 to 0.96); EUS-FNA: sensitivity 0.47 (95% CI 0.24 to 0.70), specificity 0.91 (95% CI 0.32 to 1.00); EUS-FNA carcinoembryonic antigen 200 ng/mL: sensitivity 0.58 (95% CI 0.28 to 0.83), specificity 0.51 (95% CI 0.19 to 0.81); MRI: sensitivity 0.69 (95% CI 0.44 to 0.86), specificity 0.93 (95% CI 0.43 to 1.00); PET: sensitivity 0.90 (95% CI 0.79 to 0.96), specificity 0.94 (95% CI 0.81 to 0.99). The sixth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (low-grade dysplasia) provided no tests in which meta-analysis was performed. The seventh analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) provided two tests in which meta-analysis was performed. The sensitivity and specificity for diagnosing cancer were: CT: sensitivity 0.83 (95% CI 0.68 to 0.92), specificity 0.83 (95% CI 0.64 to 0.93) and MRI: sensitivity 0.80 (95% CI 0.58 to 0.92), specificity 0.81 (95% CI 0.53 to 0.95), respectively. The eighth analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) or benign lesions provided no test in which meta-analysis was performed. There were no major alterations in the subgroup analysis of cystic pancreatic focal lesions (42 studies; 2,086 participants). None of the included studies evaluated EUS elastography or sequential testing. We were unable to arrive at any firm conclusions because of the differences in the way that study authors classified focal pancreatic lesions into cancerous, precancerous, and benign lesions; the inclusion of few studies with wide confidence intervals for each comparison; poor methodological quality in the studies; and heterogeneity in the estimates within comparisons.

  18. From web search to healthcare utilization: privacy-sensitive studies from mobile data.

    PubMed

    White, Ryen; Horvitz, Eric

    2013-01-01

    We explore relationships between health information seeking activities and engagement with healthcare professionals via a privacy-sensitive analysis of geo-tagged data from mobile devices. We analyze logs of mobile interaction data stripped of individually identifiable information and location data. The data analyzed consist of time-stamped search queries and distances to medical care centers. We examine search activity that precedes the observation of salient evidence of healthcare utilization (EHU) (ie, data suggesting that the searcher is using healthcare resources), in our case taken as queries occurring at or near medical facilities. We show that the time between symptom searches and observation of salient evidence of seeking healthcare utilization depends on the acuity of symptoms. We construct statistical models that make predictions of forthcoming EHU based on observations about the current search session, prior medical search activities, and prior EHU. The predictive accuracy of the models varies (65%-90%) depending on the features used and the timeframe of the analysis, which we explore via a sensitivity analysis. We provide a privacy-sensitive analysis that can be used to generate insights about the pursuit of health information and healthcare. The findings demonstrate how large-scale studies of mobile devices can provide insights on how concerns about symptomatology lead to the pursuit of professional care. We present new methods for the analysis of mobile logs and describe a study that provides evidence about how people transition from mobile searches on symptoms and diseases to the pursuit of healthcare in the world.

  1. Rapid, sensitive and direct analysis of exopolysaccharides from biofilm on aluminum surfaces exposed to sea water using MALDI-TOF MS.

    PubMed

    Hasan, Nazim; Gopal, Judy; Wu, Hui-Fen

    2011-11-01

    Biofilm studies have extensive significance since their results can provide insights into the behavior of bacteria on material surfaces when exposed to natural water. This is the first attempt to use matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) to detect the polysaccharides formed in a complex biofilm consisting of a mixed consortium of marine microbes. MALDI-MS has been applied to directly analyze exopolysaccharides (EPS) in the biofilm formed on aluminum surfaces exposed to seawater. The optimal conditions for MALDI-MS applied to EPS analysis of biofilm are described. In addition, microbiologically influenced corrosion of aluminum exposed to sea water by a marine fungus was also observed and the fungus identity established using MALDI-MS analysis of EPS. Rapid, sensitive and direct MALDI-MS analysis of biofilm would dramatically speed up biofilm studies and provide new insights into them, owing to its simplicity, high sensitivity, high selectivity and high speed. This study introduces a novel, fast, sensitive and selective platform for studying biofilm from natural water without the need for tedious culturing steps or complicated sample pretreatment procedures.

  2. CORSSTOL: Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity

    NASA Technical Reports Server (NTRS)

    Finckenor, J.; Bevill, M.

    1995-01-01

    Cylinder Optimization of Rings, Skin, and Stringers with Tolerance sensitivity (CORSSTOL) is a design optimization program incorporating a method to examine the effects of user-provided manufacturing tolerances on weight and failure. CORSSTOL gives designers a tool to determine tolerances based on need. This is a decisive way to choose the best design among several manufacturing methods with differing capabilities and costs. CORSSTOL initially optimizes a stringer-stiffened cylinder for weight without tolerances. The skin and stringer geometry are varied, subject to stress and buckling constraints. Then the same analysis and optimization routines are used to minimize the maximum material condition weight subject to the least favorable combination of tolerances. The adjusted optimum dimensions are provided with the weight and constraint sensitivities of each design variable. The designer can immediately identify critical tolerances. The safety of parts made out of tolerance can also be determined. During design and development of weight-critical systems, design/analysis tools that provide product-oriented results are of vital significance. The development of this program and methodology provides designers with an effective cost- and weight-saving design tool. The tolerance sensitivity method can be applied to any system defined by a set of deterministic equations.

  3. Chapter 5: Modulation Excitation Spectroscopy with Phase-Sensitive Detection for Surface Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shulda, Sarah; Richards, Ryan M.

    Advancements in in situ spectroscopic techniques have led to significant progress being made in elucidating heterogeneous reaction mechanisms. The potential of these progressive methods is often limited only by the complexity of the system and noise in the data. Short-lived intermediates can be challenging, if not impossible, to identify with conventional spectra analysis means. Often equally difficult is separating signals that arise from active and inactive species. Modulation excitation spectroscopy combined with phase-sensitive detection analysis is a powerful tool for removing noise from the data while simultaneously revealing the underlying kinetics of the reaction. A stimulus is applied at a constant frequency to the reaction system, for example, a reactant cycled with an inert phase. Through mathematical manipulation of the data, any signal contributing to the overall spectra but not oscillating with the same frequency as the stimulus will be dampened or removed. With phase-sensitive detection, signals oscillating with the stimulus frequency but with various lag times are amplified, providing valuable kinetic information. In this chapter, some examples are provided from the literature that have successfully used modulation excitation spectroscopy with phase-sensitive detection to uncover previously unobserved reaction intermediates and kinetics. Examples from a broad range of spectroscopic methods are included to provide perspective to the reader.
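
    A minimal sketch of the demodulation step on synthetic data: the measured signal is correlated with a reference oscillating at the stimulus frequency over a range of detection phases, so any component not locked to that frequency averages away (the frequency, amplitude, lag, and noise level below are illustrative).

        import numpy as np

        rng = np.random.default_rng(3)
        f_mod = 0.05                          # stimulus (modulation) frequency, Hz
        omega = 2 * np.pi * f_mod
        t = np.arange(0.0, 10 / f_mod, 0.01)  # ten whole modulation periods

        # Response follows the stimulus with a 30 degree lag, buried in noise.
        signal = 0.02 * np.sin(omega * t - np.pi / 6) + 0.2 * rng.standard_normal(t.size)

        # Phase-sensitive detection: A(phi) = (2/T) * integral S(t) sin(omega t + phi) dt,
        # approximated here by a mean over an integer number of periods.
        phases = np.deg2rad(np.arange(0, 360, 5))
        demod = np.array([2 * np.mean(signal * np.sin(omega * t + p)) for p in phases])

        k = int(np.argmax(demod))
        print(f"recovered amplitude ~ {demod[k]:.3f} (true 0.020); response peaks at "
              f"detection phase {np.rad2deg(phases[k]):.0f} deg, encoding the 30 deg lag")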

  4. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    PubMed

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

    Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanisms of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of these parameters with respect to the model's simulation results. The results showed that, among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and moderately sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, whereas the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding outputs. Simulated and verified runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results for the study area also demonstrate that the sensitivity analysis is practicable for parameter adjustment, is adaptable to hydrological simulation in the hilly region of the Taihu Lake basin, and can provide a reference for wider application of the model in China.
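
    A minimal sketch of the perturbation approach described above: each parameter is nudged by a small relative amount and the dimensionless sensitivity is the relative change in output over the relative change in the parameter; the toy runoff function and values are stand-ins for AnnAGNPS, not its equations.

        def relative_sensitivity(model, params, name, delta=0.1):
            # (dY/Y) / (dP/P) by central difference around the base parameter set.
            up, down = dict(params), dict(params)
            up[name] *= 1 + delta
            down[name] *= 1 - delta
            return ((model(up) - model(down)) / model(params)) / (2 * delta)

        def toy_runoff(p):                     # stand-in for the watershed model
            return p["CN"] ** 1.8 * (1 + 0.3 * p["LS"])

        base = {"CN": 75.0, "LS": 1.2}
        for name in base:
            print(name, round(relative_sensitivity(toy_runoff, base, name), 3))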

  5. The Sensitivity of Precocious Child Writers: More Evidence of the Double-Edged Sword

    ERIC Educational Resources Information Center

    Edmunds, Alan L.; Edmunds, Gail

    2014-01-01

    This article provides further evidence of the often observed sensitive nature displayed by children who are gifted. It also addresses the positive and negative effects that this sensitivity can have on these individuals. Earlier, the authors explored this concept through an analysis of the works and life experiences of Geoffrey, aged 9, a prolific…

  6. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  7. Incorporating "motivation" into the functional analysis of challenging behavior: on the interactive and integrative potential of the motivating operation.

    PubMed

    Langthorne, Paul; McGill, Peter; O'Reilly, Mark

    2007-07-01

    Sensitivity theory attempts to account for the variability often observed in challenging behavior by recourse to the "aberrant motivation" of people with intellectual and developmental disabilities. In this article, we suggest that a functional analysis based on environmental (challenging environments) and biological (challenging needs) motivating operations provides a more parsimonious and empirically grounded account of challenging behavior than that proposed by sensitivity theory. It is argued that the concept of the motivating operation provides a means of integrating diverse strands of research without the undue inference of mentalistic constructs. An integrated model of challenging behavior is proposed, one that remains compatible with the central tenets of functional analysis.

  8. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.
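
    A minimal sketch of the composite scaled sensitivity statistic named above, computed from a model Jacobian following the usual definition of Hill and Tiedeman; the Jacobian, parameter values, and weights are random stand-ins rather than TOPKAPI output.

        import numpy as np

        rng = np.random.default_rng(4)
        n_obs, n_par = 50, 4

        J = rng.normal(size=(n_obs, n_par))   # Jacobian dy_i/db_j (stand-in)
        b = np.array([1.0, 0.5, 20.0, 3.0])   # current parameter values
        w = np.full(n_obs, 4.0)               # observation weights, 1/variance

        # Dimensionless scaled sensitivities, then the composite per parameter.
        scaled = J * b[None, :] * np.sqrt(w)[:, None]
        css = np.sqrt(np.mean(scaled ** 2, axis=0))
        print("composite scaled sensitivities:", css.round(2))
        # Parameters whose css is far below the largest value are poorly
        # constrained by the observations and are candidates for fixing.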

  9. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  10. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and for sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.

  11. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operative mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  12. Ecological Sensitivity Evaluation of Tourist Region Based on Remote Sensing Image - Taking Chaohu Lake Area as a Case Study

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.

    2018-04-01

    Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, information on the environmental resources of a tourist region can be obtained from remote sensing imagery and, combined with GIS spatial analysis and landscape pattern analysis, quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, we used human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index as evaluation factors to construct an eco-sensitivity evaluation index based on AHP and overlay analysis. According to the value of this index, and using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive and insensitive. The eco-sensitivity analysis shows that the very sensitive area covered 4577.4378 km2, accounting for about 33.12%; the sensitive area 5130.0522 km2, about 37.12%; the sub-sensitive area 3729.9312 km2, about 26.99%; and the insensitive area 382.4399 km2, about 2.77%. At the same time, spatial differences in ecological sensitivity were found across the Chaohu Lake basin. The most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed on gently sloping platform areas; and the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and regional planning, which can provide a reliable scientific basis for rational planning and sustainable development of the Chaohu Lake tourist area.
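
    A minimal sketch of the overlay and reclassification steps: normalized factor rasters are combined with AHP-derived weights into one index, which is then cut into four equal-interval grades; the factor rasters and weights below are synthetic illustrations, not the study's values.

        import numpy as np

        rng = np.random.default_rng(5)
        shape = (100, 100)

        # Stand-ins for normalized (0-1) factor rasters derived from imagery and DEM.
        factors = {
            "vegetation_coverage": rng.random(shape),
            "slope": rng.random(shape),
            "water_index": rng.random(shape),
        }
        weights = {"vegetation_coverage": 0.5, "slope": 0.3, "water_index": 0.2}

        index = sum(w * factors[k] for k, w in weights.items())   # weighted overlay

        # Equal-interval reclassification: 1 = insensitive ... 4 = very sensitive.
        edges = np.linspace(index.min(), index.max(), 5)
        grades = np.digitize(index, edges[1:-1]) + 1
        print({g: int((grades == g).sum()) for g in (1, 2, 3, 4)})   # cells per grade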

  13. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
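
    A minimal sketch of the quantity VARS builds on, the directional variogram of a response surface along one factor, gamma(h) = 0.5 E[(y(x + h) - y(x))^2], estimated here for a toy two-factor function; integrating such variograms across scales yields the VARS sensitivity measures.

        import numpy as np

        def response(x1, x2):                    # toy model response surface
            return np.sin(3 * x1) + 0.1 * x2

        rng = np.random.default_rng(6)
        base = rng.random((5_000, 2))            # random base points in the unit square
        y0 = response(base[:, 0], base[:, 1])

        for h in (0.01, 0.05, 0.1, 0.3):         # perturbation scales along x1
            gamma = 0.5 * np.mean((response(base[:, 0] + h, base[:, 1]) - y0) ** 2)
            print(f"h = {h}: gamma = {gamma:.4f}")
        # Large gamma at small h indicates strong small-scale sensitivity to x1.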

  14. Do We Know Who Will Drop out?: A Review of the Predictors of Dropping out of High School--Precision, Sensitivity, and Specificity

    ERIC Educational Resources Information Center

    Bowers, Alex J.; Sprott, Ryan; Taff, Sherry A.

    2013-01-01

    The purpose of this study is to review the literature on the most accurate indicators of students at risk of dropping out of high school. We used Relative Operating Characteristic (ROC) analysis to compare the sensitivity and specificity of 110 dropout flags across 36 studies. Our results indicate that 1) ROC analysis provides a means to compare…

  15. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
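
    A minimal sketch of the surrogate idea on a one-dimensional toy problem: fit a cheap polynomial to a small design of experiments drawn from an "expensive" function, then search the surrogate instead; the function, design size, and polynomial degree are illustrative.

        import numpy as np

        def expensive_model(x):                  # stand-in for a high-fidelity simulation
            return (x - 0.3) ** 2 + 0.1 * np.sin(12 * x)

        x_train = np.linspace(0.0, 1.0, 12)      # small design of experiments
        y_train = expensive_model(x_train)

        surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=4))

        x_dense = np.linspace(0.0, 1.0, 2001)    # surrogate is cheap everywhere
        x_best = x_dense[np.argmin(surrogate(x_dense))]
        print(f"surrogate minimum near x = {x_best:.3f}; "
              f"true objective there: {expensive_model(x_best):.4f}")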

  16. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines have drawn attention to the pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not result in the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.

  18. The Efficacy of Guanxinning Injection in Treating Angina Pectoris: Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha

    2013-01-01

    Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, with longer follow-up periods, larger sample sizes, and multicentre/multicountry designs allowing more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167
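
    A minimal sketch of the inverse-variance pooling that underlies such an odds-ratio meta-analysis, using made-up trial results rather than the GXN data:

      import numpy as np

      # Hypothetical per-trial odds ratios with 95% CIs (not the GXN data)
      or_vals = np.array([2.8, 3.5, 2.1, 4.0])
      ci_low  = np.array([1.9, 2.2, 1.2, 2.4])
      ci_high = np.array([4.1, 5.6, 3.7, 6.7])

      log_or = np.log(or_vals)
      se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # SE on log scale
      w = 1.0 / se ** 2                                     # inverse-variance weights

      pooled = np.sum(w * log_or) / np.sum(w)
      pooled_se = np.sqrt(1.0 / np.sum(w))
      lo, hi = np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se)
      print(f"pooled OR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")

    A leave-one-out loop over the same weights is the simplest form of the sensitivity analysis the authors report.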

  19. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high-dimensional ecosystem models, such as the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high-dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.

  20. [Parameter sensitivity of simulating net primary productivity of Larix olgensis forest based on BIOME-BGC model].

    PubMed

    He, Li-hong; Wang, Hai-yan; Lei, Xiang-dong

    2016-02-01

    Models based on vegetation ecophysiological processes contain many parameters, and reasonable parameter values will greatly improve simulation ability. Sensitivity analysis, as an important method to screen out the sensitive parameters, can comprehensively analyze how model parameters affect the simulation results. In this paper, we conducted a parameter sensitivity analysis of the BIOME-BGC model with a case study of simulating net primary productivity (NPP) of Larix olgensis forest in Wangqing, Jilin Province. First, with a contrastive analysis between field measurement data and the simulation results, we tested the BIOME-BGC model's capability of simulating the NPP of L. olgensis forest. Then, the Morris and EFAST sensitivity methods were used to screen the sensitive parameters that had a strong influence on NPP. On this basis, we also quantitatively estimated the sensitivity of the screened parameters, and calculated the global, first-order and second-order sensitivity indices. The results showed that the BIOME-BGC model could well simulate the NPP of L. olgensis forest in the sample plot. The Morris sensitivity method provided a reliable parameter sensitivity analysis result under the condition of a relatively small sample size. The EFAST sensitivity method could quantitatively measure the impact of a single parameter on the simulation result, as well as the interaction between parameters in the BIOME-BGC model. The most influential sensitive parameters for L. olgensis forest NPP were the new stem carbon to new leaf carbon allocation ratio and the leaf carbon to nitrogen ratio; the effect of their interaction was significantly greater than that of the other parameters' interactions.
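
    A minimal sketch of a Morris screening of the kind described, assuming the SALib package and a toy three-parameter model standing in for BIOME-BGC (the parameter names and bounds are hypothetical):

      import numpy as np
      from SALib.sample.morris import sample
      from SALib.analyze import morris

      problem = {
          "num_vars": 3,
          "names": ["alloc_ratio", "leaf_CN", "other"],  # hypothetical names
          "bounds": [[0.5, 2.0], [20.0, 60.0], [0.0, 1.0]],
      }

      def model(X):  # toy stand-in for the BIOME-BGC NPP response
          return X[:, 0] ** 2 + 0.05 * X[:, 1] + 0.01 * X[:, 2]

      X = sample(problem, N=50, num_levels=4)
      Si = morris.analyze(problem, X, model(X), num_resamples=100)
      print(Si["mu_star"])  # mean absolute elementary effect per parameter

    SALib's fast_sampler and fast modules would serve the EFAST half of such an analysis analogously.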

  1. Addressing Curse of Dimensionality in Sensitivity Analysis: How Can We Handle High-Dimensional Problems?

    NASA Astrophysics Data System (ADS)

    Safaei, S.; Haghnegahdar, A.; Razavi, S.

    2016-12-01

    Complex environmental models are now the primary tool to inform decision makers for the current or future management of environmental resources under climate and environmental changes. These complex models often contain a large number of parameters that need to be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding the model behavior, but also helps in reducing the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding if they are to generate robust and stable sensitivity metrics over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in saving computational burden when applied to systems with an extra-large number of input factors (~100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method in reducing problem dimensionality by identifying important vs. unimportant input factors.
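
    An illustrative numpy sketch of the variogram idea behind VARS, computing a directional variogram of a toy response surface for one input factor; this is a simplification for intuition, not the authors' VARS implementation:

      import numpy as np

      rng = np.random.default_rng(1)

      # Toy response surface in two factors; factor 0 carries the structure
      def model(x0, x1):
          return np.sin(3.0 * x0) + 0.1 * x1

      for h in (0.05, 0.1, 0.2):             # perturbation scales
          x0 = rng.uniform(0.0, 1.0 - h, 500)
          x1 = rng.uniform(0.0, 1.0, 500)
          d = model(x0, x1) - model(x0 + h, x1)
          gamma = 0.5 * np.mean(d ** 2)      # directional variogram, factor 0
          print(f"h = {h:.2f}: gamma = {gamma:.4f}")

    How quickly gamma grows with h summarizes the response-surface variability attributable to that factor across scales.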

  2. Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A

    2011-01-01

    The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.

  3. Motif-based analysis of large nucleotide data sets using MEME-ChIP

    PubMed Central

    Ma, Wenxiu; Noble, William S; Bailey, Timothy L

    2014-01-01

    MEME-ChIP is a web-based tool for analyzing motifs in large DNA or RNA data sets. It can analyze peak regions identified by ChIP-seq, cross-linking sites identified by CLIP-seq and related assays, as well as sets of genomic regions selected using other criteria. MEME-ChIP performs de novo motif discovery, motif enrichment analysis, motif location analysis and motif clustering, providing a comprehensive picture of the DNA or RNA motifs that are enriched in the input sequences. MEME-ChIP performs two complementary types of de novo motif discovery: weight matrix–based discovery for high accuracy; and word-based discovery for high sensitivity. Motif enrichment analysis using DNA or RNA motifs from human, mouse, worm, fly and other model organisms provides even greater sensitivity. MEME-ChIP’s interactive HTML output groups and aligns significant motifs to ease interpretation. This protocol takes less than 3 h, and it provides motif discovery approaches that are distinct and complementary to other online methods. PMID:24853928

  4. USEPA EXAMPLE EXIT LEVEL ANALYSIS RESULTS

    EPA Science Inventory

    Developed by NERL/ERD for the Office of Solid Waste, the enclosed product provides an example uncertainty analysis (UA) and initial process-based sensitivity analysis (SA) of hazardous waste "exit" concentrations for 7 chemicals and metals using the 3MRA Version 1.0 Modeling Syst...

  5. On understanding the relationship between structure in the potential surface and observables in classical dynamics: A functional sensitivity analysis approach

    NASA Astrophysics Data System (ADS)

    Judson, Richard S.; Rabitz, Herschel

    1987-04-01

    The relationship between structure in the potential surface and classical mechanical observables is examined by means of functional sensitivity analysis. Functional sensitivities provide maps of the potential surface, highlighting those regions that play the greatest role in determining the behavior of observables. A set of differential equations for the sensitivities of the trajectory components are derived. These are then solved using a Green's function method. It is found that the sensitivities become singular at the trajectory turning points, with the singularities going as η^(-3/2), where η is the distance from the nearest turning point. The sensitivities are zero outside of the energetically and dynamically allowed region of phase space. A second set of equations is derived from which the sensitivities of observables can be directly calculated. An adjoint Green's function technique is employed, providing an efficient method for numerically calculating these quantities. Sensitivity maps are presented for a simple collinear atom-diatom inelastic scattering problem and for two Henon-Heiles type Hamiltonians modeling intramolecular processes. It is found that the positions of the trajectory caustics in the bound state problem determine regions of the highest potential surface sensitivities. In the scattering problem (which is impulsive, so that "sticky" collisions did not occur), the positions of the turning points of the individual trajectory components determine the regions of high sensitivity. In both cases, these lines of singularities are superimposed on a rich background structure. Most interesting is the appearance of classical interference effects. The interference features in the sensitivity maps occur most noticeably where two or more lines of turning points cross. The important practical motivation for calculating the sensitivities derives from the fact that the potential is a function, implying that any direct attempt to understand how local potential regions affect the behavior of the observables by repeatedly and systematically altering the potential will be prohibitively expensive. The functional sensitivity method enables one to perform this analysis at a fraction of the computational labor required for the direct method.
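
    Trajectory sensitivities of this kind satisfy variational equations integrated alongside the trajectory; a minimal sketch for a one-dimensional harmonic oscillator with a single potential parameter K, as a toy stand-in for the paper's functional derivatives with respect to a potential surface:

      import numpy as np
      from scipy.integrate import solve_ivp

      K = 1.3  # potential parameter: V(x) = 0.5 * K * x**2

      def rhs(t, y):
          x, v, sx, sv = y       # state (x, v); sensitivities d(x, v)/dK
          return [v,
                  -K * x,        # dv/dt = -dV/dx
                  sv,            # variational equations for the
                  -K * sx - x]   # parameter sensitivities

      sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0, 0.0], max_step=0.05)
      sx = sol.y[2]
      print("dx/dK at t = 10:", sx[-1])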

  6. A highly sensitive method for analysis of 7-dehydrocholesterol for the study of Smith-Lemli-Opitz syndrome

    PubMed Central

    Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A.

    2014-01-01

    We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD “ene” reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis. PMID:24259532

  7. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  8. PESTAN: Pesticide Analytical Model Version 4.0 User's Guide

    EPA Pesticide Factsheets

    The principal objective of this User's Guide is to provide essential information on aspects such as model conceptualization, model theory, assumptions and limitations, determination of input parameters, analysis of results, and sensitivity analysis.

  9. A Personalized Approach of Patient-Health Care Provider Communication Regarding Colorectal Cancer Screening Options.

    PubMed

    Sava, M Gabriela; Dolan, James G; May, Jerrold H; Vargas, Luis G

    2018-07-01

    Current colorectal cancer screening guidelines by the US Preventive Services Task Force endorse multiple options for average-risk patients and recommend that screening choices should be guided by individual patient preferences. Implementing these recommendations in practice is challenging because they depend on accurate and efficient elicitation and assessment of preferences from patients who are facing a novel task. To present a methodology for analyzing the sensitivity and stability of a patient's preferences regarding colorectal cancer screening options and to provide a starting point for a personalized discussion between the patient and the health care provider about the selection of the appropriate screening option. This research is a secondary analysis of patient preference data collected as part of a previous study. We propose new measures of preference sensitivity and stability that can be used to determine if additional information provided would result in a change to the initially most preferred colorectal cancer screening option. Illustrative results of applying the methodology to the preferences of 2 patients, of different ages, are provided. The results show that different combinations of screening options are viable for each patient and that the health care provider should emphasize different information during the medical decision-making process. Sensitivity and stability analysis can supply health care providers with key topics to focus on when communicating with a patient and the degree of emphasis to place on each of them to accomplish specific goals. The insights provided by the analysis can be used by health care providers to approach communication with patients in a more personalized way, by taking into consideration patients' preferences before adding their own expertise to the discussion.
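
    A minimal sketch of the kind of one-way preference sensitivity the authors describe, sweeping the weight a patient places on one criterion and watching where the preferred option flips (the options, criteria, and scores are hypothetical):

      import numpy as np

      options = ["colonoscopy", "FIT", "sigmoidoscopy"]  # hypothetical set
      # Hypothetical scores on two criteria: (accuracy, convenience)
      scores = np.array([[0.9, 0.3],
                         [0.6, 0.9],
                         [0.7, 0.6]])

      # One-way sweep of the weight placed on accuracy
      for w_acc in np.linspace(0.0, 1.0, 11):
          w = np.array([w_acc, 1.0 - w_acc])
          best = options[int(np.argmax(scores @ w))]
          print(f"weight on accuracy {w_acc:.1f} -> preferred: {best}")

    The width of the weight interval over which the top option is unchanged is one simple stability measure a provider could discuss with the patient.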

  10. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  11. Automated metal-free multiple-column nanoLC for improved phosphopeptide analysis sensitivity and throughput

    PubMed Central

    Zhao, Rui; Ding, Shi-Jian; Shen, Yufeng; Camp, David G.; Livesay, Eric A.; Udseth, Harold; Smith, Richard D.

    2009-01-01

    We report on the development and characterization of automated metal-free multiple-column nanoLC instrumentation for sensitive and high-throughput analysis of phosphopeptides by mass spectrometry. The system implements a multiple-column capillary LC fluidic design developed for high-throughput analysis of peptides (Anal. Chem. 2001, 73, 3011–3021), incorporating modifications to achieve broad and sensitive analysis of phosphopeptides. The integrated nanoLC columns (50 µm i.d. × 30 cm containing 5 µm C18 particles) and the on-line solid phase extraction columns (150 µm i.d. × 4 cm containing 5 µm C18 particles) were connected to automatic switching valves with non-metal chromatographic accessories, and other modifications were made to avoid the exposure of the analyte to any metal surfaces during handling, separation, and electrospray ionization. The nanoLC developed provided a separation peak capacity of ∼250 for phosphopeptides (and ∼400 for normal peptides). A detection limit of 0.4 fmol was obtained when a linear ion trap tandem mass spectrometer (Finnigan LTQ) was coupled to a 50-µm i.d. column of the nanoLC. The separation power and sensitivity provided by the nanoLC-LTQ enabled identification of ∼4600 phosphopeptide candidates from ∼60 µg of COS-7 cell tryptic digest followed by IMAC enrichment and ∼520 tyrosine phosphopeptides from ∼2 mg of human T cell digests followed by phosphotyrosine peptide immunoprecipitation. PMID:19217835

  12. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    PubMed

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
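
    A minimal sketch of a probabilistic sensitivity analysis of the kind audited here: parameters are drawn from assumed distributions and the cost-effectiveness result is recomputed for each draw (all numbers and distributions are hypothetical):

      import numpy as np

      rng = np.random.default_rng(7)
      n = 10_000

      # Hypothetical distributions for a new treatment vs. comparator
      delta_cost = rng.normal(5000.0, 1500.0, n)   # incremental cost
      delta_qaly = rng.beta(4, 16, n) * 0.5        # incremental QALYs

      threshold = 20_000.0  # willingness to pay per QALY
      net_benefit = threshold * delta_qaly - delta_cost
      print(f"median ICER = {np.median(delta_cost / delta_qaly):,.0f} per QALY")
      print(f"P(cost-effective) = {np.mean(net_benefit > 0):.2f}")

    The review's point about correlations applies directly: sampling delta_cost and delta_qaly independently, as above, is exactly the simplification most audited analyses made.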

  13. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying parameters having the most influence facilitates establishing the best values for parameters of models, providing useful implications in species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records, and 17 fitting parameters, including growth and stress parameters, comparisons were made in model performance by altering one parameter value at a time, in comparison to the best-fit parameter values. Parameters that were found to have a greater effect on the model results are termed "sensitive". Through the use of two species, we show that even when the Ecoclimatic Index has a major change through upward or downward parameter value alterations, the effect on the species is dependent on the selection of suitability categories and regions of modelling. Two parameters were shown to have the greatest sensitivity, dependent on the suitability categories of each species in the study. Results enhance user understanding of which climatic factors had a greater impact on both species distributions in our model, in terms of suitability categories and areas, when parameter values were perturbed by higher or lower values, compared to the best-fit parameter values. Thus, the sensitivity analyses have the potential to provide additional information for end users, in terms of improving management, by identifying the climatic variables that are most sensitive.

  14. Source apportionment and sensitivity analysis: two methodologies with two different purposes

    NASA Astrophysics Data System (ADS)

    Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe

    2017-11-01

    This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and decoupled direct method (DDM). A simple theoretical example to compare these approaches is used highlighting differences and potential implications for policy. When the relationships between concentration and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used indifferently for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
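
    The nonlinearity caveat can be made concrete with a toy two-source model: once an interaction term couples the sources, the brute-force impacts of removing each source no longer sum to the total concentration, so impacts cannot be read as contributions (illustrative numbers only):

      def concentration(e1, e2):
          # Toy nonlinear chemistry: an interaction term couples the sources
          return 2.0 * e1 + 1.0 * e2 + 0.5 * e1 * e2

      e1, e2 = 3.0, 4.0
      total = concentration(e1, e2)                       # 16.0

      # Brute-force "impact": change when one source is switched off
      impact_1 = total - concentration(0.0, e2)           # 12.0
      impact_2 = total - concentration(e1, 0.0)           # 10.0

      # With the interaction term, impacts do not sum to the total,
      # so they cannot be interpreted as source contributions
      print(total, impact_1 + impact_2)                   # 16.0 vs 22.0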

  15. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
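
    A minimal sketch of the RSA idea: Monte Carlo parameter sets are split into behavioural and non-behavioural groups by model performance, and the per-parameter marginal distributions of the two groups are compared (toy model, not the fuzzy rainfall-runoff models of the study):

      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(0)
      n = 2000
      params = rng.uniform(0.0, 1.0, size=(n, 2))  # two sampled parameters

      # Toy performance measure: depends strongly on parameter 0 only
      perf = 1.0 - (params[:, 0] - 0.7) ** 2 + 0.01 * rng.normal(size=n)

      behavioural = perf > np.quantile(perf, 0.8)  # top 20% of runs
      for i in range(params.shape[1]):
          stat = ks_2samp(params[behavioural, i], params[~behavioural, i]).statistic
          print(f"parameter {i}: KS distance = {stat:.2f}")  # larger = more sensitive

    As the abstract notes, this split-sample view misses parameter interactions, which is what the variance-decomposition (Sobol) side of the study addresses.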

  16. Sensitivity study on durability variables of marine concrete structures

    NASA Astrophysics Data System (ADS)

    Zhou, Xin'gang; Li, Kefei

    2013-06-01

    In order to study the influence of parameters on the durability of marine concrete structures, a parameter sensitivity analysis was conducted. Using Fick's second law of diffusion and the deterministic sensitivity analysis (DSA) method, the sensitivity factors of the apparent surface chloride content, the apparent chloride diffusion coefficient, and its time-dependent attenuation factor were analyzed. The results of the analysis show that the impact of design variables on concrete durability differed: the sensitivity factors of the chloride diffusion coefficient and its time-dependent attenuation factor were higher than the others, so a relatively small error in either induces a comparatively large error in concrete durability design and life prediction. Using probability sensitivity analysis (PSA), the influence of the mean value and variance of concrete durability design variables on the durability failure probability was studied. The results of the study provide quantitative measures of the importance of concrete durability design and life prediction variables. It was concluded that the chloride diffusion coefficient and its time-dependent attenuation factor have more influence on the reliability of marine concrete structural durability. In the durability design and life prediction of marine concrete structures, it is therefore very important to reduce the measurement and statistical errors of these durability design variables.
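
    A minimal sketch of such deterministic sensitivity factors, using the common erf solution of Fick's second law with a time-dependent diffusion coefficient and one-at-a-time perturbations (all parameter values are hypothetical):

      import numpy as np
      from scipy.special import erf

      def chloride(x, t, Cs, D0, alpha, t0=28.0 / 365.0):
          D = D0 * (t0 / t) ** alpha  # time-dependent diffusion coefficient
          return Cs * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))

      base = dict(Cs=0.6, D0=1.9e-4, alpha=0.4)  # D0 in m^2/year, hypothetical
      x, t = 0.05, 50.0                          # 50 mm cover depth, 50 years

      c0 = chloride(x, t, **base)
      for name in base:
          p = dict(base, **{name: base[name] * 1.01})  # +1% perturbation
          s = (chloride(x, t, **p) - c0) / c0 / 0.01   # normalized sensitivity
          print(f"{name}: sensitivity factor ~ {s:+.2f}")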

  17. Noninvasive and cost-effective trapping method for monitoring sensitive mammal populations

    Treesearch

    Stephanie E. Trapp; Elizabeth A. Flaherty

    2017-01-01

    Noninvasive sampling methods provide a means to monitor endangered, threatened, or sensitive species or populations while increasing the efficacy of personnel effort and time. We developed a monitoring protocol that utilizes single-capture hair snares and analysis of morphological features of hair for evaluating populations. During 2015, we used the West Virginia...

  18. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method

    USGS Publications Warehouse

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-01-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
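
    A minimal sketch of a variance-based analysis in the same spirit, assuming the SALib package (Saltelli sampling) as a stand-in for the paper's effectively subsampled quadratures, with a toy response in place of COAWST:

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      problem = {
          "num_vars": 3,
          "names": ["plant_density", "plant_height", "stem_diameter"],
          "bounds": [[100.0, 1000.0], [0.1, 1.0], [0.002, 0.01]],
      }

      X = saltelli.sample(problem, 512)
      # Toy response standing in for the coupled model output
      Y = 0.001 * X[:, 0] * X[:, 1] + 10.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]
      Si = sobol.analyze(problem, Y)
      print(dict(zip(problem["names"], np.round(Si["S1"], 2))))  # first order
      print(dict(zip(problem["names"], np.round(Si["ST"], 2))))  # total order

    The appeal of the least-squares polynomial route in the paper is that it reaches comparable Sobol' estimates with far fewer model runs than the Saltelli scheme sketched here.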

  19. Recent development of electrochemiluminescence sensors for food analysis.

    PubMed

    Hao, Nan; Wang, Kun

    2016-10-01

    Food quality and safety are closely related to human health. In the face of unceasing food safety incidents, various analytical techniques, such as mass spectrometry, chromatography, spectroscopy, and electrochemistry, have been applied in food analysis. High sensitivity usually requires expensive instruments and complicated procedures. Although these modern analytical techniques are sensitive enough to ensure food safety, sometimes their applications are limited because of the cost, usability, and speed of analysis. Electrochemiluminescence (ECL) is a powerful analytical technique that is attracting more and more attention because of its outstanding performance. In this review, the mechanisms of ECL and common ECL luminophores are briefly introduced. Then an overall review of the principles and applications of ECL sensors for food analysis is provided. ECL can be flexibly combined with various separation techniques. Novel materials (e.g., various nanomaterials) and strategies (e.g., immunoassay, aptasensors, and microfluidics) have been progressively introduced into the design of ECL sensors. By illustrating some selected representative works, we summarize the state of the art in the development of ECL sensors for toxins, heavy metals, pesticides, residual drugs, illegal additives, viruses, and bacteria. Compared with other methods, ECL can provide rapid, low-cost, and sensitive detection for various food contaminants in complex matrixes. However, there are also some limitations and challenges. Improvements suited to the characteristics of food analysis are still necessary.

  20. Hydrologic sensitivity of headwater catchments to climate and landscape variability

    NASA Astrophysics Data System (ADS)

    Kelleher, Christa; Wagener, Thorsten; McGlynn, Brian; Nippgen, Fabian; Jencso, Kelsey

    2013-04-01

    Headwater streams cumulatively represent an extensive portion of the United States stream network, yet remain largely unmonitored and unmapped. As such, we have limited understanding of how these systems will respond to change, knowledge that is important for preserving these unique ecosystems, the services they provide, and the biodiversity they support. We compare responses across five adjacent headwater catchments located in Tenderfoot Creek Experimental Forest in Montana, USA, to understand how local differences may affect the sensitivity of headwaters to change. We utilize global, variance-based sensitivity analysis to understand which aspects of the physical system (e.g., vegetation, topography, geology) control the variability in hydrologic behavior across these basins, and how this varies as a function of time (and therefore climate). Basin fluxes and storages, including evapotranspiration, snow water equivalent and melt, soil moisture and streamflow, are simulated using the Distributed Hydrology-Vegetation-Soil Model (DHSVM). Sensitivity analysis is applied to quantify the importance of different physical parameters to the spatial and temporal variability of different water balance components, allowing us to map similarities and differences in these controls through space and time. Our results show how catchment influences on fluxes vary across seasons (thus providing insight into transferability of knowledge in time), and how they vary across catchments with different physical characteristics (providing insight into transferability in space).

  1. Aerodynamic design optimization using sensitivity analysis and computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay; Eleshaky, Mohamed E.

    1991-01-01

    A new and efficient method is presented for aerodynamic design optimization, which is based on a computational fluid dynamics (CFD)-sensitivity analysis algorithm. The method is applied to design a scramjet-afterbody configuration for an optimized axial thrust. The Euler equations are solved for the inviscid analysis of the flow, which in turn provides the objective function and the constraints. The CFD analysis is then coupled with the optimization procedure that uses a constrained minimization method. The sensitivity coefficients, i.e. gradients of the objective function and the constraints, needed for the optimization are obtained using a quasi-analytical method rather than the traditional brute force method of finite difference approximations. During the one-dimensional search of the optimization procedure, an approximate flow analysis (predicted flow) based on a first-order Taylor series expansion is used to reduce the computational cost. Finally, the sensitivity of the optimum objective function to various design parameters, which are kept constant during the optimization, is computed to predict new optimum solutions. The flow analysis results for the demonstrative example are compared with experimental data. It is shown that the method is more efficient than the traditional methods.
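
    A minimal sketch of the first-order Taylor "predicted flow" idea: gradients from a (here analytic) sensitivity computation give a cheap prediction of the objective along the line search, checked against the "expensive" evaluation (the objective below is a stand-in, not the scramjet model):

      import numpy as np

      def objective(p):  # stand-in for a CFD-derived thrust objective
          return -(p[0] * np.exp(-p[0]) + 0.5 * p[1] ** 2)

      def gradient(p):   # stand-in for quasi-analytical sensitivities
          return np.array([-np.exp(-p[0]) * (1.0 - p[0]), -p[1]])

      p = np.array([0.2, 0.8])
      f0, g = objective(p), gradient(p)
      direction = -g / np.linalg.norm(g)  # descent direction

      # First-order Taylor prediction replaces re-running the flow solver
      # at each trial step of the one-dimensional search
      for step in (0.1, 0.2, 0.4):
          predicted = f0 + step * (g @ direction)
          actual = objective(p + step * direction)
          print(f"step {step:.1f}: predicted {predicted:.4f}, actual {actual:.4f}")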

  2. Sensitivity analysis in practice: providing an uncertainty budget when applying supplement 1 to the GUM

    NASA Astrophysics Data System (ADS)

    Allard, Alexandre; Fischer, Nicolas

    2018-06-01

    Sensitivity analysis associated with the evaluation of measurement uncertainty is a very important tool for the metrologist, enabling them to provide an uncertainty budget and to gain a better understanding of the measurand and the underlying measurement process. Using the GUM uncertainty framework, the contribution of an input quantity to the variance of the output quantity is obtained through so-called ‘sensitivity coefficients’. In contrast, such coefficients are no longer computed in cases where a Monte-Carlo method is used. In such a case, supplement 1 to the GUM suggests varying the input quantities one at a time, which is not an efficient method and may provide incorrect contributions to the variance in cases where significant interactions arise. This paper proposes different methods for the elaboration of the uncertainty budget associated with a Monte Carlo method. An application to the mass calibration example described in supplement 1 to the GUM is performed with the corresponding R code for implementation. Finally, guidance is given for choosing a method, including suggestions for a future revision of supplement 1 to the GUM.
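
    A minimal sketch of a Monte Carlo evaluation per supplement 1 to the GUM, with a crude uncertainty budget obtained from the squared correlation between each input sample and the output; this first-order budget is exactly the kind that can mislead when interactions matter, as the paper notes (the measurement model and distributions are hypothetical):

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # Measurement model Y = X1 * X2 + X3 with assumed input distributions
      x1 = rng.normal(10.0, 0.1, n)
      x2 = rng.uniform(0.95, 1.05, n)
      x3 = rng.normal(0.0, 0.02, n)
      y = x1 * x2 + x3

      print(f"y = {np.mean(y):.3f}, u(y) = {np.std(y, ddof=1):.3f}")
      for name, x in (("x1", x1), ("x2", x2), ("x3", x3)):
          share = np.corrcoef(x, y)[0, 1] ** 2  # first-order variance share
          print(f"{name}: ~{100 * share:.0f}% of Var(y)")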

  3. Prostate Cancer Information Available in Health-Care Provider Offices: An Analysis of Content, Readability, and Cultural Sensitivity.

    PubMed

    Choi, Seul Ki; Seel, Jessica S; Yelton, Brooks; Steck, Susan E; McCormick, Douglas P; Payne, Johnny; Minter, Anthony; Deutchki, Elizabeth K; Hébert, James R; Friedman, Daniela B

    2018-07-01

    Prostate cancer (PrCA) is the most common cancer affecting men in the United States, and African American men have the highest incidence among men in the United States. Little is known about the PrCA-related educational materials being provided to patients in health-care settings. Content, readability, and cultural sensitivity of materials available in providers' practices in South Carolina were examined. A total of 44 educational materials about PrCA and associated sexual dysfunction were collected from 16 general and specialty practices. The content of the materials was coded, and cultural sensitivity was assessed using the Cultural Sensitivity Assessment Tool. Flesch Reading Ease, Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook were used to assess readability. Communication with health-care providers (52.3%), side effects of PrCA treatment (40.9%), sexual dysfunction and its treatment (38.6%), and treatment options (34.1%) were frequently presented. All materials had acceptable cultural sensitivity scores; however, 2.3% and 15.9% of materials demonstrated unacceptable cultural sensitivity regarding format and visual messages, respectively. Readability of the materials varied. More than half of the materials were written above a high-school reading level. PrCA-related materials available in health-care practices may not meet patients' needs regarding content, cultural sensitivity, and readability. A wide range of educational materials that address various aspects of PrCA, including treatment options and side effects, should be presented in plain language and be culturally sensitive.
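
    For the readability side of such an assessment, a short sketch assuming the textstat package, which implements the three indices used in the study (the sample text is invented):

      import textstat

      text = (
          "Prostate cancer screening can find cancer early. "
          "Talk with your doctor about the benefits and risks of each test."
      )

      print("Flesch Reading Ease:", textstat.flesch_reading_ease(text))
      print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(text))
      print("SMOG Index:", textstat.smog_index(text))  # SMOG needs longer texts for validity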

  4. Interactive Controls Analysis (INCA)

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.

    1989-01-01

    Version 3.12 of INCA provides a user-friendly environment for design and analysis of linear control systems. System configuration and parameters are easily adjusted, enabling the INCA user to create compensation networks and perform sensitivity analysis in a convenient manner. A full complement of graphical routines makes output easy to understand. Written in Pascal and FORTRAN.

  5. The Effects of Variability and Risk in Selection Utility Analysis: An Empirical Comparison.

    ERIC Educational Resources Information Center

    Rich, Joseph R.; Boudreau, John W.

    1987-01-01

    Investigated utility estimate variability for the selection utility of using the Programmer Aptitude Test to select computer programmers. Comparison of Monte Carlo results to other risk assessment approaches (sensitivity analysis, break-even analysis, algebraic derivation of the distribution) suggests that distribution information provided by Monte…

  6. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  7. Analytic Closed-Form Solution of a Mixed Layer Model for Stratocumulus Clouds

    NASA Astrophysics Data System (ADS)

    Akyurek, Bengu Ozge

    Stratocumulus clouds play an important role in climate cooling and are hard to predict using global climate and weather forecast models. Thus, previous studies in the literature use observations and numerical simulation tools, such as large-eddy simulation (LES), to solve the governing equations for the evolution of stratocumulus clouds. In contrast to the previous works, this work provides an analytic closed-form solution to the cloud thickness evolution of stratocumulus clouds in a mixed-layer model framework. With a focus on application over coastal lands, the diurnal cycle of cloud thickness and whether or not clouds dissipate are of particular interest. An analytic solution enables the sensitivity analysis of implicitly interdependent variables and extrema analysis of cloud variables that are hard to achieve using numerical solutions. In this work, the sensitivity of inversion height, cloud-base height, and cloud thickness with respect to initial and boundary conditions, such as Bowen ratio, subsidence, surface temperature, and initial inversion height, are studied. A critical initial cloud thickness value that can be dissipated pre- and post-sunrise is provided. Furthermore, an extrema analysis is provided to obtain the minima and maxima of the inversion height and cloud thickness within 24 h. The proposed solution is validated against LES results under the same initial and boundary conditions. Then, the proposed analytic framework is extended to incorporate multiple vertical columns that are coupled by advection through wind flow. This enables a bridge between the micro-scale and the mesoscale relations. The effect of advection on cloud evolution is studied and a sensitivity analysis is provided.
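
    A minimal sympy sketch of what a closed-form solution buys: analytic parameter sensitivities and interior extrema by differentiation (the expression below is a toy, not the paper's mixed-layer solution; B and D stand in for a Bowen-ratio-like and a decay-rate-like parameter):

      import sympy as sp

      t, B, D = sp.symbols("t B D", positive=True)
      # Toy closed-form "cloud thickness": a base term set by B plus a
      # transient that grows and then decays at rate D (illustrative only)
      h = 100 / (1 + B) + 50 * t * sp.exp(-D * t)

      dh_dB = sp.diff(h, B)                          # sensitivity to B
      t_star = sp.solve(sp.Eq(sp.diff(h, t), 0), t)  # interior extremum
      print(sp.simplify(dh_dB))  # -100/(B + 1)**2
      print(t_star)              # [1/D]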

  8. A generalized matching law analysis of cocaine vs. food choice in rhesus monkeys: effects of candidate 'agonist-based' medications on sensitivity to reinforcement.

    PubMed

    Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L

    2015-01-01

    We have previously demonstrated reductions in cocaine choice produced by either continuous 14-day phendimetrazine and d-amphetamine treatment or removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit of the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
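
    A minimal sketch of fitting the concatenated generalized matching law by least squares on log ratios, with synthetic choice data in place of the monkey data:

      import numpy as np

      # Concatenated GML: log(B1/B2) = aM*log(M1/M2) + aP*log(P1/P2) + log(b)
      rng = np.random.default_rng(3)
      n = 40
      log_mag = rng.normal(0.0, 0.5, n)    # log reinforcer-magnitude ratios
      log_price = rng.normal(0.0, 0.5, n)  # log reinforcer-price ratios
      log_choice = (0.8 * log_mag - 1.2 * log_price + 0.1
                    + rng.normal(0.0, 0.1, n))  # synthetic behaviour ratios

      X = np.column_stack([log_mag, log_price, np.ones(n)])
      (a_mag, a_price, log_bias), *_ = np.linalg.lstsq(X, log_choice, rcond=None)
      print(f"sensitivity to magnitude = {a_mag:.2f}, to price = {a_price:.2f}, "
            f"bias = {np.exp(log_bias):.2f}")

    In this framing, a treatment that "increases the relative price of cocaine" shifts the price ratio term rather than the fitted sensitivities themselves.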

  9. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042

  10. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.

  11. Improvement of the cost-benefit analysis algorithm for high-rise construction projects

    NASA Astrophysics Data System (ADS)

    Gafurov, Andrey; Skotarenko, Oksana; Plotnikov, Vladimir

    2018-03-01

    The specific nature of high-rise investment projects entailing long-term construction, high risks, etc. implies a need to improve the standard algorithm of cost-benefit analysis. An improved algorithm is described in the article. For development of the improved algorithm of cost-benefit analysis for high-rise construction projects, the following methods were used: weighted average cost of capital, dynamic cost-benefit analysis of investment projects, risk mapping, scenario analysis, sensitivity analysis of critical ratios, etc. This comprehensive approach helped to adapt the original algorithm to feasibility objectives in high-rise construction. The authors put together the algorithm of cost-benefit analysis for high-rise construction projects on the basis of risk mapping and sensitivity analysis of critical ratios. The suggested project risk management algorithms greatly expand the standard algorithm of cost-benefit analysis in investment projects, namely: the "Project analysis scenario" flowchart, improving quality and reliability of forecasting reports in investment projects; the main stages of cash flow adjustment based on risk mapping for better cost-benefit project analysis, given the broad range of risks in high-rise construction; and analysis of dynamic cost-benefit values considering project sensitivity to crucial variables, improving flexibility in implementation of high-rise projects.
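
    A minimal sketch of the dynamic cost-benefit piece: NPV of a stylized high-rise cash-flow profile swept over candidate discount rates, showing how sensitive the go/no-go decision is to this one critical ratio (all figures are hypothetical):

      import numpy as np

      # Hypothetical profile: 3 construction years, then 12 operating years
      flows = np.array([-40.0, -35.0, -25.0] + [15.0] * 12)  # in millions
      years = np.arange(len(flows))

      def npv(rate):
          return float(np.sum(flows / (1.0 + rate) ** years))

      for rate in (0.06, 0.08, 0.10, 0.12):  # candidate WACC values
          print(f"discount rate {rate:.0%}: NPV = {npv(rate):7.1f} M")

    With this profile the NPV changes sign within the swept range, which is precisely the situation where the authors' risk mapping and scenario analysis earn their keep.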

  12. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications has motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  13. Sensitivity Analysis of Launch Vehicle Debris Risk Model

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Lawrence, Scott L.

    2010-01-01

    As part of an analysis of the loss of crew risk associated with an ascent abort system for a manned launch vehicle, a model was developed to predict the impact risk of the debris resulting from an explosion of the launch vehicle on the crew module. The model consisted of a debris catalog describing the number, size and imparted velocity of each piece of debris, a method to compute the trajectories of the debris and a method to calculate the impact risk given the abort trajectory of the crew module. The model provided a point estimate of the strike probability as a function of the debris catalog, the time of abort and the delay time between the abort and destruction of the launch vehicle. A study was conducted to determine the sensitivity of the strike probability to the various model input parameters and to develop a response surface model for use in the sensitivity analysis of the overall ascent abort risk model. The results of the sensitivity analysis and the response surface model are presented in this paper.

  14. Conversion of paper sludge to ethanol, II: process design and economic analysis.

    PubMed

    Fan, Zhiliang; Lynd, Lee R

    2007-01-01

    Process design and economics are considered for conversion of paper sludge to ethanol. A particular site, a bleached kraft mill operated in Gorham, NH by Fraser Papers (15 tons dry sludge processed per day), is considered. In addition, profitability is examined for a larger plant (50 dry tons per day) and sensitivity analysis is carried out with respect to capacity, tipping fee, and ethanol price. Conversion based on simultaneous saccharification and fermentation with intermittent feeding is examined, with ethanol recovery provided by distillation and molecular sieve adsorption. It was found that the Fraser plant achieves positive cash flow with or without xylose conversion and mineral recovery. Sensitivity analysis indicates that economics are very sensitive to ethanol selling price and scale, significantly but less sensitive to the tipping fee, and rather insensitive to the prices of cellulase and power. Internal rates of return exceeding 15% are projected for larger plants at most combinations of scale, tipping fee, and ethanol price. Our analysis lends support to the proposition that paper sludge is a leading point-of-entry and proving ground for emergent industrial processes featuring enzymatic hydrolysis of cellulosic biomass.

  15. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.

  16. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
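
    The estimation scheme described (weighted nonlinear least squares with a penalty term carrying prior information) can be sketched with SciPy; the exponential forward model, data, and prior values below are invented stand-ins for the CDE solutions and tracer data.

      import numpy as np
      from scipy.optimize import least_squares

      # Invented forward model standing in for the CDE analytical solution.
      def forward(t, a, k):
          return a * (1.0 - np.exp(-k * t))

      t_obs = np.linspace(0.5, 10.0, 20)
      y_obs = forward(t_obs, 1.0, 0.8) + np.random.default_rng(1).normal(0.0, 0.02, t_obs.size)
      sigma_obs = 0.02                    # assumed observation error
      prior = np.array([0.9, 1.0])        # prior parameter estimates
      sigma_prior = np.array([0.5, 0.5])  # prior uncertainty (penalty weights)

      def residuals(p):
          data_res = (forward(t_obs, *p) - y_obs) / sigma_obs  # weighted misfit
          prior_res = (p - prior) / sigma_prior                # prior penalty
          return np.concatenate([data_res, prior_res])

      fit = least_squares(residuals, x0=prior)
      J = fit.jac                           # Jacobian at the optimum
      cov = np.linalg.inv(J.T @ J)          # approximate parameter covariance
      print("estimates:", fit.x, "std devs:", np.sqrt(np.diag(cov)))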

  17. Fragment-based prediction of skin sensitization using recursive partitioning

    NASA Astrophysics Data System (ADS)

    Lu, Jing; Zheng, Mingyue; Wang, Yong; Shen, Qiancheng; Luo, Xiaomin; Jiang, Hualiang; Chen, Kaixian

    2011-09-01

    Skin sensitization is an important toxic endpoint in the risk assessment of chemicals. In this paper, structure-activity relationship analysis was performed on the skin sensitization potential of 357 compounds with local lymph node assay data. Structural fragments were extracted by GASTON (GrAph/Sequence/Tree extractiON) from the training set. Eight fragments with accuracy significantly higher than 0.73 (p < 0.1) were retained to make up a fragment-based indicator descriptor. This fragment descriptor and eight other physicochemical descriptors closely related to the endpoint were calculated to construct the recursive partitioning tree (RP tree) for classification. The balanced accuracies of the training set, test set I, and test set II in the leave-one-out model were 0.846, 0.800, and 0.809, respectively. The results highlight that the fragment-based RP tree is a preferable method for identifying skin sensitizers. Moreover, the selected fragments provide useful structural information for exploring sensitization mechanisms, and the RP tree creates a graphic tree that identifies the most important properties associated with skin sensitization. Together, these can provide guidance for the design of drugs with lower sensitization potential.

  18. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air Pollution Events (GEO-CAPE) is a NASA decadal survey mission designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit, as needed for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.

  19. Probabilistic sensitivity analysis for decision trees with multiple branches: use of the Dirichlet distribution in a Bayesian framework.

    PubMed

    Briggs, Andrew H; Ades, A E; Price, Martin J

    2003-01-01

    In structuring decision models of medical interventions, it is commonly recommended that only 2 branches be used for each chance node to avoid logical inconsistencies that can arise during sensitivity analyses if the branching probabilities do not sum to 1. However, information may be naturally available in an unconditional form, and structuring a tree in conditional form may complicate rather than simplify the sensitivity analysis of the unconditional probabilities. Current guidance emphasizes probabilistic sensitivity analysis, and a method is therefore required to place probability distributions over multiple branches that appropriately represent uncertainty while satisfying the requirement that mutually exclusive event probabilities sum to 1. The authors argue that the Dirichlet distribution, the multivariate equivalent of the beta distribution, is appropriate for this purpose and illustrate its use for generating a fully probabilistic transition matrix for a Markov model. Furthermore, they demonstrate that by adopting a Bayesian approach, the problem of observing zero counts for transitions of interest can be overcome.
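
    The sampling step the authors advocate is a one-liner once transition counts are in hand; the counts below are invented, and the +1 illustrates the uniform-prior Bayesian update that handles the zero-count problem they mention.

      import numpy as np

      rng = np.random.default_rng(42)
      counts = np.array([37, 12, 0])   # invented transition counts; note the zero cell
      alpha = counts + 1               # uniform Dirichlet prior handles zero counts

      # Every draw is a probability vector over the three branches summing to 1,
      # so no iteration of the probabilistic sensitivity analysis is inconsistent.
      samples = rng.dirichlet(alpha, size=10_000)
      print("mean branch probabilities:", samples.mean(axis=0))
      print("rows sum to 1:", np.allclose(samples.sum(axis=1), 1.0))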

  20. Failure Bounding And Sensitivity Analysis Applied To Monte Carlo Entry, Descent, And Landing Simulations

    NASA Technical Reports Server (NTRS)

    Gaebler, John A.; Tolson, Robert H.

    2010-01-01

    In the study of entry, descent, and landing, Monte Carlo sampling methods are often employed to study the uncertainty in the designed trajectory. The large number of uncertain inputs and outputs, coupled with complicated non-linear models, can make interpretation of the results difficult. Three methods that provide statistical insights are applied to an entry, descent, and landing simulation. The advantages and disadvantages of each method are discussed in terms of the insights gained versus the computational cost. The first method investigated was failure domain bounding, which aims to reduce the computational cost of assessing the failure probability. Next, a variance-based sensitivity analysis was studied for its ability to identify which input variable's uncertainty has the greatest impact on the uncertainty of an output. Finally, probabilistic sensitivity analysis is used to calculate certain sensitivities at a reduced computational cost. These methods produce valuable information that identifies critical mission parameters and needs for new technology, but generally at a significant computational cost.

  1. Global Sensitivity of Simulated Water Balance Indicators Under Future Climate Change in the Colorado Basin

    NASA Astrophysics Data System (ADS)

    Bennett, Katrina E.; Urrego Blanco, Jorge R.; Jonko, Alexandra; Bohn, Theodore J.; Atchley, Adam L.; Urban, Nathan M.; Middleton, Richard S.

    2018-01-01

    The Colorado River Basin is a fundamentally important river for society, ecology, and energy in the United States. Streamflow estimates are often provided using modeling tools which rely on uncertain parameters; sensitivity analysis can help determine which parameters impact model results. Despite the fact that simulated flows respond to changing climate and vegetation in the basin, parameter sensitivity of the simulations under climate change has rarely been considered. In this study, we conduct a global sensitivity analysis to relate changes in runoff, evapotranspiration, snow water equivalent, and soil moisture to model parameters in the Variable Infiltration Capacity (VIC) hydrologic model. We combine global sensitivity analysis with a space-filling Latin Hypercube Sampling of the model parameter space and statistical emulation of the VIC model to examine sensitivities to uncertainties in 46 model parameters following a variance-based approach. We find that snow-dominated regions are much more sensitive to uncertainties in VIC parameters. Although baseflow and runoff changes respond to parameters used in previous sensitivity studies, we discover new key parameter sensitivities. For instance, changes in runoff and evapotranspiration are sensitive to albedo, while changes in snow water equivalent are sensitive to canopy fraction and Leaf Area Index (LAI) in the VIC model. It is critical for improved modeling to narrow uncertainty in these parameters through improved observations and field studies. This is important because LAI and albedo are anticipated to change under future climate and narrowing uncertainty is paramount to advance our application of models such as VIC for water resource management.
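
    The space-filling sampling step is available off the shelf; a minimal sketch with SciPy's quasi-Monte Carlo module (the unit-interval bounds are placeholders for the actual VIC parameter ranges):

      import numpy as np
      from scipy.stats import qmc

      d = 46                                     # number of VIC parameters studied
      sampler = qmc.LatinHypercube(d=d, seed=7)
      unit = sampler.random(n=500)               # space-filling points in [0, 1]^46

      lo, hi = np.zeros(d), np.ones(d)           # placeholder parameter bounds
      design = qmc.scale(unit, lo, hi)
      print(design.shape)                        # (500, 46): emulator training inputs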

  2. Prospective identification of adolescent suicide ideation using classification tree analysis: Models for community-based screening.

    PubMed

    Hill, Ryan M; Oosterhoff, Benjamin; Kaplow, Julie B

    2017-07-01

    Although a large number of risk markers for suicide ideation have been identified, little guidance has been provided to prospectively identify adolescents at risk for suicide ideation within community settings. The current study addressed this gap in the literature by utilizing classification tree analysis (CTA) to provide a decision-making model for screening adolescents at risk for suicide ideation. Participants were N = 4,799 youth (mean age = 16.15 years, SD = 1.63) who completed both Waves 1 and 2 of the National Longitudinal Study of Adolescent to Adult Health. CTA was used to generate a series of decision rules for identifying adolescents at risk for reporting suicide ideation at Wave 2. Findings revealed 3 distinct solutions with varying sensitivity and specificity for identifying adolescents who reported suicide ideation. Sensitivity of the classification trees ranged from 44.6% to 77.6%. The tree with greatest specificity and lowest sensitivity was based on a history of suicide ideation. The tree with moderate sensitivity and high specificity was based on depressive symptoms, suicide attempts or suicide among family and friends, and social support. The most sensitive but least specific tree utilized these factors and gender, ethnicity, hours of sleep, school-related factors, and future orientation. These classification trees offer community organizations options for instituting large-scale screenings for suicide ideation risk depending on the available resources and modality of services to be provided. This study provides a theoretically and empirically driven model for prospectively identifying adolescents at risk for suicide ideation and has implications for preventive interventions among at-risk youth. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  3. Phosphorus Determination by Derivative Activation Analysis: A Multifaceted Radiochemical Application.

    ERIC Educational Resources Information Center

    Kleppinger, E. W.; And Others

    1984-01-01

    Although determination of phosphorus is important in biology, physiology, and environmental science, traditional gravimetric and colorimetric methods are cumbersome and lack the requisite sensitivity. Therefore, a derivative activation analysis method is suggested. Background information, procedures, and results are provided. (JN)

  4. Separation and Analysis of Citral Isomers.

    ERIC Educational Resources Information Center

    Sacks, Jeff; And Others

    1983-01-01

    Provides background information, procedures, and results of an experiment designed to introduce undergraduates to the technique of steam distillation as a means of isolating thermally sensitive compounds. Chromatographic techniques (HPLC) and mass spectrometric analysis are used in the experiment, which requires three laboratory periods. (JN)

  5. Sensitivity and specificity of the Streptococcus pneumoniae urinary antigen test for unconcentrated urine from adult patients with pneumonia: a meta-analysis.

    PubMed

    Horita, Nobuyuki; Miyazawa, Naoki; Kojima, Ryota; Kimura, Naoko; Inoue, Miyo; Ishigatsubo, Yoshiaki; Kaneko, Takeshi

    2013-11-01

    Studies on the sensitivity and specificity of the BinaxNOW Streptococcus pneumoniae urinary antigen test (index test) show considerable variance of results. We included studies written in English that provided sufficient original data to evaluate the sensitivity and specificity of the index test using unconcentrated urine to identify S. pneumoniae infection in adults with pneumonia. Reference tests were conducted with at least one culture and/or smear. We estimated sensitivity and two specificities. One was the specificity evaluated using only patients with pneumonia of identified other aetiologies ('specificity (other)'). The other was the specificity evaluated based on both patients with pneumonia of unknown aetiology and those with pneumonia of other aetiologies ('specificity (unknown and other)'), using a fixed-effect model for meta-analysis. We found 10 articles involving 2315 patients. The analysis of 10 studies involving 399 patients yielded a pooled sensitivity of 0.75 (95% confidence interval: 0.71-0.79) without heterogeneity or publication bias. The analysis of six studies involving 258 patients yielded a pooled specificity (other) of 0.95 (95% confidence interval: 0.92-0.98), again without heterogeneity or publication bias. We attempted to conduct a meta-analysis with the 10 studies involving 1916 patients to estimate specificity (unknown and other), but it remained unclear due to moderate heterogeneity and possible publication bias. In our meta-analysis, sensitivity of the index test was moderate and specificity (other) was high; however, the specificity (unknown and other) remained unclear. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.

  6. Value for money? Array genomic hybridization for diagnostic testing for genetic causes of intellectual disability.

    PubMed

    Regier, Dean A; Friedman, Jan M; Marra, Carlo A

    2010-05-14

    Array genomic hybridization (AGH) provides a higher detection rate than does conventional cytogenetic testing when searching for chromosomal imbalance causing intellectual disability (ID). AGH is more costly than conventional cytogenetic testing, and it remains unclear whether AGH provides good value for money. Decision analytic modeling was used to evaluate the trade-off between costs, clinical effectiveness, and benefit of an AGH testing strategy compared to a conventional testing strategy. The trade-off between cost and effectiveness was expressed via the incremental cost-effectiveness ratio. Probabilistic sensitivity analysis was performed via Monte Carlo simulation. The baseline AGH testing strategy led to an average cost increase of $217 (95% CI $172-$261) per patient and an additional 8.2 diagnoses in every 100 tested (0.082; 95% CI 0.044-0.119). The mean incremental cost per additional diagnosis was $2646 (95% CI $1619-$5296). Probabilistic sensitivity analysis demonstrated that there was a 95% probability that AGH would be cost effective if decision makers were willing to pay $4550 for an additional diagnosis. Our model suggests that using AGH instead of conventional karyotyping for most ID patients provides good value for money. Deterministic sensitivity analysis found that employing AGH after first-line cytogenetic testing had proven uninformative did not provide good value for money when compared to using AGH as first-line testing. Copyright (c) 2010 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
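
    The headline ratio follows directly from the reported increments (our LaTeX rendering):

      \mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E} \;=\; \frac{\$217}{0.082\ \text{diagnoses}} \;\approx\; \$2646\ \text{per additional diagnosis}

    The probabilistic sensitivity analysis then re-evaluates this ratio over Monte Carlo draws of cost and effectiveness to produce the intervals and the cost-effectiveness probability quoted above.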

  7. Spatial Analysis for Monitoring Forest Health

    Treesearch

    Francis A. Roesch

    1994-01-01

    A plan for the spatial analysis for the sample design for the detection monitoring phase in the joint USDA Forest Service/EPA Forest Health Monitoring Program (FHM) in the United States is discussed. The spatial analysis procedure is intended to more quickly identify changes in forest health by providing increased sensitivity to localized changes. The procedure is...

  8. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.
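
    The Sobol' machinery used in studies like this is available off the shelf; a minimal sketch with the SALib package (assumed installed), using an invented three-input stand-in for the snow model:

      import numpy as np
      from SALib.sample import saltelli
      from SALib.analyze import sobol

      # Invented forcing-error inputs and ranges; a real study would use the
      # error types, distributions, and magnitudes described in the abstract.
      problem = {
          "num_vars": 3,
          "names": ["precip_bias", "temp_bias", "wind_noise"],
          "bounds": [[-0.5, 0.5], [-2.0, 2.0], [0.0, 1.0]],
      }

      X = saltelli.sample(problem, 1024)                        # Saltelli design
      Y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] ** 2    # toy response

      Si = sobol.analyze(problem, Y)
      print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order
      print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order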

  9. Optimal cure cycle design for autoclave processing of thick composite laminates: A feasibility study

    NASA Technical Reports Server (NTRS)

    Hou, Jean W.

    1985-01-01

    The thermal analysis and the calculation of the thermal sensitivity of a cure cycle in autoclave processing of thick composite laminates were studied. A finite element program for the thermal analysis and the calculation of design derivatives of the temperature distribution and the degree of cure was developed and verified. It was found that direct differentiation was the best approach for the thermal design sensitivity analysis. In addition, direct differentiation provided time histories of the design derivatives, which are of great value to cure cycle designers. The direct differentiation approach is to be used for further study, i.e., optimal cure cycle design.
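
    The direct-differentiation idea is to augment the state equation with the exact ODE its parameter derivative satisfies and integrate both together, which is what yields time histories of design derivatives. A generic first-order example (ours, not the cure-kinetics model itself):

      import numpy as np
      from scipy.integrate import solve_ivp

      # State equation dy/dt = -p * y. Direct differentiation of the right-hand
      # side gives the sensitivity s = dy/dp, which obeys ds/dt = -p * s - y.
      def rhs(t, z, p):
          y, s = z
          return [-p * y, -p * s - y]

      p = 0.5
      sol = solve_ivp(rhs, (0.0, 5.0), [1.0, 0.0], args=(p,))
      y_end, s_end = sol.y[:, -1]
      print("y(5) =", y_end, " dy/dp at t=5 =", s_end)
      # Analytic check: y = exp(-p*t), dy/dp = -t*exp(-p*t) = -5*exp(-2.5) ~ -0.41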

  10. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding of wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. A Survey of Shape Parameterization Techniques

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    1999-01-01

    This paper provides a survey of shape parameterization techniques for multidisciplinary optimization and highlights some emerging ideas. The survey focuses on the suitability of available techniques for complex configurations, with suitability criteria based on the efficiency, effectiveness, ease of implementation, and availability of analytical sensitivities for geometry and grids. The paper also contains a section on field grid regeneration, grid deformation, and sensitivity analysis techniques.

  12. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361

  13. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    NASA Astrophysics Data System (ADS)

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-12-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.

  14. Improved numerical solutions for chaotic-cancer-model

    NASA Astrophysics Data System (ADS)

    Yasir, Muhammad; Ahmad, Salman; Ahmed, Faizan; Aqeel, Muhammad; Akbar, Muhammad Zubair

    2017-01-01

    In the biological sciences, the dynamical system of the cancer model is well known for its sensitivity and chaoticity. The present work provides a detailed computational study of the cancer model that counterbalances its sensitive dependence on initial conditions and parameter values. The chaotic cancer model is discretized into a system of nonlinear equations that is solved using the well-known Successive-Over-Relaxation (SOR) method, with proven convergence. This technique makes it possible to solve large systems and provides a more accurate approximation, which is illustrated through tables, time-history maps and phase portraits with detailed analysis.
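
    For reference, the SOR iteration itself is short; a minimal sketch for a generic linear system A x = b (the discretized cancer model's nonlinear system would replace this invented example):

      import numpy as np

      def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
          """Successive-Over-Relaxation; assumes A has a nonzero diagonal."""
          x = np.zeros_like(b)
          for _ in range(max_iter):
              x_old = x.copy()
              for i in range(len(b)):
                  sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
                  x[i] = (1.0 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
              if np.linalg.norm(x - x_old, np.inf) < tol:
                  break
          return x

      A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
      b = np.array([15.0, 10.0, 10.0])
      print(sor(A, b))   # converges: A is symmetric and diagonally dominant

    For nonlinear discretizations, the same sweep is applied inside an outer linearization loop, and the relaxation factor omega (0 < omega < 2) tunes the convergence speed.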

  15. Accuracy of un-supervised versus provider-supervised self-administered HIV testing in Uganda: A randomized implementation trial.

    PubMed

    Asiimwe, Stephen; Oloya, James; Song, Xiao; Whalen, Christopher C

    2014-12-01

    Unsupervised HIV self-testing (HST) has the potential to increase knowledge of HIV status; however, its accuracy is unknown. To estimate the accuracy of unsupervised HST in field settings in Uganda, we performed a non-blinded, randomized controlled, non-inferiority trial of unsupervised compared with provider-supervised HST among selected high-HIV-risk fisherfolk (22.1% HIV prevalence) in three fishing villages in Uganda between July and September 2013. The study enrolled 246 participants and randomized them in a 1:1 ratio to unsupervised HST or provider-supervised HST. In an intent-to-treat analysis, the HST sensitivity was 90% in the unsupervised arm and 100% in the provider-supervised arm, yielding a difference of -10% (90% CI -21, 1%); non-inferiority was not shown. In a per-protocol analysis, the difference in sensitivity was -5.6% (90% CI -14.4, 3.3%) and did show non-inferiority. We conclude that unsupervised HST is feasible in rural Africa and may be non-inferior to provider-supervised HST.

  16. Analysis of Explosives in Soil Using Solid Phase Microextraction and Gas Chromatography: Environmental Analysis

    DTIC Science & Technology

    2006-01-01

    Howard T. Mayfield, Air Force Research... Abstract: Current methods for the analysis of explosives in soils utilize time-consuming sample preparation workups and extractions. The method detection... chromatography/mass spectrometry to provide a convenient and sensitive analysis method for explosives in soil. Keywords: Explosives, TNT, solid phase

  17. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679

  18. North Atlantic storm driving of extreme wave heights in the North Sea

    NASA Astrophysics Data System (ADS)

    Bell, R. J.; Gray, S. L.; Jones, O. P.

    2017-04-01

    The relationship between storms and extreme ocean waves in the North Sea is assessed using a long-period wave data set and storms identified in the Interim ECMWF Re-Analysis (ERA-Interim). An ensemble sensitivity analysis is used to provide information on the spatial and temporal forcing from mean sea-level pressure and surface wind associated with extreme ocean wave height responses. Extreme ocean waves in the central North Sea arise due to intense extratropical cyclone winds from either the cold conveyor belt (northerly-wind events) or the warm conveyor belt (southerly-wind events). The largest wave heights are associated with northerly-wind events which tend to have stronger wind speeds and occur as the cold conveyor belt wraps rearward round the cyclone to the cold side of the warm front. The northerly-wind events provide a larger fetch to the central North Sea to aid wave growth. Southerly-wind events are associated with the warm conveyor belts of intense extratropical cyclones that develop in the left upper tropospheric jet exit region. Ensemble sensitivity analysis can provide early warning of extreme wave events by demonstrating a relationship between wave height and high pressure to the west of the British Isles for northerly-wind events 48 h prior. Southerly-wind extreme events demonstrate sensitivity to low pressure to the west of the British Isles 36 h prior.

  19. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the ascent test (OFT) configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation modeling of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  20. Use of SUSA in Uncertainty and Sensitivity Analysis for INL VHTR Coupled Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerhard Strydom

    2010-06-01

    The need for a defendable and systematic Uncertainty and Sensitivity approach that conforms to the Code Scaling, Applicability, and Uncertainty (CSAU) process, and that could be used for a wide variety of software codes, was defined in 2008. The GRS (Gesellschaft für Anlagen- und Reaktorsicherheit) company of Germany has developed one type of CSAU approach that is particularly well suited for legacy coupled core analysis codes, and a trial version of their commercial software product SUSA (Software for Uncertainty and Sensitivity Analyses) was acquired on May 12, 2010. This interim milestone report provides an overview of the current status of the implementation and testing of SUSA at the INL VHTR Project Office.

  1. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  2. Sensitivity of the reference evapotranspiration to key climatic variables during the growing season in the Ejina oasis, northwest China.

    PubMed

    Hou, Lan-Gong; Zou, Song-Bing; Xiao, Hong-Lang; Yang, Yong-Gang

    2013-01-01

    The standardized FAO56 Penman-Monteith model, which has been found to be the most reasonable method in both humid and arid climatic conditions, provides reference evapotranspiration (ETo) estimates for planning and efficient use of agricultural water resources. Sensitivity analysis is important in understanding the relative importance of climatic variables to the variation of reference evapotranspiration. In this study, a non-dimensional relative sensitivity coefficient was employed to predict responses of ETo to perturbations of four climatic variables in the Ejina oasis, northwest China. A 20-year historical dataset of daily air temperature, wind speed, relative humidity and daily sunshine duration in the Ejina oasis was used in the analysis. Results show that daily sensitivity coefficients exhibited large fluctuations during the growing season, and that shortwave radiation was in general the most sensitive variable for the Ejina oasis, followed by air temperature, wind speed and relative humidity. According to this study, the response of ETo to perturbations of air temperature, wind speed, relative humidity and shortwave radiation can be preferably predicted by their sensitivity coefficients.
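
    The coefficient referred to is conventionally defined as follows (our LaTeX rendering; V denotes a climatic variable such as air temperature or wind speed):

      S_{V} \;=\; \lim_{\Delta V \to 0} \frac{\Delta ET_o / ET_o}{\Delta V / V} \;=\; \frac{\partial ET_o}{\partial V} \cdot \frac{V}{ET_o}

    A positive S_V means ETo rises with V, and comparing |S_V| across variables gives the ranking reported (shortwave radiation first, then air temperature, wind speed, and relative humidity).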

  3. Adjoint-based sensitivity analysis of low-order thermoacoustic networks using a wave-based approach

    NASA Astrophysics Data System (ADS)

    Aguilar, José G.; Magri, Luca; Juniper, Matthew P.

    2017-07-01

    Strict pollutant emission regulations are pushing gas turbine manufacturers to develop devices that operate in lean conditions, with the downside that combustion instabilities are more likely to occur. Methods to predict and control unstable modes inside combustion chambers have been developed in the last decades but, in some cases, they are computationally expensive. Sensitivity analysis aided by adjoint methods provides valuable sensitivity information at a low computational cost. This paper introduces adjoint methods and their application in wave-based low order network models, which are used as industrial tools, to predict and control thermoacoustic oscillations. Two thermoacoustic models of interest are analyzed. First, in the zero Mach number limit, a nonlinear eigenvalue problem is derived, and continuous and discrete adjoint methods are used to obtain the sensitivities of the system to small modifications. Sensitivities to base-state modification and feedback devices are presented. Second, a more general case with non-zero Mach number, a moving flame front and choked outlet, is presented. The influence of the entropy waves on the computed sensitivities is shown.

  4. Modeling screening, prevention, and delaying of Alzheimer's disease: an early-stage decision analytic model.

    PubMed

    Furiak, Nicolas M; Klein, Robert W; Kahle-Wrobleski, Kristin; Siemers, Eric R; Sarpong, Eric; Klein, Timothy M

    2010-04-30

    Alzheimer's Disease (AD) affects a growing proportion of the population each year. Novel therapies on the horizon may slow the progress of AD symptoms and avoid cases altogether. Initiating treatment for the underlying pathology of AD would ideally be based on biomarker screening tools identifying pre-symptomatic individuals. Early-stage modeling provides estimates of potential outcomes and informs policy development. A time-to-event (TTE) simulation provided estimates of screening asymptomatic patients in the general population age ≥ 55 and treatment impact on the number of patients reaching AD. Patients were followed from AD screen until all-cause death. Baseline sensitivity and specificity were 0.87 and 0.78, with treatment on positive screen. Treatment slowed progression by 50%. Events were scheduled using literature-based age-dependent incidences of AD and death. The base case results indicated increased AD free years (AD-FYs) through delays in onset and a reduction of 20 AD cases per 1000 screened individuals. Patients completely avoiding AD accounted for 61% of the incremental AD-FYs gained. Total years of treatment per 1000 screened patients was 2,611. The number-needed-to-screen was 51 and the number-needed-to-treat was 12 to avoid one case of AD. One-way sensitivity analysis indicated that duration of screening sensitivity and rescreen interval impact AD-FYs the most. A two-way sensitivity analysis found that for a test with an extended duration of sensitivity (15 years) the number of AD cases avoided was 6,000-7,000 cases for a test with higher sensitivity and specificity (0.90, 0.90). This study yielded valuable parameter range estimates at an early stage in the study of screening for AD. Analysis identified duration of screening sensitivity as a key variable that may be unavailable from clinical trials.

  5. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  6. Sensitivity Analysis to Turbulent Combustion Models for Combustor-Turbine Interactions

    NASA Astrophysics Data System (ADS)

    Miki, Kenji; Moder, Jeff; Liou, Meng-Sing

    2017-11-01

    The recently updated Open National Combustion Code (Open NCC), equipped with large-eddy simulation (LES), is applied to model the flow field inside the Energy Efficient Engine (EEE), in conjunction with a sensitivity analysis to turbulent combustion models. In this study, we consider three different turbulence-combustion interaction models, the Eddy-Breakup model (EBU), the Linear-Eddy Model (LEM) and the Probability Density Function (PDF) model, as well as the laminar chemistry model. A comprehensive comparison of the flow field and the flame structure will be provided. One of our main interests is to understand how different models predict the thermal variation on the surface of the first-stage vane. Considering that these models are often used in the combustor/turbine communities, this study should provide some guidelines on numerical modeling of combustor-turbine interactions.

  7. Sensitivity of GC-EI/MS, GC-EI/MS/MS, LC-ESI/MS/MS, LC-Ag(+) CIS/MS/MS, and GC-ESI/MS/MS for analysis of anabolic steroids in doping control.

    PubMed

    Cha, Eunju; Kim, Sohee; Kim, Ho Jun; Lee, Kang Mi; Kim, Ki Hun; Kwon, Oh-Seung; Lee, Jaeick

    2015-01-01

    This study compared the sensitivity of various separation and ionization methods, including gas chromatography with an electron ionization source (GC-EI), liquid chromatography with an electrospray ionization source (LC-ESI), and liquid chromatography with a silver ion coordination ion spray source (LC-Ag(+) CIS), coupled to a mass spectrometer (MS) for steroid analysis. Chromatographic conditions, mass spectrometric transitions, and ion source parameters were optimized. The majority of steroids in GC-EI/MS/MS and LC-Ag(+) CIS/MS/MS analysis showed higher sensitivities than those obtained with other analytical methods. The limits of detection (LODs) of 65 steroids by GC-EI/MS/MS, 68 steroids by LC-Ag(+) CIS/MS/MS, 56 steroids by GC-EI/MS, 54 steroids by LC-ESI/MS/MS, and 27 steroids by GC-ESI/MS/MS were below cut-off value of 2.0 ng/mL. LODs of steroids that formed protonated ions in LC-ESI/MS/MS analysis were all lower than the cut-off value. Several steroids such as unconjugated C3-hydroxyl with C17-hydroxyl structure showed higher sensitivities in GC-EI/MS/MS analysis relative to those obtained using the LC-based methods. The steroids containing 4, 9, 11-triene structures showed relatively poor sensitivities in GC-EI/MS and GC-ESI/MS/MS analysis. The results of this study provide information that may be useful for selecting suitable analytical methods for confirmatory analysis of steroids. Copyright © 2015 John Wiley & Sons, Ltd.

  8. A fluorescent graphitic carbon nitride nanosheet biosensor for highly sensitive, label-free detection of alkaline phosphatase.

    PubMed

    Xiang, Mei-Hao; Liu, Jin-Wen; Li, Na; Tang, Hao; Yu, Ru-Qin; Jiang, Jian-Hui

    2016-02-28

    Graphitic C3N4 (g-C3N4) nanosheets provide an attractive option for bioprobes and bioimaging applications. Utilizing highly fluorescent and water-dispersible ultrathin g-C3N4 nanosheets, a highly sensitive, selective and label-free biosensor has been developed for ALP detection for the first time. The developed approach utilizes a natural substrate of ALP in biological systems and thus affords very high catalytic efficiency. This novel biosensor is demonstrated to enable quantitative analysis of ALP in a wide range from 0.1 to 1000 U L(-1) with a low detection limit of 0.08 U L(-1), which is among the most sensitive assays for ALP. It is expected that the developed method may provide a low-cost, convenient, rapid and highly sensitive platform for ALP-based clinical diagnostics and biomedical applications.

  10. "A Bayesian sensitivity analysis to evaluate the impact of unmeasured confounding with external data: a real world comparative effectiveness study in osteoporosis".

    PubMed

    Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W

    2016-09-01

    Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential of unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting a sensitivity analysis to investigate the impact of unmeasured confounding in observational studies. In a real world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Atmospheric Environment Vulnerability Cause Analysis for the Beijing-Tianjin-Hebei Metropolitan Region

    PubMed Central

    Zhang, Yang; Shen, Jing; Li, Yu

    2018-01-01

    Assessing and quantifying atmospheric vulnerability is a key issue in urban environmental protection and management. This paper integrated the Analytical Hierarchy Process (AHP), fuzzy synthesis evaluation and Geographic Information System (GIS) spatial analysis into an Exposure-Sensitivity-Adaptive capacity (ESA) framework to quantitatively assess atmospheric environment vulnerability in the Beijing-Tianjin-Hebei (BTH) region with spatial and temporal comparisons. Elaborating the relationships between atmospheric environment vulnerability and the indices of exposure, sensitivity, and adaptive capacity supports analysis of the atmospheric environment vulnerability. Our findings indicate that the atmospheric environment vulnerability of 13 cities in the BTH region exhibits obvious spatial heterogeneity, which is caused by regional diversity in the exposure, sensitivity, and adaptive capacity indices. The results of the atmospheric environment vulnerability assessment and the cause analysis can provide guidance for picking out key control regions and recognizing vulnerable indicators for study sites. The framework developed in this paper can also be replicated at different spatial and temporal scales using context-specific datasets to support environmental management. PMID:29342852
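
    The AHP step turns pairwise comparisons into criterion weights via the principal eigenvector; a minimal sketch (the 3x3 comparison matrix on the Saaty scale is invented, not taken from the paper):

      import numpy as np

      # Invented pairwise comparisons among exposure, sensitivity, and adaptive
      # capacity; A[i, j] is the judged importance of criterion i relative to j.
      A = np.array([[1.0,   3.0,   5.0],
                    [1/3.0, 1.0,   2.0],
                    [1/5.0, 1/2.0, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                    # AHP priority weights, summing to 1
      print("weights:", np.round(w, 3))

      # Saaty consistency check: CI / RI, with RI = 0.58 for a 3x3 matrix.
      ci = (eigvals.real[k] - len(A)) / (len(A) - 1)
      print("consistency ratio:", round(ci / 0.58, 3))   # < 0.1 is acceptable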

  12. Atmospheric Environment Vulnerability Cause Analysis for the Beijing-Tianjin-Hebei Metropolitan Region.

    PubMed

    Zhang, Yang; Shen, Jing; Li, Yu

    2018-01-13

    Assessing and quantifying atmospheric vulnerability is a key issue in urban environmental protection and management. This paper integrated the Analytic Hierarchy Process (AHP), fuzzy synthesis evaluation and Geographic Information System (GIS) spatial analysis into an Exposure-Sensitivity-Adaptive capacity (ESA) framework to quantitatively assess atmospheric environment vulnerability in the Beijing-Tianjin-Hebei (BTH) region with spatial and temporal comparisons. Elaborating the relationships between atmospheric environment vulnerability and the indices of exposure, sensitivity, and adaptive capacity enables analysis of the causes of atmospheric environment vulnerability. Our findings indicate that the atmospheric environment vulnerability of 13 cities in the BTH region exhibits obvious spatial heterogeneity, which is caused by regional diversity in exposure, sensitivity, and adaptive capacity indices. The results of the atmospheric environment vulnerability assessment and the cause analysis can provide guidance for identifying key control regions and vulnerable indicators at the study sites. The framework developed in this paper can also be replicated at different spatial and temporal scales using context-specific datasets to support environmental management.
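
    For readers unfamiliar with the AHP step, deriving indicator weights typically reduces to a principal-eigenvector computation on a pairwise-comparison matrix, followed by a consistency check. The sketch below uses a made-up 3x3 comparison of the exposure, sensitivity, and adaptive-capacity indices; the matrix entries are illustrative assumptions, not values from the paper.

        import numpy as np

        # Hypothetical pairwise comparisons (Saaty 1-9 scale) for the ESA indices.
        A = np.array([[1.0, 3.0, 2.0],
                      [1/3, 1.0, 1/2],
                      [1/2, 2.0, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        w = np.abs(vecs[:, k].real)
        w /= w.sum()  # AHP weights for exposure, sensitivity, adaptive capacity

        # Consistency ratio CI/RI, with random index RI = 0.58 for a 3x3 matrix.
        m = A.shape[0]
        ci = (vals.real[k] - m) / (m - 1)
        cr = ci / 0.58
        print("weights:", w.round(3), "consistency ratio:", round(cr, 3))  # CR < 0.1 is acceptable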

  13. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
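
    A toy version of the stability check that the framework formalizes might look as follows. The two-action lottery, its payoffs, and the Dirichlet perturbation scheme are all invented for illustration and are far cruder than the paper's pessimistic/optimistic/mode-favoring perturbations.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy one-stage decision: two actions, each a lottery over three outcomes.
        payoffs = {"A": np.array([100.0, 40.0, -20.0]), "B": np.array([60.0, 50.0, 30.0])}
        probs = {"A": np.array([0.3, 0.5, 0.2]), "B": np.array([0.2, 0.6, 0.2])}

        def best(p):
            ev = {a: p[a] @ payoffs[a] for a in payoffs}  # expected value per action
            return max(ev, key=ev.get)

        base = best(probs)

        # Stability: resample probabilities from Dirichlets centered on the stated
        # values and record how often the baseline action remains optimal.
        trials, stable = 10_000, 0
        for _ in range(trials):
            perturbed = {a: rng.dirichlet(50 * probs[a]) for a in probs}
            stable += best(perturbed) == base
        print(f"baseline action {base} stays optimal in {stable / trials:.1%} of perturbations")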

  14. SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perfetti, Christopher M; Rearden, Bradley T

    2015-01-01

    The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.

  15. Ultrasonography for endoleak detection after endoluminal abdominal aortic aneurysm repair.

    PubMed

    Abraha, Iosief; Luchetta, Maria Laura; De Florio, Rita; Cozzolino, Francesco; Casazza, Giovanni; Duca, Piergiorgio; Parente, Basso; Orso, Massimiliano; Germani, Antonella; Eusebi, Paolo; Montedori, Alessandro

    2017-06-09

    People with abdominal aortic aneurysm who receive endovascular aneurysm repair (EVAR) need lifetime surveillance to detect potential endoleaks. Endoleak is defined as persistent blood flow within the aneurysm sac following EVAR. Computed tomography (CT) angiography is considered the reference standard for endoleak surveillance. Colour duplex ultrasound (CDUS) and contrast-enhanced CDUS (CE-CDUS) are less invasive but considered less accurate than CT. Our objective was to determine the diagnostic accuracy of CDUS and CE-CDUS in terms of sensitivity and specificity for endoleak detection after endoluminal abdominal aortic aneurysm repair. We searched MEDLINE, Embase, LILACS, ISI Conference Proceedings, Zetoc, and trial registries in June 2016 without language restrictions and without use of filters to maximize sensitivity. We included any cross-sectional diagnostic study evaluating participants who received EVAR and were assessed at regular intervals by both ultrasound (with or without contrast) and CT scan. Two pairs of review authors independently extracted data and assessed the quality of included studies using the QUADAS 1 tool. A third review author resolved discrepancies. The unit of analysis was the number of participants for the primary analysis and the number of scans performed for the secondary analysis. We carried out a meta-analysis to estimate sensitivity and specificity of CDUS or CE-CDUS using a bivariate model. We analysed each index test separately. As potential sources of heterogeneity, we explored year of publication, characteristics of included participants (age and gender), direction of the study (retrospective, prospective), country of origin, number of CDUS operators, and ultrasound manufacturer. We identified 42 primary studies with 4220 participants. Twenty studies provided accuracy data based on the number of individual participants (seven of which provided data with and without the use of contrast). Sixteen of these studies evaluated the accuracy of CDUS. These studies were generally of moderate to low quality: only three studies fulfilled all the QUADAS items; in six (40%) of the studies, the delay between the tests was unclear or longer than four weeks; in eight (50%), the blinding of either the index test or the reference standard was not clearly reported or was not performed; and in two studies (12%), the interpretation of the reference standard was not clearly reported. Eleven studies evaluated the accuracy of CE-CDUS. These studies were of better quality than the CDUS studies: five (45%) fulfilled all the QUADAS items; four (36%) did not clearly report the blinding of the interpretation of the reference standard; and two (18%) did not clearly report the delay between the two tests. Based on the bivariate model, the summary estimates for CDUS were 0.82 (95% confidence interval (CI) 0.66 to 0.91) for sensitivity and 0.93 (95% CI 0.87 to 0.96) for specificity, whereas for CE-CDUS the estimates were 0.94 (95% CI 0.85 to 0.98) for sensitivity and 0.95 (95% CI 0.90 to 0.98) for specificity. Regression analysis showed that CE-CDUS was superior to CDUS in terms of sensitivity (LR Chi2 = 5.08, 1 degree of freedom (df); P = 0.0242 for model improvement). Seven studies provided estimates before and after administration of contrast. Sensitivity before contrast was 0.67 (95% CI 0.47 to 0.83) and after contrast was 0.97 (95% CI 0.92 to 0.99). The improvement in sensitivity with contrast use was statistically significant (LR Chi2 = 13.47, 1 df; P = 0.0002 for model improvement). Regression testing showed evidence of statistically significant bias related to year of publication and study quality within the participant-based CDUS studies: sensitivity estimates were higher in studies published before 2006 than in studies published in 2006 or later (P < 0.001), and studies judged as low/unclear quality provided higher sensitivity estimates. When regression testing was applied to the participant-based CE-CDUS studies, none of the items (direction of the study design, quality, and age) was identified as a source of heterogeneity. Twenty-two studies provided accuracy data based on the number of scans performed (of which four provided data with and without the use of contrast). Analysis of the studies that provided scan-based data showed similar results: summary estimates for CDUS (18 studies) were 0.72 (95% CI 0.55 to 0.85) for sensitivity and 0.95 (95% CI 0.90 to 0.96) for specificity, whereas summary estimates for CE-CDUS (eight studies) were 0.91 (95% CI 0.68 to 0.98) for sensitivity and 0.89 (95% CI 0.71 to 0.96) for specificity. This review demonstrates that both ultrasound modalities (with or without contrast) showed high specificity. For ruling in endoleaks, CE-CDUS appears superior to CDUS. In an endoleak surveillance programme, CE-CDUS can be introduced as a routine diagnostic modality, followed by CT scan only when the ultrasound is positive, to establish the type of endoleak and guide subsequent therapeutic management.

  16. 7 CFR 4279.202 - Compliance with §§ 4279.1 through 4279.84.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Energy production; and (F) Financial and sensitivity review using a banking industry software analysis... availability of the resources necessary to provide those products and services; and pro forma financial... membership is composed of farm cooperatives. Feasibility study. An analysis by an independent qualified...

  17. 7 CFR 4279.202 - Compliance with §§ 4279.1 through 4279.84.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ...) Energy production; and (F) Financial and sensitivity review using a banking industry software analysis... availability of the resources necessary to provide those products and services; and pro forma financial... membership is composed of farm cooperatives. Feasibility study. An analysis by an independent qualified...

  18. 7 CFR 4279.202 - Compliance with §§ 4279.1 through 4279.84.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...) Energy production; and (F) Financial and sensitivity review using a banking industry software analysis... availability of the resources necessary to provide those products and services; and pro forma financial... membership is composed of farm cooperatives. Feasibility study. An analysis by an independent qualified...

  19. To what degree does the missing-data technique influence the estimated growth in learning strategies over time? A tutorial example of sensitivity analysis for longitudinal data.

    PubMed

    Coertjens, Liesje; Donche, Vincent; De Maeyer, Sven; Vanthournout, Gert; Van Petegem, Peter

    2017-01-01

    Longitudinal data are almost always burdened with missing data. However, in educational and psychological research, there is a large discrepancy between methodological suggestions and research practice. The former suggests applying sensitivity analysis in order to assess the robustness of the results to varying assumptions regarding the mechanism generating the missing data. In research practice, however, participants with missing data are usually discarded by relying on listwise deletion. To help bridge the gap between methodological recommendations and applied research in the educational and psychological domain, this study provides a tutorial example of sensitivity analysis for latent growth analysis. The example data concern students' changes in learning strategies during higher education. One cohort of students in a Belgian university college was asked to complete the Inventory of Learning Styles-Short Version in three measurement waves. A substantial number of students did not participate on each occasion. Change over time in student learning strategies was assessed using eight missing-data techniques, which assume different mechanisms for missingness. The results indicated that, for some learning strategy subscales, growth estimates differed between the models. Guidelines for reporting the results of a sensitivity analysis are synthesised and applied to the results from the tutorial example.
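
    As a flavor of what such a sensitivity analysis compares, the sketch below contrasts a simple summary statistic under listwise deletion, mean imputation, and model-based iterative imputation. The three-wave scores and the dropout rule are simulated here, and the tutorial itself fits latent growth models rather than the plain means shown.

        import numpy as np
        import pandas as pd
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(2)
        n = 300

        # Hypothetical three-wave learning-strategy scores with modest growth.
        w1 = rng.normal(3.0, 0.5, n)
        w2 = w1 + rng.normal(0.2, 0.3, n)
        w3 = w2 + rng.normal(0.2, 0.3, n)
        df = pd.DataFrame({"w1": w1, "w2": w2, "w3": w3})

        # Missing-at-random dropout: low scorers tend to skip later waves.
        df.loc[df["w1"] < 2.7, "w2"] = np.nan
        df.loc[df["w2"].isna() | (df["w2"] < 2.9), "w3"] = np.nan

        listwise = df.dropna().mean()           # what listwise deletion estimates
        mean_imp = df.fillna(df.mean()).mean()  # single mean imputation
        iter_imp = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df),
                                columns=df.columns).mean()

        print(pd.DataFrame({"listwise": listwise, "mean": mean_imp, "iterative": iter_imp}))

    Because dropout depends on earlier scores, listwise deletion computes its means on a selective subsample, so the three columns diverge; this is exactly the kind of discrepancy the tutorial teaches readers to report.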

  20. Stability and sensitivity of ABR flow control protocols

    NASA Astrophysics Data System (ADS)

    Tsai, Wie K.; Kim, Yuseok; Chiussi, Fabio; Toh, Chai-Keong

    1998-10-01

    This tutorial paper surveys the important issues in stability and sensitivity analysis of ABR flow control in ATM networks. The stability and sensitivity issues are formulated in a systematic framework. Four main causes of instability in ABR flow control are identified: unstable control laws, temporal variations of available bandwidth with delayed feedback control, misbehaving components, and interactions between higher-layer protocols and ABR flow control. Popular rate-based ABR flow control protocols are evaluated. Stability and sensitivity are shown to be fundamental issues when the network has dynamically varying bandwidth. Simulation results confirming the theoretical studies are provided. Open research problems are discussed.

  1. Sensitivity Analysis of Dispersion Model Results in the NEXUS Health Study Due to Uncertainties in Traffic-Related Emissions Inputs

    EPA Science Inventory

    Dispersion modeling tools have traditionally provided critical information for air quality management decisions, but have been used recently to provide exposure estimates to support health studies. However, these models can be challenging to implement, particularly in near-road s...

  2. Reinforcement Sensitivity and Social Anxiety in Combat Veterans

    PubMed Central

    Kimbrel, Nathan A.; Meyer, Eric C.; DeBeer, Bryann B.; Mitchell, John T.; Kimbrel, Azure D.; Nelson-Gray, Rosemery O.; Morissette, Sandra B.

    2017-01-01

    Objective: The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Method: Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. Results: As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. Conclusion: The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity. PMID:28966424

  3. Reinforcement Sensitivity and Social Anxiety in Combat Veterans.

    PubMed

    Kimbrel, Nathan A; Meyer, Eric C; DeBeer, Bryann B; Mitchell, John T; Kimbrel, Azure D; Nelson-Gray, Rosemery O; Morissette, Sandra B

    2016-08-01

    The present study tested the hypothesis that low behavioral approach system (BAS) sensitivity is associated with social anxiety in combat veterans. Self-report measures of reinforcement sensitivity, combat exposure, social interaction anxiety, and social observation anxiety were administered to 197 Iraq/Afghanistan combat veterans. As expected, combat exposure, behavioral inhibition system (BIS) sensitivity, and fight-flight-freeze system (FFFS) sensitivity were positively associated with both social interaction anxiety and social observation anxiety. In contrast, BAS sensitivity was negatively associated with social interaction anxiety only. An analysis of the BAS subscales revealed that the Reward Responsiveness subscale was the only BAS subscale associated with social interaction anxiety. BAS-Reward Responsiveness was also associated with social observation anxiety. The findings from the present research provide further evidence that low BAS sensitivity may be associated with social anxiety over and above the effects of BIS and FFFS sensitivity.

  4. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the wide range of sensitivity analysis techniques in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function, so it combines the global approximation ability of the polynomial kernel with the local approximation ability of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
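
    For a baseline to compare such meta-model estimates against, Sobol indices can also be computed by brute-force sampling, for example with the SALib library (a separate, widely used tool, not the authors' SVR post-processing). The Ishigami test function below has known analytic first-order indices of roughly 0.31, 0.44, and 0.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["x1", "x2", "x3"],
            "bounds": [[-np.pi, np.pi]] * 3,
        }

        X = saltelli.sample(problem, 1024)  # Saltelli design, N*(2D+2) rows
        Y = (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
             + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

        res = sobol.analyze(problem, Y)
        print("first-order S1:", res["S1"].round(3))
        print("total ST:", res["ST"].round(3))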

  5. Simultaneous Solid Phase Extraction and Derivatization of Aliphatic Primary Amines Prior to Separation and UV-Absorbance Detection

    PubMed Central

    Felhofer, Jessica L.; Scida, Karen; Penick, Mark; Willis, Peter A.; Garcia, Carlos D.

    2013-01-01

    To overcome the problem of poor sensitivity of capillary electrophoresis-UV absorbance for the detection of aliphatic amines, a solid phase extraction and derivatization scheme was developed. This work demonstrates successful coupling of amines to a chromophore immobilized on a solid phase and subsequent cleavage and analysis. Although the analysis of many types of amines is relevant for myriad applications, this paper focuses on the derivatization and separation of amines with environmental relevance. This work aims to provide the foundations for future developments of an integrated sample preparation microreactor capable of performing simultaneous derivatization, preconcentration, and sample cleanup for sensitive analysis of primary amines. PMID:24054648

  6. Sampling and analyte enrichment strategies for ambient mass spectrometry.

    PubMed

    Li, Xianjiang; Ma, Wen; Li, Hongmei; Ai, Wanpeng; Bai, Yu; Liu, Huwei

    2018-01-01

    Ambient mass spectrometry provides great convenience for fast screening, and has shown promising potential in analytical chemistry. However, its relatively low sensitivity seriously restricts its practical utility in trace compound analysis. In this review, we summarize the sampling and analyte enrichment strategies coupled with nine representative modes of ambient mass spectrometry (desorption electrospray ionization, paper spray ionization, wooden-tip spray ionization, probe electrospray ionization, coated blade spray ionization, direct analysis in real time, desorption corona beam ionization, dielectric barrier discharge ionization, and atmospheric-pressure solids analysis probe) that have dramatically increased the detection sensitivity. We believe that these advances will promote routine use of ambient mass spectrometry. Graphical abstract: Scheme of sampling strategies for ambient mass spectrometry.

  7. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.
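
    Since the presentation positions methods such as Morris and Sobol as limiting cases of the proposed framework, a useful reference point is standard Morris elementary-effects screening. The sketch below runs it with the SALib library on an invented three-parameter toy function; the function and parameter names are assumptions for illustration.

        import numpy as np
        from SALib.sample.morris import sample
        from SALib.analyze import morris

        problem = {
            "num_vars": 3,
            "names": ["k1", "k2", "k3"],
            "bounds": [[0.0, 1.0]] * 3,
        }

        X = sample(problem, N=200, num_levels=4)  # Morris trajectories
        Y = X[:, 0] + 2 * X[:, 1] ** 2 + 0.1 * np.sin(X[:, 2])

        res = morris.analyze(problem, X, Y, num_levels=4)
        for name, mu, sigma in zip(res["names"], res["mu_star"], res["sigma"]):
            print(f"{name}: mu* = {mu:.3f} (importance), sigma = {sigma:.3f} (interactions)")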

  8. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290

  9. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  10. Elemental Analysis in Biological Matrices Using ICP-MS.

    PubMed

    Hansen, Matthew N; Clogston, Jeffrey D

    2018-01-01

    The increasing exploration of metallic nanoparticles for use as cancer therapeutic agents necessitates a sensitive technique to track the clearance and distribution of the material once introduced into a living system. Inductively coupled plasma mass spectrometry (ICP-MS) provides a sensitive and selective tool for tracking the distribution of metal components from these nanotherapeutics. This chapter presents a standardized method for processing biological matrices, ensuring complete homogenization of tissues, and outlines the preparation of appropriate standards and controls. The method described herein utilized gold nanoparticle-treated samples; however, the method can easily be applied to the analysis of other metals.

  11. Sobol Sensitivity Analysis: A Tool to Guide the Development and Evaluation of Systems Pharmacology Models

    PubMed Central

    Trame, MN; Lesko, LJ

    2015-01-01

    A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289

  12. Sensitivity of surface meteorological analyses to observation networks

    NASA Astrophysics Data System (ADS)

    Tyndall, Daniel Paul

    A computationally efficient variational analysis system for two-dimensional meteorological fields is developed and described. This analysis approach is most efficient when the number of analysis grid points is much larger than the number of available observations, such as for large domain mesoscale analyses. The analysis system is developed using MATLAB software and can take advantage of multiple processors or processor cores. A version of the analysis system has been exported as a platform independent application (i.e., can be run on Windows, Linux, or Macintosh OS X desktop computers without a MATLAB license) with input/output operations handled by commonly available internet software combined with data archives at the University of Utah. The impact of observation networks on the meteorological analyses is assessed by utilizing a percentile ranking of individual observation sensitivity and impact, which is computed by using the adjoint of the variational surface assimilation system. This methodology is demonstrated using a case study of the analysis from 1400 UTC 27 October 2010 over the entire contiguous United States domain. The sensitivity of this approach to the dependence of the background error covariance on observation density is examined. Observation sensitivity and impact provide insight on the influence of observations from heterogeneous observing networks as well as serve as objective metrics for quality control procedures that may help to identify stations with significant siting, reporting, or representativeness issues.
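
    The observation-sensitivity idea in this record can be illustrated compactly: in a linear-Gaussian analysis, the sensitivity of the analysis to the observations is the Kalman-gain matrix, so each observation's influence can be read off its column. The 5-point grid, covariances, and observed values below are invented for illustration.

        import numpy as np

        # Tiny 1-D variational analysis: 5 grid points, 2 observations.
        n = 5
        xb = np.zeros(n)  # background state
        idx = np.arange(n)
        B = 0.5 * np.exp(-np.abs(np.subtract.outer(idx, idx)) / 2.0)  # background covariance
        H = np.array([[0, 1, 0, 0, 0],
                      [0, 0, 0, 1, 0]], dtype=float)  # observe grid points 1 and 3
        R = 0.2 * np.eye(2)  # observation-error covariance
        y = np.array([1.0, -0.5])

        # Analysis via the gain K = B H^T (H B H^T + R)^-1, equivalent to 2D-Var.
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
        xa = xb + K @ (y - H @ xb)

        # Column sums of |K| give a crude impact ranking of the two observations.
        impact = np.abs(K).sum(axis=0)
        print("analysis:", xa.round(3))
        print("observation impact:", impact.round(3))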

  13. Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool

    NASA Astrophysics Data System (ADS)

    Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.

    2018-06-01

    Air quality has significantly improved in Europe over the past few decades. Nonetheless, we still find high measured concentrations, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model, and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvements linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify the uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.

  14. Recent Results from the 2015 flight of Spider

    NASA Astrophysics Data System (ADS)

    Jones, William C.

    2016-06-01

    Spider is a balloon-borne mm-wave polarimeter designed to provide high fidelity measurements of the large scale polarization of the microwave sky. Spider flew a 17 day mission in January 2015, mapping roughly 10% of the full sky (4500 square degrees) in the southern Galactic hemisphere at each of 94 and 150 GHz. Spider achieved an instrumental sensitivity of 4 μK_CMB √s, providing maps that exceed the sensitivity of the Planck data. We discuss these data, the current status of our science analysis, and our understanding of the Galactic foreground emission in this high latitude region.

  15. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  16. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.

  17. Population and High-Risk Group Screening for Glaucoma: The Los Angeles Latino Eye Study

    PubMed Central

    Francis, Brian A.; Vigen, Cheryl; Lai, Mei-Ying; Winarko, Jonathan; Nguyen, Betsy; Azen, Stanley

    2011-01-01

    Purpose. To evaluate the ability of various screening tests, both individually and in combination, to detect glaucoma in the general Latino population and high-risk subgroups. Methods. The Los Angeles Latino Eye Study is a population-based study of eye disease in Latinos 40 years of age and older. Participants (n = 6082) underwent Humphrey visual field testing (HVF), frequency doubling technology (FDT) perimetry, measurement of intraocular pressure (IOP) and central corneal thickness (CCT), and independent assessment of optic nerve vertical cup disc (C/D) ratio. Screening parameters were evaluated for three definitions of glaucoma based on optic disc, visual field, and a combination of both. Analyses were also conducted for high-risk subgroups (family history of glaucoma, diabetes mellitus, and age ≥65 years). Sensitivity, specificity, and receiver operating characteristic curves were calculated for those continuous parameters independently associated with glaucoma. Classification and regression tree (CART) analysis was used to develop a multivariate algorithm for glaucoma screening. Results. Preset cutoffs for screening parameters yielded a generally poor balance of sensitivity and specificity (sensitivity/specificity for IOP ≥21 mm Hg and C/D ≥0.8 was 0.24/0.97 and 0.60/0.98, respectively). Assessment of high-risk subgroups did not improve the sensitivity/specificity of individual screening parameters. A CART analysis using multiple screening parameters—C/D, HVF, and IOP—substantially improved the balance of sensitivity and specificity (sensitivity/specificity 0.92/0.92). Conclusions. No single screening parameter is useful for glaucoma screening. However, a combination of vertical C/D ratio, HVF, and IOP provides the best balance of sensitivity/specificity and is likely to provide the highest yield in glaucoma screening programs. PMID:21245400
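
    To illustrate the combined-parameter idea (not the study's actual fitted tree), the sketch below trains a depth-limited CART classifier on synthetic C/D, IOP, and HVF values and reports screening sensitivity and specificity. Every number is simulated, and scikit-learn's DecisionTreeClassifier merely stands in for the CART software used in the study.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(3)
        n = 5000
        glaucoma = rng.binomial(1, 0.05, n)  # assumed ~5% prevalence

        # Synthetic screening measurements, shifted for glaucoma cases.
        cd = rng.normal(0.45 + 0.25 * glaucoma, 0.10, n)  # vertical cup/disc ratio
        iop = rng.normal(16.0 + 5.0 * glaucoma, 3.0, n)   # intraocular pressure, mm Hg
        hvf = rng.normal(-1.0 - 4.0 * glaucoma, 2.0, n)   # visual field mean deviation, dB

        X = np.column_stack([cd, iop, hvf])
        tree = DecisionTreeClassifier(max_depth=3, class_weight="balanced",
                                      random_state=0).fit(X, glaucoma)

        tn, fp, fn, tp = confusion_matrix(glaucoma, tree.predict(X)).ravel()
        print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")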

  18. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, sonic boom, and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.

  19. Shape optimization of three-dimensional stamped and solid automotive components

    NASA Technical Reports Server (NTRS)

    Botkin, M. E.; Yang, R.-J.; Bennett, J. A.

    1987-01-01

    The shape optimization of realistic, 3-D automotive components is discussed. The integration of the major parts of the total process: modeling, mesh generation, finite element and sensitivity analysis, and optimization are stressed. Stamped components and solid components are treated separately. For stamped parts a highly automated capability was developed. The problem description is based upon a parameterized boundary design element concept for the definition of the geometry. Automatic triangulation and adaptive mesh refinement are used to provide an automated analysis capability which requires only boundary data and takes into account sensitivity of the solution accuracy to boundary shape. For solid components a general extension of the 2-D boundary design element concept has not been achieved. In this case, the parameterized surface shape is provided using a generic modeling concept based upon isoparametric mapping patches which also serves as the mesh generator. Emphasis is placed upon the coupling of optimization with a commercially available finite element program. To do this it is necessary to modularize the program architecture and obtain shape design sensitivities using the material derivative approach so that only boundary solution data is needed.

  20. Empirically Derived and Simulated Sensitivity of Vegetation to Climate Across Global Gradients of Temperature and Precipitation

    NASA Astrophysics Data System (ADS)

    Quetin, G. R.; Swann, A. L. S.

    2017-12-01

    Successfully predicting the state of vegetation in a novel environment depends on our process-level understanding of the ecosystem and its interactions with the environment. We derive a global empirical map of the sensitivity of vegetation to climate using the response of satellite-observed greenness and leaf area to interannual variations in temperature and precipitation. Our analysis provides observations of ecosystem functioning, the vegetation's interactions with the physical environment, across a wide range of climates, and provides a functional constraint for hypotheses engendered in process-based models. We infer mechanisms constraining ecosystem functioning by contrasting how the observed and simulated sensitivity of vegetation to climate varies across climate space. Our analysis yields empirical evidence for multiple physical and biological mediators of the sensitivity of vegetation to climate as a systematic change across climate space. Our comparison of remote sensing-based vegetation sensitivity with modeled estimates provides evidence for which physiological mechanisms - photosynthetic efficiency, respiration, water supply, atmospheric water demand, and sunlight availability - dominate ecosystem functioning in places with different climates. Earth system models are generally successful in reproducing the broad sign and shape of ecosystem functioning across climate space. However, this general agreement breaks down in hot, wet climates, where models simulate less leaf area during a warmer year while observations show a mixed response but overall more leaf area during warmer years. In addition, the simulated ecosystem interaction with temperature is generally larger and changes more rapidly across a gradient of temperature than is observed. We hypothesize that the amplified interaction and change are both due to a lack of adaptation and acclimation in simulations. This discrepancy with observations suggests that simulated responses of vegetation to global warming, and feedbacks between vegetation and climate, are too strong in the models.

  1. Accelerated Monte Carlo Simulation for Safety Analysis of the Advanced Airspace Concept

    NASA Technical Reports Server (NTRS)

    Thipphavong, David

    2010-01-01

    Safe separation of aircraft is a primary objective of any air traffic control system. An accelerated Monte Carlo approach was developed to assess the level of safety provided by a proposed next-generation air traffic control system. It combines features of fault tree and standard Monte Carlo methods. It runs more than one order of magnitude faster than the standard Monte Carlo method while providing risk estimates that only differ by about 10%. It also preserves component-level model fidelity that is difficult to maintain using the standard fault tree method. This balance of speed and fidelity allows sensitivity analysis to be completed in days instead of weeks or months with the standard Monte Carlo method. Results indicate that risk estimates are sensitive to transponder, pilot visual avoidance, and conflict detection failure probabilities.
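
    A generic illustration of why hybrid schemes beat plain Monte Carlo for rare events is importance sampling with likelihood-ratio reweighting, sketched below on a toy top event that requires two simultaneous component failures. The failure rates and AND-gate structure are invented and are not the paper's model.

        import numpy as np

        rng = np.random.default_rng(4)
        p_a, p_b = 1e-3, 5e-3  # hypothetical component failure probabilities
        n = 200_000            # true top-event probability: p_a * p_b = 5e-6

        # Plain Monte Carlo: at these rates almost no failures are observed.
        a = rng.random(n) < p_a
        b = rng.random(n) < p_b
        print("plain MC estimate:", (a & b).mean())

        # Importance sampling: inflate both rates, reweight by the likelihood ratio.
        q = 0.05
        a = rng.random(n) < q
        b = rng.random(n) < q
        w = (np.where(a, p_a, 1 - p_a) / np.where(a, q, 1 - q)
             * np.where(b, p_b, 1 - p_b) / np.where(b, q, 1 - q))
        print("importance-sampling estimate:", ((a & b) * w).mean())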

  2. System and method for high precision isotope ratio destructive analysis

    DOEpatents

    Bushaw, Bruce A; Anheier, Norman C; Phillips, Jon R

    2013-07-02

    A system and process are disclosed that provide high accuracy and high precision destructive analysis measurements for isotope ratio determination of relative isotope abundance distributions in liquids, solids, and particulate samples. The invention utilizes a collinear probe beam to interrogate a laser ablated plume. This invention provides enhanced single-shot detection sensitivity approaching the femtogram range, and isotope ratios that can be determined at approximately 1% or better precision and accuracy (relative standard deviation).

  3. Multireaction equilibrium geothermometry: A sensitivity analysis using data from the Lower Geyser Basin, Yellowstone National Park, USA

    USGS Publications Warehouse

    King, Jonathan M.; Hurwitz, Shaul; Lowenstern, Jacob B.; Nordstrom, D. Kirk; McCleskey, R. Blaine

    2016-01-01

    A multireaction chemical equilibria geothermometry (MEG) model applicable to high-temperature geothermal systems has been developed over the past three decades. Given sufficient data, this model provides more constraint on calculated reservoir temperatures than classical chemical geothermometers that are based on either the concentration of silica (SiO2), or the ratios of cation concentrations. A set of 23 chemical analyses from Ojo Caliente Spring and 22 analyses from other thermal features in the Lower Geyser Basin of Yellowstone National Park are used to examine the sensitivity of calculated reservoir temperatures using the GeoT MEG code (Spycher et al. 2013, 2014) to quantify the effects of solute concentrations, degassing, and mineral assemblages on calculated reservoir temperatures. Results of our analysis demonstrate that the MEG model can resolve reservoir temperatures within approximately ±15°C, and that natural variation in fluid compositions represents a greater source of variance in calculated reservoir temperatures than variations caused by analytical uncertainty (assuming ~5% for major elements). The analysis also suggests that MEG calculations are particularly sensitive to variations in silica concentration, the concentrations of the redox species Fe(II) and H2S, and that the parameters defining steam separation and CO2 degassing from the liquid may be adequately determined by numerical optimization. Results from this study can provide guidance for future applications of MEG models, and thus provide more reliable information on geothermal energy resources during exploration.
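
    For contrast with the multireaction approach, a classical single-mineral geothermometer is a closed-form expression. The sketch below applies the quartz (no steam loss) geothermometer commonly attributed to Fournier, T(°C) = 1309 / (5.19 - log10(SiO2)) - 273.15 with SiO2 in mg/kg; the constants are quoted from memory and the concentrations are made up, so verify both against the primary literature before use.

        import numpy as np

        def quartz_temperature_c(sio2_mg_per_kg):
            # Quartz geothermometer, no steam loss (valid roughly 0-250 C).
            return 1309.0 / (5.19 - np.log10(sio2_mg_per_kg)) - 273.15

        # Hypothetical dissolved-silica concentrations for three spring samples.
        for sio2 in (100.0, 200.0, 300.0):
            print(f"SiO2 = {sio2:5.0f} mg/kg -> T ~ {quartz_temperature_c(sio2):.0f} C")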

  4. Resting spontaneous baroreflex sensitivity and cardiac autonomic control in anabolic androgenic steroid users

    PubMed Central

    dos Santos, Marcelo R.; Sayegh, Ana L.C.; Armani, Rafael; Costa-Hong, Valéria; de Souza, Francis R.; Toschi-Dias, Edgar; Bortolotto, Luiz A.; Yonamine, Mauricio; Negrão, Carlos E.; Alves, Maria-Janieire N.N.

    2018-01-01

    OBJECTIVES: Misuse of anabolic androgenic steroids by athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, their abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure, and the mechanisms underlying these alterations are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. METHODS: Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. RESULTS: Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high frequency activity was lower, while low frequency activity was higher, in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of the large vessels. Single linear regression analysis showed significant correlations between mean blood pressure and both baroreflex sensitivity and pulse wave velocity. CONCLUSIONS: Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users. Moreover, anabolic androgenic steroid users showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population. PMID:29791601

  5. Resting spontaneous baroreflex sensitivity and cardiac autonomic control in anabolic androgenic steroid users.

    PubMed

    Santos, Marcelo R Dos; Sayegh, Ana L C; Armani, Rafael; Costa-Hong, Valéria; Souza, Francis R de; Toschi-Dias, Edgar; Bortolotto, Luiz A; Yonamine, Mauricio; Negrão, Carlos E; Alves, Maria-Janieire N N

    2018-05-21

    Misuse of anabolic androgenic steroids by athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, their abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure, and the mechanisms underlying these alterations are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluated pulse wave velocity to ascertain the arterial stiffness of large vessels. Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high frequency activity was lower, while low frequency activity was higher, in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers, indicating arterial stiffness of the large vessels. Single linear regression analysis showed significant correlations between mean blood pressure and both baroreflex sensitivity and pulse wave velocity. Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users. Moreover, anabolic androgenic steroid users showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population.

  6. LLNA variability: An essential ingredient for a comprehensive assessment of non-animal skin sensitization test methods and strategies.

    PubMed

    Hoffmann, Sebastian

    2015-01-01

    The development of non-animal skin sensitization test methods and strategies is quickly progressing. Either individually or in combination, their predictive capacity is usually described in comparison to local lymph node assay (LLNA) results. In this process, the important lesson from other endpoints, such as skin or eye irritation, to account for the variability of the reference test results - here the LLNA - has not yet been fully acknowledged. In order to provide assessors as well as method and strategy developers with appropriate estimates, we investigated the variability of EC3 values from repeated substance testing using the publicly available NICEATM (NTP Interagency Center for the Evaluation of Alternative Toxicological Methods) LLNA database. Repeat experiments for more than 60 substances were analyzed, once taking the vehicle into account and once combining data over all vehicles. In general, variability was higher when different vehicles were used. In terms of skin sensitization potential, i.e., discriminating sensitizers from non-sensitizers, the false positive rate ranged from 14% to 20%, while the false negative rate was 4-5%. In terms of skin sensitization potency, the rate of assigning a substance to the next higher or next lower potency class was approximately 10-15%. In addition, general estimates of EC3 variability are provided that can be used for modelling purposes. With our analysis we stress the importance of considering LLNA variability in the assessment of skin sensitization test methods and strategies and provide estimates thereof.

  7. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in a supplementary MATLAB source code.
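
    The variance-reduction intuition behind such couplings is easiest to see in a scalar toy problem: estimating a parametric derivative by finite differences with independent draws versus reusing the same draws (common random numbers, the baseline the paper improves upon). The model function and parameter value below are invented.

        import numpy as np

        rng = np.random.default_rng(5)
        n, eps, theta = 10_000, 0.01, 1.0

        def model(theta, u):
            # Toy stochastic observable driven by uniform noise u.
            return np.sin(theta) + theta * u

        # Independent samples at the two parameter values: the difference is noisy.
        d_indep = (model(theta + eps, rng.random(n)).mean()
                   - model(theta, rng.random(n)).mean()) / eps

        # Common random numbers: the shared noise cancels in the difference.
        u = rng.random(n)
        d_crn = (model(theta + eps, u).mean() - model(theta, u).mean()) / eps

        print("true derivative:", np.cos(theta) + 0.5)
        print("independent FD:", round(d_indep, 3), " CRN FD:", round(d_crn, 3))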

  8. Monte Carlo Modeling-Based Digital Loop-Mediated Isothermal Amplification on a Spiral Chip for Absolute Quantification of Nucleic Acids.

    PubMed

    Xia, Yun; Yan, Shuangqian; Zhang, Xian; Ma, Peng; Du, Wei; Feng, Xiaojun; Liu, Bi-Feng

    2017-03-21

    Digital loop-mediated isothermal amplification (dLAMP) is an attractive approach for absolute quantification of nucleic acids with high sensitivity and selectivity. Theoretical and numerical analysis of dLAMP provides necessary guidance for the design and analysis of dLAMP devices. In this work, a mathematical model was proposed on the basis of the Monte Carlo method and the theories of Poisson statistics and chemometrics. To examine the established model, we fabricated a spiral chip with 1200 uniform and discrete reaction chambers (9.6 nL) for absolute quantification of pathogenic DNA samples by dLAMP. Under the optimized conditions, dLAMP analysis on the spiral chip realized quantification of nucleic acids spanning over 4 orders of magnitude in concentration, with sensitivity as low as 8.7 × 10⁻² copies/μL in 40 min. The experimental results were consistent with the proposed mathematical model, which could provide a useful guideline for future development of dLAMP devices.
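
    The Poisson partitioning step at the heart of this and similar digital assays is compact enough to state directly. The following is a minimal, generic sketch of digital quantification, not code from the study: the chamber count and volume match the spiral chip described above, while the positive-chamber count is invented for illustration.

      import math

      def digital_concentration(n_positive, n_total, chamber_volume_nl):
          # Estimate template concentration (copies/uL) from a digital assay,
          # assuming templates distribute over chambers by Poisson statistics,
          # so the mean occupancy is lambda = -ln(1 - p) for positive fraction p.
          p = n_positive / n_total
          if p >= 1.0:
              raise ValueError("all chambers positive: above the dynamic range")
          lam = -math.log(1.0 - p)                   # mean copies per chamber
          return lam / (chamber_volume_nl / 1000.0)  # copies per microliter

      # Illustrative numbers only: 180 of the 1200 chambers (9.6 nL) positive.
      print(round(digital_concentration(180, 1200, 9.6), 2))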

  9. The Sensitivity of a Global Ocean Model to Wind Forcing: A Test Using Sea Level and Wind Observations from Satellites and Operational Analysis

    NASA Technical Reports Server (NTRS)

    Fu, L. L.; Chao, Y.

    1997-01-01

    Investigated in this study is the response of a global ocean general circulation model to forcing provided by two wind products: operational analysis from the National Centers for Environmental Prediction (NCEP) and observations made by the ERS-1 radar scatterometer.

  10. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  11. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules, including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  12. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it simultaneously generates three philosophically different families of global sensitivity metrics: (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - the VARS approach), (2) variance-based total-order effects (the Sobol approach), and (3) derivative-based elementary effects (the Morris approach). VARS-TOOL also offers two novel features: the first is a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows the sample size for GSA to be increased progressively while maintaining the required sample distributional properties; the second is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features, in conjunction with bootstrapping, enable the user to monitor the stability, robustness, and convergence of GSA as the sample size increases for any given case study. VARS-TOOL has been shown to achieve robust and stable results with sample sizes (numbers of model runs) 1-2 orders of magnitude smaller than those required by alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development, and new capabilities and features are forthcoming.
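
    The variogram concept that VARS is built on reduces to a short computation. The sketch below is a toy illustration in Python, not the VARS-TOOL API: it estimates the directional variogram of a model response along each parameter by averaging squared response differences over point pairs separated by a distance h, using an invented two-parameter test function.

      import numpy as np

      def model(x):
          # Invented response surface with two parameters on [0, 1]^2.
          return np.sin(2 * np.pi * x[:, 0]) + 0.3 * x[:, 1] ** 2

      def directional_variogram(f, dim, h_values, n_base=2000, seed=0):
          # gamma(h) = 0.5 * E[(y(x + h e_dim) - y(x))^2], estimated by
          # sampling base points uniformly and stepping along parameter dim.
          rng = np.random.default_rng(seed)
          gammas = []
          for h in h_values:
              x = rng.uniform(0.0, 1.0 - h, size=(n_base, 2))  # keep x + h in range
              x2 = x.copy()
              x2[:, dim] += h
              gammas.append(0.5 * np.mean((f(x2) - f(x)) ** 2))
          return np.array(gammas)

      h = np.array([0.02, 0.05, 0.1, 0.2, 0.3])
      for d in (0, 1):
          print(f"parameter {d}: gamma(h) =", np.round(directional_variogram(model, d, h), 4))

    Integrating such a curve over a range of h values yields the IVARS metrics named above, so sensitivity can be read off at whatever perturbation scale matters for the application.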

  13. Theoretical foundations for finite-time transient stability and sensitivity analysis of power systems

    NASA Astrophysics Data System (ADS)

    Dasgupta, Sambarta

    Transient stability and sensitivity analysis of power systems are problems of enormous academic and practical interest. These classical problems have received renewed interest because of the advancement in sensor technology in the form of phasor measurement units (PMUs). This advancement has provided a unique opportunity for the development of real-time stability monitoring and sensitivity analysis tools. The transient stability problem in power systems is inherently a problem of stability analysis of non-equilibrium dynamics, because for a short time period following a fault or disturbance the system trajectory moves away from the equilibrium point. The real-time stability decision has to be made over this short time period. However, the existing stability definitions, and hence the analysis tools for transient stability, are asymptotic in nature. In this thesis, we develop theoretical foundations for the short-term transient stability analysis of power systems, based on the theory of normally hyperbolic invariant manifolds and finite-time Lyapunov exponents, adopted from the geometric theory of dynamical systems. The theory of normally hyperbolic surfaces allows us to characterize the rate of expansion and contraction of co-dimension one material surfaces in the phase space. The expansion and contraction rates of these material surfaces can be computed in finite time. We prove that the expansion and contraction rates can be used as finite-time transient stability certificates. Furthermore, material surfaces with maximum expansion and contraction rates are identified with the stability boundaries. These stability boundaries are used for the computation of stability margins. We have used this theoretical framework to develop model-based and model-free real-time stability monitoring methods. Both the model-based and model-free approaches rely on the availability of high-resolution time series data from the PMUs for stability prediction. The problem of sensitivity analysis of power systems, subjected to changes or uncertainty in load parameters and network topology, is also studied using the theory of normally hyperbolic manifolds. The sensitivity analysis is used for the identification and rank ordering of the critical interactions and parameters in the power network. The sensitivity analysis is carried out both in finite time and asymptotically. One of the distinguishing features of the asymptotic sensitivity analysis is that the asymptotic dynamics of the system is assumed to be a periodic orbit. For the asymptotic sensitivity analysis we employ a combination of tools from ergodic theory and the geometric theory of dynamical systems.
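
    The finite-time Lyapunov exponents used here as stability certificates are straightforward to approximate numerically. The sketch below is a generic illustration, not the thesis code: it builds a finite-difference approximation of the flow-map Jacobian over a horizon T and returns (1/T) ln of its largest singular value; the two-dimensional, loosely swing-equation-like vector field is invented for the example.

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, x):
          # Invented damped pendulum-like system, a stand-in for a
          # single-machine swing equation; not a real power network model.
          delta, omega = x
          return [omega, -0.5 * omega - np.sin(delta) + 0.4]

      def flow(x0, T):
          # Integrate the system from x0 over [0, T] and return the endpoint.
          sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-8, atol=1e-10)
          return sol.y[:, -1]

      def ftle(x0, T, eps=1e-6):
          # Finite-time Lyapunov exponent at x0 over horizon T, computed
          # from a finite-difference flow-map Jacobian.
          x0 = np.asarray(x0, dtype=float)
          base = flow(x0, T)
          J = np.empty((x0.size, x0.size))
          for j in range(x0.size):
              pert = x0.copy()
              pert[j] += eps
              J[:, j] = (flow(pert, T) - base) / eps
          sigma_max = np.linalg.svd(J, compute_uv=False)[0]
          return np.log(sigma_max) / T

      print(round(ftle([0.2, 0.0], T=5.0), 4))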

  14. msap: a tool for the statistical analysis of methylation-sensitive amplified polymorphism data.

    PubMed

    Pérez-Figueroa, A

    2013-05-01

    In this study, msap, an R package that analyses methylation-sensitive amplified polymorphism (MSAP or MS-AFLP) data, is presented. The program provides a deep analysis of epigenetic variation starting from a binary data matrix indicating the banding pattern between the isoschizomeric endonucleases HpaII and MspI, which have differential sensitivity to cytosine methylation. After comparing the restriction fragments, the program determines whether each fragment is susceptible to methylation (representative of epigenetic variation) or shows no evidence of methylation (representative of genetic variation). The package provides, in a user-friendly command line interface, a pipeline of different analyses of the variation (genetic and epigenetic) among user-defined groups of samples, as well as the classification of the methylation occurrences in those groups. Statistical testing provides support to the analyses. A comprehensive report of the analyses and several useful plots could help researchers to assess the epigenetic and genetic variation in their MSAP experiments. msap is downloadable from CRAN (http://cran.r-project.org/) and its own webpage (http://msap.r-forge.R-project.org/). The package is intended to be easy to use even for those unfamiliar with the R command line environment. Advanced users may take advantage of the available source code to adapt msap to more complex analyses. © 2013 Blackwell Publishing Ltd.

  15. A comprehensive approach to identify dominant controls of the behavior of a land surface-hydrology model across various hydroclimatic conditions

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al

    2017-04-01

    Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computational power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity of and interactions within these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin, multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) modelling system across various hydroclimatic conditions in Canada, including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing the computational burden of calibration, and reducing prediction uncertainty.

  16. Comparison of the sensitivity of mass spectrometry atmospheric pressure ionization techniques in the analysis of porphyrinoids.

    PubMed

    Swider, Paweł; Lewtak, Jan P; Gryko, Daniel T; Danikiewicz, Witold

    2013-10-01

    Porphyrinoid chemistry is greatly dependent on the data obtained by mass spectrometry. For this reason, it is essential to determine the range of applicability of mass spectrometry ionization methods. In this study, the sensitivity of three different atmospheric pressure ionization techniques - electrospray ionization, atmospheric pressure chemical ionization and atmospheric pressure photoionization - was tested for several porphyrinoids and their metallocomplexes. Electrospray ionization was shown to be the best ionization technique because of its high sensitivity for derivatives of cyanocobalamin, free-base corroles and porphyrins. In the case of metallocorroles and metalloporphyrins, atmospheric pressure photoionization with dopant proved to be the most sensitive ionization method. It was also shown that for relatively acidic compounds, particularly for corroles, the negative ion mode provides better sensitivity than the positive ion mode. The results supply a wealth of relevant information on the methodology of porphyrinoid analysis carried out by mass spectrometry. This information can be useful in designing future MS or liquid chromatography-MS experiments. Copyright © 2013 John Wiley & Sons, Ltd.

  17. Estimating costs in the economic evaluation of medical technologies.

    PubMed

    Luce, B R; Elixhauser, A

    1990-01-01

    The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.
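
    Of the techniques the article characterizes, discounting is the most mechanical. The following is a minimal sketch of standard present-value discounting of a multi-year cost stream; the 3% rate and the cost figures are purely illustrative.

      def present_value(costs, rate):
          # Discount a list of yearly costs (year 0 first) to present value:
          # PV = sum over t of C_t / (1 + r)^t.
          return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))

      # Illustrative: an up-front cost followed by four years of follow-up
      # costs, discounted at 3% per year.
      print(round(present_value([10000, 2000, 2000, 2000, 2000], 0.03), 2))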

  18. EBUS-Guided Cautery-Assisted Transbronchial Forceps Biopsies: Safety and Sensitivity Relative to Transbronchial Needle Aspiration

    PubMed Central

    Bramley, Kyle; Pisani, Margaret A.; Murphy, Terrence E.; Araujo, Katy; Homer, Robert; Puchalski, Jonathan

    2016-01-01

    Background EBUS-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Although it reliably provides excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, when larger “core” biopsy samples of malignant tissue are required, TBNA may not suffice. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsies (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. Methods Fifty unselected patients undergoing convex probe EBUS were prospectively enrolled. Under EBUS guidance, all lymph nodes ≥ 1 cm were sequentially biopsied using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported on a per-patient basis. Results There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). For analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis had been based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis had been based only on TBNA samples. In some cases only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. Conclusions The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided specimens for clinical trials of malignancy when needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. PMID:26912301

  19. Endobronchial Ultrasound-Guided Cautery-Assisted Transbronchial Forceps Biopsies: Safety and Sensitivity Relative to Transbronchial Needle Aspiration.

    PubMed

    Bramley, Kyle; Pisani, Margaret A; Murphy, Terrence E; Araujo, Katy L; Homer, Robert J; Puchalski, Jonathan T

    2016-05-01

    Endobronchial ultrasound (EBUS)-guided transbronchial needle aspiration (TBNA) is important in the evaluation of thoracic lymphadenopathy. Although it reliably provides excellent diagnostic yield for malignancy, its diagnosis of sarcoidosis is inconsistent. Furthermore, TBNA may not suffice when larger "core biopsy" samples of malignant tissue are required. The primary objective of this study was to determine if the sequential use of TBNA and a novel technique called cautery-assisted transbronchial forceps biopsy (ca-TBFB) was safe. Secondary outcomes included sensitivity and successful acquisition of tissue. The study prospectively enrolled 50 unselected patients undergoing convex-probe EBUS. All lymph nodes exceeding 1 cm were sequentially biopsied under EBUS guidance using TBNA and ca-TBFB. Safety and sensitivity were assessed at the nodal level for 111 nodes. Results of each technique were also reported for each patient. There were no significant adverse events. In nodes determined to be malignant, TBNA provided higher sensitivity (100%) than ca-TBFB (78%). However, among nodes with granulomatous inflammation, ca-TBFB exhibited higher sensitivity (90%) than TBNA (33%). On the one hand, for analysis based on patients rather than nodes, 6 of the 31 patients with malignancy would have been missed or understaged if the diagnosis were based on samples obtained by ca-TBFB. On the other hand, 3 of 8 patients with sarcoidosis would have been missed if analysis were based only on TBNA samples. In some patients, only ca-TBFB acquired sufficient tissue for the core samples needed in clinical trials of malignancy. The sequential use of TBNA and ca-TBFB appears to be safe. The larger samples obtained from ca-TBFB increased its sensitivity to detect granulomatous disease and provided adequate specimens for clinical trials of malignancy when specimens from needle biopsies were insufficient. For thoracic surgeons and advanced bronchoscopists, we advocate ca-TBFB as an alternative to TBNA in select clinical scenarios. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Culture-sensitive counselling, psychotherapy and support groups in the Orthodox-Jewish community: how they work and how they are experienced.

    PubMed

    Loewenthal, Kate Miriam; Rogers, Marian Brooke

    2004-09-01

    There is political and scientific goodwill towards the provision of culture-sensitive support, but as yet little knowledge about how such support works and what its strengths and difficulties are in practice. To study groups offering culture-sensitive psychological and other support to the strictly orthodox Jewish community in London, semi-structured interviews were conducted with service providers, potential and actual users from the community, and professionals serving the community. Interviews asked about the aims, functioning and achievements of 10 support groups. Thematic analysis identified seven important themes: admiration for the work of the groups; appreciation of the benefits of culture-sensitive services; concerns over confidentiality and stigma; concerns over finance and fund-raising; concerns about professionalism; the importance of liaison with rabbinic authorities; and the need for better dissemination of information. The strengths and difficulties of providing culture-sensitive services in one community were identified. Areas for attention include vigilance regarding confidentiality, improvements in disseminating information, improvements in the reliability of funding, attention to systematic needs assessment, and examination of the efficacy of these forms of service provision.

  1. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  2. Pleural ultrasonography versus chest radiography for the diagnosis of pneumothorax: review of the literature and meta-analysis.

    PubMed

    Alrajab, Saadah; Youssef, Asser M; Akkus, Nuri I; Caldito, Gloria

    2013-09-23

    Ultrasonography is being increasingly utilized in acute care settings with expanding applications. Pneumothorax evaluation by ultrasonography is a fast, safe, easy and inexpensive alternative to chest radiographs. In this review, we provide a comprehensive analysis of the current literature comparing ultrasonography and chest radiography for the diagnosis of pneumothorax. We searched English-language articles in MEDLINE, EMBASE and Cochrane Library dealing with both ultrasonography and chest radiography for diagnosis of pneumothorax. In eligible studies that met strict inclusion criteria, we conducted a meta-analysis to evaluate the diagnostic accuracy of pleural ultrasonography in comparison with chest radiography for the diagnosis of pneumothorax. We reviewed 601 articles and selected 25 original research articles for detailed review. Only 13 articles met all of our inclusion criteria and were included in the final analysis. One study used lung sliding sign alone, 12 studies used lung sliding and comet tail signs, and 6 studies searched for lung point in addition to the other two signs. Ultrasonography had a pooled sensitivity of 78.6% (95% CI, 68.1 to 98.1) and a specificity of 98.4% (95% CI, 97.3 to 99.5). Chest radiography had a pooled sensitivity of 39.8% (95% CI, 29.4 to 50.3) and a specificity of 99.3% (95% CI, 98.4 to 100). Our meta-regression and subgroup analyses indicate that consecutive sampling of patients compared to convenience sampling provided higher sensitivity results for both ultrasonography and chest radiography. Consecutive versus nonconsecutive sampling and trauma versus nontrauma settings were significant sources of heterogeneity. In addition, subgroup analysis showed significant variations related to operator and type of probe used. Our study indicates that ultrasonography is more accurate than chest radiography for detection of pneumothorax. The results support the previous investigations in this field, add new valuable information obtained from subgroup analysis, and provide accurate estimates for the performance parameters of both bedside ultrasonography and chest radiography for pneumothorax evaluation.

  3. Mitochondrial sequence analysis for forensic identification using pyrosequencing technology.

    PubMed

    Andréasson, H; Asp, A; Alderborn, A; Gyllensten, U; Allen, M

    2002-01-01

    Over recent years, requests for mtDNA analysis in the field of forensic medicine have notably increased, and the results of such analyses have proved to be very useful in forensic cases where nuclear DNA analysis cannot be performed. Traditionally, mtDNA has been analyzed by DNA sequencing of the two hypervariable regions, HVI and HVII, in the D-loop. DNA sequence analysis using conventional Sanger sequencing is very robust but time-consuming and labor-intensive. By contrast, mtDNA analysis based on pyrosequencing technology provides fast and accurate results from the human mtDNA present in many types of evidence materials in forensic casework. The assay has been developed to determine polymorphic sites in the mitochondrial D-loop as well as the coding region, to further increase the discrimination power of mtDNA analysis. The pyrosequencing technology for analysis of mtDNA polymorphisms has been tested with regard to sensitivity, reproducibility, and success rate when applied to control samples and actual casework materials. The results show that the method is very accurate and sensitive; the results are easily interpreted and provide a high success rate on casework samples. The panel of pyrosequencing reactions for the mtDNA polymorphisms was chosen to provide optimal discrimination power in relation to the number of bases determined.

  4. A Versatile PDMS/Paper Hybrid Microfluidic Platform for Sensitive Infectious Disease Diagnosis

    PubMed Central

    2015-01-01

    Bacterial meningitis is a serious health concern worldwide. Given that meningitis can be fatal and that many meningitis cases occur in high-poverty areas, a simple, low-cost, highly sensitive method is greatly needed for immediate and early diagnosis of meningitis. Herein, we report a versatile and cost-effective polydimethylsiloxane (PDMS)/paper hybrid microfluidic device integrated with loop-mediated isothermal amplification (LAMP) for the rapid, sensitive, and instrument-free detection of the main meningitis-causing bacterium, Neisseria meningitidis (N. meningitidis). The introduction of paper into the microfluidic device for LAMP reactions enables stable test results over a much longer period of time than a paper-free microfluidic system. This hybrid system also offers versatile functions, by providing not only on-site qualitative diagnostic analysis (i.e., a yes or no answer), but also confirmatory testing and quantitative analysis in laboratory settings. The limit of detection of N. meningitidis is about 3 copies per LAMP zone within 45 min, close to single-bacterium detection sensitivity. In addition, we have achieved simple pathogenic microorganism detection without a laborious sample preparation process and without the use of centrifuges. This low-cost hybrid microfluidic system provides a simple and highly sensitive approach for fast, instrument-free diagnosis of N. meningitidis in resource-limited settings. This versatile PDMS/paper microfluidic platform has great potential for point-of-care (POC) diagnosis of a wide range of infectious diseases, especially in developing nations. PMID:25019330

  5. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process-model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic and the combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA’s Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not from its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ in that (1) interpretation with CSS/PCC can be more awkward because sensitivity and interdependence are considered separately, and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-chain Monte Carlo, given common nonlinear processes and often even more nonlinear models.
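
    Both CSS and PCC can be computed directly from a model's Jacobian. The sketch below is a simplified, unit-weight version of these statistics (the published definitions carry observation weights), applied to an invented four-observation, two-parameter Jacobian whose nearly parallel columns reproduce the |PCC| close to 1.00 behavior described above.

      import numpy as np

      def css_and_pcc(J, params):
          # Composite scaled sensitivities (CSS) and parameter correlation
          # coefficients (PCC) from a Jacobian J (n_obs x n_par), assuming
          # unit observation weights (a simplification of the weighted forms).
          n_obs = J.shape[0]
          dss = J * params                     # dimensionless scaled sensitivities
          css = np.sqrt((dss ** 2).sum(axis=0) / n_obs)
          cov = np.linalg.inv(J.T @ J)         # parameter covariance up to a scalar
          sd = np.sqrt(np.diag(cov))
          pcc = cov / np.outer(sd, sd)
          return css, pcc

      # Invented Jacobian with near-parallel columns (strongly correlated
      # parameters), echoing the BD1/WFC1 situation in the record.
      J = np.array([[1.0, 0.95], [0.8, 0.82], [0.5, 0.52], [0.2, 0.19]])
      css, pcc = css_and_pcc(J, params=np.array([2.0, 3.0]))
      print("CSS:", np.round(css, 3))
      print("PCC:\n", np.round(pcc, 3))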

  6. Barcoding T Cell Calcium Response Diversity with Methods for Automated and Accurate Analysis of Cell Signals (MAAACS)

    PubMed Central

    Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick

    2013-01-01

    We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124

  7. Recent advances in chemiluminescence detection coupled with capillary electrophoresis and microchip capillary electrophoresis.

    PubMed

    Liu, Yuxuan; Huang, Xiangyi; Ren, Jicun

    2016-01-01

    CE is an ideal analytical method for extremely volume-limited biological microenvironments. However, the small injection volume makes it a challenge to achieve highly sensitive detection. Chemiluminescence (CL) detection is characterized by providing a low background with excellent sensitivity because it requires no light source. The coupling of CL with CE and MCE has become a powerful analytical method. So far, this method has been widely applied to chemical analysis, bioassays, drug analysis, and environmental analysis. In this review, we first introduce some developments in CE-CL and MCE-CL systems, and then put the emphasis on the applications in the last 10 years. Finally, we discuss the future prospects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. A conceptualisation framework for building consensus on environmental sensitivity.

    PubMed

    González Del Campo, Ainhoa

    2017-09-15

    Examination of the intrinsic attributes of a system that render it more or less sensitive to potential stressors provides further insight into the baseline environment. In impact assessment, sensitivity of environmental receptors can be conceptualised on the basis of their: a) quality status according to statutory indicators and associated thresholds or targets; b) statutory protection; or c) inherent risk. Where none of these considerations are pertinent, subjective value judgments can be applied to determine sensitivity. This pragmatic conceptual framework formed the basis of a stakeholder consultation process for harmonising degrees of sensitivity of a number of environmental criteria. Harmonisation was sought to facilitate their comparative and combined analysis. Overall, full or wide agreement was reached on relative sensitivity values for the large majority of the reviewed criteria. Consensus was easier to reach on some themes (e.g. biodiversity, water and cultural heritage) than others (e.g. population and soils). As anticipated, existing statutory measures shaped the outcomes but, ultimately, knowledge-based values prevailed. The agreed relative sensitivities warrant extensive consultation but the conceptual framework provides a basis for increasing stakeholder consensus and objectivity of baseline assessments. This, in turn, can contribute to improving the evidence-base for characterising the significance of potential impacts. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  10. Perturbation analysis for patch occupancy dynamics

    USGS Publications Warehouse

    Martin, Julien; Nichols, James D.; McIntyre, Carol L.; Ferraz, Goncalo; Hines, James E.

    2009-01-01

    Perturbation analysis is a powerful tool to study population and community dynamics. This article describes expressions for sensitivity metrics reflecting changes in equilibrium occupancy resulting from small changes in the vital rates of patch occupancy dynamics (i.e., probabilities of local patch colonization and extinction). We illustrate our approach with a case study of occupancy dynamics of Golden Eagle (Aquila chrysaetos) nesting territories. Examination of the hypothesis of system equilibrium suggests that the system satisfies equilibrium conditions. Estimates of vital rates obtained using patch occupancy models are used to estimate equilibrium patch occupancy of eagles. We then compute estimates of sensitivity metrics and discuss their implications for eagle population ecology and management. Finally, we discuss the intuition underlying our sensitivity metrics and then provide examples of ecological questions that can be addressed using perturbation analyses. For instance, the sensitivity metrics lead to predictions about the relative importance of local colonization and local extinction probabilities in influencing equilibrium occupancy for rare and common species.
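
    For the standard single-species patch-occupancy model psi[t+1] = psi[t] * (1 - eps) + (1 - psi[t]) * gamma, the equilibrium and its sensitivities have closed forms, which the sketch below evaluates. The model form is the common textbook one and the rates are invented; the article's metrics cover this case and more general dynamics.

      def equilibrium_occupancy(gamma, eps):
          # psi* = gamma / (gamma + eps): equilibrium fraction of occupied patches.
          return gamma / (gamma + eps)

      def occupancy_sensitivities(gamma, eps):
          # Analytic derivatives of psi* with respect to the colonization
          # (gamma) and extinction (eps) probabilities.
          denom = (gamma + eps) ** 2
          return eps / denom, -gamma / denom

      # Illustrative rates: a rare species (low colonization) and a common one.
      for gamma, eps in ((0.05, 0.20), (0.40, 0.10)):
          d_gamma, d_eps = occupancy_sensitivities(gamma, eps)
          print(f"gamma={gamma}, eps={eps}: psi*={equilibrium_occupancy(gamma, eps):.2f}, "
                f"d psi*/d gamma={d_gamma:.2f}, d psi*/d eps={d_eps:.2f}")

    Comparing the two cases shows how the relative influence of colonization versus extinction shifts between rare and common species, which is the kind of prediction the record draws from its sensitivity metrics.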

  11. Sensitivity analysis of reactive ecological dynamics.

    PubMed

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
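
    For a linearized system dx/dt = A x, the indices discussed above have direct matrix expressions: reactivity is the largest eigenvalue of the symmetric part (A + A^T) / 2, and the amplification envelope at time t is the spectral norm of the matrix exponential exp(A t). The sketch below evaluates both for an invented 2 x 2 community Jacobian that is asymptotically stable yet reactive.

      import numpy as np
      from scipy.linalg import expm

      def reactivity(A):
          # Initial maximum growth rate of perturbation magnitude:
          # the largest eigenvalue of the symmetric part of A.
          H = (A + A.T) / 2.0
          return np.max(np.linalg.eigvalsh(H))

      def amplification_envelope(A, times):
          # rho(t) = largest singular value of exp(A t): the maximum possible
          # amplification of any initial perturbation at time t.
          return [np.linalg.norm(expm(A * t), 2) for t in times]

      # Invented Jacobian: both eigenvalues have negative real parts (stable
      # equilibrium), yet the strong off-diagonal coupling makes it reactive.
      A = np.array([[-1.0,  5.0],
                    [ 0.0, -2.0]])
      print("reactivity:", round(reactivity(A), 3))            # > 0 means reactive
      print("envelope:", np.round(amplification_envelope(A, [0.2, 0.5, 1.0, 3.0]), 3))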

  12. Multiple-foil microabrasion package (A0023)

    NASA Technical Reports Server (NTRS)

    Mcdonnell, J. A. M.; Ashworth, D. G.; Carey, W. C.; Flavill, R. P.; Jennison, R. C.

    1984-01-01

    The specific scientific objectives of this experiment are to measure the spatial distribution, size, velocity, radiance, and composition of microparticles in near-Earth space. The technological objectives are to measure erosion rates resulting from microparticle impacts and to evaluate thin-foil meteor 'bumpers'. The combinations of sensitivity and reliability in this experiment will provide up to 1000 impacts per month for laboratory analysis and will extend current sensitivity limits by 5 orders of magnitude in mass.

  13. Antibody-Based Sensors: Principles, Problems and Potential for Detection of Pathogens and Associated Toxins

    PubMed Central

    Byrne, Barry; Stack, Edwina; Gilmartin, Niamh; O'Kennedy, Richard

    2009-01-01

    Antibody-based sensors permit the rapid and sensitive analysis of a range of pathogens and associated toxins. A critical assessment of the implementation of such formats is provided, with reference to their principles, problems and potential for ‘on-site’ analysis. Particular emphasis is placed on the detection of foodborne bacterial pathogens, such as Escherichia coli and Listeria monocytogenes, and additional examples relating to the monitoring of fungal pathogens, viruses, mycotoxins, marine toxins and parasites are also provided. PMID:22408533

  14. Analyzing reflective narratives to assess the ethical reasoning of pediatric residents.

    PubMed

    Moon, Margaret; Taylor, Holly A; McDonald, Erin L; Hughes, Mark T; Beach, Mary Catherine; Carrese, Joseph A

    2013-01-01

    A limiting factor in ethics education in medical training has been the difficulty of assessing competence in ethics. This study was conducted to test the concept that content analysis of pediatric residents' personal reflections about ethics experiences can identify changes in ethical sensitivity and reasoning over time. Analysis of written narratives focused on two of our ethics curriculum's goals: 1) to raise sensitivity to ethical issues in everyday clinical practice, and 2) to enhance critical reflection on personal and professional values as they affect patient care. Content analysis of written reflections was guided by a tool developed to identify and assess the level of ethical reasoning in eight domains determined to be important aspects of ethical competence. Based on the assessment of narratives written at two times (12 to 16 months apart) during their training, residents showed significant progress in two specific domains: use of professional values and use of personal values. Residents did not show decline in ethical reasoning in any domain. This study demonstrates that content analysis of personal narratives may provide a useful method for assessing developing ethical sensitivity and reasoning.

  15. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    PubMed

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.

  16. SPR based hybrid electro-optic biosensor for β-lactam antibiotics determination in water

    NASA Astrophysics Data System (ADS)

    Galatus, Ramona; Feier, Bogdan; Cristea, Cecilia; Cennamo, Nunzio; Zeni, Luigi

    2017-09-01

    The present work aims to provide a hybrid platform capable of complementary and sensitive detection of β-lactam antibiotics, ampicillin in particular. The use of an aptamer specific to ampicillin assures good selectivity and sensitivity for the detection of ampicillin in different matrices. This new approach is intended for a portable remote-sensing platform based on a low-cost, small-size and low-power-consumption solution. The simple experimental hybrid platform integrates the results from the D-shaped surface plasmon resonance plastic optical fiber (SPR-POF) sensor and from the electrochemical (bio)sensor for the analysis of ampicillin, delivering sensitive and reliable results. The SPR-POF, already used in many previous applications, is embedded in a new experimental setup with fluorescent fiber emitters, providing broadband wavelength analysis, low power consumption and low heating of the sensing platform.

  17. Uncovering psychosocial needs: perspectives of Australian child and family health nurses in a sustained home visiting trial.

    PubMed

    Kardamanidis, Katina; Kemp, Lynn; Schmied, Virginia

    2009-08-01

    The first Australian trial of sustained nurse home visiting provided an opportunity to explore nurses' understanding of the situations that support mothers of infants to disclose personal and sensitive psychosocial information. Using a qualitative descriptive design, semi-structured interviews were conducted and transcripts were analysed drawing upon aspects of Smith's interpretative phenomenological analysis. Five themes pertaining to the experience of relationship building to foster disclosure of sensitive information emerged: (1) building trust is an ongoing process of giving and giving in return, (2) being 'actively passive' to develop trust, (3) the client is in control of the trust-relationship, (4) the association between disclosure of sensitive issues and a trust-relationship, and (5) empowerment over disclosure. This study provides a deeper understanding of how child and family health nurses develop relationships that lead women to entrust the nurse with personal, sensitive information, and may inform the practice of psychosocial needs assessment in other contexts.

  18. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
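
    To make the logit-scale pooling concrete, the sketch below implements a deliberately simplified fixed-effect, inverse-variance pooling of proportions - a univariate stand-in for illustration only, not the bivariate or hierarchical models the article recommends - using invented per-study counts.

      import math

      def pool_logit(events, totals):
          # Inverse-variance fixed-effect pooling of proportions on the logit
          # scale, with a 0.5 continuity correction; back-transforms the
          # pooled logit to a proportion.
          weight_sum = weighted_sum = 0.0
          for e, n in zip(events, totals):
              e, n = e + 0.5, n + 1.0
              logit = math.log(e / (n - e))
              var = 1.0 / e + 1.0 / (n - e)    # variance of the logit
              weight_sum += 1.0 / var
              weighted_sum += logit / var
          pooled = weighted_sum / weight_sum
          return 1.0 / (1.0 + math.exp(-pooled))

      # Invented data: true positives per diseased, and true negatives per
      # non-diseased, for three studies.
      print("pooled sensitivity:", round(pool_logit([45, 30, 80], [50, 40, 95]), 3))
      print("pooled specificity:", round(pool_logit([90, 70, 150], [100, 75, 160]), 3))

    A real analysis would model sensitivity and specificity jointly with random effects, precisely because of the correlation and threshold effect the article describes.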

  19. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.

  20. Refractive collimation beam shaper design and sensitivity analysis using a free-form profile construction method.

    PubMed

    Tsai, Chung-Yu

    2017-07-01

    A refractive laser beam shaper comprising two free-form profiles is presented. The profiles are designed using a free-form profile construction method such that each incident ray is directed in a certain user-specified direction or to a particular point on the target surface so as to achieve the required illumination distribution of the output beam. The validity of the proposed design method is demonstrated by means of ZEMAX simulations. The method is mathematically straightforward and easily implemented in computer code. It thus provides a convenient tool for the design and sensitivity analysis of laser beam shapers and similar optical components.

  1. Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.

    PubMed Central

    Schena, M; Shalon, D; Heller, R; Chai, A; Brown, P O; Davis, R W

    1996-01-01

    Microarrays containing 1046 human cDNAs of unknown sequence were printed on glass with high-speed robotics. These 1.0-cm² DNA "chips" were used to quantitatively monitor differential expression of the cognate human genes using a highly sensitive two-color hybridization assay. Array elements that displayed differential expression patterns under given experimental conditions were characterized by sequencing. The identification of known and novel heat shock and phorbol ester-regulated genes in human T cells demonstrates the sensitivity of the assay. Parallel gene analysis with microarrays provides a rapid and efficient method for large-scale human gene discovery. PMID:8855227

  2. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows for sensitivity analysis, which is an essential component of inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  3. Linked Sensitivity Analysis, Calibration, and Uncertainty Analysis Using a System Dynamics Model for Stroke Comparative Effectiveness Research.

    PubMed

    Tian, Yuan; Hassmiller Lich, Kristen; Osgood, Nathaniel D; Eom, Kirsten; Matchar, David B

    2016-11-01

    As health services researchers and decision makers tackle more difficult problems using simulation models, the number of parameters and the corresponding degree of uncertainty have increased. This often results in reduced confidence in such complex models as guides to decision making. To demonstrate a systematic approach to linked sensitivity analysis, calibration, and uncertainty analysis that improves confidence in complex models, four techniques were integrated and applied to a System Dynamics stroke model of US veterans, which was developed to inform systemwide intervention and research planning: the Morris method (sensitivity analysis), a multistart Powell hill-climbing algorithm and generalized likelihood uncertainty estimation (calibration), and Monte Carlo simulation (uncertainty analysis). Of 60 uncertain parameters, sensitivity analysis identified 29 needing calibration, 7 that did not need calibration but significantly influenced key stroke outcomes, and 24 not influential to calibration or stroke outcomes that were fixed at their best-guess values. One thousand alternative well-calibrated baselines were obtained to reflect calibration uncertainty and brought into the uncertainty analysis. The initial stroke incidence rate among veterans was identified as the most influential uncertain parameter, for which further data should be collected. That said, accounting for current uncertainty, the analysis of 15 distinct prevention and treatment interventions provided a robust conclusion that hypertension control for all veterans would yield the largest gain in quality-adjusted life years. For complex health care models, a mixed approach was applied to examine the uncertainty surrounding key stroke outcomes and the robustness of conclusions. We demonstrate that this rigorous approach can be practical, and we advocate for such analysis to promote understanding of the limits of certainty in applying models to current decisions and to guide future data collection. © The Author(s) 2016.
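
    The Morris screening step that opens this pipeline is simple to illustrate. The sketch below is a generic one-at-a-time Morris implementation applied to an invented four-parameter test function, not the stroke model: a large mu* flags an influential parameter, and a large sigma relative to mu* signals interactions or non-linearity.

      import numpy as np

      def morris_screening(f, n_par, n_traj=50, delta=0.1, seed=1):
          # One-at-a-time Morris screening on the unit hypercube. Returns
          # mu_star (mean absolute elementary effect) and sigma per parameter.
          rng = np.random.default_rng(seed)
          ee = np.empty((n_traj, n_par))
          for t in range(n_traj):
              x = rng.uniform(0.0, 1.0 - delta, size=n_par)  # room for +delta steps
              fx = f(x)
              for i in rng.permutation(n_par):  # perturb parameters in random order
                  x_new = x.copy()
                  x_new[i] += delta
                  fx_new = f(x_new)
                  ee[t, i] = (fx_new - fx) / delta   # elementary effect of parameter i
                  x, fx = x_new, fx_new
          return np.abs(ee).mean(axis=0), ee.std(axis=0)

      # Invented model: strong main effect of x0, an x1-x2 interaction, x3 inert.
      f = lambda x: 4.0 * x[0] + x[1] * x[2]
      mu_star, sigma = morris_screening(f, n_par=4)
      print("mu*  :", np.round(mu_star, 3))
      print("sigma:", np.round(sigma, 3))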

  4. Systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for osteoporosis or low bone density

    PubMed Central

    Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.

    2015-01-01

    Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90% at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89% (95% CI 82–96%) and 41% (95% CI 23–59%), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88% (95% CI 79–97%) and 55% (95% CI 42–68%), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90%, but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90% for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations, but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
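
    OST itself is a two-term formula, which makes it a clear illustration of how such instruments trade specificity for sensitivity. The sketch below computes the score as 0.2 * (weight in kg - age in years); truncating toward zero to an integer is a common convention but is an assumption here, and the cutoffs are the ones evaluated in the meta-analyses above. The patients are invented.

      def ost_score(weight_kg, age_years):
          # Osteoporosis Self-Assessment Tool: 0.2 * (weight - age),
          # truncated toward zero to an integer (convention assumed here).
          return int(0.2 * (weight_kg - age_years))

      def flag_for_dxa(score, cutoff):
          # Refer for DXA when the score falls below the population-specific
          # cutoff: <1 for US postmenopausal women, <3 for US men in the record.
          return score < cutoff

      s1 = ost_score(58, 72)   # low weight, older: score -2
      s2 = ost_score(80, 60)   # higher weight, younger: score 4
      print(s1, flag_for_dxa(s1, cutoff=1))
      print(s2, flag_for_dxa(s2, cutoff=3))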

  5. Photoacoustic Spectroscopy Analysis of Traditional Chinese Medicine

    NASA Astrophysics Data System (ADS)

    Chen, Lu; Zhao, Bin-xing; Xiao, Hong-tao; Tong, Rong-sheng; Gao, Chun-ming

    2013-09-01

    Chinese medicine is a historic cultural legacy of China and has made a significant contribution to medicine and healthcare for generations. The development of Chinese herbal medicine analysis is emphasized by the Chinese pharmaceutical industry. This study carried out an experimental analysis of ten kinds of Chinese herbal powders, including Fritillaria powder, based on the photoacoustic spectroscopy (PAS) method. First, a photoacoustic spectroscopy system was designed and constructed; in particular, a highly sensitive solid photoacoustic cell was built. Second, the experimental setup was verified through the characteristic emission spectrum of the light source, obtained by using carbon as a sample in the photoacoustic cell. Finally, photoacoustic spectroscopy analysis of Fritillaria and the other herbs was completed, verifying the specificity of the method for Chinese herbal medicine. This study shows that PAS can provide a valid, highly sensitive analytical method for the specificity of Chinese herbal medicine without sample preparation or damage.

  6. Health economic assessment: a methodological primer.

    PubMed

    Simoens, Steven

    2009-12-01

    This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments.

  8. X-Ray Polarimetry with GEMS

    NASA Technical Reports Server (NTRS)

    Strohmayer, Tod

    2011-01-01

    The polarization properties of cosmic X-ray sources are still largely unexplored. The Gravity and Extreme Magnetism SMEX (GEMS) will carry out the first sensitive X-ray polarization survey of a wide range of sources, including accreting compact objects (black holes and neutron stars), AGN, supernova remnants, magnetars, and rotation-powered pulsars. GEMS employs grazing-incidence foil mirrors and novel time-projection chamber (TPC) polarimeters leveraging the photoelectric effect to achieve high polarization sensitivity in the 2-10 keV band. I will provide an update of the project status, illustrate the expected performance with several science examples, and provide a brief overview of the data analysis challenges.

  9. Significance of dual polarized long wavelength radar for terrain analysis

    NASA Technical Reports Server (NTRS)

    Macdonald, H. C.; Waite, W. P.

    1978-01-01

    Long wavelength systems with improved penetration capability have been considered to have the potential for minimizing the vegetation contribution and enhancing the surface return variations. L-band imagery of the Arkansas geologic test site provides confirmatory evidence of this effect. However, the increased wavelength increases the sensitivity to larger scale structure at relatively small incidence angles. The regularity of agricultural and urban scenes provides large components in the low frequency-large scale portion of the roughness spectrum that are highly sensitive to orientation. The addition of a cross polarized channel is shown to enable the interpreter to distinguish vegetation and orientational perturbations in the surface return.

  10. Design and analysis for detection monitoring of forest health

    Treesearch

    F. A. Roesch

    1995-01-01

    An analysis procedure is proposed for the sample design of the Forest Health Monitoring Program (FHM) in the United States. The procedure is intended to provide increased sensitivity to localized but potentially important changes in forest health by explicitly accounting for the spatial relationships between plots in the FHM design. After a series of median sweeps...

  11. TBA PRODUCTION BY ACID HYDROLYSIS OF MTBE DURING HEATED HEADSPACE ANALYSIS & EVALUATION OF A BASE AS A PRESERVATIVE

    EPA Science Inventory

    At room temperature (20 ± 3 °C), purge and trap samplers provide poor sensitivity for analysis of the fuel oxygenates that are alcohols, such as tertiary butyl alcohol (TBA). Because alcohols are miscible or highly soluble in water, they are not efficiently transferred to a gas chr...

  12. 7 CFR 1710.302 - Financial forecasts-power supply borrowers.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... facilities; (3) Provide an in-depth analysis of the regional markets for power if loan feasibility depends to any degree on a borrower's ability to sell surplus power while its system loads grow to meet the... sensitivity analysis if required by RUS pursuant to § 1710.300(d)(5). (e) The projections shall be coordinated...

  13. Temperature-independent fiber-Bragg-grating-based atmospheric pressure sensor

    NASA Astrophysics Data System (ADS)

    Zhang, Zhiguo; Shen, Chunyan; Li, Luming

    2018-03-01

    Atmospheric pressure measurement is an important means of determining altitude for modern aircraft; moreover, pressure is an indispensable parameter in meteorological telemetry systems. With the development of society, people are increasingly concerned about the weather, and accurate, convenient atmospheric pressure measurements can provide strong support for meteorological analysis. However, the electronic atmospheric pressure sensors currently in use suffer from several shortcomings. After analysis and discussion, we propose an innovative structural design that combines a vacuum membrane box with a temperature-independent strain sensor based on an equal-strength cantilever beam structure and fiber Bragg grating (FBG) sensors. We provide experimental verification that the atmospheric pressure sensor has the advantages of a simple structure, no need for an external power supply, automatic temperature compensation, and high sensitivity. The sensor system shows good sensitivity, up to 100 nm/MPa, and good repeatability. In addition, the device exhibits the desired hysteresis behavior.

  14. Visible and Extended Near-Infrared Multispectral Imaging for Skin Cancer Diagnosis

    PubMed Central

    Rey-Barroso, Laura; Burgos-Fernández, Francisco J.; Delpueyo, Xana; Ares, Miguel; Malvehy, Josep; Puig, Susana

    2018-01-01

    With the goal of diagnosing skin cancer in an early and noninvasive way, an extended near infrared multispectral imaging system based on an InGaAs sensor with sensitivity from 995 nm to 1613 nm was built to evaluate deeper skin layers thanks to the higher penetration of photons at these wavelengths. The outcomes of this device were combined with those of a previously developed multispectral system that works in the visible and near infrared range (414 nm–995 nm). Both provide spectral and spatial information from skin lesions. A classification method to discriminate between melanomas and nevi was developed based on the analysis of first-order statistics descriptors, principal component analysis, and support vector machine tools. The system provided a sensitivity of 78.6% and a specificity of 84.6%, the latter one being improved with respect to that offered by silicon sensors. PMID:29734747
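    A pipeline of the kind this record describes (first-order statistics descriptors, PCA, then an SVM, evaluated by sensitivity and specificity) can be sketched with scikit-learn. The feature matrix, label coding, and hyperparameters below are hypothetical placeholders, not the authors' actual implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical data: first-order statistics (mean, std, skewness, ...) per band
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))   # 120 lesions x 40 spectral descriptors
y = rng.integers(0, 2, 120)      # 1 = melanoma, 0 = nevus

# Standardize, reduce with PCA, classify with an RBF-kernel SVM
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
y_pred = cross_val_predict(clf, X, y, cv=5)

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity={tp / (tp + fn):.3f}  specificity={tn / (tn + fp):.3f}")
```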

  15. Atypical antipsychotics, insulin resistance and weight; a meta-analysis of healthy volunteer studies.

    PubMed

    Burghardt, Kyle J; Seyoum, Berhane; Mallisho, Abdullah; Burghardt, Paul R; Kowluru, Renu A; Yi, Zhengping

    2018-04-20

    Atypical antipsychotics increase the risk of diabetes and cardiovascular disease through their side effects of insulin resistance and weight gain. The populations for which atypical antipsychotics are used carry a baseline risk of metabolic dysregulation prior to medication, which has made it difficult to fully understand whether atypical antipsychotics cause insulin resistance and weight gain directly. The purpose of this work was to conduct a systematic review and meta-analysis of atypical antipsychotic trials in healthy volunteers to better understand their effects on insulin sensitivity and weight gain. Furthermore, we aimed to evaluate the occurrence of insulin resistance with or without weight gain and with treatment length by using subgroup and meta-regression techniques. Overall, the meta-analysis provides evidence that atypical antipsychotics decrease insulin sensitivity (standardized mean difference = -0.437, p < 0.001) and increase weight (standardized mean difference = 0.591, p < 0.001) in healthy volunteers. It was found that decreases in insulin sensitivity were potentially dependent on treatment length but not weight gain: decreases in insulin sensitivity occurred in multi-dose studies of <13 days, while weight gain occurred in studies of 14 days and longer (maximum 28 days). These findings provide preliminary evidence that atypical antipsychotics cause insulin resistance and weight gain directly, independent of psychiatric disease, and may be associated with length of treatment. Further well-designed studies to assess the co-occurrence of insulin resistance and weight gain and to understand the mechanisms and sequence by which they occur are required. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Understanding and comparisons of different sampling approaches for the Fourier Amplitudes Sensitivity Test (FAST)

    PubMed Central

    Xu, Chonggang; Gertner, George

    2013-01-01

    Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling, and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimations. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions and the calculation of their corresponding estimation errors under different sampling schemes can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements. PMID:24143037
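    The classical search-curve variant of FAST can be illustrated in a few lines: each parameter is driven along a periodic search curve by a distinct integer frequency, and its partial variance is read off the Fourier amplitudes at that frequency's harmonics. The sketch below is a bare-bones illustration under assumed frequency choices, not the authors' estimator with its bias corrections and hypothesis tests.

```python
import numpy as np

def fast_first_order(model, freqs, n=4097, harmonics=2):
    """Search-curve FAST estimate of first-order sensitivity indices.

    freqs: distinct integer driver frequencies, one per parameter, chosen so
    that harmonics up to `harmonics` do not overlap (e.g., [11, 35, 73]);
    n must exceed twice the highest harmonic used (Nyquist).
    model: callable mapping an (n, k) sample matrix to an n-vector of outputs.
    """
    s = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Triangle-wave search curve keeps every parameter in [0, 1]
    X = 0.5 + np.arcsin(np.sin(np.outer(s, freqs))) / np.pi
    y = model(X)
    F = np.fft.rfft(y) / n            # normalized Fourier coefficients
    V = y.var()                       # total output variance along the curve
    S = []
    for w in freqs:
        # Partial variance: power at the driver frequency and its harmonics
        Vi = 2.0 * sum(abs(F[p * w]) ** 2 for p in range(1, harmonics + 1))
        S.append(Vi / V)
    return np.array(S)

# e.g., model = lambda X: np.sin(2*np.pi*X[:, 0]) + 0.5 * X[:, 1]**2 + X[:, 2]
```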

  18. Individual Test Point Fluctuations of Macular Sensitivity in Healthy Eyes and Eyes With Age-Related Macular Degeneration Measured With Microperimetry.

    PubMed

    Barboni, Mirella Telles Salgueiro; Szepessy, Zsuzsanna; Ventura, Dora Fix; Németh, János

    2018-04-01

    To establish fluctuation limits, it was considered that not only overall macular sensitivity but also fluctuations of individual test points in the macula might have clinical value. Three repeated microperimetry measurements were performed using the Standard Expert test of the Macular Integrity Assessment (MAIA) in healthy subjects (N = 12, age = 23.8 ± 1.5 years) and in patients with age-related macular degeneration (AMD) (N = 11, age = 68.5 ± 7.4 years). A total of 37 macular points arranged in four concentric rings and four quadrants were analyzed individually and in groups. The data show low fluctuation of the macular sensitivity of individual test points in healthy subjects (average = 1.38 ± 0.28 dB) and AMD patients (average = 2.12 ± 0.60 dB). Higher fluctuation was related more to lower point sensitivity than to distance from the central point. Fixation stability showed no effect on the sensitivity fluctuation. The 95th percentile of the standard deviations of healthy subjects was, on average, 2.7 dB, ranging from 1.2 to 4 dB depending on the point tested. Point analysis and regional analysis might be considered prior to evaluating macular sensitivity fluctuation in order to distinguish between normal variation and a clinical change. Statistical methods were used to compare repeated microperimetry measurements and to establish fluctuation limits of the macular sensitivity. This analysis could add information regarding the integrity of different macular areas and provide new insights into fixation points prior to biofeedback fixation training.

  19. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.

  20. Quantitative sensory testing response patterns to capsaicin- and ultraviolet-B-induced local skin hypersensitization in healthy subjects: a machine-learned analysis.

    PubMed

    Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G; Ultsch, Alfred

    2017-08-16

    The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models.

  1. Extending Raman's reach: enabling applications via greater sensitivity and speed

    NASA Astrophysics Data System (ADS)

    Creasey, David; Sullivan, Mike; Paul, Chris; Rathmell, Cicely

    2018-02-01

    Over the last decade, miniature fiber optic spectrometers have greatly expanded the ability of Raman spectroscopy to tackle practical applications in the field, from mobile pharmaceutical ID to hazardous material assessment in remote locations. There remains a gap, however, between the typical diode array spectrometer and their more sensitive benchtop analogs. High sensitivity, cooled Raman spectrometers have the potential to narrow that gap by providing greater sensitivity, better SNR, and faster measurement times. In this paper, we'll look at the key factors in the design of high sensitivity miniature Raman spectrometers and their associated accessories, as well as the key metric for direct comparison of these systems - limit of detection. With the availability of our high sensitivity Raman systems operating at wavelengths from the UV to NIR, many applications are now becoming practical in the field, from trace level detection to analysis of complex biological samples.

  2. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.

  3. Association between atopic dermatitis and contact sensitization: A systematic review and meta-analysis.

    PubMed

    Hamann, Carsten R; Hamann, Dathan; Egeberg, Alexander; Johansen, Jeanne D; Silverberg, Jonathan; Thyssen, Jacob P

    2017-07-01

    It is unclear whether patients with atopic dermatitis (AD) have an altered prevalence or risk for contact sensitization. Increased exposure to chemicals in topical products together with impaired skin barrier function suggest a higher risk, whereas the immune profile suggests a lower risk. To perform a systematic review and meta-analysis of the association between AD and contact sensitization. The PubMed/Medline, Embase, and Cochrane databases were searched for articles that reported on contact sensitization in individuals with and without AD. The literature search yielded 10,083 citations; 417 were selected based on title and abstract screening and 74 met inclusion criteria. In a pooled analysis, no significant difference in contact sensitization between AD and controls was evident (random effects model odds ratio [OR] = 0.891; 95% confidence interval [CI] = 0.771-1.03). There was a positive association in studies that compared AD patients with individuals from the general population (OR 1.50, 95% CI 1.23-1.93) but an inverse association when comparing with referred populations (OR 0.753, 95% CI 0.63-0.90). Included studies used different tools to diagnose AD and did not always provide information on current or past disease. Patch test allergens varied between studies. No overall relationship between AD and contact sensitization was found. We recommend that clinicians consider patch testing AD patients when allergic contact dermatitis is suspected. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  4. Community structure analysis of rejection sensitive personality profiles: A common neural response to social evaluative threat?

    PubMed

    Kortink, Elise D; Weeda, Wouter D; Crowley, Michael J; Gunther Moor, Bregtje; van der Molen, Melle J W

    2018-06-01

    Monitoring social threat is essential for maintaining healthy social relationships, and recent studies suggest a neural alarm system that governs our response to social rejection. Frontal-midline theta (4-8 Hz) oscillatory power might act as a neural correlate of this system by being sensitive to unexpected social rejection. Here, we examined whether frontal-midline theta is modulated by individual differences in personality constructs sensitive to social disconnection. In addition, we examined the sensitivity of feedback-related brain potentials (i.e., the feedback-related negativity and P3) to social feedback. Sixty-five undergraduate female participants (mean age = 19.69 years) participated in the Social Judgment Paradigm, a fictitious peer-evaluation task in which participants provided expectancies about being liked/disliked by peer strangers. Thereafter, they received feedback signaling social acceptance/rejection. A community structure analysis was employed to delineate personality profiles in our data. Results provided evidence of two subgroups: one group scored high on attachment-related anxiety and fear of negative evaluation, whereas the other group scored high on attachment-related avoidance and low on fear of negative evaluation. In both groups, unexpected rejection feedback yielded a significant increase in theta power. The feedback-related negativity was sensitive to unexpected feedback, regardless of valence, and was largest for unexpected rejection feedback. The feedback-related P3 was significantly enhanced in response to expected social acceptance feedback. Together, these findings confirm the sensitivity of frontal midline theta oscillations to the processing of social threat, and suggest that this alleged neural alarm system behaves similarly in individuals that differ in personality constructs relevant to social evaluation.

  5. Characterization of breast lesion using T1-perfusion magnetic resonance imaging: Qualitative vs. quantitative analysis.

    PubMed

    Thakran, S; Gupta, P K; Kabra, V; Saha, I; Jain, P; Gupta, R K; Singh, A

    2018-06-14

    The objective of this study was to quantify the hemodynamic parameters using first-pass analysis of T1-perfusion magnetic resonance imaging (MRI) data of the human breast and to compare these parameters with the existing tracer kinetic parameters and with semi-quantitative and qualitative T1-perfusion analysis in terms of lesion characterization. MRI of the breast was performed in 50 women (mean age, 44 ± 11 [SD] years; range: 26-75 years) with a total of 15 benign and 35 malignant breast lesions. After pre-processing, T1-perfusion MRI data were analyzed using a qualitative approach by two radiologists (visual classification of the kinetic curve into types I, II or III), a semi-quantitative approach (characterization of kinetic curve types using empirical parameters), a generalized tracer kinetic model (tracer kinetic parameters), and first-pass analysis (hemodynamic parameters). The chi-squared test, t-test, one-way analysis of variance (ANOVA) with Bonferroni post-hoc test, and receiver operating characteristic (ROC) curves were used for statistical analysis. All quantitative parameters except leakage volume (Ve), as well as qualitative (types I and III) and semi-quantitative curves (types I and III), provided significant differences (P < 0.05) between benign and malignant lesions. Kinetic parameters, particularly the volume transfer coefficient (Ktrans), provided a significant difference (P < 0.05) between all grades except grade II vs. III. The hemodynamic parameter (relative leakage-corrected breast blood volume [rBBVcorr]) provided a statistically significant difference (P < 0.05) between all grades. It also provided the highest sensitivity and specificity among all parameters in differentiating between grades of malignant breast lesions. Quantitative parameters, particularly rBBVcorr and Ktrans, provided similar sensitivity and specificity in differentiating benign from malignant breast lesions for this cohort. Moreover, rBBVcorr provided the best differentiation between grades of malignant breast lesions among all the parameters. Copyright © 2018. Published by Elsevier Masson SAS.

  6. Eddy Fluxes and Sensitivity of the Water Cycle to Spatial Resolution in Idealized Regional Aquaplanet Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Leung, Lai-Yung R.; Gustafson, William I.

    2014-02-28

    A multi-scale moisture budget analysis is used to identify the mechanisms responsible for the sensitivity of the water cycle to spatial resolution using idealized regional aquaplanet simulations. In the higher-resolution simulations, moisture transport by eddy fluxes dries the boundary layer, enhancing evaporation and precipitation. This effect of the eddies, which is underestimated by the physics parameterizations in the low-resolution simulations, is found to be responsible for the sensitivity of the water cycle both directly and through its upscale effect on the mean circulation. Correlations among moisture transport by eddies at adjacent ranges of scales provide the potential for reducing this sensitivity by representing the unresolved eddies by their marginally resolved counterparts.

  7. Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy

    PubMed Central

    Cook, Michael J; Puri, Basant K

    2016-01-01

    The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID:27920571

  8. Technical needs assessment: UWMC's sensitivity analysis guides decision-making.

    PubMed

    Alotis, Michael

    2003-01-01

    In today's healthcare market, it is critical for provider institutions to offer the latest and best technological services while remaining fiscally sound. In academic practices, like the University of Washington Medical Center (UWMC), there are the added responsibilities of teaching and research that require a high-tech environment to thrive. These conditions and needs require extensive analysis of not only what equipment to buy, but also when and how it should be acquired. In an organization like the UWMC, which has strategically positioned itself for growth, it is useful to build a sensitivity analysis based on the strategic plan. A common forecasting tool, the sensitivity analysis lays out existing and projected business operations with volume assumptions displayed in layers. Each layer of current and projected activity is plotted over time and placed against a background depicting the capacity of the key modality. Key elements of a sensitivity analysis include necessity, economic assessment, performance, compatibility, reliability, service and training. There are two major triggers that cause us to consider the purchase of new imaging equipment and that determine how to evaluate the equipment we buy. One trigger revolves around our ability to serve patients by seeing them on a timely basis. If we find a significant gap between demand and our capacity to meet it, or anticipate a greater increased demand based upon trends, we begin to consider enhancing that capacity. A second trigger is the release of a breakthrough or substantially improved technology that will clearly have a positive impact on clinical efficacy and efficiency, thereby benefiting the patient. Especially in radiology departments, where many technologies require large expenditures, it is no longer acceptable simply to spend on new and improved technologies. It is necessary to justify them as a strong investment in clinical management and efficacy. There is pressure to provide "proof" at the department level and beyond. By applying sensitivity analysis and other forecasting methods, we are able to spend our resources judiciously in order to get the equipment we need when we need it. This helps ensure that we have efficacious, efficient systems--and enough of them--so that our patients are examined on a timely basis and our clinics run smoothly. It also goes a long way toward making certain that the best equipment is available to our clinicians, researchers, students and patients alike.

  9. Analysis of air-, moisture- and solvent-sensitive chemical compounds by mass spectrometry using an inert atmospheric pressure solids analysis probe.

    PubMed

    Mosely, Jackie A; Stokes, Peter; Parker, David; Dyer, Philip W; Messinis, Antonis M

    2018-02-01

    A novel method has been developed that enables chemical compounds to be transferred from an inert atmosphere glove box and into the atmospheric pressure ion source of a mass spectrometer whilst retaining a controlled chemical environment. This innovative method is simple and cheap to implement on some commercially available mass spectrometers. We have termed this approach inert atmospheric pressure solids analysis probe (iASAP) and demonstrate the benefit of this methodology for two air-/moisture-sensitive chemical compounds whose characterisation by mass spectrometry is now possible and easily achieved. The simplicity of the design means that moving between iASAP and standard ASAP is straightforward and quick, providing a highly flexible platform with rapid sample turnaround.

  10. Language evaluation protocol for children aged 2 months to 23 months: analysis of sensitivity and specificity.

    PubMed

    Labanca, Ludimila; Alves, Cláudia Regina Lindgren; Bragança, Lidia Lourenço Cunha; Dorim, Diego Dias Ramos; Alvim, Cristina Gonçalves; Lemos, Stela Maris Aguiar

    2015-01-01

    To establish cutoff points for the analysis of the Behavior Observation Form (BOF) in children aged 2 to 23 months and to evaluate sensitivity and specificity by age group and domain (Emission, Reception, and Cognitive Aspects of Language). The sample consisted of 752 children who underwent the BOF. Each child was classified as having language development appropriate for age or as being at possible risk of language impairment. Performance indicators (PI) were calculated for each domain, as well as an overall PI across all domains. Values for sensitivity and specificity were also calculated. The cutoff points for possible risk of language impairment for each domain and each age group were obtained using the receiver operating characteristic (ROC) curve. The results revealed that one-third of the assessed children were at risk of language impairment in the first two years of life. The analysis of the BOF showed high sensitivity (>90%) in all categories and all age groups; however, the chance of false-positive results was higher than 20% for the majority of aspects evaluated. It was possible to establish cutoff points for all categories and age groups with good correlation between sensitivity and specificity, except for the age group of 2 to 6 months. This study provides important contributions to the discussion on the evaluation of language development in children younger than 2 years.
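    ROC-based cutoffs of this kind are commonly chosen by maximizing Youden's J (sensitivity + specificity - 1). The sketch below shows the generic recipe with scikit-learn on hypothetical scores and labels, not the study's actual data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: y = 1 flags risk of language impairment; higher score = riskier
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
score = y * 0.8 + rng.normal(size=200)   # stand-in risk scores

fpr, tpr, thresholds = roc_curve(y, score)
j = tpr - fpr                            # Youden's J at each candidate threshold
best = int(np.argmax(j))
print(f"cutoff={thresholds[best]:.2f}  sensitivity={tpr[best]:.2f}  "
      f"specificity={1 - fpr[best]:.2f}")
```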

  11. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.

  12. Sensitivity of combustion and ignition characteristics of the solid-fuel charge of the microelectromechanical system of a microthruster to macrokinetic and design parameters

    NASA Astrophysics Data System (ADS)

    Futko, S. I.; Ermolaeva, E. M.; Dobrego, K. V.; Bondarenko, V. P.; Dolgii, L. N.

    2012-07-01

    We have developed a sensitivity analysis permitting effective estimation of the change in the impulse responses of a microthruster and in the ignition characteristics of the solid-fuel charge caused by variation of the basic macrokinetic parameters of the mixed fuel and the design parameters of the microthruster's combustion chamber. On the basis of the proposed sensitivity analysis, we have estimated the spread of both the propulsive force and impulse and of the induction period and self-ignition temperature depending on the macrokinetic combustion parameters (pre-exponential factor, activation energy, density, and heat content) of the solid-fuel charge of the microthruster. The obtained results can be used for rapid and effective estimation of the spread of goal functions to provide stable physicochemical characteristics and impulse responses of solid-fuel mixtures in making and using microthrusters.

  13. Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality

    NASA Astrophysics Data System (ADS)

    Pratama, Mega Aria; Rosyidi, Cucuk Nur

    2017-11-01

    This research develops a make-or-buy decision model that considers supplier imperfect quality. The model can help companies decide whether to make or buy a component at the best quality and least cost in a multistage manufacturing process. Imperfect quality is one of the cost components to be minimized in the model; components of imperfect quality are not necessarily defective and can still be reworked and used in assembly. The research also provides a numerical example and a sensitivity analysis to show how the model works. We solve the numerical problem by simulation with the aid of Crystal Ball software. The sensitivity analysis shows that the percentage of imperfect components generally does not affect the model significantly; the model is not sensitive to changes in these parameters because the imperfect-quality costs are small compared with the overall total cost components.

  14. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    PubMed

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
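    The core redistribution idea can be illustrated with a tipping-point style scan: missing counts in each arm are reallocated to the favorable outcome in some assumed fraction, and the treatment comparison is recomputed across assumptions. The sketch below is a generic illustration with hypothetical counts, not the paper's closed-form covariance machinery.

```python
import numpy as np

def adjusted_proportion(favorable, unfavorable, missing, pi):
    """Proportion favorable after counting a fraction `pi` of missing as favorable."""
    return (favorable + pi * missing) / (favorable + unfavorable + missing)

# Hypothetical arm counts: (favorable, unfavorable, missing)
treat, control = (60, 25, 15), (50, 35, 15)

for pi_t in np.linspace(0, 1, 5):        # MNAR assumption for the treated arm
    for pi_c in np.linspace(0, 1, 5):    # MNAR assumption for the control arm
        diff = adjusted_proportion(*treat, pi_t) - adjusted_proportion(*control, pi_c)
        print(f"pi_t={pi_t:.2f}  pi_c={pi_c:.2f}  risk difference={diff:+.3f}")
```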

  15. Characterization of image heterogeneity using 2D Minkowski functionals increases the sensitivity of detection of a targeted MRI contrast agent.

    PubMed

    Canuto, Holly C; McLachlan, Charles; Kettunen, Mikko I; Velic, Marko; Krishnan, Anant S; Neves, Andre' A; de Backer, Maaike; Hu, D-E; Hobson, Michael P; Brindle, Kevin M

    2009-05-01

    A targeted Gd³⁺-based contrast agent has been developed that detects tumor cell death by binding to the phosphatidylserine (PS) exposed on the plasma membrane of dying cells. Although this agent has been used to detect tumor cell death in vivo, the differences in signal intensity between treated and untreated tumors were relatively small. As cell death is often spatially heterogeneous within tumors, we investigated whether an image analysis technique that parameterizes heterogeneity could be used to increase the sensitivity of detection of this targeted contrast agent. Two-dimensional (2D) Minkowski functionals (MFs) provided an automated and reliable method for parameterizing image heterogeneity, which does not require prior assumptions about the number of regions or features in the image, and were shown to increase the sensitivity of detection of the contrast agent compared to simple signal intensity analysis. © 2009 Wiley-Liss, Inc.
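    In 2D, the three Minkowski functionals of a thresholded image are its area, perimeter, and Euler characteristic; sweeping the threshold turns them into heterogeneity curves. The sketch below uses scikit-image measures as a plausible stand-in for the authors' implementation, with the input image hypothetical.

```python
import numpy as np
from skimage import measure

def minkowski_2d(mask):
    """Area, perimeter, and Euler characteristic of a binary image."""
    area = int(mask.sum())
    perimeter = measure.perimeter(mask)
    euler = measure.euler_number(mask)
    return area, perimeter, euler

def mf_curves(img, n_thresholds=50):
    """Minkowski functionals as functions of the intensity threshold."""
    thresholds = np.linspace(img.min(), img.max(), n_thresholds)
    return thresholds, np.array([minkowski_2d(img > t) for t in thresholds])

# Hypothetical use: img = signal-intensity map of a tumor slice;
# t, curves = mf_curves(img); curves[:, 2] traces connectivity vs. threshold.
```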

  16. Shot noise-limited Cramér-Rao bound and algorithmic sensitivity for wavelength shifting interferometry

    NASA Astrophysics Data System (ADS)

    Chen, Shichao; Zhu, Yizheng

    2017-02-01

    Sensitivity is a critical index measuring the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
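    For orientation, the textbook shot-noise-limited bound for phase estimation from a sinusoidal interferogram (a generic result, not the paper's specific derivation) can be sketched as follows, assuming K frames with mean photon count I0 per pixel and fringe visibility V:

```latex
% Poisson (shot-noise) Fisher information for I_k = I_0 [1 + V cos(\phi + \theta_k)];
% in the small-V approximation, \sum_k \sin^2(\phi+\theta_k)/(1+V\cos(\cdot)) \approx K/2:
\mathcal{I}(\phi) = \sum_{k=1}^{K} \frac{(\partial \mu_k / \partial \phi)^2}{\mu_k}
                  \approx \frac{K I_0 V^2}{2},
\qquad
\sigma_\phi \ge \mathcal{I}(\phi)^{-1/2} = \sqrt{\frac{2}{K I_0 V^2}},
\qquad
\sigma_L = \frac{\lambda}{2\pi}\,\sigma_\phi .
```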

  17. Recognition of central sensitization in patients with musculoskeletal pain: Application of pain neurophysiology in manual therapy practice.

    PubMed

    Nijs, Jo; Van Houdenhove, Boudewijn; Oostendorp, Rob A B

    2010-04-01

    Central sensitization plays an important role in the pathophysiology of numerous musculoskeletal pain disorders, yet it remains unclear how manual therapists can recognize this condition. Therefore, mechanism based clinical guidelines for the recognition of central sensitization in patients with musculoskeletal pain are provided. By using our current understanding of central sensitization during the clinical assessment of patients with musculoskeletal pain, manual therapists can apply the science of nociceptive and pain processing neurophysiology to the practice of manual therapy. The diagnosis/assessment of central sensitization in individual patients with musculoskeletal pain is not straightforward, however manual therapists can use information obtained from the medical diagnosis, combined with the medical history of the patient, as well as the clinical examination and the analysis of the treatment response in order to recognize central sensitization. The clinical examination used to recognize central sensitization entails the distinction between primary and secondary hyperalgesia. Copyright 2009 Elsevier Ltd. All rights reserved.

  18. Guide to the economic analysis of community energy systems

    NASA Astrophysics Data System (ADS)

    Pferdehirt, W. P.; Croke, K. G.; Hurter, A. P.; Kennedy, A. S.; Lee, C.

    1981-08-01

    This guidebook provides a framework for the economic analysis of community energy systems. The analysis facilitates a comparison of competing configurations in community energy systems, as well as a comparison with conventional energy systems. Various components of costs and revenues to be considered are discussed in detail. Computational procedures and accompanying worksheets are provided for calculating the net present value, straight and discounted payback periods, the rate of return, and the savings to investment ratio for the proposed energy system alternatives. These computations are based on a projection of the system's costs and revenues over its economic lifetimes. The guidebook also discusses the sensitivity of the results of this economic analysis to changes in various parameters and assumptions.
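    The guidebook's core figures of merit (net present value, discounted payback, savings-to-investment ratio) reduce to a few lines of arithmetic. The sketch below implements them generically, with the discount rate and cash flows as hypothetical inputs rather than the guidebook's worksheets.

```python
def npv(rate, cashflows):
    """Net present value; cashflows[0] is year-0 (negative = investment)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def discounted_payback(rate, cashflows):
    """First year in which the cumulative discounted cash flow turns non-negative."""
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf / (1 + rate) ** t
        if cum >= 0:
            return t
    return None  # does not pay back within the horizon

def savings_to_investment(rate, savings, investment):
    """Ratio of the discounted savings stream to the discounted investment stream."""
    return npv(rate, [0] + list(savings)) / -npv(rate, list(investment))

# Hypothetical community energy system: $1.2M investment, $150k/yr savings for 20 yr
flows = [-1_200_000] + [150_000] * 20
print(npv(0.07, flows), discounted_payback(0.07, flows))
```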

  19. Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay

    NASA Astrophysics Data System (ADS)

    Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander

    2018-06-01

    Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as short analysis time and simplicity of testing and result evaluation. However, LFIA based on gold nanospheres lacks the desired sensitivity, limiting its wider application. In this study, spherical nanogold labels, along with new types of nanogold labels such as gold nanopopcorns and nanostars, were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that a label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL⁻¹ with a limit of detection of 0.1 ng mL⁻¹, a fivefold improvement in sensitivity over the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used for comparison as an amplification of LFIA for procalcitonin detection; its sensitivity was 10 times better than that of the conventional LFIA with a gold nanosphere label. The proposed LFIA approach based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).

  20. Confidence Intervals for the Between-Study Variance in Random Effects Meta-Analysis Using Generalised Cochran Heterogeneity Statistics

    ERIC Educational Resources Information Center

    Jackson, Dan

    2013-01-01

    Statistical inference is problematic in the common situation in meta-analysis where the random effects model is fitted to just a handful of studies. In particular, the asymptotic theory of maximum likelihood provides a poor approximation, and Bayesian methods are sensitive to the prior specification. Hence, less efficient, but easily computed and…

  1. Sensitivity to musical structure in the human brain

    PubMed Central

    McDermott, Josh H.; Norman-Haignere, Sam; Kanwisher, Nancy

    2012-01-01

    Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure in music. In contrast, neuroimaging studies targeting the representation of music structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes. PMID:23019005

  2. IN SITU SOLID-PHASE EXTRACTION AND ANALYSIS OF ULTRA-TRACE SYNTHETIC MUSKS IN MUNICIPAL SEWAGE EFFLUENT USING GAS CHROMATOGRAPHY-MASS SPECTROMETRY, FULL-SCAN MODE

    EPA Science Inventory



    Fragrance materials, such as synthetic musks in aqueous samples, are normally analyzed by GC/MS in the selected ion monitoring (SIM) mode to provide maximum sensitivity after liquid-liquid extraction of 1-L samples. A 1-L sample, however, usually provides too little ana...

  3. ON-SITE SOLID PHASE EXTRACTION AND LABORATORY ANALYSIS OF ULTRA-TRACE SYNTHETIC MUSKS IN MUNICIPAL SEWAGE EFFLUENT USING GAS CHROMATOGRAPHY-MASS SPECTROMETRY, FULL-SCAN MODE

    EPA Science Inventory



    Fragrance materials, such as synthetic musks in aqueous samples, are normally analyzed by GC/MS in the selected ion monitoring (SIM) mode to provide maximum sensitivity after liquid-liquid extraction of 1-L samples. A 1-L sample, however, usually provides too little ana...

  4. Identification and Sensitivity Analysis for Average Causal Mediation Effects with Time-Varying Treatments and Mediators: Investigating the Underlying Mechanisms of Kindergarten Retention Policy.

    PubMed

    Park, Soojin; Steiner, Peter M; Kaplan, David

    2018-06-01

    Considering that causal mechanisms unfold over time, it is important to investigate the mechanisms over time, taking into account the time-varying features of treatments and mediators. However, identification of the average causal mediation effect in the presence of time-varying treatments and mediators is often complicated by time-varying confounding. This article aims to provide a novel approach to uncovering causal mechanisms in time-varying treatments and mediators in the presence of time-varying confounding. We provide different strategies for identification and sensitivity analysis under homogeneous and heterogeneous effects. Homogeneous effects are those in which each individual experiences the same effect, and heterogeneous effects are those in which the effects vary over individuals. Most importantly, we provide an alternative definition of average causal mediation effects that evaluates a partial mediation effect; the effect that is mediated by paths other than through an intermediate confounding variable. We argue that this alternative definition allows us to better assess at least a part of the mediated effect and provides meaningful and unique interpretations. A case study using ECLS-K data that evaluates kindergarten retention policy is offered to illustrate our proposed approach.

  5. Competing for Quality: An Analysis of the Comparability of Public School Teacher Salaries to Earning Opportunities in Other Occupations. Occasional Papers in Educational Policy Analysis. Paper No. 415.

    ERIC Educational Resources Information Center

    Bird, Ronald E.

    This paper describes an initial effort to provide a carefully reasoned, factually based, systematic analysis of teacher pay in comparison to pay in other occupations available to college-educated workers. It also reports on the sensitivity of these salary comparison estimates to differences in certain characteristics of the labor force, such as…

  6. Sensitivity of the Positive and Negative Syndrome Scale (PANSS) in Detecting Treatment Effects via Network Analysis.

    PubMed

    Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P

    2017-12-01

    Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.
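    As a concrete illustration of the network-level summaries named above, the sketch below builds a symptom graph from item-level correlations and reports its density and degree centralities. This is a minimal stand-in for such an analysis, not the authors' pipeline; the threshold, placeholder data, and variable names are assumptions.

    ```python
    import numpy as np
    import networkx as nx

    def symptom_network(items, threshold=0.3):
        """Threshold the absolute item-item correlation matrix into a graph."""
        corr = np.corrcoef(items, rowvar=False)          # 30 x 30 for PANSS items
        adj = (np.abs(corr) >= threshold) & ~np.eye(corr.shape[0], dtype=bool)
        return nx.from_numpy_array(adj.astype(int))

    rng = np.random.default_rng(0)
    pre = rng.normal(size=(733, 30))                     # placeholder baseline item scores
    post = pre + rng.normal(scale=0.5, size=pre.shape)   # placeholder post-treatment scores

    for label, data in [("baseline", pre), ("post-treatment", post)]:
        g = symptom_network(data)
        print(label,
              "density:", round(nx.density(g), 3),                       # macroscopic property
              "max centrality:", round(max(nx.degree_centrality(g).values()), 3))
    ```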

  7. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stage. For the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis, and tolerance allocation are investigated. Based on inverse kinematic analysis, the error model of the A3 head is established using first-order perturbation theory and the vector chain method. According to the mapping properties of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources that affect accuracy at the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources: a probabilistic sensitivity model is established and a global sensitivity index is proposed to analyze their influence on end-effector accuracy. The results show that orientation error sources have a greater effect on end-effector accuracy. Based on the sensitivity analysis results, tolerance design is cast as a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. Using a genetic algorithm, the allocation of tolerances to each component is determined, yielding tolerance ranges for ten kinds of geometric error sources. These results provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  8. The countermovement jump to monitor neuromuscular status: A meta-analysis.

    PubMed

    Claudino, João Gustavo; Cronin, John; Mezêncio, Bruno; McMaster, Daniel Travis; McGuigan, Michael; Tricoli, Valmor; Amadio, Alberto Carlos; Serrão, Julio Cerca

    2017-04-01

    The primary objective of this meta-analysis was to compare countermovement jump (CMJ) performance in studies that reported the highest value as opposed to average value for the purposes of monitoring neuromuscular status (i.e., fatigue and supercompensation). The secondary aim was to determine the sensitivity of the dependent variables. Systematic review with meta-analysis. The meta-analysis was conducted on the highest or average of a number of CMJ variables. Multiple literature searches were undertaken in Pubmed, Scopus, and Web of Science to identify articles utilizing CMJ to monitor training status. Effect sizes (ES) with 95% confidence interval (95% CI) were calculated using the mean and standard deviation of the pre- and post-testing data. The coefficient of variation (CV) with 95% CI was also calculated to assess the level of instability of each variable. Heterogeneity was assessed using a random-effects model. 151 articles were included providing a total of 531 ESs for the meta-analyses; 85.4% of articles used highest CMJ height, 13.2% used average and 1.3% used both when reporting changes in CMJ performance. Based on the meta-analysis, average CMJ height was more sensitive than highest CMJ height in detecting CMJ fatigue and supercompensation. Furthermore, other CMJ variables such as peak power, mean power, peak velocity, peak force, mean impulse, and power were sensitive in tracking the supercompensation effects of training. The average CMJ height was more sensitive than highest CMJ height in monitoring neuromuscular status; however, further investigation is needed to determine the sensitivity of other CMJ performance variables. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
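    A minimal sketch of the two statistics described above, assuming a pooled-SD standardized mean difference with a normal-approximation 95% CI and a percent coefficient of variation; the input numbers are hypothetical CMJ heights, and the estimators used in the meta-analysis may differ in detail.

    ```python
    import math

    def effect_size(mean_pre, sd_pre, n_pre, mean_post, sd_post, n_post):
        """Pooled-SD standardized mean difference with a normal-approximation CI."""
        sd_pooled = math.sqrt(((n_pre - 1) * sd_pre**2 + (n_post - 1) * sd_post**2)
                              / (n_pre + n_post - 2))
        d = (mean_post - mean_pre) / sd_pooled
        se = math.sqrt((n_pre + n_post) / (n_pre * n_post)
                       + d**2 / (2 * (n_pre + n_post)))   # Hedges & Olkin approximation
        return d, (d - 1.96 * se, d + 1.96 * se)

    def cv_percent(sd, mean):
        """Coefficient of variation, in percent."""
        return 100.0 * sd / mean

    d, ci = effect_size(40.1, 4.2, 20, 38.3, 4.5, 20)     # hypothetical CMJ heights (cm)
    print(f"ES = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), "
          f"CV = {cv_percent(4.2, 40.1):.1f}%")
    ```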

  9. Coal resources in environmentally-sensitive lands under federal management

    USGS Publications Warehouse

    Watson, William D.; Tully, John K.; Moser, Edward N.; Dee, David P.; Bryant, Karen; Schall, Richard; Allan, Harold A.

    1995-01-01

    This report presents estimates of coal-bearing acreage and coal tonnage in environmentally-sensitive areas. The analysis was conducted to provide data for rulemaking by the Federal Office of Surface Mining (Watson and others, 1995). The rulemaking clarifies conditions under which coal can be mined in environmentally-sensitive areas. The area of the U.S. is about 2.3 billion acres. Contained within that acreage are certain environmentally-sensitive and unique areas (including parks, forests, and various other Federal land preserves). These areas are afforded special protection under Federal and State law. Altogether these protected areas occupy about 400 million acres. This report assesses coal acreage and coal tonnage in these protected Federal land preserves. Results are presented in the form of 8 map-displays prepared using GIS methods at a national scale. Tables and charts that accompany each map provide estimates of the total acreage in Federal land preserve units that overlap or fall within coal fields, coal-bearing acreage in each unit, and coal tonnage in each unit. Summary charts, compiled from the maps, indicate that about 8% of the Nation's coal reserves are located within environmentally-sensitive Federal land preserves.

  10. Bioimaging of cells and tissues using accelerator-based sources.

    PubMed

    Petibois, Cyril; Cestelli Guidi, Mariangela

    2008-07-01

    A variety of techniques exist that provide chemical information in the form of a spatially resolved image: electron microprobe analysis, nuclear microprobe analysis, synchrotron radiation microprobe analysis, secondary ion mass spectrometry, and confocal fluorescence microscopy. Linear (LINAC) and circular (synchrotron) particle accelerators have been constructed worldwide to provide the scientific community with unprecedented analytical performance. Now, these facilities match at least one of the three analytical features required for the biological field: (1) a sufficient spatial resolution for single cell (<1 μm) or tissue (<1 mm) analyses, (2) a temporal resolution to follow molecular dynamics, and (3) a sensitivity in the micromolar to nanomolar range, thus allowing true investigations on biological dynamics. Third-generation synchrotrons now offer the opportunity of bioanalytical measurements at nanometer resolutions with incredible sensitivity. Linear accelerators are more specialized in their physical features but may exceed synchrotron performances. All these techniques have become irreplaceable tools for developing knowledge in biology. This review highlights the pros and cons of the most popular techniques that have been implemented on accelerator-based sources to address analytical issues on biological specimens.

  11. An investigation of the matrix sensitivity of refinery gas analysis using gas chromatography with flame ionisation detection.

    PubMed

    Ferracci, Valerio; Brown, Andrew S; Harris, Peter M; Brown, Richard J C

    2015-02-27

    The response of a flame ionisation detector (FID) on a gas chromatograph to methane, ethane, propane, i-butane and n-butane in a series of multi-component refinery gas standards was investigated to assess the matrix sensitivity of the instrument. High-accuracy synthetic gas standards, traceable to the International System of Units, were used to minimise uncertainties. The instrument response exhibited a small dependence on the component amount fraction: this behaviour, consistent with that of another FID, was thoroughly characterised over a wide range of component amount fractions and was shown to introduce a negligible bias in the analysis of refinery gas samples, provided a suitable reference standard is employed. No significant effects of the molar volume, density and viscosity of the gas mixtures on the instrument response were observed, indicating that the FID is suitable for the analysis of refinery gas mixtures over a wide range of component amount fractions provided that appropriate drift-correction procedures are employed. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Inflammatory Cytokines as Preclinical Markers of Adverse Responses to Chemical Stressors

    EPA Science Inventory

    Abstract: The in vivo cytokine response to chemical stressors is a promising mainstream tool used to assess potential systemic inflammation and immune function changes. Notably, new instrumentation and statistical analysis provide the selectivity and sensitivity to rapidly diff...

  13. Derivation of Continuum Models from An Agent-based Cancer Model: Optimization and Sensitivity Analysis.

    PubMed

    Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank

    2017-01-01

    Agent-based models provide a formidable tool for exploring the complex and emergent behaviour of biological systems and yield accurate results, but with the drawback of needing substantial computational power and time for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming and applies it to optimization of possible treatment as well as dosage sensitivity analysis. For certain values of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for division of labour between the two approaches is explored. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
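    A hedged sketch of the general idea, not the authors' equations: a deterministic ODE and an SDE counterpart integrated by Euler-Maruyama, with logistic growth standing in for the tumour dynamics. All parameter names and values are placeholders.

    ```python
    import numpy as np

    def simulate(r=0.3, K=1e4, sigma=0.05, x0=100.0, t_end=50.0, dt=0.01, seed=1):
        """Integrate a logistic ODE and its SDE counterpart (Euler-Maruyama)."""
        rng = np.random.default_rng(seed)
        n = int(t_end / dt)
        x_ode = np.empty(n)
        x_sde = np.empty(n)
        x_ode[0] = x_sde[0] = x0
        for i in range(1, n):
            x_ode[i] = x_ode[i-1] + dt * r * x_ode[i-1] * (1 - x_ode[i-1] / K)
            drift = r * x_sde[i-1] * (1 - x_sde[i-1] / K)
            noise = sigma * x_sde[i-1] * np.sqrt(dt) * rng.standard_normal()
            x_sde[i] = x_sde[i-1] + dt * drift + noise
        return x_ode, x_sde

    ode, sde = simulate()
    print(f"final cell count: ODE {ode[-1]:.0f}, one SDE path {sde[-1]:.0f}")
    ```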

  14. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao

    2015-01-01

    There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, few studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
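    To make the GSA step concrete, the sketch below implements a pick-freeze (Saltelli-type) estimator of first-order Sobol indices in plain NumPy. The unit-cost model, parameter set, and bounds are hypothetical stand-ins for the TEA inputs listed above.

    ```python
    import numpy as np

    def sobol_first_order(model, bounds, n=10000, seed=0):
        """Pick-freeze estimate of first-order Sobol indices (Saltelli 2010)."""
        rng = np.random.default_rng(seed)
        k = len(bounds)
        lo, hi = np.array(bounds).T
        A = rng.uniform(lo, hi, size=(n, k))
        B = rng.uniform(lo, hi, size=(n, k))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        s = []
        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # "freeze" all inputs except i
            s.append(np.mean(fB * (model(ABi) - fA)) / var)
        return np.array(s)

    # Hypothetical unit-cost model: feedstock price, interest rate, operating cost.
    uc = lambda X: 1.4 * X[:, 0] + 0.9 * X[:, 1] ** 2 + 0.2 * X[:, 2]
    print(sobol_first_order(uc, [(0.5, 1.5), (0.02, 0.12), (0.1, 0.3)]))
    ```

    A first-order index near 1 means that the parameter alone explains most of the output variance, which is how conclusions like "feedstock price and interest rate dominate" are quantified.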

  15. A Randomized Controlled Trial of the Effectiveness of Traditional and Mobile Public Health Communications With Health Care Providers.

    PubMed

    Baseman, Janet; Revere, Debra; Painter, Ian; Oberle, Mark; Duchin, Jeffrey; Thiede, Hanne; Nett, Randall; MacEachern, Dorothy; Stergachis, Andy

    2016-02-01

    Health care providers play an essential role in public health emergency preparedness and response. We conducted a 4-year randomized controlled trial to systematically compare the effectiveness of traditional and mobile communication strategies for sending time-sensitive public health messages to providers. Subjects (N=848) included providers who might be leveraged to assist with emergency preparedness and response activities, such as physicians, pharmacists, nurse practitioners, physician's assistants, and veterinarians. Providers were randomly assigned to a group that received time-sensitive quarterly messages via e-mail, fax, or cell phone text messaging (SMS) or to a no-message control group. Follow-up phone interviews elicited information about message receipt, topic recall, and perceived credibility and trustworthiness of message and source. Our main outcome measures were awareness and recall of message content, which was compared across delivery methods. Per-protocol analysis revealed that e-mail messages were recalled at a higher rate than were messages delivered by fax or SMS, whereas the as-treated analysis found that e-mail and fax groups had similar recall rates and both had higher recall rates than the SMS group. This is the first study to systematically evaluate the relative effectiveness of public health message delivery systems. Our findings provide guidance to improve public health agency communications with providers before, during, and after a public health emergency.

  16. Advances in biological dosimetry

    NASA Astrophysics Data System (ADS)

    Ivashkevich, A.; Ohnesorg, T.; Sparbier, C. E.; Elsaleh, H.

    2017-01-01

    Rapid retrospective biodosimetry methods are essential for the fast triage of persons occupationally or accidentally exposed to ionizing radiation. Identification and detection of a radiation specific molecular ‘footprint’ should provide a sensitive and reliable measurement of radiation exposure. Here we discuss conventional (cytogenetic) methods of detection and assessment of radiation exposure in comparison to emerging approaches such as gene expression signatures and DNA damage markers. Furthermore, we provide an overview of technical and logistic details such as type of sample required, time for sample preparation and analysis, ease of use and potential for a high throughput analysis.

  17. A hydrogeologic framework for characterizing summer streamflow sensitivity to climate warming in the Pacific Northwest, USA

    NASA Astrophysics Data System (ADS)

    Safeeq, M.; Grant, G. E.; Lewis, S. L.; Kramer, M. G.; Staab, B.

    2014-09-01

    Summer streamflows in the Pacific Northwest are largely derived from melting snow and groundwater discharge. As the climate warms, diminishing snowpack and earlier snowmelt will cause reductions in summer streamflow. Most regional-scale assessments of climate change impacts on streamflow use downscaled temperature and precipitation projections from general circulation models (GCMs) coupled with large-scale hydrologic models. Here we develop and apply an analytical hydrogeologic framework for characterizing summer streamflow sensitivity to a change in the timing and magnitude of recharge in a spatially explicit fashion. In particular, we incorporate the role of deep groundwater, which large-scale hydrologic models generally fail to capture, into streamflow sensitivity assessments. We validate our analytical streamflow sensitivities against two empirical measures of sensitivity derived using historical observations of temperature, precipitation, and streamflow from 217 watersheds. In general, empirically and analytically derived streamflow sensitivity values correspond. Although the selected watersheds cover a range of hydrologic regimes (e.g., rain-dominated, mixture of rain and snow, and snow-dominated), sensitivity validation was primarily driven by the snow-dominated watersheds, which are subjected to a wider range of change in recharge timing and magnitude as a result of increased temperature. Overall, two patterns emerge from this analysis: first, areas with high streamflow sensitivity also have higher summer streamflows as compared to low-sensitivity areas. Second, the level of sensitivity and spatial extent of highly sensitive areas diminishes over time as the summer progresses. Results of this analysis point to a robust, practical, and scalable approach that can help assess risk at the landscape scale, complement the downscaling approach, be applied to any climate scenario of interest, and provide a framework to assist land and water managers in adapting to an uncertain and potentially challenging future.

  18. The Modularized Software Package ASKI - Full Waveform Inversion Based on Waveform Sensitivity Kernels Utilizing External Seismic Wave Propagation Codes

    NASA Astrophysics Data System (ADS)

    Schumacher, F.; Friederich, W.

    2015-12-01

    We present the modularized software package ASKI, which is a flexible and extendable toolbox for seismic full waveform inversion (FWI) as well as sensitivity or resolution analysis operating on the sensitivity matrix. It utilizes established wave propagation codes for solving the forward problem and offers an alternative to the monolithic, inflexible and hard-to-modify codes that have typically been written for solving inverse problems. It is available under the GPL at www.rub.de/aski. The Gauss-Newton FWI method for 3D-heterogeneous elastic earth models is based on waveform sensitivity kernels and can be applied to inverse problems at various spatial scales in both Cartesian and spherical geometries. The kernels are derived in the frequency domain from Born scattering theory as the Fréchet derivatives of linearized full waveform data functionals, quantifying the influence of elastic earth model parameters on the particular waveform data values. As an important innovation, we keep two independent spatial descriptions of the earth model - one for solving the forward problem and one representing the inverted model updates. Thereby we account for the independent needs of spatial model resolution of forward and inverse problem, respectively. Due to pre-integration of the kernels over the (in general much coarser) inversion grid, storage requirements for the sensitivity kernels are dramatically reduced. ASKI can be flexibly extended to other forward codes by providing it with specific interface routines that contain knowledge about forward code-specific file formats and auxiliary information provided by the new forward code. In order to sustain flexibility, the ASKI tools must communicate via file output/input, thus large storage capacities need to be accessible in a convenient way. Storing the complete sensitivity matrix to file, however, permits the scientist full manual control over each step in a customized procedure of sensitivity/resolution analysis and full waveform inversion.

  19. The effect of free radical inhibitor on the sensitized radiation crosslinking and thermal processing stabilization of polyurethane shape memory polymers.

    PubMed

    Hearon, Keith; Smith, Sarah E; Maher, Cameron A; Wilson, Thomas S; Maitland, Duncan J

    2013-02-01

    The effects of free radical inhibitor on the electron beam crosslinking and thermal processing stabilization of novel radiation crosslinkable polyurethane shape memory polymers (SMPs) blended with acrylic radiation sensitizers have been determined. The SMPs in this study possess novel processing capabilities-that is, the ability to be melt processed into complex geometries as thermoplastics and crosslinked in a secondary step using electron beam irradiation. To increase susceptibility to radiation crosslinking, the radiation sensitizer pentaerythritol triacrylate (PETA) was solution blended with thermoplastic polyurethane SMPs made from 2-butene-1,4-diol and trimethylhexamethylene diisocyanate (TMHDI). Because thermoplastic melt processing methods such as injection molding are often carried out at elevated temperatures, sensitizer thermal instability is a major processing concern. Free radical inhibitor can be added to provide thermal stabilization; however, inhibitor can also undesirably inhibit radiation crosslinking. In this study, we quantified both the thermal stabilization and radiation crosslinking inhibition effects of the inhibitor 1,4-benzoquinone (BQ) on polyurethane SMPs blended with PETA. Sol/gel analysis of irradiated samples showed that the inhibitor had little to no inverse effects on gel fraction at concentrations of 0-10,000 ppm, and dynamic mechanical analysis showed only a slight negative correlation between BQ composition and rubbery modulus. The 1,4-benzoquinone was also highly effective in thermally stabilizing the acrylic sensitizers. The polymer blends could be heated to 150°C for up to five hours or to 125°C for up to 24 hours if stabilized with 10,000 ppm BQ and could also be heated to 125°C for up to 5 hours if stabilized with 1000 ppm BQ without sensitizer reaction occurring. We believe this study provides significant insight into methods for manipulation of the competing mechanisms of radiation crosslinking and thermal stabilization of radiation sensitizers, thereby facilitating further development of radiation crosslinkable thermoplastic SMPs.

  20. The effect of free radical inhibitor on the sensitized radiation crosslinking and thermal processing stabilization of polyurethane shape memory polymers

    NASA Astrophysics Data System (ADS)

    Hearon, Keith; Smith, Sarah E.; Maher, Cameron A.; Wilson, Thomas S.; Maitland, Duncan J.

    2013-02-01

    The effects of free radical inhibitor on the electron beam crosslinking and thermal processing stabilization of novel radiation crosslinkable polyurethane shape memory polymers (SMPs) blended with acrylic radiation sensitizers have been determined. The SMPs in this study possess novel processing capabilities—that is, the ability to be melt processed into complex geometries as thermoplastics and crosslinked in a secondary step using electron beam irradiation. To increase susceptibility to radiation crosslinking, the radiation sensitizer pentaerythritol triacrylate (PETA) was solution blended with thermoplastic polyurethane SMPs made from 2-butene-1,4-diol and trimethylhexamethylene diisocyanate (TMHDI). Because the thermoplastic melt processing methods such as injection molding are often carried out at elevated temperatures, sensitizer thermal instability is a major processing concern. Free radical inhibitor can be added to provide thermal stabilization; however, inhibitor can also undesirably inhibit radiation crosslinking. In this study, we quantified both the thermal stabilization and radiation crosslinking inhibition effects of the inhibitor 1,4-benzoquinone (BQ) on polyurethane SMPs blended with PETA. Sol/gel analysis of irradiated samples showed that the inhibitor had little to no inverse effects on gel fraction at concentrations of 0-10,000 ppm, and dynamic mechanical analysis showed only a slight negative correlation between BQ composition and rubbery modulus. The 1,4-benzoquinone was also highly effective in thermally stabilizing the acrylic sensitizers. The polymer blends could be heated to 150 °C for up to 5 h or to 125 °C for up to 24 h if stabilized with 10,000 ppm BQ and could also be heated to 125 °C for up to 5 h if stabilized with 1000 ppm BQ without sensitizer reaction occurring. We believe this study provides significant insight into methods for manipulation of the competing mechanisms of radiation crosslinking and thermal stabilization of radiation sensitizers, thereby facilitating further development of radiation crosslinkable thermoplastic SMPs.

  1. Statistical Performances of Resistive Active Power Splitter

    NASA Astrophysics Data System (ADS)

    Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul

    2016-03-01

    In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) is proposed. It is based on an active cell composed of a field-effect transistor in cascade with a shunted resistor at the input and the output (resistive amplifier topology). The PWS uncertainty versus resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, we can easily control the device gain by varying a resistance. This provides a useful tool for analysing the statistical sensitivity of the system in an uncertain environment.

  2. Spatial delineation, fluid-lithology characterization, and petrophysical modeling of deepwater Gulf of Mexico reservoirs through joint AVA deterministic and stochastic inversion of three-dimensional partially-stacked seismic amplitude data and well logs

    NASA Astrophysics Data System (ADS)

    Contreras, Arturo Javier

    This dissertation describes a novel Amplitude-versus-Angle (AVA) inversion methodology to quantitatively integrate pre-stack seismic data, well logs, geologic data, and geostatistical information. Deterministic and stochastic inversion algorithms are used to characterize flow units of deepwater reservoirs located in the central Gulf of Mexico. A detailed fluid/lithology sensitivity analysis was conducted to assess the nature of AVA effects in the study area. Standard AVA analysis indicates that the shale/sand interface represented by the top of the hydrocarbon-bearing turbidite deposits generate typical Class III AVA responses. Layer-dependent Biot-Gassmann analysis shows significant sensitivity of the P-wave velocity and density to fluid substitution, indicating that presence of light saturating fluids clearly affects the elastic response of sands. Accordingly, AVA deterministic and stochastic inversions, which combine the advantages of AVA analysis with those of inversion, have provided quantitative information about the lateral continuity of the turbidite reservoirs based on the interpretation of inverted acoustic properties and fluid-sensitive modulus attributes (P-Impedance, S-Impedance, density, and LambdaRho, in the case of deterministic inversion; and P-velocity, S-velocity, density, and lithotype (sand-shale) distributions, in the case of stochastic inversion). The quantitative use of rock/fluid information through AVA seismic data, coupled with the implementation of co-simulation via lithotype-dependent multidimensional joint probability distributions of acoustic/petrophysical properties, provides accurate 3D models of petrophysical properties such as porosity, permeability, and water saturation. Pre-stack stochastic inversion provides more realistic and higher-resolution results than those obtained from analogous deterministic techniques. Furthermore, 3D petrophysical models can be more accurately co-simulated from AVA stochastic inversion results. By combining AVA sensitivity analysis techniques with pre-stack stochastic inversion, geologic data, and awareness of inversion pitfalls, it is possible to substantially reduce the risk in exploration and development of conventional and non-conventional reservoirs. From the final integration of deterministic and stochastic inversion results with depositional models and analogous examples, the M-series reservoirs have been interpreted as stacked terminal turbidite lobes within an overall fan complex (the Miocene MCAVLU Submarine Fan System); this interpretation is consistent with previous core data interpretations and regional stratigraphic/depositional studies.

  3. Using Image Attributes to Assure Accurate Particle Size and Count Using Nanoparticle Tracking Analysis.

    PubMed

    Defante, Adrian P; Vreeland, Wyatt N; Benkstein, Kurt D; Ripple, Dean C

    2018-05-01

    Nanoparticle tracking analysis (NTA) obtains particle size by analysis of particle diffusion through a time series of micrographs and particle count by a count of imaged particles. The number of observed particles imaged is controlled by the scattering cross-section of the particles and by camera settings such as sensitivity and shutter speed. Appropriate camera settings are defined as those that image, track, and analyze a sufficient number of particles for statistical repeatability. Here, we test if image attributes, features captured within the image itself, can provide measurable guidelines to assess the accuracy for particle size and count measurements using NTA. The results show that particle sizing is a robust process independent of image attributes for model systems. However, particle count is sensitive to camera settings. Using open-source software analysis, it was found that a median pixel area, 4 pixels², results in a particle concentration within 20% of the expected value. The distribution of these illuminated pixel areas can also provide clues about the polydispersity of particle solutions prior to using a particle tracking analysis. Using the median pixel area serves as an operator-independent means to assess the quality of the NTA measurement for count. Published by Elsevier Inc.
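    A sketch of the image-attribute check described above, under stated assumptions (a synthetic frame and a simple global threshold): segment bright spots and report the median illuminated-pixel area per particle, to be compared against the ~4 pixel² guideline.

    ```python
    import numpy as np
    from scipy import ndimage

    def median_particle_area(frame, threshold=None):
        """Median connected-component area (in pixels) of bright spots in a frame."""
        if threshold is None:
            threshold = frame.mean() + 3 * frame.std()   # simple global threshold
        mask = frame > threshold
        labels, n = ndimage.label(mask)                  # connected components
        if n == 0:
            return 0.0
        areas = np.bincount(labels.ravel())[1:]          # pixels per labeled particle
        return float(np.median(areas))

    # Synthetic stand-in for a NTA video frame.
    frame = np.random.default_rng(2).poisson(5.0, size=(480, 640)).astype(float)
    print(median_particle_area(frame))
    ```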

  4. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
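    A minimal sketch of the perturbation procedure described above: nudge one forcing series and record the change in the predicted response per unit perturbation. The toy predictor stands in for a trained MWA-ANN; all names and values are placeholders.

    ```python
    import numpy as np

    def forcing_sensitivity(predict, forcings, index, delta=0.01):
        """Finite-difference sensitivity of the response to forcing `index`."""
        base = predict(forcings)
        perturbed = forcings.copy()
        perturbed[:, index] += delta                    # e.g. a small rainfall bump
        return (predict(perturbed) - base) / delta      # sensitivity time series

    toy = lambda F: 0.8 * F[:, 0] - 0.3 * F[:, 1]       # stand-in for the trained ANN
    F = np.random.default_rng(3).random((365, 2))       # [rainfall MWA, pumping MWA]
    print(forcing_sensitivity(toy, F, index=0)[:5])
    ```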

  5. Increased sensitivity of prolonged P-wave during exercise stress test in detection of angiographically documented coronary artery disease.

    PubMed

    Wsol, Agnieszka; Wydra, Wioletta; Chmielewski, Marek; Swiatowiec, Andrzej; Kuch, Marek

    2017-01-01

    A retrospective study was designed to investigate P-wave duration changes in exercise stress test (EST) for the prediction of angiographically documented substantial coronary artery disease (CAD). We analyzed 265 patients who underwent EST and subsequent coronary angiography. Analysis of P-wave duration was performed in leads II, V5 at rest, and in the recovery period. The sensitivity and specificity for the isolated ST-segment depression were only 31% and 76%, respectively. The combination of ST-depression with other exercise-induced clinical and electrocardiographic abnormalities (chest pain, ventricular arrhythmia, hypotension, left bundle branch block) was characterized by 41% sensitivity and 69% specificity. The combination of abnormal recovery P-wave duration (≥ 120 ms) with ST-depression and other exercise-induced abnormalities had 83% sensitivity but only 20% specificity. Combined analysis of increased delta P-wave duration, ST-depression and other exercise-induced abnormalities had 69% sensitivity and 42% specificity. Sensitivity and specificity of the increase in delta P-wave duration for left CAD was 69% and 47%, respectively, and for 3-vessel CAD 70% and 50%, respectively. The presence of arterial hypertension negatively influenced the prognostic value of P-wave changes in the stress test. The results of the study show that adding assessment of P-wave duration changes to ST-depression analysis and other exercise-induced abnormalities increases the sensitivity of EST, especially for left CAD and 3-vessel coronary disease. We have also provided evidence for the negative influence of the presence of arterial hypertension on the predictive value of P-wave changes in the stress test. (Cardiol J 2017; 24, 2: 159-166).
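    For reference, the definitions behind the percentages quoted above, written for a 2×2 table of true/false positives and negatives:

    ```latex
    \mathrm{sensitivity} = \frac{TP}{TP + FN}, \qquad \mathrm{specificity} = \frac{TN}{TN + FP}
    ```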

  6. Characterization of Homopolymer and Polymer Blend Films by Phase Sensitive Acoustic Microscopy

    NASA Astrophysics Data System (ADS)

    Ngwa, Wilfred; Wannemacher, Reinhold; Grill, Wolfgang

    2003-03-01

    We have used phase sensitive acoustic microscopy (PSAM) to study homopolymer thin films of polystyrene (PS) and poly(methyl methacrylate) (PMMA), as well as PS/PMMA blend films. We show from our results that PSAM can be used as a complementary and highly valuable technique for elucidating the three-dimensional (3D) morphology and micromechanical properties of thin films. Three-dimensional image acquisition with vector contrast provides the basis for complex V(z) analysis (per image pixel), 3D image processing, height profiling, and subsurface image analysis of the polymer films. Results show good agreement with previous studies. In addition, important new information on the three-dimensional structure and properties of polymer films is obtained. Homopolymer film structure analysis reveals (pseudo-)dewetting by retraction of droplets, resulting in a morphology that can serve as a starting point for the analysis of polymer blend thin films. The outcomes of confocal laser scanning microscopy studies performed on the same samples are correlated with the obtained results. Advantages and limitations of PSAM are discussed.

  7. [Assessment of the validity and utility of the Beijing questionnaire as a tool to evaluate for obstructive sleep apnea hypopnea syndrome].

    PubMed

    Wang, X T; Gao, L M; Xu, W; Ding, X

    2016-10-20

    Objective: To test the Beijing questionnaire as a means of identifying patients with obstructive sleep apnea hypopnea syndrome (OSAHS). Method: The Beijing questionnaire is designed as an explorative tool consisting of 11 questions for patients with obstructive sleep apnea hypopnea, targeting key symptoms including snoring, apneas, daytime sleepiness, hypertension, and overweight. Questionnaires were given to 1,336 female participants aged ≥40 years living in communities and to 198 adult male subjects visiting clinics. After factor analysis, reliability checks, and an internal consistency study, 59 female and 198 male subjects underwent sleep studies. Correlation analysis was performed between Beijing questionnaire scores and the apnea-hypopnea index from in-laboratory polysomnography. Receiver operating characteristic curves were constructed to determine optimal sensitivity and specificity. Twenty-four male subjects were recorded in the sleep laboratory again after surgery. Result: Factor analysis reduced the 11 questions of the scale to four common factors, as designed: snoring, apneas, other symptoms, and risk factors. Cronbach's α coefficient of the scale reached 0.7, and there was an acceptable level of test-retest reliability (r=0.619, P<0.01). The apnea-hypopnea indices were significantly correlated with Beijing questionnaire scores (P<0.01). For women, a Beijing questionnaire score of 19.5 provided a sensitivity of 74.3% and a specificity of 62.5%; for men, a score of 22.5 provided a sensitivity of 90.9% and a specificity of 54.5%. Postoperative Beijing questionnaire scores changed with the apnea-hypopnea indices. Conclusion: This questionnaire has good validity and reliability and appears to be valid and sensitive to clinical change. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.
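    Cut-offs of this kind are typically chosen by sweeping a receiver operating characteristic curve and maximizing Youden's J (sensitivity + specificity − 1); a hedged sketch with synthetic scores, not the authors' data or code:

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(4)
    # Synthetic questionnaire scores: OSAHS cases score higher on average.
    scores = np.concatenate([rng.normal(24, 4, 120), rng.normal(18, 4, 80)])
    has_osahs = np.concatenate([np.ones(120, dtype=int), np.zeros(80, dtype=int)])

    fpr, tpr, thresholds = roc_curve(has_osahs, scores)
    best = np.argmax(tpr - fpr)                       # Youden's J statistic
    print(f"cut-off={thresholds[best]:.1f}  sensitivity={tpr[best]:.2f}  "
          f"specificity={1 - fpr[best]:.2f}")
    ```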

  8. Evaluating language environment analysis system performance for Chinese: a pilot study in Shanghai.

    PubMed

    Gilkerson, Jill; Zhang, Yiwen; Xu, Dongxin; Richards, Jeffrey A; Xu, Xiaojuan; Jiang, Fan; Harnsberger, James; Topping, Keith

    2015-04-01

    The purpose of this study was to evaluate performance of the Language Environment Analysis (LENA) automated language-analysis system for the Chinese Shanghai dialect and Mandarin (SDM) languages. Volunteer parents of 22 children aged 3-23 months were recruited in Shanghai. Families provided daylong in-home audio recordings using LENA. A native speaker listened to 15 min of randomly selected audio samples per family to label speaker regions and provide Chinese character and SDM word counts for adult speakers. LENA segment labeling and counts were compared with rater-based values. LENA demonstrated good sensitivity in identifying adult and child; this sensitivity was comparable to that of American English validation samples. Precision was strong for adults but less so for children. LENA adult word count correlated strongly with both Chinese characters and SDM word counts. LENA conversational turn counts correlated similarly with rater-based counts after the exclusion of three unusual samples. Performance related to some degree to child age. LENA adult word count and conversational turn provided reasonably accurate estimates for SDM over the age range tested. Theoretical and practical considerations regarding LENA performance in non-English languages are discussed. Despite the pilot nature and other limitations of the study, results are promising for broader cross-linguistic applications.

  9. A special protection scheme utilizing trajectory sensitivity analysis in power transmission

    NASA Astrophysics Data System (ADS)

    Suriyamongkol, Dan

    In recent years, new measurement techniques have provided opportunities to improve the North American Power System observability, control and protection. This dissertation discusses the formulation and design of a special protection scheme based on a novel utilization of trajectory sensitivity techniques with inputs consisting of system state variables and parameters. Trajectory sensitivity analysis (TSA) has been used in previous publications as a method for power system security and stability assessment, and the mathematical formulation of TSA lends itself well to some of the time domain power system simulation techniques. Existing special protection schemes often have limited sets of goals and control actions. The proposed scheme aims to maintain stability while using as many control actions as possible. The approach here will use the TSA in a novel way by using the sensitivities of system state variables with respect to state parameter variations to determine the state parameter controls required to achieve the desired state variable movements. The initial application will operate based on the assumption that the modeled power system has full system observability, and practical considerations will be discussed.

  10. Highly sensitive image-derived indices of water-stressed plants using hyperspectral imaging in SWIR and histogram analysis

    PubMed Central

    Kim, David M.; Zhang, Hairong; Zhou, Haiying; Du, Tommy; Wu, Qian; Mockler, Todd C.; Berezin, Mikhail Y.

    2015-01-01

    The optical signature of leaves is an important monitoring and predictive parameter for a variety of biotic and abiotic stresses, including drought. Such signatures derived from spectroscopic measurements provide vegetation indices – a quantitative method for assessing plant health. However, the commonly used metrics suffer from low sensitivity. Relatively small changes in water content in moderately stressed plants demand high-contrast imaging to distinguish affected plants. We present a new approach in deriving sensitive indices using hyperspectral imaging in a short-wave infrared range from 800 nm to 1600 nm. Our method, based on high spectral resolution (1.56 nm) instrumentation and image processing algorithms (quantitative histogram analysis), enables us to distinguish a moderate water stress equivalent of 20% relative water content (RWC). The identified image-derived indices 15XX nm/14XX nm (i.e. 1529 nm/1416 nm) were superior to common vegetation indices, such as WBI, MSI, and NDWI, with significantly better sensitivity, enabling early diagnostics of plant health. PMID:26531782

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karaulanov, Todor; Savukov, Igor; Kim, Young Jin

    We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams, facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell and prevents the pump beam from entering the probe detection channel. By coupling the lasers into multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.

  12. The Impact of Quantitative Data Provided by a Multi-spectral Digital Skin Lesion Analysis Device on Dermatologists' Decisions to Biopsy Pigmented Lesions.

    PubMed

    Farberg, Aaron S; Winkelmann, Richard R; Tucker, Natalie; White, Richard; Rigel, Darrell S

    2017-09-01

    BACKGROUND: Early diagnosis of melanoma is critical to survival. New technologies, such as a multi-spectral digital skin lesion analysis (MSDSLA) device [MelaFind, STRATA Skin Sciences, Horsham, Pennsylvania], may be useful to enhance clinician evaluation of concerning pigmented skin lesions. Previous studies evaluated the effect of only the binary output. OBJECTIVE: The objective of this study was to determine how dermatologists' decisions to biopsy pigmented lesions are impacted by providing both the underlying classifier score (CS) and the associated probability risk provided by MSDSLA. This outcome was also compared against the improvement reported with the provision of only the binary output. METHODS: Dermatologists attending an educational conference evaluated 50 pigmented lesions (25 melanomas and 25 benign lesions). Participants were asked if they would biopsy the lesion based on clinical images, and were asked this question again after being shown MSDSLA data that included the probability graphs and classifier score. RESULTS: Data were analyzed from a total of 160 United States board-certified dermatologists. Biopsy sensitivity for melanoma improved from 76 percent following clinical evaluation to 92 percent after quantitative MSDSLA information was provided (p<0.0001). Specificity improved from 52 percent to 79 percent (p<0.0001). The positive predictive value increased from 61 percent to 81 percent (p<0.01) when the quantitative data were provided. Negative predictive value also increased (68% vs. 91%, p<0.01), and overall biopsy accuracy was greater with MSDSLA (64% vs. 86%, p<0.001). Interrater reliability improved (intraclass correlation 0.466 before, 0.559 after). CONCLUSION: Incorporating the classifier score and probability data into physician evaluation of pigmented lesions led to both increased sensitivity and specificity, thereby resulting in more accurate biopsy decisions.
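    The reported predictive values follow from sensitivity, specificity, and the 50% melanoma prevalence of the 50-lesion set via Bayes' rule; the quick check below reproduces the post-MSDSLA figures.

    ```python
    def ppv_npv(sens, spec, prevalence):
        """Positive and negative predictive values from Bayes' rule."""
        ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
        npv = spec * (1 - prevalence) / ((1 - sens) * prevalence + spec * (1 - prevalence))
        return ppv, npv

    ppv, npv = ppv_npv(sens=0.92, spec=0.79, prevalence=0.5)
    print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")   # -> PPV = 81%, NPV = 91%
    ```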

  13. HF Propagation sensitivity study and system performance analysis with the Air Force Coverage Analysis Program (AFCAP)

    NASA Astrophysics Data System (ADS)

    Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.

    2017-12-01

    The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables which define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea state models for Bragg backscatter. We also investigate the impacts of including clutter effects due to field-aligned backscatter from small scale ionization structures at varied levels of severity as defined by the climatological WideBand Model (WBMOD). In the absence of additional user-provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option for insertion of a user-defined gridded ionospheric representation, we compare output from the tool using the IRI and ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).

  14. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
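    For orientation, sensitivity coefficients of this kind are conventionally obtained by integrating variational equations alongside the kinetics. A generic statement, not LSENS's exact notation, with η denoting an initial value or rate-coefficient parameter:

    ```latex
    \frac{dy_i}{dt} = f_i(\mathbf{y}; \boldsymbol{\eta}), \qquad
    w_{ij} \equiv \frac{\partial y_i}{\partial \eta_j}, \qquad
    \frac{dw_{ij}}{dt} = \sum_k \frac{\partial f_i}{\partial y_k}\, w_{kj} + \frac{\partial f_i}{\partial \eta_j}
    ```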

  15. Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points

    PubMed Central

    Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.

    2015-01-01

    Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s⁻²)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10⁻³ m·s⁻². Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
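    A minimal sketch of the cut-point sweep described above, with synthetic counts standing in for the Promove 3D output; the activity distribution and day count are assumptions.

    ```python
    import numpy as np

    OPTIMAL = 1660e-3                                  # m·s⁻², from the abstract
    counts = np.random.default_rng(5).gamma(2.0, 1.2, size=5 * 24 * 60)  # 5 days of minutes

    for factor in (0.8, 0.9, 1.0, 1.1, 1.2):           # sweep ±20% around the cut-point
        cut = factor * OPTIMAL
        sedentary_min = int(np.sum(counts < cut))      # minutes classified sedentary
        print(f"cut-point {cut:.2f}: {sedentary_min} sedentary minutes")
    ```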

  16. Pavement thickness design for local roads in Iowa : tech brief.

    DOT National Transportation Integrated Search

    2010-01-01

    The main objectives of this research are to: 1) identify the most critical design input parameters, 2) determine the minimum pavement thickness, and 3) develop new pavement design and sensitivity analysis (PD&SA) software which can provide the most a...

  17. Phosphorus Dynamics in Flooded Wetlands: I. Model Development and Sensitivity Analysis

    EPA Science Inventory

    Wetlands are unique ecosystems that are recognized as a transition between terrestrial and aquatic ecosystems. Wetlands provide habitats that support wildlife and biodiversity and act as natural purifiers for sediment and nutrient laden runoff from upland regions. They receive,...

  18. Bayesian sensitivity analysis of bifurcating nonlinear models

    NASA Astrophysics Data System (ADS)

    Becker, W.; Worden, K.; Rowson, J.

    2013-01-01

    Sensitivity analysis allows one to investigate how changes in input parameters to a system affect the output. When computational expense is a concern, metamodels such as Gaussian processes can offer considerable computational savings over Monte Carlo methods, albeit at the expense of introducing a data modelling problem. In particular, Gaussian processes assume a smooth, non-bifurcating response surface. This work highlights a recent extension to Gaussian processes which uses a decision tree to partition the input space into homogeneous regions, and then fits separate Gaussian processes to each region. In this way, bifurcations can be modelled at region boundaries and different regions can have different covariance properties. To test this method, both the treed and standard methods were applied to the bifurcating response of a Duffing oscillator and a bifurcating FE model of a heart valve. It was found that the treed Gaussian process provides a practical way of performing uncertainty and sensitivity analysis on large, potentially-bifurcating models, which cannot be dealt with by using a single GP, although an open problem remains how to manage bifurcation boundaries that are not parallel to coordinate axes.
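    An illustrative treed-GP sketch under stated assumptions: an sklearn decision tree partitions the input space, then an independent Gaussian process is fit per leaf, so a jump in the response can sit on a partition boundary. The synthetic bifurcating data and model settings mirror the idea, not the authors' implementation.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(6)
    X = rng.uniform(-1, 1, size=(400, 1))
    # Smooth response with a jump (bifurcation-like discontinuity) at x = 0.
    y = np.where(X[:, 0] < 0, np.sin(6 * X[:, 0]), 2.0 + np.sin(6 * X[:, 0]))

    tree = DecisionTreeRegressor(max_leaf_nodes=4).fit(X, y)
    leaves = tree.apply(X)                             # leaf index per sample
    gps = {leaf: GaussianProcessRegressor().fit(X[leaves == leaf], y[leaves == leaf])
           for leaf in np.unique(leaves)}

    def predict(X_new):
        """Route each point to its leaf's GP and predict."""
        out = np.empty(len(X_new))
        for leaf, gp in gps.items():
            idx = tree.apply(X_new) == leaf
            if idx.any():
                out[idx] = gp.predict(X_new[idx])
        return out

    print(predict(np.array([[-0.5], [0.5]])))
    ```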

  19. Sensitivity analysis of Repast computational ecology models with R/Repast.

    PubMed

    Prestes García, Antonio; Rodríguez-Patón, Alfonso

    2016-12-01

    Computational ecology is an emerging interdisciplinary discipline founded mainly on modeling and simulation methods for studying ecological systems. Among the existing modeling formalisms, the individual-based modeling is particularly well suited for capturing the complex temporal and spatial dynamics as well as the nonlinearities arising in ecosystems, communities, or populations due to individual variability. In addition, being a bottom-up approach, it is useful for providing new insights on the local mechanisms which are generating some observed global dynamics. Of course, no conclusions about model results could be taken seriously if they are based on a single model execution and they are not analyzed carefully. Therefore, a sound methodology should always be used for underpinning the interpretation of model results. The sensitivity analysis is a methodology for quantitatively assessing the effect of input uncertainty in the simulation output which should be incorporated compulsorily to every work based on in-silico experimental setup. In this article, we present R/Repast a GNU R package for running and analyzing Repast Simphony models accompanied by two worked examples on how to perform global sensitivity analysis and how to interpret the results.

  20. Processing postmortem specimens with C18-carboxypropylbetaine and analysis by PCR to develop an antemortem test for Mycobacterium avium infections in ducks.

    PubMed

    Thornton, C G; Cranfield, M R; MacLellan, K M; Brink, T L; Strandberg, J D; Carlin, E A; Torrelles, J B; Maslow, J N; Hasson, J L; Heyl, D M; Sarro, S J; Chatterjee, D; Passen, S

    1999-03-01

    Mycobacterium avium is the causative agent of the avian mycobacteriosis commonly known as avian tuberculosis (ATB). This infection causes disseminated disease, is difficult to diagnose, and is of serious concern because it causes significant mortality in birds. A new method was developed for processing specimens for an antemortem screening test for ATB. This novel method uses the zwitterionic detergent C18-carboxypropylbetaine (CB-18). Blood, bone marrow, bursa, and fecal specimens from 28 ducks and swabs of 20 lesions were processed with CB-18 for analysis by smear, culture, and polymerase chain reaction (PCR). Postmortem examination confirmed nine of these birds as either positive or highly suspect for disseminated disease. The sensitivities of smear, culture, and PCR, relative to postmortem analysis and independent of specimen type, were 44.4%, 88.9%, and 100%, respectively, and the specificities were 84.2%, 57.9%, and 15.8%, respectively. Reductions in specificity were due primarily to results among fecal specimens. However, these results were clustered among a subset of birds, suggesting that these tests actually identified birds in early stages of the disease. Restriction fragment length polymorphism mapping identified one strain of M. avium (serotype 1) that was isolated from lesions, bursa, bone marrow, blood, and feces of all but three of the culture-positive birds. In birds with confirmed disease, blood had the lowest sensitivity and the highest specificity by all diagnostic methods. Swabs of lesions provided the highest sensitivity by smear and culture (33.3% and 77.8%, respectively), whereas fecal specimens had the highest sensitivity by PCR (77.8%). The results of this study indicate that processing fecal specimens with CB-18, followed by PCR analysis, may provide a valuable first step for monitoring the presence of ATB in birds.

  1. Computing the sensitivity of drag and lift in flow past a circular cylinder: Time-stepping versus self-consistent analysis

    NASA Astrophysics Data System (ADS)

    Meliga, Philippe

    2017-07-01

    We provide in-depth scrutiny of two methods making use of adjoint-based gradients to compute the sensitivity of drag in the two-dimensional, periodic flow past a circular cylinder (Re≲189 ): first, the time-stepping analysis used in Meliga et al. [Phys. Fluids 26, 104101 (2014), 10.1063/1.4896941] that relies on classical Navier-Stokes modeling and determines the sensitivity to any generic control force from time-dependent adjoint equations marched backwards in time; and, second, a self-consistent approach building on the model of Mantič-Lugo et al. [Phys. Rev. Lett. 113, 084501 (2014), 10.1103/PhysRevLett.113.084501] to compute semilinear approximations of the sensitivity to the mean and fluctuating components of the force. Both approaches are applied to open-loop control by a small secondary cylinder and allow identifying the sensitive regions without knowledge of the controlled states. The theoretical predictions obtained by time-stepping analysis reproduce well the results obtained by direct numerical simulation of the two-cylinder system. So do the predictions obtained by self-consistent analysis, which corroborates the relevance of the approach as a guideline for efficient and systematic control design in the attempt to reduce drag, even though the Reynolds number is not close to the instability threshold and the oscillation amplitude is not small. This is because, unlike simpler approaches relying on linear stability analysis to predict the main features of the flow unsteadiness, the semilinear framework encompasses rigorously the effect of the control on the mean flow, as well as on the finite-amplitude fluctuation that feeds back nonlinearly onto the mean flow via the formation of Reynolds stresses. Such results are especially promising as the self-consistent approach determines the sensitivity from time-independent equations that can be solved iteratively, which makes it generally less computationally demanding. We ultimately discuss the extent to which relevant information can be gained from a hybrid modeling computing self-consistent sensitivities from the postprocessing of DNS data. Application to alternative control objectives such as increasing the lift and alleviating the fluctuating drag and lift is also discussed.
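
    As background for readers unfamiliar with adjoint-based gradients, the identity both methods exploit can be sketched in simplified form; this is the generic structure of such sensitivity analyses, written here as an illustration rather than the papers' exact formulation:

    ```latex
    % Variation of a time-averaged cost functional J (e.g. mean drag) under a
    % small steady control force: the gradient is the time-averaged adjoint
    % velocity field, so sensitive regions are mapped without computing any
    % controlled state.
    \delta J \;=\; \int_{\Omega} \overline{\mathbf{u}^{\dagger}} \cdot \delta\mathbf{f}\;\mathrm{d}\Omega
    \qquad \Longrightarrow \qquad
    \nabla_{\mathbf{f}} J \;=\; \overline{\mathbf{u}^{\dagger}}
    ```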

  2. New approaches to the analysis of complex samples using fluorescence lifetime techniques and organized media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertz, P.R.

    Fluorescence spectroscopy is a highly sensitive and selective tool for the analysis of complex systems. In order to investigate the efficacy of several steady state and dynamic techniques for the analysis of complex systems, this work focuses on two types of complex, multicomponent samples: petrolatums and coal liquids. It is shown in these studies that dynamic, fluorescence lifetime-based measurements provide enhanced discrimination between complex petrolatum samples. Additionally, improved quantitative analysis of multicomponent systems is demonstrated via incorporation of organized media in coal liquid samples. This research provides the first systematic studies of (1) multifrequency phase-resolved fluorescence spectroscopy for dynamic fluorescence spectral fingerprinting of complex samples, and (2) the incorporation of bile salt micellar media to improve accuracy and sensitivity for characterization of complex systems. In the petroleum studies, phase-resolved fluorescence spectroscopy is used to combine spectral and lifetime information through the measurement of phase-resolved fluorescence intensity. The intensity is collected as a function of excitation and emission wavelengths, angular modulation frequency, and detector phase angle. This multidimensional information enhances the ability to distinguish between complex samples with similar spectral characteristics. Examination of the eigenvalues and eigenvectors from factor analysis of phase-resolved and steady state excitation-emission matrices, using chemometric methods of data analysis, confirms that phase-resolved fluorescence techniques offer improved discrimination between complex samples as compared with conventional steady state methods.

  3. Quantitative pre-clinical screening of therapeutics for joint diseases using contrast enhanced micro-computed tomography.

    PubMed

    Willett, N J; Thote, T; Hart, M; Moran, S; Guldberg, R E; Kamath, R V

    2016-09-01

    The development of effective therapies for cartilage protection has been limited by a lack of efficient quantitative cartilage imaging modalities in pre-clinical in vivo models. Our objectives were two-fold: first, to validate a new contrast-enhanced 3D imaging analysis technique, equilibrium partitioning of an ionic contrast agent-micro computed tomography (EPIC-μCT), in a rat medial meniscal transection (MMT) osteoarthritis (OA) model; and second, to quantitatively assess the sensitivity of EPIC-μCT to detect the effects of matrix metalloproteinase inhibitor (MMPi) therapy on cartilage degeneration. Rats underwent MMT surgery and tissues were harvested at 1, 2, and 3 weeks post-surgery, or rats received an MMPi or vehicle treatment and tissues were harvested 3 weeks post-surgery. Parameters of disease progression were evaluated using histopathology and EPIC-μCT. Correlations and power analyses were performed to compare the techniques. EPIC-μCT was shown to provide simultaneous 3D quantification of multiple parameters, including cartilage degeneration and osteophyte formation. In MMT animals treated with MMPi, OA progression was attenuated, as measured by 3D parameters such as lesion volume and osteophyte size. A post-hoc power analysis showed that 3D parameters from EPIC-μCT were more sensitive than 2D parameters, requiring fewer animals to detect a therapeutic effect of MMPi. 2D parameters were comparable between EPIC-μCT and histopathology. This study demonstrated that EPIC-μCT has high sensitivity to provide 3D structural and compositional measurements of cartilage and bone in the joint. EPIC-μCT can be used in combination with histology to provide a comprehensive analysis to screen new potential therapies. Copyright © 2016 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.
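
    The post-hoc power argument can be made concrete with a small sketch. The standardized effect sizes below are hypothetical placeholders rather than the paper's values; the point is only that a larger effect size, as seen with the 3D parameters, translates into fewer animals per group at a fixed significance level and power.

    ```python
    # Hedged sketch: sample size per group at alpha = 0.05 and 80% power.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()
    for label, effect_size in [("3D EPIC-uCT parameter", 1.6), ("2D parameter", 0.9)]:
        n = power.solve_power(effect_size=effect_size, alpha=0.05, power=0.8)
        print(f"{label}: ~{n:.0f} animals per group")
    ```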

  4. IUS solid rocket motor contamination prediction methods

    NASA Technical Reports Server (NTRS)

    Mullen, C. R.; Kearnes, J. H.

    1980-01-01

    A series of computer codes was developed to predict contamination of sensitive spacecraft surfaces produced by solid rocket motors. Subscale and flight test data have confirmed some of the analytical results. Application of the analysis tools to a typical spacecraft has provided early identification of potential spacecraft contamination problems and insight into their solutions, e.g., flight plan modifications, plume or outgassing shields, and/or contamination covers.

  5. Analysis of beryllium and depleted uranium: An overview of detection methods in aerosols and soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camins, I.; Shinn, J.H.

    We conducted a survey of commercially available methods for analysis of beryllium and depleted uranium in aerosols and soils to find a reliable, cost-effective, and sufficiently precise method for researchers involved in environmental testing at the Yuma Proving Ground, Yuma, Arizona. Criteria used for evaluation include cost, method of analysis, specificity, sensitivity, reproducibility, applicability, and commercial availability. We found that atomic absorption spectrometry with graphite furnace meets these criteria for testing samples for beryllium. We found that this method can also be used to test samples for depleted uranium. However, atomic absorption with graphite furnace is not as sensitive a measurement method for depleted uranium as it is for beryllium, so we recommend that quality control of depleted uranium analysis be maintained by testing 10 of every 1000 samples by neutron activation analysis. We also evaluated 45 companies and institutions that provide analyses of beryllium and depleted uranium. 5 refs., 1 tab.

  6. Sol-gel titania-coated needles for solid phase dynamic extraction-GC/MS analysis of desomorphine and desocodeine.

    PubMed

    Su, Chi-Ju; Srimurugan, Sankarewaran; Chen, Chinpiao; Shu, Hun-Chi

    2011-01-01

    Novel sol-gel titania film coated needles for solid-phase dynamic extraction (SPDE)-GC/MS analysis of desomorphine and desocodeine are described. The high thermal stability of the titania film permits efficient extraction and analysis of poorly volatile opiate drugs. The influences of sol-gel reaction time, coating layers, and extraction and desorption time and temperature on SPDE needle performance were investigated. The deuterium-labeled internal standard was introduced either during the extraction of the analyte or by direct injection into the GC after the extraction process. The latter method was shown to be more sensitive for the analysis of water and urine samples containing opiate drugs. The proposed conditions provided a wide linear range (5-5000 ppb), satisfactory linearity with R(2) values from 0.9958 to 0.9999, and high sensitivity, with LOQs of 1.0-5.0 ng/g. The sol-gel titania film coated needle with SPDE-GC/MS promises to be a valuable technique for desomorphine and desocodeine analysis in urine.

  7. Cost-effectiveness of point-of-care testing for dehydration in the pediatric ED.

    PubMed

    Whitney, Rachel E; Santucci, Karen; Hsiao, Allen; Chen, Lei

    2016-08-01

    Acute gastroenteritis (AGE) and subsequent dehydration account for a large proportion of pediatric emergency department (PED) visits. Point-of-care (POC) testing has been used in conjunction with clinical assessment to determine the degree of dehydration. Despite the wide acceptance of POC testing, little formal cost-effectiveness analysis of POC testing in the PED exists. We aim to examine the cost-effectiveness of using POC electrolyte testing vs traditional serum chemistry testing in the PED for children with AGE. This was a cost-effectiveness analysis using data from a randomized control trial of children with AGE. A decision analysis model was constructed to calculate cost savings from the points of view of the payer and the provider. We used parameters obtained from the trial, including cost of testing, admission rates, cost of admission, and length of stay. Sensitivity analyses were performed to evaluate the stability of our model. Using the data set of 225 subjects, POC testing results in a cost savings of $303.30 per patient compared with traditional serum testing from the point of view of the payer. From the point of view of the provider, POC testing results in consistent mean savings of $36.32 ($8.29-$64.35) per patient. Sensitivity analyses demonstrated the stability of the model and consistent savings. This decision analysis provides evidence that POC testing in children with gastroenteritis-related moderate dehydration results in significant cost savings from the points of view of payers and providers compared to traditional serum chemistry testing. Copyright © 2016 Elsevier Inc. All rights reserved.
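
    The structure of such a decision-analysis model is simple enough to sketch. All numbers below are hypothetical placeholders (the study's actual parameters, such as admission rates and costs, came from its trial data); the sketch only shows how a per-patient expected cost is compared between the two testing arms.

    ```python
    # Hedged sketch of a two-arm expected-cost comparison; all inputs hypothetical.
    def expected_cost(test_cost, p_admit, admit_cost):
        return test_cost + p_admit * admit_cost

    poc  = expected_cost(test_cost=10.0, p_admit=0.20, admit_cost=3000.0)
    trad = expected_cost(test_cost=25.0, p_admit=0.28, admit_cost=3000.0)
    print("savings per patient:", trad - poc)
    ```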

  8. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis research is of limited accuracy and reference value because the underlying mathematical models are relatively simple, changes in load and in the initial displacement of the servo cylinder piston are ignored, and experimental verification is often lacking. In view of these deficiencies, a nonlinear mathematical model is established in this paper, including the dynamic characteristics of the servo valve, the nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram is built for the hydraulic drive unit closed-loop position control, together with the state equations. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the expression of the sensitivity equations based on the nonlinear mathematical model is obtained. Using the structural parameters of the hydraulic drive unit, its working parameters, fluid transmission characteristics, and measured friction-velocity curves, a simulation analysis of the hydraulic drive unit is completed on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm, and 10 mm. The simulation results indicate that the developed nonlinear mathematical model is adequate, as shown by comparing the experimental and simulated step-response curves under different constant loads. Sensitivity function time-history curves for seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These index values are calculated and shown in histograms under different working conditions, and their patterns are analyzed. The sensitivity index values of four measurable parameters (supply pressure, proportional gain, initial position of the servo cylinder piston, and load force) are then verified experimentally on a hydraulic drive unit test platform; the experiments show that the sensitivity analysis results obtained through simulation approximate the test results. This research identifies the sensitivity characteristics of each parameter of the hydraulic drive unit and the main and secondary performance-affecting parameters under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
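
    In standard notation, the time-varying coefficient matrix and free-term matrix referred to above are the two terms of the first-order sensitivity equation obtained by differentiating the state equation with respect to a parameter. This is the generic form, not the paper's specific seventeen-parameter system:

    ```latex
    % State equation, sensitivity definition, and sensitivity equation:
    % the Jacobian along the trajectory is the time-varying coefficient
    % matrix; the partial derivative in the parameter is the free term.
    \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \boldsymbol{\theta}, t), \qquad
    \mathbf{S}_j \equiv \frac{\partial \mathbf{x}}{\partial \theta_j}, \qquad
    \dot{\mathbf{S}}_j = \frac{\partial \mathbf{f}}{\partial \mathbf{x}}\, \mathbf{S}_j
      + \frac{\partial \mathbf{f}}{\partial \theta_j}
    ```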

  9. Costs and Effects of a Telephonic Diabetes Self-Management Support Intervention Using Health Educators

    PubMed Central

    Schechter, Clyde B.; Walker, Elizabeth A.; Ortega, Felix M.; Chamany, Shadi; Silver, Lynn D.

    2015-01-01

    Background Self-management is crucial to successful glycemic control in patients with diabetes, yet it requires patients to initiate and sustain complicated behavioral changes. Support programs can improve glycemic control, but may be expensive to implement. We report here an analysis of the costs of a successful telephone-based self-management support program delivered by lay health educators utilizing a municipal health department A1c registry, and relate them to near-term effectiveness. Methods Costs of implementation were assessed by micro-costing of all resources used. Per-capita costs and cost-effectiveness ratios from the perspective of the service provider are estimated for net A1c reduction, and percentages of patients achieving A1c reductions of 0.5 and 1.0 percentage points. One-way sensitivity analyses of key cost elements, and a Monte Carlo sensitivity analysis are reported. Results The telephone intervention was provided to 443 people at a net cost of $187.61 each. Each percentage point of net A1c reduction was achieved at a cost of $464.41. Labor costs were the largest component of costs, and cost-effectiveness was most sensitive to the wages paid to the health educators. Conclusions Effective telephone-based self-management support for people in poor diabetes control can be delivered by health educators at moderate cost relative to the gains achieved. The costs of doing so are most sensitive to the prevailing wage for the health educators. PMID:26750743

  10. Costs and effects of a telephonic diabetes self-management support intervention using health educators.

    PubMed

    Schechter, Clyde B; Walker, Elizabeth A; Ortega, Felix M; Chamany, Shadi; Silver, Lynn D

    2016-03-01

    Self-management is crucial to successful glycemic control in patients with diabetes, yet it requires patients to initiate and sustain complicated behavioral changes. Support programs can improve glycemic control, but may be expensive to implement. We report here an analysis of the costs of a successful telephone-based self-management support program delivered by lay health educators utilizing a municipal health department A1c registry, and relate them to near-term effectiveness. Costs of implementation were assessed by micro-costing of all resources used. Per-capita costs and cost-effectiveness ratios from the perspective of the service provider are estimated for net A1c reduction, and percentages of patients achieving A1c reductions of 0.5 and 1.0 percentage points. One-way sensitivity analyses of key cost elements, and a Monte Carlo sensitivity analysis are reported. The telephone intervention was provided to 443 people at a net cost of $187.61 each. Each percentage point of net A1c reduction was achieved at a cost of $464.41. Labor costs were the largest component of costs, and cost-effectiveness was most sensitive to the wages paid to the health educators. Effective telephone-based self-management support for people in poor diabetes control can be delivered by health educators at moderate cost relative to the gains achieved. The costs of doing so are most sensitive to the prevailing wage for the health educators. Copyright © 2016 Elsevier Inc. All rights reserved.
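
    A small consistency check implied by the two reported figures, rather than stated in the abstract: dividing the net cost per participant by the cost per percentage point of A1c reduction recovers the mean net reduction achieved per participant.

    ```python
    # Implied mean net A1c reduction per participant.
    print(187.61 / 464.41)  # ~0.40 percentage points
    ```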

  11. OPTIMA: sensitive and accurate whole-genome alignment of error-prone genomic maps by combinatorial indexing and technology-agnostic statistical analysis.

    PubMed

    Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan

    2016-01-01

    Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200 %) and precise in their alignments (nearly 99 % precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.

  12. Comprehensive viral enrichment enables sensitive respiratory virus genomic identification and analysis by next generation sequencing.

    PubMed

    O'Flaherty, Brigid M; Li, Yan; Tao, Ying; Paden, Clinton R; Queen, Krista; Zhang, Jing; Dinwiddie, Darrell L; Gross, Stephen M; Schroth, Gary P; Tong, Suxiang

    2018-06-01

    Next generation sequencing (NGS) technologies have revolutionized the genomics field and are becoming more commonplace for identification of human infectious diseases. However, due to the low abundance of viral nucleic acids (NAs) in relation to host, viral identification using direct NGS technologies often lacks sufficient sensitivity. Here, we describe an approach based on two complementary enrichment strategies that significantly improves the sensitivity of NGS-based virus identification. To start, we developed two sets of DNA probes to enrich virus NAs associated with respiratory diseases. The first set of probes spans the genomes, allowing for identification of known viruses and full genome sequencing, while the second set targets regions conserved among viral families or genera, providing the ability to detect both known and potentially novel members of those virus groups. Efficiency of enrichment was assessed by NGS testing reference virus and clinical samples with known infection. We show significant improvement in viral identification using enriched NGS compared to unenriched NGS. Without enrichment, we observed an average of 0.3% targeted viral reads per sample. However, after enrichment, 50%-99% of the reads per sample were the targeted viral reads for both the reference isolates and clinical specimens using both probe sets. Importantly, dramatic improvements on genome coverage were also observed following virus-specific probe enrichment. The methods described here provide improved sensitivity for virus identification by NGS, allowing for a more comprehensive analysis of disease etiology. © 2018 O'Flaherty et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Design and characterization of planar capacitive imaging probe based on the measurement sensitivity distribution

    NASA Astrophysics Data System (ADS)

    Yin, X.; Chen, G.; Li, W.; Huthchins, D. A.

    2013-01-01

    Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by the design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole-probe sensitivity distribution based on a finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained, and the factors influencing the sensitivity distribution were studied. Comparisons between CI probes with point-to-point and back-to-back triangular electrode pairs were made based on analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising probe design parameters and predicting imaging performance.

  14. Basin-scale geothermal model calibration: experience from the Perth Basin, Australia

    NASA Astrophysics Data System (ADS)

    Wellmann, Florian; Reid, Lynn

    2014-05-01

    The calibration of large-scale geothermal models for entire sedimentary basins is challenging, as direct measurements of rock properties and subsurface temperatures are commonly scarce and the basal boundary conditions poorly constrained. Instead of the often applied "trial-and-error" manual model calibration, we examine here whether we can gain additional insight into parameter sensitivities and model uncertainty with a model analysis and calibration study. Our geothermal model is based on a high-resolution full 3-D geological model, covering an area of more than 100,000 square kilometers and extending to a depth of 55 kilometers. The model contains all major faults (>80) and geological units (13) for the entire basin. This geological model is discretised into a rectilinear mesh with a lateral resolution of 500 x 500 m and a variable resolution at depth. The highest resolution of 25 m is applied to the depth range of 1000-3000 m, where most temperature measurements are available. The entire discretised model consists of approximately 50 million cells. The top thermal boundary condition is derived from surface temperature measurements on land and on the ocean floor. The base of the model extends below the Moho, and we apply the heat flux over the Moho as a basal heat flux boundary condition. Rock properties (thermal conductivity, porosity, and heat production) have been compiled from several existing data sets. The conductive geothermal forward simulation is performed with SHEMAT, and we then use the stand-alone capabilities of iTOUGH2 for sensitivity analysis and model calibration. Simulated temperatures are compared to 130 quality-weighted bottom-hole temperature measurements. The sensitivity analysis provided clear insight into the most sensitive parameters and parameter correlations. This proved to be of value, as strong correlations, for example between basal heat flux and heat production in deep geological units, can significantly influence the model calibration procedure. The calibration resulted in a better determination of subsurface temperatures and, in addition, provided insight into model quality. Furthermore, a detailed analysis of the measurements used for calibration highlighted potential outliers and limitations of the model assumptions. Extending the previously existing large-scale geothermal simulation with iTOUGH2 provided us with valuable insight into the sensitive parameters and data in the model, which would clearly not be possible with a simple trial-and-error calibration method. Using the gained knowledge, future work will include more detailed studies of the influence of advection and convection.

  15. Robust Optimization and Sensitivity Analysis with Multi-Objective Genetic Algorithms: Single- and Multi-Disciplinary Applications

    DTIC Science & Technology

    2007-01-01

    multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has... formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in... 2.5. 2.4 SENSITIVITY ANALYSIS: In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner

  16. ASKI: A modular toolbox for scattering-integral-based seismic full waveform inversion and sensitivity analysis utilizing external forward codes

    NASA Astrophysics Data System (ADS)

    Schumacher, Florian; Friederich, Wolfgang

    Due to increasing computational resources, the development of new numerically demanding methods and software for imaging Earth's interior remains of high interest in Earth sciences. Here, we give a description, from a user's and programmer's perspective, of the highly modular, flexible and extendable software package ASKI (Analysis of Sensitivity and Kernel Inversion), recently developed for iterative scattering-integral-based seismic full waveform inversion. In ASKI, the three fundamental steps of solving the seismic forward problem, computing waveform sensitivity kernels and deriving a model update are handled by independent software programs that interact via file output/input only. Furthermore, the spatial discretizations of the model space used for solving the seismic forward problem and for deriving model updates are kept completely independent. For this reason, ASKI does not contain a specific forward solver but instead provides a general interface to established community wave propagation codes. Moreover, the third fundamental step of deriving a model update can be repeated at relatively low cost, applying different kinds of model regularization or re-selecting/re-weighting the inverted dataset, without the need to re-solve the forward problem or re-compute the kernels. Additionally, ASKI offers the user sensitivity and resolution analysis tools based on the full sensitivity matrix and allows the user to compose customized workflows in a consistent computational environment. ASKI is written in modern Fortran and Python, is well documented, and is freely available under the terms of the GNU General Public License (http://www.rub.de/aski).

  17. Optimizing the interpretation of CT for appendicitis: modeling health utilities for clinical practice.

    PubMed

    Blackmore, C Craig; Terasawa, Teruhiko

    2006-02-01

    Error in radiology can be reduced by standardizing the interpretation of imaging studies to the optimum sensitivity and specificity. In this report, the authors demonstrate how the optimal interpretation of appendiceal computed tomography (CT) can be determined and how it varies in different clinical scenarios. Utility analysis and receiver operating characteristic (ROC) curve modeling were used to weigh the trade-off between false-positive and false-negative test results and to determine the optimal operating point on the ROC curve for the interpretation of appendicitis CT. Modeling was based on a previous meta-analysis of the accuracy of CT and on literature estimates of the utilities of various health states. The posttest probability of appendicitis was derived using Bayes's theorem. At a low prevalence of disease (screening), appendicitis CT should be interpreted at high specificity (97.7%), even at the expense of lower sensitivity (75%). Conversely, at a high probability of disease, high sensitivity (97.4%) is preferred (specificity 77.8%). When the clinical diagnosis of appendicitis is equivocal, CT interpretation should emphasize both sensitivity and specificity (sensitivity 92.3%, specificity 91.5%). Radiologists can potentially decrease medical error and improve patient health by varying the interpretation of appendiceal CT on the basis of the clinical probability of appendicitis. This report is an example of how utility analysis can be used to guide radiologists in the interpretation of imaging studies and provide guidance on appropriate targets for the standardization of interpretation.
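
    The Bayesian step of the model is straightforward to reproduce. The sketch below combines a pretest probability with the sensitivity/specificity pairs quoted in the abstract; the pretest probabilities themselves (10% and 80%) are illustrative assumptions, not values from the paper.

    ```python
    def posttest_prob(prior, sens, spec):
        """Posterior probability of disease after a positive test (Bayes)."""
        return prior * sens / (prior * sens + (1 - prior) * (1 - spec))

    # Screening reading (high specificity) vs. high pretest-probability reading.
    print(posttest_prob(0.10, sens=0.75,  spec=0.977))
    print(posttest_prob(0.80, sens=0.974, spec=0.778))
    ```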

  18. Wavenumber-frequency Spectra of Pressure Fluctuations Measured via Fast Response Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Panda, J.; Roozeboom, N. H.; Ross, J. C.

    2016-01-01

    The recent advancement in fast-response Pressure-Sensitive Paint (PSP) allows time-resolved measurements of unsteady pressure fluctuations from a dense grid of spatial points on a wind tunnel model. This capability allows for direct calculation of the wavenumber-frequency (k-ω) spectrum of pressure fluctuations. Such data, useful for the vibro-acoustics analysis of aerospace vehicles, are difficult to obtain otherwise. For the present work, time histories of pressure fluctuations on a flat plate subjected to vortex shedding from a rectangular bluff body were measured using PSP. The light intensity levels in the photographic images were then converted to instantaneous pressure histories by applying calibration constants, which were calculated from a few dynamic pressure sensors placed at selected points on the plate. Fourier transform of the time histories from a large number of spatial points provided k-ω spectra of the pressure fluctuations. The data provide a first glimpse into the possibility of creating detailed forcing functions for the vibro-acoustics analysis of aerospace vehicles, albeit for a limited frequency range.
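
    Computationally, a k-ω spectrum is a two-dimensional Fourier transform of the space-time pressure field. The sketch below uses a synthetic convecting wave as a stand-in for PSP data and omits the windowing, ensemble averaging, and calibration steps used in practice.

    ```python
    # Hedged sketch: k-omega spectrum of a synthetic space-time pressure field.
    import numpy as np

    nt, nx, dt, dx = 2048, 128, 1e-4, 2e-3
    t = np.arange(nt)[:, None] * dt
    x = np.arange(nx)[None, :] * dx
    p = np.cos(2 * np.pi * (500.0 * t - x / 0.05))  # wave convecting at 25 m/s

    P = np.fft.fftshift(np.fft.fft2(p))              # transform over time and space
    spectrum = np.abs(P) ** 2 / (nt * nx)            # power in the (omega, k) plane
    freqs = np.fft.fftshift(np.fft.fftfreq(nt, dt))        # frequency axis, Hz
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(nx, dx))  # wavenumber axis, 1/m
    ```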

  19. A new cross-correlation algorithm for the analysis of "in vitro" neuronal network activity aimed at pharmacological studies.

    PubMed

    Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A

    2011-08-15

    Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activity. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing insight into the dynamics of networks. Taking advantage of MEA features and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity, and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
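
    The paper's intra-network burst correlation algorithm is not reproduced here, but its underlying primitive, the cross-correlogram between two spike trains, can be sketched in a few lines. Spike times are in milliseconds; the bin and window sizes are arbitrary choices.

    ```python
    import numpy as np

    def cross_correlogram(train_a, train_b, bin_ms=1.0, window_ms=50.0):
        """Histogram of spike-time lags between two spike trains."""
        lags = []
        for ta in train_a:
            d = train_b - ta
            lags.extend(d[np.abs(d) <= window_ms])
        bins = np.arange(-window_ms, window_ms + bin_ms, bin_ms)
        counts, _ = np.histogram(lags, bins)
        return counts, bins

    a = np.array([10.0, 35.2, 80.1])   # toy spike trains (ms)
    b = np.array([12.1, 36.0, 79.5])
    counts, bins = cross_correlogram(a, b)
    ```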

  20. Study on turbulence characteristics and sensitivity of quadrant analysis to threshold level in Lake Taihu.

    PubMed

    Weng, Shenglin; Li, Yiping; Wei, Jin; Du, Wei; Gao, Xiaomeng; Wang, Wencai; Wang, Jianwei; Acharya, Kumud; Luo, Liancong

    2018-05-01

    The identification of coherent structures is very important for investigating sediment transport mechanisms and controlling eutrophication in shallow lakes. This study analyzed the turbulence characteristics and the sensitivity of quadrant analysis to threshold level. Simultaneous in situ measurements of velocities and suspended sediment concentration (SSC) were conducted in Lake Taihu with acoustic Doppler velocimeter (ADV) and optical backscatter sensor (OBS) instruments. The results show that increasing the hole size makes the difference between dominant and non-dominant events more distinct. Wind velocity determines the frequency of occurrence of sweep and ejection events, which provide the dominant contributions to the Reynolds stress. An increase in wind velocity enlarges the magnitude of coherent events but has little impact on event frequency at the same hole size. Events occurring within short periods provide large contributions to the momentum flux. Sediment transport and diffusion are largely controlled by the intermittent coherent events.
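
    For reference, quadrant analysis with a hole-size threshold can be sketched as below. Fluctuations u' and w' are classified by sign, and events count only where the instantaneous stress exceeds H times the magnitude of the mean Reynolds stress; this follows one common hole definition (after Lu and Willmarth), not necessarily this paper's exact implementation.

    ```python
    import numpy as np

    def quadrant_fractions(u, w, hole=0.0):
        """Fractional Reynolds-stress contribution of each quadrant."""
        up, wp = u - u.mean(), w - w.mean()
        uw = up * wp
        active = np.abs(uw) > hole * abs(uw.mean())  # hole-size threshold
        quads = {
            "Q1 outward":  (up > 0) & (wp > 0),
            "Q2 ejection": (up < 0) & (wp > 0),
            "Q3 inward":   (up < 0) & (wp < 0),
            "Q4 sweep":    (up > 0) & (wp < 0),
        }
        total = uw.sum()  # total Reynolds-stress sum (normalization)
        return {k: uw[m & active].sum() / total for k, m in quads.items()}
    ```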

  1. Diagnostic Accuracy of Memory Measures in Alzheimer’s Dementia and Mild Cognitive Impairment: a Systematic Review and Meta-Analysis

    PubMed Central

    Weissberger, Gali H.; Strong, Jessica V.; Stefanidis, Kayla B.; Summers, Mathew J.; Bondi, Mark W.; Stricker, Nikki H.

    2018-01-01

    With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer’s dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer’s disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers. PMID:28940127

  2. Computational Analysis of Epidermal Growth Factor Receptor Mutations Predicts Differential Drug Sensitivity Profiles toward Kinase Inhibitors.

    PubMed

    Akula, Sravani; Kamasani, Swapna; Sivan, Sree Kanth; Manga, Vijjulatha; Vudem, Dashavantha Reddy; Kancha, Rama Krishna

    2018-05-01

    A significant proportion of patients with lung cancer carry mutations in the EGFR kinase domain. The presence of a deletion mutation in exon 19 or an L858R point mutation in the EGFR kinase domain has been shown to enhance the efficacy of inhibitor treatment in patients with NSCLC. Several less frequent (uncommon) mutations in the EGFR kinase domain with potential implications for treatment response have also been reported. The role of a limited number of uncommon mutations in drug sensitivity has been experimentally verified, but a large number of these mutations remain uncharacterized for inhibitor sensitivity or resistance. A large-scale computational analysis of 298 clinically reported point mutants of the EGFR kinase domain was performed, and drug sensitivity profiles for each mutant toward seven kinase inhibitors were determined by molecular docking. In addition, the relative binding affinity of each inhibitor, compared with that of adenosine triphosphate, was calculated for each mutant. The inhibitor sensitivity profiles predicted in this study for a set of previously characterized mutants correlated well with published clinical, experimental, and computational data. Both single and compound mutations displayed differential inhibitor sensitivity toward first- and next-generation kinase inhibitors. The present study provides predicted drug sensitivity profiles for a large panel of uncommon EGFR mutations toward multiple inhibitors, which may help clinicians decide mutant-specific treatment strategies. Copyright © 2018 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.

  3. Sensitivity Analysis as a Tool to assess Energy-Water Nexus in India

    NASA Astrophysics Data System (ADS)

    Priyanka, P.; Banerjee, R.

    2017-12-01

    Rapid urbanization, population growth, and related structural changes within the economy of a developing country act as stressors on energy and water demand, forming a well-established energy-water nexus. The energy-water nexus has been thoroughly studied at various spatial scales (city, river basin, and national levels) to guide different stakeholders toward sustainable management of energy and water. However, the temporal dimensions of the energy-water nexus at the national level have not been thoroughly investigated because of the unavailability of relevant time-series data. In this study we investigated the energy-water nexus at the national level using environmentally extended input-output tables for the Indian economy (2004-2013) as provided by the EORA database. A perturbation-based sensitivity analysis is proposed to highlight the critical nodes of interaction among economic sectors, which is further used to detect the synergistic effects of energy and water consumption. Technology changes (interpreted as changes in the values of nodes) modify the interactions among economic sectors, and synergy is affected through direct as well as indirect effects. Indirect effects are not easily understood through preliminary examination of the data; hence sensitivity analysis within an input-output framework is important for understanding them. Furthermore, time-series data help develop an understanding of the dynamics of synergistic effects. We identified the key sectors and technology changes for the Indian economy, which will provide better decision support for policy makers on the sustainable use of energy and water resources in India.
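
    The perturbation logic can be sketched with a toy Leontief input-output system. Everything below (the three sectors, technical coefficients, and water intensities) is hypothetical; the point is that perturbing one coefficient propagates through the Leontief inverse and changes total resource use both directly and indirectly.

    ```python
    import numpy as np

    # Toy 3-sector economy: A = technical coefficients, d = final demand.
    A = np.array([[0.10, 0.30, 0.05],
                  [0.20, 0.05, 0.10],
                  [0.05, 0.10, 0.15]])
    d = np.array([100.0, 50.0, 80.0])
    water_per_output = np.array([0.8, 2.5, 0.3])  # hypothetical intensities

    def total_water(A, d):
        x = np.linalg.solve(np.eye(len(d)) - A, d)  # Leontief: x = (I - A)^-1 d
        return water_per_output @ x

    base = total_water(A, d)
    A_pert = A.copy()
    A_pert[1, 0] *= 1.01                            # perturb one node by 1%
    print((total_water(A_pert, d) - base) / base)   # relative change in water use
    ```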

  4. A comparison of solute-transport solution techniques and their effect on sensitivity analysis and inverse modeling results

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2001-01-01

    Five common numerical techniques for solving the advection-dispersion equation (finite difference, predictor corrector, total variation diminishing, method of characteristics, and modified method of characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using discrete, randomly distributed, homogeneous blocks of five sand types. This experimental model provides an opportunity to compare the solution techniques: the heterogeneous hydraulic-conductivity distribution of known structure can be accurately represented by a numerical model, and detailed measurements can be compared with simulated concentrations and total flow through the tank. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation given the different methods of simulating solute transport. The breakthrough curves show that simulated peak concentrations, even at very fine grid spacings, varied between the techniques because of different amounts of numerical dispersion. Sensitivity-analysis results revealed: (1) a high correlation between hydraulic conductivity and porosity given the concentration and flow observations used, so that both could not be estimated; and (2) that the breakthrough curve data did not provide enough information to estimate individual values of dispersivity for the five sands. This study demonstrates that the choice of assigned dispersivity and the amount of numerical dispersion present in the solution technique influence estimated hydraulic conductivity values to a surprising degree.
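
    For reference, the one-dimensional form of the equation the five schemes solve is, in standard notation (not quoted from the paper),

    ```latex
    \frac{\partial C}{\partial t} \;=\; D\,\frac{\partial^{2} C}{\partial x^{2}} \;-\; v\,\frac{\partial C}{\partial x},
    ```

    and the schemes differ chiefly in how much artificial (numerical) dispersion they add on top of the physical D, which is why the simulated peak concentrations differed between techniques even at fine grid spacings.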

  5. Sensitivity of Podosphaera aphanis isolates to DMI fungicides: distribution and reduced cross-sensitivity.

    PubMed

    Sombardier, Audrey; Dufour, Marie-Cécile; Blancard, Dominique; Corio-Costet, Marie-France

    2010-01-01

    Management of strawberry powdery mildew, Podosphaera aphanis (Wallr.), requires numerous fungicide treatments. Limiting epidemics is heavily dependent on sterol demethylation inhibitors (DMIs) such as myclobutanil or penconazole. Recently, a noticeable reduction in the efficacy of these triazole fungicides was reported by strawberry growers in France. The goal of this study was to investigate the state of DMI sensitivity of French P. aphanis and provide tools for improved pest management. Using leaf disc sporulation assays, the sensitivity of 23 isolates of P. aphanis to myclobutanil and penconazole was monitored. Myclobutanil EC(50) values ranged from less than 0.1 to 14.67 mg L(-1), and penconazole EC(50) values from 0.04 to 4.2 mg L(-1). A cross-analysis and a Venn diagram showed reduced sensitivity and a positive correlation between the isolates less sensitive to myclobutanil and those less sensitive to penconazole; 73.9% of isolates were less sensitive to a DMI, and 47.8% exhibited reduced sensitivity to both fungicides. The results show that sensitivity to myclobutanil and, to a lesser extent, to penconazole has declined in strawberry powdery mildew in France. Therefore, urgent action is required to document its appearance and optimise methods of control.

  6. Development of MRM-based assays for the absolute quantitation of plasma proteins.

    PubMed

    Kuzyk, Michael A; Parker, Carol E; Domanski, Dominik; Borchers, Christoph H

    2013-01-01

    Multiple reaction monitoring (MRM), sometimes called selected reaction monitoring (SRM), is a directed tandem mass spectrometric technique performed on triple quadrupole mass spectrometers. MRM assays can be used to sensitively and specifically quantify proteins based on peptides that are specific to the target protein. Stable-isotope-labeled standard peptide analogues (SIS peptides) of the target peptides are added to enzymatic digests of samples and quantified along with the native peptides during MRM analysis. Monitoring the intact peptide and a collision-induced fragment of this peptide (an ion pair) provides information on the absolute concentration of the peptide in the sample and, by inference, the concentration of the intact protein. This technique provides high specificity by selecting for biophysical parameters that are unique to the target peptides: (1) the molecular weight of the peptide, (2) the generation of a specific fragment from the peptide, and (3) the HPLC retention time during LC/MRM-MS analysis. MRM is a highly sensitive technique that has been shown to be capable of detecting attomole levels of target peptides in complex samples such as tryptic digests of human plasma. This chapter provides a detailed description of how to develop and use an MRM protein assay. It includes sections on the critical first step of selecting the target peptides, as well as optimization of MRM acquisition parameters for maximum sensitivity of the ion pairs that will be used in the final method, and characterization of the final MRM assay.
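
    The quantitation principle reduces to a peak-area ratio against the known SIS spike. A one-point sketch is given below; real assays use multi-point calibration curves, and all numbers here are hypothetical.

    ```python
    def native_conc(area_native, area_sis, sis_spike_conc):
        """Native peptide concentration from the native/SIS peak-area ratio."""
        return (area_native / area_sis) * sis_spike_conc

    # Hypothetical peak areas with a 50 fmol/uL SIS spike.
    print(native_conc(area_native=2.4e6, area_sis=1.2e6, sis_spike_conc=50.0))
    ```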

  7. Using Real-time Event Tracking Sensitivity Analysis to Overcome Sensor Measurement Uncertainties of Geo-Information Management in Drilling Disasters

    NASA Astrophysics Data System (ADS)

    Tavakoli, S.; Poslad, S.; Fruhwirth, R.; Winter, M.

    2012-04-01

    This paper introduces an application of a novel EventTracker platform for instantaneous Sensitivity Analysis (SA) of large-scale real-time geo-information. Earth disaster management systems demand high-quality information to aid a quick and timely response to their evolving environments. The idea behind the proposed EventTracker platform is the assumption that modern information management systems are able to capture data in real time and have the technological flexibility to adjust their services to work with specific sources of data/information. However, to assure this adaptation in real time, the online data should be collected, interpreted, and translated into corrective actions in a concise and timely manner. This can hardly be handled by existing sensitivity analysis methods, because they rely on historical data and lazy processing algorithms. In event-driven systems, the effect of system inputs on system state is of value, as events can cause this state to change. This 'event triggering' situation underpins the logic of the proposed approach. The event tracking sensitivity analysis method describes the system variables and states as a collection of events. The more often an input variable occurs during the triggering of an event, the greater its potential impact on the final analysis of the system state. Experiments were designed to compare the proposed event tracking sensitivity analysis with an existing entropy-based sensitivity analysis method. The results showed a 10% improvement in computational efficiency with no compromise in accuracy, and the computational time required to perform the sensitivity analysis was 0.5% of that required by the entropy-based method. The proposed method has been applied to real-world data in the context of preventing emerging crises at drilling rigs. One of the major purposes of such rigs is to drill boreholes to explore oil or gas reservoirs, with the final aim of recovering the content of such reservoirs, both onshore and offshore. Drilling a well is always guided by technical, economic, and security constraints to protect crew, equipment, and environment from injury, damage, and pollution. Although risk assessment and local practice provide a high degree of security, uncertainty arises from the behaviour of the formation, which may cause crucial situations at the rig. Real-time sensor measurements form a basis for predicting, and thus preventing, such crises; the proposed method supports the identification of the data necessary for that purpose.

  8. Comparative transcriptome profiling of a thermal resistant vs. sensitive silkworm strain in response to high temperature under stressful humidity condition

    PubMed Central

    Xiao, Jinshu; Wang, La; Liu, Taihang; Wu, Yunfei; Dong, Feifan; Jiang, Yaming; Pan, Minhui; Zhang, Youhong; Lu, Cheng

    2017-01-01

    Thermotolerance is particularly important for poikilotherms such as insects. Understanding the mechanisms by which insects respond to high temperatures can provide insights into their adaptation to the environment. Therefore, in this study, we performed a transcriptome analysis of two silkworm strains with significantly different resistance to heat as well as humidity: the thermo-resistant strain 7532 and the thermo-sensitive strain Knobbed. We identified a total of 4,944 differentially expressed genes (DEGs) using RNA-Seq. Among these, 4,390 were annotated and 554 were novel. Gene Ontology (GO) analysis of 747 DEGs identified between RT_48h (Resistant strain with high-temperature Treatment for 48 hours) and ST_48h (Sensitive strain with high-temperature Treatment for 48 hours) showed significant enrichment of 12 GO terms, including metabolic process, extracellular region, and serine-type peptidase activity. Moreover, we discovered 12 DEGs that may contribute to the heat-humidity stress response in the silkworm. Our data clearly show that 48 h post-exposure may be a critical time point for the silkworm response to high temperature and humidity. These results provide insights into the genes and biological processes involved in high temperature and humidity tolerance in the silkworm, and advance our understanding of thermal tolerance in insects. PMID:28542312

  9. Multiplexed transcriptome analysis to detect ALK, ROS1 and RET rearrangements in lung cancer

    PubMed Central

    Rogers, Toni-Maree; Arnau, Gisela Mir; Ryland, Georgina L.; Huang, Stephen; Lira, Maruja E.; Emmanuel, Yvette; Perez, Omar D.; Irwin, Darryl; Fellowes, Andrew P.; Wong, Stephen Q.; Fox, Stephen B.

    2017-01-01

    ALK, ROS1 and RET gene fusions are important predictive biomarkers for tyrosine kinase inhibitors in lung cancer. Currently, the gold standard method for gene fusion detection is Fluorescence In Situ Hybridization (FISH), and while highly sensitive and specific, it is also labour intensive, subjective in analysis, and unable to screen a large number of gene fusions. Recent developments in high-throughput transcriptome-based methods may provide a suitable alternative to FISH as they are compatible with multiplexing and diagnostic workflows. However, the concordance between these different methods and FISH has not been evaluated. In this study we compared the results from three transcriptome-based platforms (Nanostring Elements, Agena LungFusion panel and ThermoFisher NGS fusion panel) to those obtained from ALK, ROS1 and RET FISH on 51 clinical specimens. Overall agreement of results ranged from 86-96% depending on the platform used. While all platforms were highly sensitive, both the Agena panel and the ThermoFisher NGS fusion panel reported minor fusions that were not detectable by FISH. Our proof-of-principle study illustrates that transcriptome-based analyses are sensitive and robust methods for detecting actionable gene fusions in lung cancer and could provide a robust alternative to FISH testing in the diagnostic setting. PMID:28181564

  10. Comparative transcriptome profiling of a thermal resistant vs. sensitive silkworm strain in response to high temperature under stressful humidity condition.

    PubMed

    Xiao, Wenfu; Chen, Peng; Xiao, Jinshu; Wang, La; Liu, Taihang; Wu, Yunfei; Dong, Feifan; Jiang, Yaming; Pan, Minhui; Zhang, Youhong; Lu, Cheng

    2017-01-01

    Thermotolerance is particularly important for poikilotherms such as insects. Understanding the mechanisms by which insects respond to high temperatures can provide insights into their adaptation to the environment. Therefore, in this study, we performed a transcriptome analysis of two silkworm strains with significantly different resistance to heat as well as humidity: the thermo-resistant strain 7532 and the thermo-sensitive strain Knobbed. We identified a total of 4,944 differentially expressed genes (DEGs) using RNA-Seq. Among these, 4,390 were annotated and 554 were novel. Gene Ontology (GO) analysis of 747 DEGs identified between RT_48h (Resistant strain with high-temperature Treatment for 48 hours) and ST_48h (Sensitive strain with high-temperature Treatment for 48 hours) showed significant enrichment of 12 GO terms, including metabolic process, extracellular region, and serine-type peptidase activity. Moreover, we discovered 12 DEGs that may contribute to the heat-humidity stress response in the silkworm. Our data clearly show that 48 h post-exposure may be a critical time point for the silkworm response to high temperature and humidity. These results provide insights into the genes and biological processes involved in high temperature and humidity tolerance in the silkworm, and advance our understanding of thermal tolerance in insects.

  11. Addressing issues associated with evaluating prediction models for survival endpoints based on the concordance statistic.

    PubMed

    Wang, Ming; Long, Qi

    2016-09-01

    Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on the c-statistic, with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of non-coarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models under consideration is sensitive to the NCAR assumption, and we thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both low-dimensional and high-dimensional settings under CAR and NCAR through simulations. © 2016, The International Biometric Society.
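
    For orientation, the uncorrected (Harrell-type) c-statistic that the IPCW estimators refine can be sketched as below. The paper's estimators additionally reweight pairs by the inverse probability of censoring, which this sketch deliberately omits; ties in event times are also ignored.

    ```python
    import numpy as np

    def harrell_c(time, event, risk):
        """Concordance over usable pairs (no IPCW correction)."""
        conc = ties = usable = 0
        n = len(time)
        for i in range(n):
            for j in range(n):
                # Usable pair: subject i has an observed event before time j.
                if event[i] == 1 and time[i] < time[j]:
                    usable += 1
                    if risk[i] > risk[j]:
                        conc += 1
                    elif risk[i] == risk[j]:
                        ties += 1
        return (conc + 0.5 * ties) / usable
    ```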

  12. Marine electrical resistivity imaging of submarine groundwater discharge: Sensitivity analysis and application in Waquoit Bay, Massachusetts, USA

    USGS Publications Warehouse

    Henderson, Rory; Day-Lewis, Frederick D.; Abarca, Elena; Harvey, Charles F.; Karam, Hanan N.; Liu, Lanbo; Lane, John W.

    2010-01-01

    Electrical resistivity imaging has been used in coastal settings to characterize fresh submarine groundwater discharge and the position of the freshwater/salt-water interface because of the relation of bulk electrical conductivity to pore-fluid conductivity, which in turn is a function of salinity. Interpretation of tomograms for hydrologic processes is complicated by inversion artifacts, uncertainty associated with survey geometry limitations, measurement errors, and choice of regularization method. Variation of seawater over tidal cycles poses unique challenges for inversion. The capabilities and limitations of resistivity imaging are presented for characterizing the distribution of freshwater and saltwater beneath a beach. The experimental results provide new insight into fresh submarine groundwater discharge at Waquoit Bay National Estuarine Research Reserve, East Falmouth, Massachusetts (USA). Tomograms from the experimental data indicate that fresh submarine groundwater discharge may shut down at high tide, whereas temperature data indicate that the discharge continues throughout the tidal cycle. Sensitivity analysis and synthetic modeling provide insight into resolving power in the presence of a time-varying saline water layer. In general, vertical electrodes and cross-hole measurements improve the inversion results regardless of the tidal level, whereas the resolution of surface arrays is more sensitive to time-varying saline water layer.
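
    The petrophysical link the interpretation rests on is commonly written as Archie's law for clay-poor sediments; it is given here as background, and site-specific calibrations may differ:

    ```latex
    % sigma_b: bulk conductivity imaged by the survey; sigma_w: pore-fluid
    % conductivity (a function of salinity); phi: porosity; m: cementation exponent.
    \sigma_{b} \;=\; \phi^{m}\,\sigma_{w}
    ```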

  13. Real Time Search Algorithm for Observation Outliers During Monitoring Engineering Constructions

    NASA Astrophysics Data System (ADS)

    Latos, Dorota; Kolanowski, Bogdan; Pachelski, Wojciech; Sołoducha, Ryszard

    2017-12-01

    Real-time monitoring of engineering structures in emergency or disaster situations requires the collection of large amounts of data to be processed by specific analytical techniques. A quick and accurate assessment of the state of the object is crucial for a possible rescue action. One of the more significant evaluation methods for large sets of data, whether collected during a specified interval of time or permanently, is time series analysis. This paper presents a search algorithm for those time series elements that deviate from the values expected during monitoring. Quick and proper detection of observations indicating anomalous behavior of the structure allows a variety of preventive actions to be taken. The mathematical formulae used in the algorithm provide maximal sensitivity for detecting even minimal changes in the object's behavior. Sensitivity analyses were conducted for the moving average algorithm as well as for the Douglas-Peucker algorithm used in the generalization of linear objects in GIS. In addition to determining the size of deviations from the average, the so-called Hausdorff distance was used. The simulations carried out, and verification against laboratory survey data, showed that the approach provides sufficient sensitivity for automatic real-time analysis of the large amounts of data obtained from different and varied sensors (total stations, levelling, cameras, radar).
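
    The abstract does not reproduce the paper's formulae; as a minimal illustration of the moving-average branch of such an outlier search, a trailing-window residual detector might look like the following Python sketch (the window length and the threshold factor k are assumed parameters):

        import numpy as np

        def moving_average_outliers(x, window=20, k=3.0):
            """Flag observations that deviate from the trailing moving average.

            x      : 1-D array of monitoring observations (e.g., displacements)
            window : number of past samples in the moving average
            k      : threshold in units of the running standard deviation
            """
            x = np.asarray(x, dtype=float)
            flags = np.zeros(len(x), dtype=bool)
            for i in range(window, len(x)):
                ref = x[i - window:i]                  # reference window of past data
                mu, sigma = ref.mean(), ref.std(ddof=1)
                if sigma > 0 and abs(x[i] - mu) > k * sigma:
                    flags[i] = True                    # observation deviates from expectation
            return flags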

  14. Functional genomic analysis of drug sensitivity pathways to guide adjuvant strategies in breast cancer

    PubMed Central

    Swanton, Charles; Szallasi, Zoltan; Brenton, James D; Downward, Julian

    2008-01-01

    The widespread introduction of high throughput RNA interference screening technology has revealed tumour drug sensitivity pathways to common cytotoxics such as paclitaxel, doxorubicin and 5-fluorouracil, targeted agents such as trastuzumab and inhibitors of AKT and Poly(ADP-ribose) polymerase (PARP) as well as endocrine therapies such as tamoxifen. Given the limited power of microarray signatures to predict therapeutic response in associative studies of small clinical trial cohorts, the use of functional genomic data combined with expression or sequence analysis of genes and microRNAs implicated in drug response in human tumours may provide a more robust method to guide adjuvant treatment strategies in breast cancer that are transferable across different expression platforms and patient cohorts. PMID:18986507

  15. Differences in sensitivity to parenting depending on child temperament: A meta-analysis.

    PubMed

    Slagt, Meike; Dubas, Judith Semon; Deković, Maja; van Aken, Marcel A G

    2016-10-01

    Several models of individual differences in environmental sensitivity postulate increased sensitivity of some individuals to either stressful (diathesis-stress), supportive (vantage sensitivity), or both environments (differential susceptibility). In this meta-analysis we examine whether children vary in sensitivity to parenting depending on their temperament, and if so, which model can best be used to describe this sensitivity pattern. We tested whether associations between negative parenting and negative or positive child adjustment as well as between positive parenting and positive or negative child adjustment would be stronger among children higher on putative sensitivity markers (difficult temperament, negative emotionality, surgency, and effortful control). Longitudinal studies with children up to 18 years (k = 105 samples from 84 studies, Nmean = 6,153) that reported on a parenting-by-temperament interaction predicting child adjustment were included. We found 235 independent effect sizes for associations between parenting and child adjustment. Results showed that children with a more difficult temperament (compared with those with an easier temperament) were more vulnerable to negative parenting, but also profited more from positive parenting, supporting the differential susceptibility model. Differences in susceptibility were expressed in externalizing and internalizing problems and in social and cognitive competence. Support for differential susceptibility for negative emotionality was, however, only present when this trait was assessed during infancy. Surgency and effortful control did not consistently moderate associations between parenting and child adjustment, providing little support for the differential susceptibility, diathesis-stress, or vantage sensitivity models. Finally, parenting-by-temperament interactions were more pronounced when parenting was assessed using observations compared to questionnaires. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. PEM-West trajectory climatology and photochemical model sensitivity study prepared using retrospective meteorological data

    NASA Technical Reports Server (NTRS)

    Merrill, John T.; Rodriguez, Jose M.

    1991-01-01

    Trajectory and photochemical model calculations based on retrospective meteorological data for the operations areas of the NASA Pacific Exploratory Mission (PEM)-West mission are summarized. The trajectory climatology discussed here is intended to provide guidance for flight planning and initial data interpretation during the field phase of the expedition by indicating the most probable path air parcels are likely to take to reach various points in the area. The photochemical model calculations which are discussed indicate the sensitivity of the chemical environment to various initial chemical concentrations and to conditions along the trajectory. In the post-expedition analysis these calculations will be used to provide a climatological context for the meteorological conditions which are encountered in the field.

  17. Integrated analysis of rice transcriptomic and metabolomic responses to elevated night temperatures identifies sensitivity- and tolerance-related profiles.

    PubMed

    Glaubitz, Ulrike; Li, Xia; Schaedel, Sandra; Erban, Alexander; Sulpice, Ronan; Kopka, Joachim; Hincha, Dirk K; Zuther, Ellen

    2017-01-01

    Transcript and metabolite profiling were performed on leaves from six rice cultivars under high night temperature (HNT) condition. Six genes were identified as central for HNT response encoding proteins involved in transcription regulation, signal transduction, protein-protein interactions, jasmonate response and the biosynthesis of secondary metabolites. Sensitive cultivars showed specific changes in transcript abundance including abiotic stress responses, changes of cell wall-related genes, of ABA signaling and secondary metabolism. Additionally, metabolite profiles revealed a highly activated TCA cycle under HNT and concomitantly increased levels in pathways branching off that could be corroborated by enzyme activity measurements. Integrated data analysis using clustering based on one-dimensional self-organizing maps identified two profiles highly correlated with HNT sensitivity. The sensitivity profile included genes of the functional bins abiotic stress, hormone metabolism, cell wall, signaling, redox state, transcription factors, secondary metabolites and defence genes. In the tolerance profile, similar bins were affected with slight differences in hormone metabolism and transcription factor responses. Metabolites of the two profiles revealed involvement of GABA signaling, thus providing a link to the TCA cycle status in sensitive cultivars and of myo-inositol as precursor for inositol phosphates linking jasmonate signaling to the HNT response specifically in tolerant cultivars. © 2016 John Wiley & Sons Ltd.

  18. Interventional MRI: tapering improves the distal sensitivity of the loopless antenna.

    PubMed

    Qian, Di; El-Sharkawy, AbdEl-Monem M; Atalar, Ergin; Bottomley, Paul A

    2010-03-01

    The "loopless antenna" is an interventional MRI detector consisting of a tuned coaxial cable and an extended inner conductor or "whip". A limitation is the poor sensitivity afforded at, and immediately proximal to, its distal end, which is exacerbated by the extended whip length when the whip is uniformly insulated. It is shown here that tapered insulation dramatically improves the distal sensitivity of the loopless antenna by pushing the current sensitivity toward the tip. The absolute signal-to-noise ratio is numerically computed by the electromagnetic method-of-moments for three resonant 3-T antennae with no insulation, uniform insulation, and with linearly tapered insulation. The analysis shows that tapered insulation provides an approximately 400% increase in signal-to-noise ratio in trans-axial planes 1 cm from the tip and a 16-fold increase in the sensitive area as compared to an equivalent, uniformly insulated antenna. These findings are directly confirmed by phantom experiments and by MRI of an aorta specimen. The results demonstrate that numerical electromagnetic signal-to-noise ratio analysis can accurately predict the loopless detector's signal-to-noise ratio and play a central role in optimizing its design. The manifold improvement in distal signal-to-noise ratio afforded by redistributing the insulation should improve the loopless antenna's utility for interventional MRI. (c) 2010 Wiley-Liss, Inc.

  19. Sensitivity Analysis of Mechanical Parameters of Different Rock Layers to the Stability of Coal Roadway in Soft Rock Strata

    PubMed Central

    Zhao, Zeng-hui; Wang, Wei-ming; Gao, Xin; Yan, Ji-xing

    2013-01-01

    According to the geological characteristics of the Xinjiang Ili mine in the western area of China, a physical model of interstratified strata composed of soft rock and a hard coal seam was established. Selecting the tunnel position, deformation modulus, and strength parameters of each layer as influencing factors, the sensitivity coefficient of roadway deformation to each parameter was first analyzed based on a Mohr-Coulomb strain softening model and nonlinear elastic-plastic finite element analysis. The effects of the factors that showed high sensitivity were then examined in detail. Finally, a regression model for the relationship between roadway displacements and multiple factors was obtained by equivalent linear regression under multiple factors. The results show that the roadway deformation is highly sensitive to the depth of the coal seam under the floor, which should be considered in the layout of the coal roadway; the deformation modulus and strength of the coal seam and floor have a great influence on the global stability of the tunnel; on the contrary, roadway deformation is not sensitive to the mechanical parameters of the soft roof; and roadway deformation under random combinations of multiple factors can be deduced from the regression model. These conclusions provide theoretical guidance for the layout and stability maintenance of coal roadways. PMID:24459447

  20. Field-sensitivity To Rheological Parameters

    NASA Astrophysics Data System (ADS)

    Freund, Jonathan; Ewoldt, Randy

    2017-11-01

    We ask this question: where in a flow is a quantity of interest Q quantitatively sensitive to the model parameters θ⃗ describing the rheology of the fluid? This field sensitivity is computed via the numerical solution of the adjoint flow equations, as developed to expose the target sensitivity δQ/δθ⃗(x) via the constraint of satisfying the flow equations. Our primary example is a sphere settling in Carbopol, for which we have experimental data. For this Carreau-model configuration, we simultaneously calculate how much a local change in the fluid's intrinsic time scale λ, limiting viscosities η₀ and η∞, and exponent n would affect the drag D. Such field sensitivities can show where different fluid physics in the model (time scales, elastic versus viscous components, etc.) are important for the target observable and generally guide model refinement based on predictive goals. In this case, the computational cost of solving the local sensitivity problem is negligible relative to the flow. The Carreau-fluid/sphere example is illustrative; the utility of field sensitivity is in the design and analysis of less intuitive flows, for which we provide some additional examples.
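
    For orientation, the four quantities named above are the parameters of the standard Carreau viscosity model (the abstract itself does not write the model out):

        \eta(\dot{\gamma}) = \eta_\infty + (\eta_0 - \eta_\infty)\left[1 + (\lambda\dot{\gamma})^{2}\right]^{\frac{n-1}{2}}

    so the field sensitivity \delta D/\delta\lambda(x), for example, maps where a local perturbation of the time scale \lambda would change the computed drag.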

  1. Evaluation of radioisotope tracer and activation analysis techniques for contamination monitoring in space environment simulation chambers

    NASA Technical Reports Server (NTRS)

    Smathers, J. B.; Kuykendall, W. E., Jr.; Wright, R. E., Jr.; Marshall, J. R.

    1973-01-01

    Radioisotope measurement techniques and neutron activation analysis are evaluated for use in identifying and locating contamination sources in space environment simulation chambers. The alpha range method allows the determination of total contaminant concentration in the vapor and condensate states. A Cf-252 neutron activation analysis system for detecting oils and greases tagged with stable elements is described. While neutron activation analysis of tagged contaminants offers specificity, an on-site system is extremely costly to implement and provides only marginal detection sensitivity under even the most favorable conditions.

  2. Mapping and analysis of phosphorylation sites: a quick guide for cell biologists

    PubMed Central

    Dephoure, Noah; Gould, Kathleen L.; Gygi, Steven P.; Kellogg, Douglas R.

    2013-01-01

    A mechanistic understanding of signaling networks requires identification and analysis of phosphorylation sites. Mass spectrometry offers a rapid and highly sensitive approach to mapping phosphorylation sites. However, mass spectrometry has significant limitations that must be considered when planning to carry out phosphorylation-site mapping. Here we provide an overview of key information that should be taken into consideration before beginning phosphorylation-site analysis, as well as a step-by-step guide for carrying out successful experiments. PMID:23447708

  3. Analysis of multimode fiber bundles for endoscopic spectral-domain optical coherence tomography

    PubMed Central

    Risi, Matthew D.; Makhlouf, Houssine; Rouse, Andrew R.; Gmitro, Arthur F.

    2016-01-01

    A theoretical analysis of the use of a fiber bundle in spectral-domain optical coherence tomography (OCT) systems is presented. The fiber bundle enables a flexible endoscopic design and provides fast, parallelized acquisition of the OCT data. However, the multimode characteristic of the fibers in the fiber bundle affects the depth sensitivity of the imaging system. A description of light interference in a multimode fiber is presented along with numerical simulations and experimental studies to illustrate the theoretical analysis. PMID:25967012

  4. User’s Manual for SEEK TALK Full Scale Engineering Development Life Cycle Cost (LCC) Model. Volume II. Model Equations and Model Operations.

    DTIC Science & Technology

    1981-04-01

    [Report documentation page; only fragments of the abstract survive digitization.] Keywords: life cycle cost (LCC); LCC sensitivity analysis; LCC model; repair level analysis (RLA). The surviving fragments indicate that the model provides a repair level analysis capability and supplies values for Air Force input parameters and instructions for contractor inputs and general operating data; surviving contents entries cover maintenance manhour requirements, calculation of repair level fractions, and cost element equations (production cost element).

  5. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  6. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
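
    A schematic of the DELSA computation (derivative-based first-order indices evaluated at many sample points distributed across the parameter space) can be sketched in Python as follows; the finite-difference gradients, step size, and sampling scheme are illustrative assumptions, not the authors' code:

        import numpy as np

        def delsa_indices(model, theta_samples, prior_var, h=1e-4):
            """Distributed local first-order sensitivity indices (DELSA-style).

            model         : callable mapping a parameter vector to a scalar output
            theta_samples : (n_points, n_params) base points, e.g. a Latin hypercube
            prior_var     : (n_params,) prior variances of the parameters
            Returns an (n_points, n_params) array of local indices in [0, 1].
            """
            n, p = theta_samples.shape
            S = np.zeros((n, p))
            for i, theta in enumerate(theta_samples):
                grad = np.zeros(p)
                for j in range(p):
                    step = h * max(abs(theta[j]), 1.0)    # scaled central difference
                    dt = np.zeros(p)
                    dt[j] = step
                    grad[j] = (model(theta + dt) - model(theta - dt)) / (2 * step)
                contrib = grad**2 * prior_var             # per-parameter variance contribution
                S[i] = contrib / contrib.sum()            # normalize at this sample point
            return S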

  7. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
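
    As a toy illustration of the kind of sensitivity coefficients described above (not LSENS itself), the derivative of an ODE solution with respect to a rate coefficient can be approximated by central finite differences; for the first-order reaction A -> B the result is checkable analytically:

        import numpy as np
        from scipy.integrate import solve_ivp

        def conc_A(k, A0=1.0, t_end=2.0):
            """[A](t_end) for the reaction A -> B with rate constant k."""
            sol = solve_ivp(lambda t, y: [-k * y[0]], [0.0, t_end], [A0], rtol=1e-8)
            return sol.y[0, -1]

        k, h = 0.7, 1e-5
        dA_dk = (conc_A(k + h) - conc_A(k - h)) / (2 * h)   # numeric sensitivity
        print(dA_dk, -2.0 * np.exp(-k * 2.0))               # analytic: -t e^{-kt} at t = 2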

  8. A mixture theory model of fluid and solute transport in the microvasculature of normal and malignant tissues. II: Factor sensitivity analysis, calibration, and validation.

    PubMed

    Schuff, M M; Gore, J P; Nauman, E A

    2013-12-01

    The treatment of cancerous tumors is dependent upon the delivery of therapeutics through the blood by means of the microcirculation. Differences in the vasculature of normal and malignant tissues have been recognized, but it is not fully understood how these differences affect transport, and the applicability of existing mathematical models has been questioned at the microscale due to the complex rheology of blood and fluid exchange with the tissue. In addition to determining an appropriate set of governing equations, it is necessary to specify appropriate model parameters based on physiological data. To this end, a two-stage sensitivity analysis is described which makes it possible to determine the set of parameters most important to the model's calibration. In the first stage, the fluid flow equations are examined and a sensitivity analysis is used to evaluate the importance of 11 different model parameters. Of these, only four substantially influence the intravascular axial flow, providing a tractable set that could be calibrated using red blood cell velocity data from the literature. The second stage also utilizes a sensitivity analysis to evaluate the importance of 14 model parameters on the extravascular flux. Of these, six exhibit high sensitivity and are integrated into the model calibration using a response surface methodology and experimental intra- and extravascular accumulation data from the literature (Dreher et al. in J Natl Cancer Inst 98(5):335-344, 2006). The model exhibits good agreement with the experimental results for both the mean extravascular concentration and the penetration depth as a function of time for inert dextran over a wide range of molecular weights.

  9. The population benefit of evidence-based radiotherapy: 5-Year local control and overall survival benefits.

    PubMed

    Hanna, T P; Shafiq, J; Delaney, G P; Vinod, S K; Thompson, S R; Barton, M B

    2018-02-01

    To describe the population benefit of radiotherapy in a high-income setting if evidence-based guidelines were routinely followed. Australian decision tree models were utilized. Radiotherapy alone (RT) benefit was defined as the absolute proportional benefit of radiotherapy compared with no treatment for radical indications, and of radiotherapy over surgery alone for adjuvant indications. Chemoradiotherapy (CRT) benefit was the absolute incremental benefit of concurrent chemoradiotherapy over RT. Five-year local control (LC) and overall survival (OS) benefits were measured. Citation databases were systematically queried for benefit data. Meta-analysis and sensitivity analysis were performed. 48% of all cancer patients have indications for radiotherapy, 34% curative and 14% palliative. RT provides 5-year LC benefit in 10.4% of all cancer patients (95% Confidence Interval 9.3, 11.8) and 5-year OS benefit in 2.4% (2.1, 2.7). CRT provides 5-year LC benefit in an additional 0.6% of all cancer patients (0.5, 0.6), and 5-year OS benefit for an additional 0.3% (0.2, 0.4). RT benefit was greatest for head and neck (LC 32%, OS 16%), and cervix (LC 33%, OS 18%). CRT LC benefit was greatest for rectum (6%) and OS for cervix (3%) and brain (3%). Sensitivity analysis confirmed a robust model. Radiotherapy provides significant 5-year LC and OS benefits as part of evidence-based cancer care. CRT provides modest additional benefits. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  10. Constitutive BAK activation as a determinant of drug sensitivity in malignant lymphohematopoietic cells

    PubMed Central

    Dai, Haiming; Ding, Husheng; Meng, X. Wei; Peterson, Kevin L.; Schneider, Paula A.; Karp, Judith E.; Kaufmann, Scott H.

    2015-01-01

    Mitochondrial outer membrane permeabilization (MOMP), a key step in the intrinsic apoptotic pathway, is incompletely understood. Current models emphasize the role of BH3-only BCL2 family members in BAX and BAK activation. Here we demonstrate concentration-dependent BAK autoactivation under cell-free conditions and provide evidence that this autoactivation plays a key role in regulating the intrinsic apoptotic pathway in intact cells. In particular, we show that up to 80% of BAK (but not BAX) in lymphohematopoietic cell lines is oligomerized and bound to anti-apoptotic BCL2 family members in the absence of exogenous death stimuli. The extent of this constitutive BAK oligomerization is diminished by BAK knockdown and unaffected by BIM or PUMA down-regulation. Further analysis indicates that sensitivity of cells to BH3 mimetics reflects the identity of the anti-apoptotic proteins to which BAK is constitutively bound, with extensive BCLXL•BAK complexes predicting navitoclax sensitivity, and extensive MCL1•BAK complexes predicting A1210477 sensitivity. Moreover, high BAK expression correlates with sensitivity of clinical acute myelogenous leukemia to chemotherapy, whereas low BAK levels correlate with resistance and relapse. Collectively, these results inform current understanding of MOMP and provide new insight into the ability of BH3 mimetics to induce apoptosis without directly activating BAX or BAK. PMID:26494789

  11. Wavelet-based analysis of gastric microcirculation in rats with ulcer bleedings

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Rodionov, M. A.; Pavlova, O. N.; Semyachkina-Glushkovskaya, O. V.; Berdnikova, V. A.; Kuznetsova, Ya. V.; Semyachkin-Glushkovskij, I. A.

    2012-03-01

    Studying nitric oxide (NO)-dependent mechanisms of microcirculation regulation in the stomach can provide important diagnostic markers of the development of stress-induced ulcer bleeding. In this work we use a multiscale analysis based on the discrete wavelet transform to characterize a latent stage of illness formation in rats. The higher sensitivity of stomach vessels to NO levels in ill rats is discussed.
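
    The abstract does not give the transform settings; a minimal multiscale descriptor in the same spirit, using the PyWavelets package (the wavelet family and decomposition level are assumptions), could be:

        import numpy as np
        import pywt

        def scale_energies(signal, wavelet="db4", level=5):
            """Relative energy per detail level of a discrete wavelet transform,
            a simple multiscale descriptor of a blood-flow time series
            (the signal must be long enough for the requested level)."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            detail_energy = np.array([np.sum(c**2) for c in coeffs[1:]])
            return detail_energy / detail_energy.sum()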

  12. Sensitivity of a numerical wave model on wind re-analysis datasets

    NASA Astrophysics Data System (ADS)

    Lavidas, George; Venugopal, Vengatesan; Friedrich, Daniel

    2017-03-01

    Wind is the dominant process for wave generation. Detailed evaluation of metocean conditions strengthens our understanding of issues concerning potential offshore applications. However, the scarcity of buoys and the high cost of monitoring systems pose a barrier to properly defining offshore conditions. Through the use of numerical wave models, metocean conditions can be hindcast and forecast, providing reliable characterisations. This study reports the sensitivity of a numerical wave model to wind inputs for the Scottish region. Two re-analysis wind datasets with different spatio-temporal characteristics are used, the ERA-Interim Re-Analysis and the CFSR-NCEP Re-Analysis dataset. Different wind products alter the results, affecting the accuracy obtained. The scope of this study is to assess the available wind databases and provide information concerning the most appropriate wind dataset for the specific region, based on temporal, spatial and geographic terms, for wave modelling and offshore applications. Both wind input datasets delivered results from the numerical wave model with good correlation. Wave results from the 1-h dataset have higher peaks and lower biases, at the expense of a high scatter index. On the other hand, the 6-h dataset has lower scatter but higher biases. The study shows how the wind dataset affects numerical wave modelling performance and that, depending on location and study needs, different wind inputs should be considered.

  13. Developing graduate student competency in providing culturally sensitive end of life care in critical care environments - a pilot study of a teaching innovation.

    PubMed

    Northam, Holly L; Hercelinskyj, Gylo; Grealish, Laurie; Mak, Anita S

    2015-11-01

    Australia's immigration policy has generated a rich, diverse cultural community of staff and patients in critical care environments. Many different cultural perspectives inform individual actions in the context of critical care, including the highly sensitive area of end-of-life care, with nurses feeling poorly prepared to provide culturally sensitive end-of-life care. This article describes and evaluates the effectiveness of an educational innovation designed to develop graduate-level critical care nurses' capacity for effective interpersonal communication, as members of a multi-disciplinary team, in providing culturally sensitive end-of-life care. A mixed-method pilot study was conducted using a curriculum innovation intervention informed by the Excellence in Cultural Experiential Learning and Leadership Program (EXCELL), a higher-education intervention applied to develop the nurses' intercultural communication skills. Twelve graduate nursing students studying critical care nursing participated in the study; 42% (n=5) of the participants were from an international background. Information about students' cultural learning was recorded before and after the intervention, using a cultural learning development scale. Student discussions of end-of-life care were recorded at Weeks 2 and 14 of the curriculum. The quantitative data were analysed using descriptive statistics and the qualitative data were analysed thematically. Students demonstrated an increase in cultural learning in a range of areas in the pre-post surveys, including understanding of cultural diversity, interpersonal skills, cross-cultural interactions and participation in multicultural groups. Thematic analysis of the end-of-life discussions revealed an increase in the levels of nurse confidence in approaching end-of-life care in critical care environments. The EXCELL program provides an effective and supportive educational framework to increase graduate nurses' cultural learning development and competence to manage culturally complex clinical issues such as end-of-life care, and is recommended as a framework for health care students to learn the skills required to provide culturally competent care in a range of culturally complex health care settings. Copyright © 2015 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved.

  14. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.

  15. Tropospheric Ozone Near-Nadir-Viewing IR Spectral Sensitivity and Ozone Measurements from NAST-I

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L.; Larar, Allen M.

    2001-01-01

    Infrared ozone spectra from near-nadir observations provide atmospheric ozone information from the sensor to the Earth's surface. Simulations of the NPOESS Airborne Sounder Testbed-Interferometer (NAST-I) from the NASA ER-2 aircraft (approximately 20 km altitude) with a spectral resolution of 0.25/cm were used for sensitivity analysis. The spectral sensitivity of ozone retrievals to uncertainties in atmospheric temperature and water vapor is assessed in order to understand the relationship between the IR emissions and the atmospheric state. In addition, the sensitivity of the ozone spectral radiance to layer ozone densities, together with the radiance weighting functions, reveals the limits of ozone profile retrieval accuracy from NAST-I measurements. Statistical retrievals of ozone with temperature and moisture retrievals from NAST-I spectra have been investigated, and preliminary results from NAST-I field campaigns are presented.

  16. Streamflow sensitivity to water storage changes across Europe

    NASA Astrophysics Data System (ADS)

    Berghuijs, Wouter R.; Hartmann, Andreas; Woods, Ross A.

    2016-03-01

    Terrestrial water storage is the primary source of river flow. We introduce storage sensitivity of streamflow (ɛS), which for a given flow rate indicates the relative change in streamflow per change in catchment water storage. ɛS can be directly derived from streamflow observations. Analysis of 725 catchments in Europe reveals that ɛS is high in, e.g., parts of Spain, England, Germany, and Denmark, whereas flow regimes in parts of the Alps are more resilient (that is, less sensitive) to storage changes. A regional comparison of ɛS with observations indicates that ɛS is significantly correlated with variability of low (R2 = 0.41), median (R2 = 0.27), and high flow conditions (R2 = 0.35). Streamflow sensitivity provides new guidance for a changing hydrosphere where groundwater abstraction and climatic changes are altering water storage and flow regimes.
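
    The verbal definition suggests a formalization along the following lines (our notation, hedged against the paper's exact convention): with storage sensitivity defined per unit flow, a rain-free recession obeys dS/dt ≈ -Q, so ɛS can be estimated from discharge records alone:

        \varepsilon_S(Q) = \frac{1}{Q}\frac{dQ}{dS}, \qquad
        \frac{dQ}{dt} = \frac{dQ}{dS}\frac{dS}{dt} \approx -\varepsilon_S\,Q^{2}
        \;\Rightarrow\; \varepsilon_S \approx -\frac{dQ/dt}{Q^{2}}.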

  17. Active matrix-based collection of airborne analytes: an analyte recording chip providing exposure history and finger print.

    PubMed

    Fang, Jun; Park, Se-Chul; Schlag, Leslie; Stauden, Thomas; Pezoldt, Jörg; Jacobs, Heiko O

    2014-12-03

    In the field of sensors that target the detection of airborne analytes, corona/lens-based collection provides a new path to achieving high sensitivity. An active-matrix-based analyte collection approach referred to as an "airborne analyte memory chip/recorder" is demonstrated, which takes and stores airborne analytes in a matrix to provide an exposure history for off-site analysis. © 2014 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Case Study: Sensitivity Analysis of the Barataria Basin Barrier Shoreline Wetland Value Assessment Model

    DTIC Science & Technology

    2014-07-01

    wetlands in providing resting, foraging, breeding, and nursery habitat to a diverse assemblage of fish and wildlife species in coastal Louisiana (EWG...services, including fish and wildlife production, storm damage reduction, and recreation. Federal, state, and local partners have jointly pursued large..."habitat units" (HU) provided by a given alternative. Whereas traditional HEP models focused on specific taxa, WVA assesses the fish and wildlife community

  19. The Behavioral Approach System (BAS) Model of Vulnerability to Bipolar Disorder: Evidence of a Continuum in BAS Sensitivity across Adolescence.

    PubMed

    Liu, Richard T; Burke, Taylor A; Abramson, Lyn Y; Alloy, Lauren B

    2017-11-04

    Behavioral Approach System (BAS) sensitivity has been implicated in the development of a variety of different psychiatric disorders. Prominent among these in the empirical literature are bipolar spectrum disorders (BSDs). Given that adolescence represents a critical developmental stage of risk for the onset of BSDs, it is important to clarify the latent structure of BAS sensitivity in this period of development. A statistical approach especially well-suited for delineating the latent structure of BAS sensitivity is taxometric analysis, which is designed to evaluate whether the latent structure of a construct is taxonic (i.e., categorical) or dimensional (i.e., continuous) in nature. The current study applied three mathematically non-redundant taxometric procedures (i.e., MAMBAC, MAXEIG, and L-Mode) to a large community sample of adolescents (n = 12,494) who completed two separate measures of BAS sensitivity: the BIS/BAS Scales (Carver and White, Journal of Personality and Social Psychology, 67, 319-333, 1994) and the Sensitivity to Reward and Sensitivity to Punishment Questionnaire (Torrubia et al., Personality and Individual Differences, 31, 837-862, 2001). Given the significant developmental changes in reward sensitivity that occur across adolescence, the current investigation aimed to provide a fine-grained evaluation of the data by performing taxometric analyses at an age-by-age level (14-19 years; n for each age ≥ 883). Results derived from taxometric procedures, across all ages tested, were highly consistent, providing strong evidence that BAS sensitivity is best conceptualized as dimensional in nature. Thus, the findings suggest that BAS-related vulnerability to BSDs exists along a continuum of severity, with no natural cut-point qualitatively differentiating high- and low-risk adolescents. Clinical and research implications for the assessment of BSD-related vulnerability are discussed.

  20. Reach-scale channel sensitivity to multiple human activities and natural events: Lower Santa Clara River, California, USA

    NASA Astrophysics Data System (ADS)

    Downs, Peter W.; Dusterhoff, Scott R.; Sears, William A.

    2013-05-01

    Understanding the cumulative impact of natural and human influences on the sensitivity of channel morphodynamics, a relative measure between the drivers for change and the magnitude of channel response, requires an approach that accommodates spatial and temporal variability in the suite of primary stressors. Multiple historical data sources were assembled to provide a reach-scale analysis of the lower Santa Clara River (LSCR) in Ventura County, California, USA. Sediment supply is naturally high due to tectonic activity, earthquake-generated landslides, wildfires, and high magnitude flow events during El Niño years. Somewhat typically for the region, the catchment has been subject to four reasonably distinct land use and resource management combinations since European-American settlement. When combined with analysis of channel morphological response (quantifiable since ca. 1930), reach-scale and temporal differences in channel sensitivity become apparent. Downstream reaches have incised on average 2.4 m and become narrower by almost 50% with changes focused in a period of highly sensitive response after about 1950 followed by forced insensitivity caused by structural flood embankments and a significant grade control structure. In contrast, the middle reaches have been responsive but are morphologically resilient, and the upstream reaches show a mildly sensitive aggradational trend. Superimposing the natural and human drivers for change reveals that large scale stressors (related to ranching and irrigation) have been replaced over time by a suite of stressors operating at multiple spatial scales. Lower reaches have been sensitive primarily to 'local' scale impacts (urban growth, flood control, and aggregate mining) whereas, upstream, catchment-scale influences still prevail (including flow regulation and climate-driven sediment supply factors). These factors illustrate the complexity inherent to cumulative impact assessment in fluvial systems, provide evidence for a distinct Anthropocene fluvial response, and underscore the enormity of the challenge faced in trying to sustainably manage and restore rivers.

  1. Extended Testability Analysis Tool

    NASA Technical Reports Server (NTRS)

    Melcher, Kevin; Maul, William A.; Fulton, Christopher

    2012-01-01

    The Extended Testability Analysis (ETA) Tool is a software application that supports fault management (FM) by performing testability analyses on the fault propagation model of a given system. Fault management includes the prevention of faults through robust design margins and quality assurance methods, or the mitigation of system failures. Fault management requires an understanding of the system design and operation, potential failure mechanisms within the system, and the propagation of those potential failures through the system. The purpose of the ETA Tool software is to process the testability analysis results from a commercial software program called TEAMS Designer in order to provide a detailed set of diagnostic assessment reports. The ETA Tool is a command-line process with several user-selectable report output options. The ETA Tool also extends the COTS testability analysis and enables variation studies with sensor sensitivity impacts on system diagnostics and component isolation using a single testability output. The ETA Tool can also provide extended analyses from a single set of testability output files. The following analysis reports are available to the user: (1) the Detectability Report provides a breakdown of how each tested failure mode was detected, (2) the Test Utilization Report identifies all the failure modes that each test detects, (3) the Failure Mode Isolation Report demonstrates the system's ability to discriminate between failure modes, (4) the Component Isolation Report demonstrates the system's ability to discriminate between failure modes relative to the components containing the failure modes, (5) the Sensor Sensitivity Analysis Report shows the diagnostic impact due to loss of sensor information, and (6) the Effect Mapping Report identifies failure modes that result in specified system-level effects.

  2. Independent component analysis-based source-level hyperlink analysis for two-person neuroscience studies

    NASA Astrophysics Data System (ADS)

    Zhao, Yang; Dai, Rui-Na; Xiao, Xiang; Zhang, Zong; Duan, Lian; Li, Zheng; Zhu, Chao-Zhe

    2017-02-01

    Two-person neuroscience, a perspective in understanding human social cognition and interaction, involves designing immersive social interaction experiments as well as simultaneously recording brain activity of two or more subjects, a process termed "hyperscanning." Using newly developed imaging techniques, the interbrain connectivity or hyperlink of various types of social interaction has been revealed. Functional near-infrared spectroscopy (fNIRS)-hyperscanning provides a more naturalistic environment for experimental paradigms of social interaction and has recently drawn much attention. However, most fNIRS-hyperscanning studies have computed hyperlinks using sensor data directly while ignoring the fact that the sensor-level signals contain confounding noises, which may lead to a loss of sensitivity and specificity in hyperlink analysis. In this study, on the basis of independent component analysis (ICA), a source-level analysis framework is proposed to investigate the hyperlinks in a fNIRS two-person neuroscience study. The performance of five widely used ICA algorithms in extracting sources of interaction was compared in simulative datasets, and increased sensitivity and specificity of hyperlink analysis by our proposed method were demonstrated in both simulative and real two-person experiments.
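
    As an illustration of moving the hyperlink computation from sensors to sources, the sketch below unmixes each subject's channel data with ICA and then correlates source time courses across subjects; the component count, the ICA algorithm (scikit-learn's FastICA), and the use of plain correlation are assumptions for demonstration, not the authors' pipeline:

        import numpy as np
        from sklearn.decomposition import FastICA

        def source_level_hyperlinks(X_a, X_b, n_components=4, seed=0):
            """Cross-subject source correlations for a two-person recording.

            X_a, X_b : (n_samples, n_channels) arrays, one per subject
            Returns an (n_components, n_components) cross-correlation block.
            """
            S_a = FastICA(n_components=n_components, random_state=seed).fit_transform(X_a)
            S_b = FastICA(n_components=n_components, random_state=seed).fit_transform(X_b)
            C = np.corrcoef(S_a.T, S_b.T)            # joint correlation matrix
            return C[:n_components, n_components:]   # subject-A-by-subject-B block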

  3. Comparing efficacy of reduced-toxicity allogeneic hematopoietic cell transplantation with conventional chemo-(immuno) therapy in patients with relapsed or refractory CLL: a Markov decision analysis.

    PubMed

    Kharfan-Dabaja, M A; Pidala, J; Kumar, A; Terasawa, T; Djulbegovic, B

    2012-09-01

    Despite therapeutic advances, relapsed/refractory CLL, particularly after fludarabine-based regimens, remains a major challenge for which optimal therapy is undefined. No randomized comparative data exist to suggest the superiority of reduced-toxicity allogeneic hematopoietic cell transplantation (RT-allo-HCT) over conventional chemo-(immuno) therapy (CCIT). By using estimates from a systematic review and by meta-analysis of available published evidence, we constructed a Markov decision model to examine these competing modalities. Cohort analysis demonstrated superior outcome for RT-allo-HCT, with a 10-month overall life expectancy (and 6-month quality-adjusted life expectancy (QALE)) advantage over CCIT. Although the model was sensitive to changes in base-case assumptions and transition probabilities, RT-allo-HCT provided superior overall life expectancy through a range of values supported by the meta-analysis. QALE was superior for RT-allo-HCT compared with CCIT. This conclusion was sensitive to change in the anticipated state utility associated with the post-allogeneic HCT state; however, RT-allo-HCT remained the optimal strategy for values supported by existing literature. This analysis provides a quantitative comparison of outcomes between RT-allo-HCT and CCIT for relapsed/refractory CLL in the absence of randomized comparative trials. Confirmation of these findings requires a prospective randomized trial, which compares the most effective RT-allo-HCT and CCIT regimens for relapsed/refractory CLL.
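
    The mechanics of such a Markov cohort model can be sketched in a few lines; the three states, monthly transition probabilities, and utility weights below are illustrative placeholders, not the paper's calibrated values:

        import numpy as np

        # States: 0 = alive, progression-free; 1 = alive, progressed; 2 = dead.
        P = np.array([[0.85, 0.10, 0.05],
                      [0.00, 0.80, 0.20],
                      [0.00, 0.00, 1.00]])      # monthly transition matrix
        utility = np.array([0.80, 0.60, 0.00])  # quality weight per state

        state = np.array([1.0, 0.0, 0.0])       # cohort starts progression-free
        life = qale = 0.0
        for _ in range(120):                    # 10-year horizon, monthly cycles
            state = state @ P
            life += state[:2].sum()             # expected months alive
            qale += state @ utility             # quality-adjusted months
        print(life / 12, qale / 12)             # life expectancy and QALE in years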

  4. iTOUGH2 V6.5

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, Stefan A.

    2010-11-01

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based and second-order; local and global) are available. iTOUGH2 also performs Latin hypercube Monte Carlo simulation for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware requirements: multi-platform. Related/auxiliary software: PVM (if running in parallel).
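
    The Latin hypercube uncertainty-propagation mode mentioned above can be mimicked in a few lines; in this sketch the two-parameter toy forward model stands in for a TOUGH2 run and the parameter ranges are invented for illustration:

        import numpy as np
        from scipy.stats import qmc

        sampler = qmc.LatinHypercube(d=2, seed=42)
        unit = sampler.random(n=200)                            # stratified unit-cube sample
        params = qmc.scale(unit, [1e-14, 0.05], [1e-12, 0.35])  # permeability, porosity

        def forward_model(k, phi):
            return k / phi                                      # placeholder for the simulator

        out = forward_model(params[:, 0], params[:, 1])
        print(out.mean(), np.percentile(out, [5, 95]))          # propagated output statistics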

  5. Nanomaterials for Electrochemical Immunosensing

    PubMed Central

    Pan, Mingfei; Gu, Ying; Yun, Yaguang; Li, Min; Jin, Xincui; Wang, Shuo

    2017-01-01

    Electrochemical immunosensors resulting from a combination of the traditional immunoassay approach with modern biosensors and electrochemical analysis constitute a current research hotspot. They exhibit both the high selectivity characteristics of immunoassays and the high sensitivity of electrochemical analysis, along with other merits such as small volume, convenience, low cost, simple preparation, and real-time on-line detection, and have been widely used in the fields of environmental monitoring, medical clinical trials and food analysis. Notably, the rapid development of nanotechnology and the wide application of nanomaterials have provided new opportunities for the development of high-performance electrochemical immunosensors. Various nanomaterials with different properties can effectively solve issues such as the immobilization of biological recognition molecules, enrichment and concentration of trace analytes, and signal detection and amplification to further enhance the stability and sensitivity of the electrochemical immunoassay procedure. This review introduces the working principles and development of electrochemical immunosensors based on different signals, along with new achievements and progress related to electrochemical immunosensors in various fields. The importance of various types of nanomaterials for improving the performance of electrochemical immunosensor is also reviewed to provide a theoretical basis and guidance for the further development and application of nanomaterials in electrochemical immunosensors. PMID:28475158

  6. Some aspects of analytical chemistry as applied to water quality assurance techniques for reclaimed water: The potential use of X-ray fluorescence spectrometry for automated on-line fast real-time simultaneous multi-component analysis of inorganic pollutants in reclaimed water

    NASA Technical Reports Server (NTRS)

    Ling, A. C.; Macpherson, L. H.; Rey, M.

    1981-01-01

    The potential use of isotopically excited energy dispersive X-ray fluorescence (XRF) spectrometry for automated on-line fast real-time (5 to 15 minutes) simultaneous multicomponent (up to 20) trace (1 to 10 parts per billion) analysis of inorganic pollutants in reclaimed water was examined. Three anionic elements (chromium(VI), arsenic and selenium) were studied. The inherent lack of sensitivity of XRF spectrometry for these elements mandates the use of a preconcentration technique, and various methods were examined, including several direct and indirect evaporation methods; ion exchange membranes; selective and nonselective precipitation; and complexation processes. It is shown that XRF spectrometry itself is well suited for automated on-line quality assurance, and can provide a nondestructive (thus allowing sample storage and repeat analysis) and particularly convenient analytical method. Further, the use of an isotopically excited energy dispersive unit (50 mCi Cd-109 source) coupled with a suitable preconcentration process can provide sufficient sensitivity to achieve the currently mandated minimum levels of detection without the need for high-power X-ray generating tubes.

  7. Targeted Quantitation of Proteins by Mass Spectrometry

    PubMed Central

    2013-01-01

    Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement. PMID:23517332

  8. Targeted quantitation of proteins by mass spectrometry.

    PubMed

    Liebler, Daniel C; Zimmerman, Lisa J

    2013-06-04

    Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement.

  9. Additional EIPC Study Analysis: Interim Report on High Priority Topics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Stanton W

    Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.

  10. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example, for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim to stimulate discussion within the community of SA developers and users regarding the setting of good practices and the definition of priorities for future research.

  11. High Sensitivity Analysis of Nanoliter Volumes of Volatile and Nonvolatile Compounds using Matrix Assisted Ionization (MAI) Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Hoang, Khoa; Pophristic, Milan; Horan, Andrew J.; Johnston, Murray V.; McEwen, Charles N.

    2016-10-01

    First results are reported using a simple, fast, and reproducible matrix-assisted ionization (MAI) sample introduction method that provides substantial improvements relative to previously published MAI methods. The sensitivity of the new MAI method, which requires no laser, high voltage, or nebulizing gas, is comparable to that reported for MALDI-TOF and n-ESI. High-resolution full-acquisition mass spectra having low chemical background are acquired from low nanoliters of solution using only a few femtomoles of analyte. The limit of detection for angiotensin II is less than 50 amol on an Orbitrap Exactive mass spectrometer. Analysis of peptides, including a bovine serum albumin digest, and drugs, including drugs in urine without a purification step, is reported using a 1 μL zero-dead-volume syringe in which only the analyte solution wetting the walls of the syringe needle is used in the analysis.

  12. Optimal replenishment and credit policy in supply chain inventory model under two levels of trade credit with time- and credit-sensitive demand involving default risk

    NASA Astrophysics Data System (ADS)

    Mahata, Puspita; Mahata, Gour Chandra; Kumar De, Sujit

    2018-03-01

    Traditional supply chain inventory models with trade credit usually assume only that the up-stream suppliers offer the down-stream retailers a fixed credit period. In practice, however, retailers also provide a credit period to customers to promote market competition. In this paper, we formulate an optimal supply chain inventory model under a two-level trade credit policy with default risk consideration. Here, the demand is assumed to be credit-sensitive and an increasing function of time. The major objective is to determine the retailer's optimal credit period and cycle time such that the total profit per unit time is maximized. The existence and uniqueness of the optimal solution to the presented model are examined, and an easy method is also shown to find the optimal inventory policies of the considered problem. Finally, numerical examples and a sensitivity analysis are presented to illustrate the developed model and to provide some managerial insights.
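
    The optimization the paper performs analytically can be imitated numerically; in the sketch below the profit-rate function is a hypothetical stand-in (credit period m stimulates demand but adds default-risk losses; T is the replenishment cycle), not the paper's model:

        import numpy as np
        from scipy.optimize import minimize

        def neg_profit(x, a=100.0, b=2.0, c=0.5, h=0.4, K=50.0, r=0.08):
            m, T = x
            demand = a * np.exp(b * m - c * m**2)   # credit-sensitive demand rate
            revenue = demand * (1 - r * m)          # default-risk discount on sales
            costs = K / T + h * demand * T / 2      # ordering + holding cost per unit time
            return -(revenue - costs)               # negate: minimizing = maximizing profit

        res = minimize(neg_profit, x0=[0.5, 1.0],
                       bounds=[(0.0, 2.0), (0.1, 5.0)])
        print(res.x)                                # optimal (m*, T*)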

  13. Image analysis of neuropsychological test responses

    NASA Astrophysics Data System (ADS)

    Smith, Stephen L.; Hiller, Darren L.

    1996-04-01

    This paper reports recent advances in the development of an automated approach to neuropsychological testing. High-performance image analysis algorithms have been developed as part of a convenient and non-invasive computer-based system to provide an objective assessment of patient responses to figure-copying tests. Tests of this type are important in determining the neurological function of patients following stroke through evaluation of their visuo-spatial performance. Many conventional neuropsychological tests suffer from the serious drawback that subjective judgement on the part of the tester is required in the measurement of the patient's response, which leads to a qualitative neuropsychological assessment that can be both inconsistent and inaccurate. Results for this automated approach are presented for three clinical populations: patients suffering right-hemisphere stroke are compared with adults with no known neurological disorder, and a population of normal 11-year-old school children is presented to demonstrate the sensitivity of the technique. As well as providing a more reliable and consistent diagnosis, this technique is sufficiently sensitive to monitor a patient's progress over a period of time and will provide the neuropsychologist with a practical means of evaluating the effectiveness of therapy or medication administered as part of a rehabilitation program.

  14. Ecosystem Services and Climate Change Considerations for ...

    EPA Pesticide Factsheets

    Freshwater habitats provide fishable, swimmable and drinkable resources and are a nexus of geophysical and biological processes. These processes in turn influence the persistence and sustainability of populations, communities and ecosystems. Climate change and land-use change encompass numerous stressors of potential exposure, including the introduction of toxic contaminants, invasive species, and disease in addition to physical drivers such as temperature and hydrologic regime. A systems approach that includes the scientific and technologic basis of assessing the health of ecosystems is needed to effectively protect human health and the environment. The Integrated Environmental Modeling Framework “iemWatersheds” has been developed as a consistent and coherent means of forecasting the cumulative impact of co-occurring stressors. The Framework consists of three facilitating technologies: Data for Environmental Modeling (D4EM) that automates the collection and standardization of input data; the Framework for Risk Assessment of Multimedia Environmental Systems (FRAMES) that manages the flow of information between linked models; and the Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE) that provides post-processing and analysis of model outputs, including uncertainty and sensitivity analysis. Five models are linked within the Framework to provide multimedia simulation capabilities for hydrology and water quality processes: the Soil Water

  15. A generalized procedure for analyzing sustained and dynamic vocal fold vibrations from laryngeal high-speed videos using phonovibrograms.

    PubMed

    Unger, Jakob; Schuster, Maria; Hecker, Dietmar J; Schick, Bernhard; Lohscheller, Jörg

    2016-01-01

    This work presents a computer-based approach to analyze the two-dimensional vocal fold dynamics of endoscopic high-speed videos, and constitutes an extension and generalization of a previously proposed wavelet-based procedure. While most approaches aim at analyzing sustained phonation conditions, the proposed method allows for a clinically adequate analysis of both dynamic and sustained phonation paradigms. The analysis procedure is based on a spatio-temporal visualization technique, the phonovibrogram, that facilitates the documentation of the visible laryngeal dynamics. From the phonovibrogram, a low-dimensional set of features is computed using a principal component analysis strategy that quantifies the type of vibration patterns, irregularity, lateral symmetry and synchronicity, as a function of time. Two different test bench data sets are used to validate the approach: (I) 150 healthy and pathologic subjects examined during sustained phonation. (II) 20 healthy and pathologic subjects that were examined twice: during sustained phonation and a glissando from a low to a higher fundamental frequency. In order to assess the discriminative power of the extracted features, a Support Vector Machine is trained to distinguish between physiologic and pathologic vibrations. The results for sustained phonation sequences are compared to the previous approach. Finally, the classification performance of the stationary analyzing procedure is compared to the transient analysis of the glissando maneuver. For the first test bench the proposed procedure outperformed the previous approach (proposed feature set: accuracy: 91.3%, sensitivity: 80%, specificity: 97%, previous approach: accuracy: 89.3%, sensitivity: 76%, specificity: 96%). Comparing the classification performance of the second test bench further corroborates that analyzing transient paradigms provides clear additional diagnostic value (glissando maneuver: accuracy: 90%, sensitivity: 100%, specificity: 80%, sustained phonation: accuracy: 75%, sensitivity: 80%, specificity: 70%). The incorporation of parameters describing the temporal evolvement of vocal fold vibration clearly improves the automatic identification of pathologic vibration patterns. Furthermore, incorporating a dynamic phonation paradigm provides additional valuable information about the underlying laryngeal dynamics that cannot be derived from sustained conditions. The proposed generalized approach provides a better overall classification performance than the previous approach, and hence constitutes a new advantageous tool for an improved clinical diagnosis of voice disorders. Copyright © 2015 Elsevier B.V. All rights reserved.
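
    The classification stage described above can be prototyped with scikit-learn: reduce the feature set with a principal component analysis and train a Support Vector Machine, then report accuracy, sensitivity and specificity. Random data stands in for the phonovibrogram features, and library availability is assumed:

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import confusion_matrix

        rng = np.random.default_rng(0)
        # Stand-in for per-subject phonovibrogram feature vectors (150 subjects, 40 features);
        # the informative features are given larger variance so the unsupervised PCA keeps them.
        X = rng.normal(size=(150, 40))
        X[:, :5] *= 2.0
        y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=150) > 0).astype(int)  # 1 = pathologic

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf", C=1.0))
        clf.fit(X_tr, y_tr)

        tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
        print(f"accuracy={(tp + tn) / (tp + tn + fp + fn):.2f}  "
              f"sensitivity={tp / (tp + fn):.2f}  specificity={tn / (tn + fp):.2f}")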

  16. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability

    PubMed Central

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P.; Kumar, G. Manoj

    2012-01-01

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g. quality assurance and process monitoring. Specifically, variability in sample, system and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a non-linear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), due to its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples as well as in related areas of forensic and biological sample analysis. PMID:22292496

  17. Incorporation of support vector machines in the LIBS toolbox for sensitive and robust classification amidst unexpected sample and system variability.

    PubMed

    Dingari, Narahara Chari; Barman, Ishan; Myakalwar, Ashwin Kumar; Tewari, Surya P; Kumar Gundawar, Manoj

    2012-03-20

    Despite the intrinsic elemental analysis capability and lack of sample preparation requirements, laser-induced breakdown spectroscopy (LIBS) has not been extensively used for real-world applications, e.g., quality assurance and process monitoring. Specifically, variability in sample, system, and experimental parameters in LIBS studies presents a substantive hurdle for robust classification, even when standard multivariate chemometric techniques are used for analysis. Considering pharmaceutical sample investigation as an example, we propose the use of support vector machines (SVM) as a nonlinear classification method over conventional linear techniques such as soft independent modeling of class analogy (SIMCA) and partial least-squares discriminant analysis (PLS-DA) for discrimination based on LIBS measurements. Using over-the-counter pharmaceutical samples, we demonstrate that the application of SVM enables statistically significant improvements in prospective classification accuracy (sensitivity), because of its ability to address variability in LIBS sample ablation and plasma self-absorption behavior. Furthermore, our results reveal that SVM provides nearly 10% improvement in correct allocation rate and a concomitant reduction in misclassification rates of 75% (cf. PLS-DA) and 80% (cf. SIMCA) when measurements from samples not included in the training set are incorporated in the test data, highlighting its robustness. While further studies on a wider matrix of sample types performed using different LIBS systems are needed to fully characterize the capability of SVM to provide superior predictions, we anticipate that the improved sensitivity and robustness observed here will facilitate application of the proposed LIBS-SVM toolbox for screening drugs and detecting counterfeit samples, as well as in related areas of forensic and biological sample analysis.
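
    To see why a nonlinear SVM can outperform linear chemometric classifiers when variability bends the class boundary, the toy sketch below compares an RBF-kernel SVM against a linear discriminant on synthetic data with a deliberately nonlinear class structure (a stand-in for LIBS spectra; PLS-DA and SIMCA themselves are not reimplemented here):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 400
        X = rng.normal(size=(n, 20))
        # Class membership depends nonlinearly on the features, loosely mimicking
        # ablation and self-absorption variability that bends the class boundary.
        y = (np.sum(X[:, :3] ** 2, axis=1) > 3.0).astype(int)

        for name, clf in [("LDA (linear)", LinearDiscriminantAnalysis()),
                          ("SVM (RBF)", SVC(kernel="rbf", gamma="scale"))]:
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name}: mean CV accuracy = {acc:.2f}")

    On this kind of data the linear discriminant hovers near chance while the RBF SVM recovers the curved boundary, which is the qualitative effect the abstract reports.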

  18. Single-pair fluorescence resonance energy transfer analysis of mRNA transcripts for highly sensitive gene expression profiling in near real time.

    PubMed

    Peng, Zhiyong; Young, Brandon; Baird, Alison E; Soper, Steven A

    2013-08-20

    Expression analysis of mRNAs transcribed from certain genes can be used as important sources of biomarkers for in vitro diagnostics. While the use of reverse transcription quantitative PCR (RT-qPCR) can provide excellent analytical sensitivity for monitoring transcript numbers, more sensitive approaches for expression analysis that can report results in near real-time are needed for many critical applications. We report a novel assay that can provide exquisite limits-of-quantitation and consists of reverse transcription (RT) followed by a ligase detection reaction (LDR) with single-pair fluorescence resonance energy transfer (spFRET) to provide digital readout through molecular counting. For this assay, no PCR was employed, which enabled short assay turnaround times. To facilitate implementation of the assay, a cyclic olefin copolymer (COC) microchip, which was fabricated using hot embossing, was employed to carry out the LDR in a continuous flow format with online single-molecule detection following the LDR. As demonstrators of the assay's utility, MMP-7 mRNA was expression profiled from several colorectal cancer cell lines. It was found that the RT-LDR/spFRET assay produced highly linear calibration plots even in the low copy number regime. Comparison to RT-qPCR indicated a better linearity over the low copy number range investigated (10-10,000 copies) with an R² = 0.9995 for RT-LDR/spFRET and R² = 0.98 for RT-qPCR. In addition, differentiating between copy numbers of 10 and 50 could be performed with higher confidence using RT-LDR/spFRET. To demonstrate the short assay turnaround times obtainable using the RT-LDR/spFRET assay, a two-thermal-cycle LDR was carried out on amphiphysin gene transcripts that can serve as important diagnostic markers for ischemic stroke. The ability to supply diagnostic information on possible stroke events in short turnaround times using RT-LDR/spFRET will enable clinicians to treat patients effectively with appropriate time-sensitive therapeutics.
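
    The linearity comparison reported above amounts to fitting a regression through a calibration series spanning several decades and inspecting R²; a minimal version, with invented counts standing in for the measured spFRET events, looks like this:

        import numpy as np
        from scipy import stats

        copies = np.array([10, 50, 100, 500, 1000, 5000, 10000])   # input transcript copies
        counts = np.array([12, 55, 108, 470, 1050, 4900, 10400])    # hypothetical event counts

        # Fit in log-log space, as is common for calibrations spanning several decades
        res = stats.linregress(np.log10(copies), np.log10(counts))
        print(f"slope = {res.slope:.3f}, R^2 = {res.rvalue**2:.4f}")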

  19. Single-Pair Fret Analysis of mRNA Transcripts for Highly Sensitive Gene Expression Profiling in Near Real Time

    PubMed Central

    Peng, Zhiyong; Young, Brandon; Baird, Alison E.; Soper, Steven A.

    2013-01-01

    Expression analysis of mRNAs transcribed from certain genes can be used as important sources of biomarkers for in vitro diagnostics. While the use of reverse transcription quantitative PCR (RT-qPCR) can provide excellent analytical sensitivity for monitoring transcript numbers, more sensitive approaches for expression analysis that can report results in near real-time are needed for many critical applications. We report a novel assay that can provide exquisite limits-of-quantitation and consists of reverse transcription (RT) followed by a ligase detection reaction (LDR) with single-pair fluorescence resonance energy transfer (spFRET) to provide digital readout through molecular counting. For this assay, no PCR was employed, which enabled short assay turnaround times. To facilitate implementation of the assay, a cyclic olefin copolymer (COC) microchip, which was fabricated using hot embossing, was employed to carry out the LDR in a continuous flow format with on-line single-molecule detection following the LDR. As demonstrators of the assay's utility, MMP-7 mRNA was expression profiled from several colorectal cancer cell lines. It was found that the RT-LDR/spFRET assay produced highly linear calibration plots even in the low copy number regime. Comparison to RT-qPCR indicated a better linearity over the low copy number range investigated (10-10,000 copies) with an R² = 0.9995 for RT-LDR/spFRET and R² = 0.98 for RT-qPCR. In addition, differentiating between copy numbers of 10 and 50 could be performed with higher confidence using RT-LDR/spFRET. To demonstrate the short assay turnaround times obtainable using the RT-LDR/spFRET assay, a two-thermal-cycle LDR was carried out on amphiphysin gene transcripts that can serve as important diagnostic markers for ischemic stroke. The ability to supply diagnostic information on possible stroke events in short turnaround times using RT-LDR/spFRET will enable clinicians to treat patients effectively with appropriate time-sensitive therapeutics. PMID:23869556

  20. Mild extraction methods using aqueous glucose solution for the analysis of natural dyes in textile artefacts dyed with Dyer's madder (Rubia tinctorum L.).

    PubMed

    Ford, Lauren; Henderson, Robert L; Rayner, Christopher M; Blackburn, Richard S

    2017-03-03

    Madder (Rubia tinctorum L.) has been widely used as a red dye throughout history. Acid-sensitive colorants present in madder, such as glycosides (lucidin primeveroside, ruberythric acid, galiosin) and sensitive aglycons (lucidin), are degraded in the textile back extraction process; in previous literature these sensitive molecules are either absent or present in only low concentrations due to the use of acid in typical textile back extraction processes. Anthraquinone aglycons alizarin and purpurin are usually identified in analysis following harsh back extraction methods, such as those using solvent mixtures with concentrated hydrochloric acid at high temperatures. Use of softer extraction techniques allows dye components present in madder to be extracted without degradation, potentially providing more information about the original dye profile, which varies significantly between madder varieties, species and dyeing techniques. Herein, a softer extraction method involving aqueous glucose solution was developed and compared to other back extraction techniques on wool dyed with root extract from different varieties of Rubia tinctorum. Efficiencies of the extraction methods were analysed by HPLC coupled with diode array detection. Acidic literature methods were evaluated; they generally caused hydrolysis and degradation of the dye components, with alizarin, lucidin, and purpurin being the main compounds extracted. In contrast, extraction in aqueous glucose solution provides a highly effective method for extraction of madder-dyed wool and is shown to efficiently extract lucidin primeveroside and ruberythric acid without causing hydrolysis, and also to extract aglycons that are present due to hydrolysis during processing of the plant material. Glucose solution is a favourable extraction medium due to its ability to form extensive hydrogen bonding with glycosides present in madder and displace them from the fibre. This new glucose method offers an efficient process that preserves these sensitive molecules and is a step-change in the analysis of madder-dyed textiles, as it can provide further information about historical dye preparation and dyeing processes that current methods cannot. The method also efficiently extracts glycosides in artificially aged samples, making it applicable to museum textile artefacts. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Novel methods of time-resolved fluorescence data analysis for in-vivo tissue characterization: application to atherosclerosis.

    PubMed

    Jo, J A; Fang, Q; Papaioannou, T; Qiao, J H; Fishbein, M C; Dorafshar, A; Reil, T; Baker, D; Freischlag, J; Marcu, L

    2004-01-01

    This study investigates the ability of new analytical methods for time-resolved laser-induced fluorescence spectroscopy (TR-LIFS) data to characterize tissue in-vivo, such as the composition of atherosclerotic vulnerable plaques. A total of 73 TR-LIFS measurements were taken in-vivo from the aorta of 8 rabbits, and subsequently analyzed using the Laguerre deconvolution technique. The investigated spots were classified as normal aorta, thin or thick lesions, and lesions rich in either collagen or macrophages/foam cells. Different linear and nonlinear classification algorithms (linear discriminant analysis, stepwise linear discriminant analysis, principal component analysis, and feedforward neural networks) were developed using spectral and TR features (ratios of intensity values and Laguerre expansion coefficients, respectively). Normal intima and thin lesions were discriminated from thick lesions (sensitivity >90%, specificity 100%) using only spectral features. However, both spectral and time-resolved features were necessary to discriminate thick lesions rich in collagen from thick lesions rich in foam cells (sensitivity >85%, specificity >93%), and thin lesions rich in foam cells from normal aorta and thin lesions rich in collagen (sensitivity >85%, specificity >94%). Based on these findings, we believe that TR-LIFS information derived from the Laguerre expansion coefficients can provide a valuable additional dimension for in-vivo tissue characterization.
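
    The Laguerre expansion idea, representing a fluorescence decay as a short expansion on an orthonormal Laguerre basis and using the coefficients as classification features, can be sketched as follows. This is a continuous-time simplification with invented decay and scale parameters; the published method uses discrete-time Laguerre functions and additionally deconvolves the instrument response:

        import numpy as np
        from scipy.special import eval_laguerre
        from scipy.linalg import lstsq

        t = np.linspace(0.0, 50.0, 500)   # time axis, ns (illustrative)
        alpha = 0.2                        # Laguerre scale parameter (illustrative)

        # Orthonormal Laguerre functions on [0, inf): l_n(t) = sqrt(2a) e^{-a t} L_n(2 a t)
        basis = np.stack([np.sqrt(2 * alpha) * np.exp(-alpha * t) * eval_laguerre(n, 2 * alpha * t)
                          for n in range(6)], axis=1)

        rng = np.random.default_rng(2)
        decay = np.exp(-t / 5.0) + 0.02 * rng.normal(size=t.size)   # synthetic measured decay
        coeffs, *_ = lstsq(basis, decay)   # least-squares expansion coefficients
        fit = basis @ coeffs               # reconstructed decay from 6 coefficients
        print("expansion coefficients:", np.round(coeffs, 3))

    A handful of such coefficients summarizes the whole decay shape, which is what makes them convenient inputs to the discriminant and neural-network classifiers mentioned above.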

  2. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Chromatographic-ICPMS methods for trace element and isotope analysis of water and biogenic calcite

    NASA Astrophysics Data System (ADS)

    Klinkhammer, G. P.; Haley, B. A.; McManus, J.; Palmer, M. R.

    2003-04-01

    ICP-MS is a powerful technique because of its sensitivity and speed of analysis. This is especially true for refractory elements that are notoriously difficult to analyze using TIMS and less energetic techniques. However, as ICP-MS instruments become more sensitive to elements of interest they also become more sensitive to interference. This becomes a pressing issue when analyzing samples with high total dissolved solids. This paper describes two trace element methods that overcome these problems by using chromatographic techniques to precondition samples prior to analysis by ICP-MS: separation of rare earth elements (REEs) from seawater using HPLC-ICPMS, and flow-through dissolution of foraminiferal calcite. Using HPLC in combination with ICP-MS it is possible to isolate the REEs from the matrix, other transition elements, and each other. This method has been developed for small-volume samples (5 ml), making it possible to analyze sediment pore waters. As another example, subjecting foram shells to flow-through reagent addition followed by time-resolved analysis in the ICP-MS allows for their systematic cleaning and dissolution. This method provides information about the relationship between dissolution tendency and elemental composition. Flow-through is also amenable to automation, thus yielding the high sample throughput required for paleoceanography, and produces a highly resolved elemental matrix that can be statistically analyzed.

  4. Polysialylated N-Glycans Identified in Human Serum Through Combined Developments in Sample Preparation, Separations and Electrospray ionization-mass spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kronewitter, Scott R.; Marginean, Ioan; Cox, Jonathan T.

    The N-glycan diversity of human serum glycoproteins, i.e. the human blood serum N-glycome, is complex due to the range of glycan structures potentially synthesizable by human glycosylation enzymes. The reported glycome, however, is limited by methods of sample preparation, available analytical platforms, e.g., based upon electrospray ionization-mass spectrometry (ESI-MS), and software tools for data analysis. In this report, several improvements have been implemented in sample preparation and analysis to extend ESI-MS glycan characterization and to provide an improved view of glycan diversity. Sample preparation improvements, including acidified, microwave-accelerated PNGase F N-glycan release and sodium borohydride reduction, were optimized to improve quantitative yields and conserve the number of glycoforms detected. Two-stage desalting (during solid phase extraction and on the analytical column) increased the sensitivity by reducing analyte signal division between multiple reducing-end forms or cation adducts. On-line separations were improved by using extended-length graphitized carbon columns and adding TFA as an acid modifier to a formic acid/reversed-phase gradient, which provides additional resolving power and significantly improved desorption of both large and heavily sialylated glycans. To improve MS sensitivity and provide gentler ionization conditions at the source-MS interface, subambient pressure ionization with nanoelectrospray (SPIN) has been utilized. When these method improvements are combined with the recently described Glycomics Quintavariate Informed Quantification (GlyQ-IQ), they significantly extend glycan detection sensitivity and provide expanded glycan coverage. We demonstrate application of these advances in the context of the human serum glycome, for which our initial observations include detection of a new class of heavily sialylated N-glycans, including polysialylated N-glycans.

  5. Financial analysis of cardiovascular wellness program provided to self-insured company from pharmaceutical care provider's perspective.

    PubMed

    Wilson, Justin B; Osterhaus, Matt C; Farris, Karen B; Doucette, William R; Currie, Jay D; Bullock, Tammy; Kumbera, Patty

    2005-01-01

    To perform a retrospective financial analysis on the implementation of a self-insured company's wellness program from the pharmaceutical care provider's perspective and conduct sensitivity analyses to estimate costs versus revenues for pharmacies without resident pharmacists, program implementation for a second employer, the second year of the program, and a range of pharmacist wages. Cost-benefit and sensitivity analyses. Self-insured employer with headquarters in Canton, N.C. 36 employees at a facility in Clinton, Iowa. Pharmacist-provided cardiovascular wellness program. Costs and revenues collected from pharmacy records, including pharmacy purchasing records, billing records, and pharmacists' time estimates. All costs and revenues were calculated for the development and first year of the intervention program. Costs included initial and follow-up screening supplies, office supplies, screening/group presentation time, service provision time, documentation/preparation time, travel expenses, claims submission time, and administrative fees. Revenues included initial screening revenues, follow-up screening revenues, group session revenues, and Heart Smart program revenues. For the development and first year of Heart Smart, net benefit to the pharmacy (revenues minus costs) amounted to $2,413. All sensitivity analyses showed a net benefit. For pharmacies without a resident pharmacist, the net benefit was $106; for Heart Smart in a second employer, the net benefit was $6,024; for the second year, the projected net benefit was $6,844; factoring in a lower pharmacist salary, the net benefit was $2,905; and for a higher pharmacist salary, the net benefit was $1,265. For the development and first year of Heart Smart, the revenues of the wellness program in a self-insured company outweighed the costs.

  6. Pressure sensitivity of low permeability sandstones

    USGS Publications Warehouse

    Kilmer, N.H.; Morrow, N.R.; Pitman, Janet K.

    1987-01-01

    Detailed core analysis has been carried out on 32 tight sandstones with permeabilities ranging over four orders of magnitude (0.0002 to 4.8 mD at 5000 psi confining pressure). Relationships between gas permeability and net confining pressure were measured for cycles of loading and unloading. For some samples, permeabilities were measured both along and across bedding planes. Large variations in stress sensitivity of permeability were observed from one sample to another. The ratio of permeability at a nominal confining pressure of 500 psi to that at 5000 psi was used to define a stress sensitivity ratio. For a given sample, confining pressure vs permeability followed a linear log-log relationship, the slope of which provided an index of pressure sensitivity. This index, as obtained for first unloading data, was used in testing relationships between stress sensitivity and other measured rock properties. Pressure sensitivity tended to increase with increase in carbonate content and depth, and with decrease in porosity, permeability and sodium feldspar. However, scatter in these relationships increased as permeability decreased. Tests for correlations between pressure sensitivity and various linear combinations of variables are reported. Details of pore structure related to diagenetic changes appear to be of much greater significance to pressure sensitivity than mineral composition. © 1987.
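
    The pressure-sensitivity index described above is simply the slope of a straight-line fit in log-log space; a minimal sketch with hypothetical unloading data:

        import numpy as np

        p = np.array([500.0, 1000.0, 2000.0, 3500.0, 5000.0])   # confining pressure, psi
        k = np.array([0.020, 0.011, 0.0060, 0.0035, 0.0024])     # permeability, mD (hypothetical)

        slope, intercept = np.polyfit(np.log10(p), np.log10(k), 1)
        ratio = k[0] / k[-1]   # stress sensitivity ratio, 500 psi vs 5000 psi
        print(f"pressure-sensitivity index (log-log slope) = {slope:.2f}, ratio = {ratio:.1f}")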

  7. Diagnostic Performance of (18)F-Fluorodeoxyglucose in 162 Small Pulmonary Nodules Incidentally Detected in Subjects Without a History of Malignancy.

    PubMed

    Calcagni, Maria Lucia; Taralli, Silvia; Cardillo, Giuseppe; Graziano, Paolo; Ialongo, Pasquale; Mattoli, Maria Vittoria; Di Franco, Davide; Caldarella, Carmelo; Carleo, Francesco; Indovina, Luca; Giordano, Alessandro

    2016-04-01

    Solitary pulmonary nodule (SPN) still represents a diagnostic challenge. The aim of our study was to evaluate the diagnostic performance of (18)F-fluorodeoxyglucose positron emission tomography-computed tomography in one of the largest samples of small SPNs, incidentally detected in subjects without a history of malignancy (nonscreening population) and undetermined at computed tomography. One hundred and sixty-two small (>0.8 to 1.5 cm) and, for comparison, 206 large nodules (>1.5 to 3 cm) were retrospectively evaluated. The diagnostic performance of (18)F-fluorodeoxyglucose visual analysis, receiver-operating characteristic (ROC) analysis for maximum standardized uptake value (SUVmax), and Bayesian analysis was assessed using histology or radiological follow-up as the gold standard. In 162 small nodules, (18)F-fluorodeoxyglucose visual and ROC analyses (SUVmax = 1.3) provided 72.6% and 77.4% sensitivity and 88.0% and 82.0% specificity, respectively. The prevalence of malignancy was 38%; Bayesian analysis provided 78.8% positive and 16.0% negative posttest probabilities of malignancy. In 206 large nodules, (18)F-fluorodeoxyglucose visual and ROC analyses (SUVmax = 1.9) provided 89.5% and 85.1% sensitivity and 70.8% and 79.2% specificity, respectively. The prevalence of malignancy was 65%; Bayesian analysis provided 85.0% positive and 21.6% negative posttest probabilities of malignancy. In both groups, malignant nodules had a significantly higher SUVmax (p < 0.0001) than benign nodules. Only in the small group were malignant nodules significantly larger (p = 0.0054) than benign ones. (18)F-fluorodeoxyglucose can be clinically relevant to rule in and rule out malignancy in undetermined small SPNs, incidentally detected in a nonscreening population with intermediate pretest probability of malignancy, as well as in larger ones. Visual analysis can be considered an optimal diagnostic criterion, adequately detecting a wide range of malignant nodules with different metabolic activity. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
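
    The Bayesian step follows directly from pooled sensitivity, specificity and prevalence; the short sketch below reproduces the small-nodule figures quoted above (visual-analysis sensitivity 72.6%, specificity 88.0%, prevalence 38%):

        def posttest(sens, spec, prev):
            """Post-test probabilities of malignancy given a positive or negative test."""
            p_pos = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
            p_neg = (1 - sens) * prev / ((1 - sens) * prev + spec * (1 - prev))
            return p_pos, p_neg

        # Small nodules, visual analysis: sensitivity 72.6%, specificity 88.0%, prevalence 38%
        p_pos, p_neg = posttest(0.726, 0.880, 0.38)
        print(f"positive post-test probability = {p_pos:.1%}")   # ~78.8%, as reported
        print(f"negative post-test probability = {p_neg:.1%}")   # ~16.0%, as reported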

  8. Salmonella testing of pooled pre-enrichment broth cultures for screening multiple food samples.

    PubMed

    Price, W R; Olsen, R A; Hunter, J E

    1972-04-01

    A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.
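
    The capacity gain from pooling can be quantified with classic two-stage (Dorfman) group-testing arithmetic; retaining portions of each pre-enrichment broth for follow-up corresponds to the retest stage. The sketch below is illustrative, with hypothetical prevalence and pool sizes, and is not a calculation from the paper:

        def expected_tests_per_sample(prevalence, pool_size):
            """Two-stage pooling: one pooled test, plus individual retests if the pool is positive."""
            p_pool_positive = 1.0 - (1.0 - prevalence) ** pool_size
            return 1.0 / pool_size + p_pool_positive

        for k in (5, 10, 25):
            print(f"pool of {k:2d}: {expected_tests_per_sample(0.01, k):.3f} tests per sample")

    At 1% prevalence, a pool of 25 needs roughly 0.26 tests per sample, nearly a fourfold capacity increase when most samples are negative, which is the regime the abstract describes.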

  9. A methodology to estimate uncertainty for emission projections through sensitivity analysis.

    PubMed

    Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación

    2015-04-01

    Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As emission projection uncertainties are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically over the 12 highest-emitting sectors regarding greenhouse gas and air pollutant emissions. Examples of the methodology's application for two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend in a low economic-growth perspective and official figures for 2010, showing a very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they can be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
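
    The heart of the nonstatistical approach, perturbing the driving factors of each sector and taking the envelope of the resulting projections as an uncertainty band, can be sketched as follows (the sector, driver trend and growth rates are invented for illustration):

        import numpy as np

        years = np.arange(2015, 2021)
        emission_factor = 0.8   # t pollutant per unit of activity (hypothetical)

        scenarios = []
        for growth in (1.00, 1.01, 1.02, 1.03, 1.04):       # perturbed driving-factor growth rates
            activity = 100.0 * growth ** (years - years[0])  # hypothetical sector activity driver
            scenarios.append(activity * emission_factor)

        scenarios = np.array(scenarios)
        lower, upper = scenarios.min(axis=0), scenarios.max(axis=0)  # envelope = uncertainty band
        for y, lo, hi in zip(years, lower, upper):
            print(f"{y}: {lo:.1f} to {hi:.1f} t")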

  10. Highly sensitive index of sympathetic activity based on time-frequency spectral analysis of electrodermal activity.

    PubMed

    Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Álvaro D; Chon, Ki H

    2016-09-01

    Time-domain indices of electrodermal activity (EDA) have been used as a marker of sympathetic tone. However, they often show high variation between subjects and low consistency, which has precluded their general use as a marker of sympathetic tone. To examine whether power spectral density analysis of EDA can provide more consistent results, we recently performed a variety of sympathetic tone-evoking experiments (43). We found a significant increase in spectral power in the frequency range of 0.045 to 0.25 Hz when sympathetic tone-evoking stimuli were induced. The sympathetic tone assessed by the power spectral density of EDA was found to have lower variation and more sensitivity for certain, but not all, stimuli compared with the time-domain analysis of EDA. We surmise that this lack of sensitivity in certain sympathetic tone-inducing conditions with time-invariant spectral analysis of EDA may lie in its inability to characterize time-varying dynamics of the sympathetic tone. To overcome the disadvantages of time-domain and time-invariant power spectral indices of EDA, we developed a highly sensitive index of sympathetic tone, based on time-frequency analysis of EDA signals. Its efficacy was tested using experiments designed to elicit sympathetic dynamics. Twelve subjects underwent four tests known to elicit sympathetic tone arousal: cold pressor, tilt table, stand test, and the Stroop task. We hypothesized that a more sensitive measure of sympathetic control can be developed using time-varying spectral analysis. Variable frequency complex demodulation, a recently developed technique for time-frequency analysis, was used to obtain spectral amplitudes associated with EDA. We found that the time-varying spectral frequency band 0.08-0.24 Hz was most responsive to stimulation. Spectral power at frequencies higher than 0.24 Hz was determined to be unrelated to sympathetic dynamics, as it comprised less than 5% of the total power. The mean value of the time-varying spectral amplitudes in the frequency band 0.08-0.24 Hz was used as the index of sympathetic tone, termed TVSymp. TVSymp was found to be overall the most sensitive to the stimuli, as evidenced by a low coefficient of variation (0.54) and higher consistency (intra-class correlation, 0.96), sensitivity (Youden's index > 0.75), and area under the receiver operating characteristic (ROC) curve (>0.8, accuracy > 0.88) compared with time-domain and time-invariant spectral indices, including heart rate variability. Copyright © 2016 the American Physiological Society.
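
    A simplified stand-in for the time-varying index: band-pass the EDA signal to the 0.08-0.24 Hz sympathetic band, track the instantaneous amplitude with a Hilbert envelope, and average it. The published work uses variable frequency complex demodulation rather than this bandpass-Hilbert surrogate, and the signal below is synthetic:

        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 8.0   # EDA sampling rate, Hz (illustrative)
        t = np.arange(0, 300, 1 / fs)
        rng = np.random.default_rng(3)
        # Synthetic EDA: tonic level plus a 0.12 Hz oscillation inside the band of interest
        eda = 2.0 + 0.3 * np.sin(2 * np.pi * 0.12 * t) + 0.05 * rng.normal(size=t.size)

        b, a = butter(3, [0.08, 0.24], btype="bandpass", fs=fs)
        band = filtfilt(b, a, eda)         # sympathetic-band component
        envelope = np.abs(hilbert(band))   # time-varying spectral amplitude
        index = envelope.mean()            # scalar summary, analogous in spirit to TVSymp
        print(f"band-amplitude index = {index:.3f}")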

  11. Inaccurate Estimation of Disparities Due to Mischievous Responders: Several Suggestions to Assess Conclusions

    ERIC Educational Resources Information Center

    Robinson-Cimpian, Joseph P.

    2014-01-01

    This article introduces novel sensitivity-analysis procedures for investigating and reducing the bias that mischievous responders (i.e., youths who provide extreme, and potentially untruthful, responses to multiple questions) often introduce in adolescent disparity estimates based on data from self-administered questionnaires (SAQs). Mischievous…

  12. Global sensitivity analysis of a dynamic agroecosystem model under different irrigation treatments

    USDA-ARS?s Scientific Manuscript database

    Savings in consumptive use through limited or deficit irrigation in agriculture has become an increasingly viable source of additional water for places with high population growth such as the Colorado Front Range, USA. Crop models provide a mechanism to evaluate various management methods without pe...

  13. The mechanistic model, GoMDOM: Development , calibration and sensitivity analysis

    EPA Science Inventory

    This presentation will be in a series of Gulf Hypoxia modeling presentations which will be used to: 1) aid NOAA in informing scientific directions and funding decisions for their cooperators and 2) a Technical Review of all models will be provided to the Mississippi River Nutrie...

  14. Socioeconomic Stratification and Its Influences on Talent Development: Some Interdisciplinary Perspectives.

    ERIC Educational Resources Information Center

    Ambrose, Don

    2002-01-01

    In this analysis, socioeconomic barriers to talent development are explored from the vantage points of major thinkers and recent research findings in context-sensitive disciplines such as economics, sociology, and ethical philosophy. Insights drawn from these perspectives provide the basis for recommendations for educators of the gifted. (Contains…

  15. Using Simulation Module, PCLAB, for Steady State Disturbance Sensitivity Analysis in Process Control

    ERIC Educational Resources Information Center

    Ali, Emad; Idriss, Arimiyawo

    2009-01-01

    Recently, chemical engineering education has moved towards utilizing simulation software to enhance the learning process, especially in the field of process control. These training simulators provide interactive learning through visualization and practicing, which will bridge the gap between the theoretical abstraction of textbooks and the…

  16. Highly sensitive solids mass spectrometer uses inert-gas ion source

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Mass spectrometer provides a recorded analysis of solid material surfaces and bulk. A beam of high-energy inert-gas ions bombards the surface atoms of a sample and converts a percentage into an ionized vapor. The mass spectrum analyzer separates the vapor ionic constituents by mass-to-charge ratio.

  17. Lunar Exploration Architecture Level Key Drivers and Sensitivities

    NASA Technical Reports Server (NTRS)

    Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher

    2009-01-01

    Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option is properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible but still capture those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks between these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has been previously applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analysis performed on lunar exploration architecture scenarios are also presented.

  18. Monte Carlo sensitivity analysis of unknown parameters in hazardous materials transportation risk assessment.

    PubMed

    Pet-Armacost, J J; Sepulveda, J; Sakude, M

    1999-12-01

    The US Department of Transportation was interested in the risks associated with transporting Hydrazine in tanks with and without relief devices. Hydrazine is both highly toxic and flammable, as well as corrosive. Consequently, there was a conflict as to whether a relief device should be used or not. Data were not available on the impact of relief devices on release probabilities or the impact of Hydrazine on the likelihood of fires and explosions. In this paper, a Monte Carlo sensitivity analysis of the unknown parameters was used to assess the risks associated with highway transport of Hydrazine. To help determine whether or not relief devices should be used, fault trees and event trees were used to model the sequences of events that could lead to adverse consequences during transport of Hydrazine. The event probabilities in the event trees were derived as functions of the parameters whose effects were not known. The impacts of these parameters on the risk of toxic exposures, fires, and explosions were analyzed through a Monte Carlo sensitivity analysis and analyzed statistically through an analysis of variance. The analysis allowed the determination of which of the unknown parameters had a significant impact on the risks. It also provided the necessary support to a critical transportation decision even though the values of several key parameters were not known.
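
    The analysis pattern used here, sampling the unknown event probabilities, propagating them through the event tree, and asking which inputs drive the output, can be sketched with rank correlations; the probability ranges and the toy event tree below are invented, not the study's values:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(4)
        n = 10_000
        # Unknown parameters sampled over assumed plausible ranges
        p_release = rng.uniform(1e-4, 1e-2, n)     # release given accident
        p_ignition = rng.uniform(0.01, 0.30, n)    # ignition given release
        p_explosion = rng.uniform(0.01, 0.20, n)   # explosion given ignition

        risk = p_release * p_ignition * p_explosion  # simplistic event-tree endpoint probability

        for name, x in [("p_release", p_release), ("p_ignition", p_ignition),
                        ("p_explosion", p_explosion)]:
            rho, _ = spearmanr(x, risk)
            print(f"{name}: rank correlation with risk = {rho:.2f}")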

  19. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall, the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. The existence of a significant proportion of less sensitive parameters may indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows improved performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
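
    The Morris screening employed above can be written in a few lines of plain Python: build one-at-a-time trajectories, collect elementary effects, and summarize each factor by its mean absolute effect (mu*) and standard deviation (sigma). The toy model below merely stands in for a ULSM run:

        import numpy as np

        def model(x):
            # Stand-in for a model run: x = (roof_albedo, road_albedo, town_fraction), all in [0, 1]
            return 400 * (1 - 0.6 * x[0] - 0.3 * x[1]) * (0.5 + 0.5 * x[2])

        rng = np.random.default_rng(5)
        k, r, delta = 3, 50, 0.25                  # factors, trajectories, step size
        effects = [[] for _ in range(k)]

        for _ in range(r):
            x = rng.uniform(0.0, 1.0 - delta, k)   # trajectory base point
            f0 = model(x)
            for i in rng.permutation(k):           # perturb factors one at a time
                x[i] += delta
                f1 = model(x)
                effects[i].append((f1 - f0) / delta)   # elementary effect of factor i
                f0 = f1

        for i, ee in enumerate(effects):
            ee = np.asarray(ee)
            print(f"factor {i}: mu* = {np.abs(ee).mean():.1f}, sigma = {ee.std():.1f}")

    Large mu* flags an influential factor; large sigma relative to mu* flags nonlinearity or interactions, which is the screening information the thesis uses before the more expensive Sobol analysis.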

  20. Spin-exchange relaxation-free magnetometer with nearly parallel pump and probe beams

    DOE PAGES

    Karaulanov, Todor; Savukov, Igor; Kim, Young Jin

    2016-03-22

    We constructed a spin-exchange relaxation-free (SERF) magnetometer with a small angle between the pump and probe beams facilitating a multi-channel design with a flat pancake cell. This configuration provides almost complete overlap of the beams in the cell, and prevents the pump beam from entering the probe detection channel. By coupling the lasers in multi-mode fibers, without an optical isolator or field modulation, we demonstrate a sensitivity of 10 fT/√Hz for frequencies between 10 Hz and 100 Hz. In addition to the experimental study of sensitivity, we present a theoretical analysis of SERF magnetometer response to magnetic fields for small-angle and parallel-beam configurations, and show that at optimal DC offset fields the magnetometer response is comparable to that in the orthogonal-beam configuration. Based on the analysis, we also derive fundamental and probe-limited sensitivities for the arbitrary non-orthogonal geometry. The expected practical and fundamental sensitivities are of the same order as those in the orthogonal geometry. As a result, we anticipate that our design will be useful for magnetoencephalography (MEG) and magnetocardiography (MCG) applications.

  1. [Cost-effectiveness analysis of an alternative for the provision of primary health care for beneficiaries of Seguro Popular in Mexico].

    PubMed

    Figueroa-Lara, Alejandro; González-Block, Miguel A

    2016-01-01

    To estimate the cost-effectiveness ratio of public and private health care providers funded by Seguro Popular. A pilot scheme contracting primary health care in the state of Hidalgo, Mexico, was evaluated through a population survey to assess quality of care and detection of decreased vision. Costs were assessed from the payer perspective using institutional sources. The alternatives analyzed were a private provider with capitated and performance-based payment modalities, and a public provider funded through budget subsidies. Sensitivity analysis was performed using Monte Carlo simulations. The private provider is dominant in quality and in cost-effective detection of decreased vision. Strategic purchasing from private providers of primary care has shown promising results as an alternative for improving the quality of health services and reducing costs.

  2. Cost Analysis of Fluconazole Prophylaxis for Prevention of Neonatal Invasive Candidiasis.

    PubMed

    Swanson, Jonathan R; Vergales, Jeff; Kaufman, David A; Sinkin, Robert A

    2016-05-01

    Fluconazole prophylaxis (FP) in premature infants is well studied and has been shown to decrease invasive candidiasis (IC). IC in neonates has significant financial costs; determining the cost-benefit of FP may provide additional justification for targeting high-risk neonates. We aimed to determine the IC rate in premature infants at which FP is cost-beneficial. A decision-tree cost-analysis model comparing the cost of FP with the costs associated with IC was used. We searched PubMed for all papers that used intravenous FP and reported rates of IC in very low birth weight neonates. Average IC rates in those who received FP (2.0%; range, 0-6.1%) and in those who did not receive FP (9.2%; range, 0-20.5%) were used. Incremental hospital costs because of IC and for FP were retrieved from the literature. Sensitivity analysis was performed to determine the incremental cost of FP across the range of published IC rates. The average cost per patient attributed to IC in patients receiving FP was $785 versus $2617 in those not receiving FP. Sensitivity analysis demonstrates that the rate of IC would need to be <2.8% for FP to lose its cost-benefit. In a Monte Carlo simulation, targeting infants <1000 g would lead to $50,304,333 in cost savings per year in the United States. FP provides a cost advantage across most IC rates seen in the youngest premature infants. Using a rate of 2.8% for their individual high-risk neonatal intensive care unit patients, providers can determine whether FP is cost-beneficial when deciding for whom to provide IC prophylaxis.

  3. Regional prioritisation of flood risk in mountainous areas

    NASA Astrophysics Data System (ADS)

    Rogelis, M. C.; Werner, M.; Obregón, N.; Wright, G.

    2015-07-01

    A regional analysis of flood risk was carried out in the mountainous area surrounding the city of Bogotá (Colombia). Vulnerability at the regional level was assessed on the basis of a principal component analysis carried out with variables recognised in the literature to contribute to vulnerability, using watersheds as the unit of analysis. The area exposed was obtained from a simplified flood analysis at regional level to provide a mask from which vulnerability variables were extracted. The vulnerability indicator obtained from the principal component analysis was combined with an existing susceptibility indicator, thus providing an index that allows the watersheds to be prioritised in support of flood risk management at the regional level. Results show that the components of vulnerability can be expressed in terms of constituent indicators: socio-economic fragility, which is composed of demography and lack of well-being; lack of resilience, which is composed of education, preparedness and response capacity, rescue capacity, and social cohesion and participation; and physical exposure, which is composed of exposed infrastructure and exposed population. A sensitivity analysis shows that the classification of vulnerability is robust for watersheds with low and high values of the vulnerability indicator, while some watersheds with intermediate values of the indicator are sensitive to shifting between medium and high vulnerability. The complex interaction between vulnerability and hazard is evidenced in the case study. Environmental degradation in vulnerable watersheds shows the influence that vulnerability exerts on hazard and vice versa, thus establishing a cycle that builds up risk conditions.
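
    The indicator construction step, standardizing the watershed variables, running a principal component analysis, and reading the leading component as a composite vulnerability score, can be sketched as follows (random data stands in for the real watershed variables):

        import numpy as np
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(6)
        # Rows = watersheds; columns = vulnerability variables (demography, well-being,
        # education, response capacity, exposed population, ...), here random stand-ins
        X = rng.normal(size=(40, 8))

        Z = StandardScaler().fit_transform(X)
        pca = PCA(n_components=2).fit(Z)
        vulnerability = pca.transform(Z)[:, 0]      # leading component as composite score
        ranking = np.argsort(vulnerability)[::-1]   # watersheds prioritized by score
        print("explained variance ratios:", np.round(pca.explained_variance_ratio_, 2))
        print("top 5 watersheds by vulnerability:", ranking[:5])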

  4. Which physical examination tests provide clinicians with the most value when examining the shoulder? Update of a systematic review with meta-analysis of individual tests.

    PubMed

    Hegedus, Eric J; Goode, Adam P; Cook, Chad E; Michener, Lori; Myer, Cortney A; Myer, Daniel M; Wright, Alexis A

    2012-11-01

    To update our previously published systematic review and meta-analysis by subjecting the literature on shoulder physical examination (ShPE) to careful analysis in order to determine each tests clinical utility. This review is an update of previous work, therefore the terms in the Medline and CINAHL search strategies remained the same with the exception that the search was confined to the dates November, 2006 through to February, 2012. The previous study dates were 1966 - October, 2006. Further, the original search was expanded, without date restrictions, to include two new databases: EMBASE and the Cochrane Library. The Quality Assessment of Diagnostic Accuracy Studies, version 2 (QUADAS 2) tool was used to critique the quality of each new paper. Where appropriate, data from the prior review and this review were combined to perform meta-analysis using the updated hierarchical summary receiver operating characteristic and bivariate models. Since the publication of the 2008 review, 32 additional studies were identified and critiqued. For subacromial impingement, the meta-analysis revealed that the pooled sensitivity and specificity for the Neer test was 72% and 60%, respectively, for the Hawkins-Kennedy test was 79% and 59%, respectively, and for the painful arc was 53% and 76%, respectively. Also from the meta-analysis, regarding superior labral anterior to posterior (SLAP) tears, the test with the best sensitivity (52%) was the relocation test; the test with the best specificity (95%) was Yergason's test; and the test with the best positive likelihood ratio (2.81) was the compression-rotation test. Regarding new (to this series of reviews) ShPE tests, where meta-analysis was not possible because of lack of sufficient studies or heterogeneity between studies, there are some individual tests that warrant further investigation. A highly specific test (specificity >80%, LR+ ≥ 5.0) from a low bias study is the passive distraction test for a SLAP lesion. This test may rule in a SLAP lesion when positive. A sensitive test (sensitivity >80%, LR- ≤ 0.20) of note is the shoulder shrug sign, for stiffness-related disorders (osteoarthritis and adhesive capsulitis) as well as rotator cuff tendinopathy. There are six additional tests with higher sensitivities, specificities, or both but caution is urged since all of these tests have been studied only once and more than one ShPE test (ie, active compression, biceps load II) has been introduced with great diagnostic statistics only to have further research fail to replicate the results of the original authors. The belly-off and modified belly press tests for subscapularis tendinopathy, bony apprehension test for bony instability, olecranon-manubrium percussion test for bony abnormality, passive compression for a SLAP lesion, and the lateral Jobe test for rotator cuff tear give reason for optimism since they demonstrated both high sensitivities and specificities reported in low bias studies. Finally, one additional test was studied in two separate papers. The dynamic labral shear may be sensitive for SLAP lesions but, when modified, be diagnostic of labral tears generally. Based on data from the original 2008 review and this update, the use of any single ShPE test to make a pathognomonic diagnosis cannot be unequivocally recommended. There exist some promising tests but their properties must be confirmed in more than one study. Combinations of ShPE tests provide better accuracy, but marginally so. 
These findings seem to provide support for stressing a comprehensive clinical examination including history and physical examination. However, there is a great need for large, prospective, well-designed studies that examine the diagnostic accuracy of the many aspects of the clinical examination and what combinations of these aspects are useful in differentially diagnosing pathologies of the shoulder.
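
    As a worked illustration of the accuracy statistics quoted in this record, the short sketch below (Python, illustrative only) converts a test's pooled sensitivity and specificity into the positive and negative likelihood ratios the review relies on; the input values are the pooled Neer-test figures reported above.

```python
def likelihood_ratios(sensitivity, specificity):
    """Convert a diagnostic test's sensitivity/specificity into LR+ and LR-."""
    lr_pos = sensitivity / (1.0 - specificity)  # odds multiplier for a positive result
    lr_neg = (1.0 - sensitivity) / specificity  # odds multiplier for a negative result
    return lr_pos, lr_neg

# Pooled Neer test values from the review: sensitivity 72%, specificity 60%.
lr_pos, lr_neg = likelihood_ratios(0.72, 0.60)
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # 1.80 and 0.47: a weak diagnostic shift
```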

  5. Investigating the Water Vapor Component of the Greenhouse Effect from the Atmospheric InfraRed Sounder (AIRS)

    NASA Astrophysics Data System (ADS)

    Gambacorta, A.; Barnet, C.; Sun, F.; Goldberg, M.

    2009-12-01

    We investigate the water vapor component of the greenhouse effect in the tropical region using data from the Atmospheric InfraRed Sounder (AIRS). Unlike previous studies, which relied on the assumption of a constant lapse rate and performed coarse-layer or total-column sensitivity analyses, we exploit the high vertical resolution of AIRS to measure the sensitivity of the greenhouse effect to water vapor along the vertical column. We employ a "partial radiative perturbation" methodology and discriminate between two different dynamic regimes, convective and non-convective. This analysis provides useful insights into the occurrence and strength of the water vapor greenhouse effect and its sensitivity to spatial variations of surface temperature. By comparison with the clear-sky computations conducted in previous works, we attempt to constrain an estimate of the cloud contribution to the greenhouse effect. Our results compare well with the current literature, falling in the upper range of the existing global circulation model estimates. We value the results of this analysis as a useful reference to help discriminate among model simulations and improve our capability to make predictions about the future of our climate.
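
    The "partial radiative perturbation" bookkeeping can be sketched in a few lines. In the hypothetical example below, olr() is a toy stand-in for a real radiative-transfer calculation (not the AIRS retrieval); the sketch only shows how one perturbs humidity layer by layer and records the change in the greenhouse effect G = sigma*Ts^4 - OLR.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def olr(q, t_s):
    """Toy stand-in for a radiative-transfer OLR model (hypothetical):
    more column water vapor -> stronger absorption -> lower OLR."""
    return SIGMA * t_s**4 * np.exp(-0.1 * q.sum())

def greenhouse_sensitivity(q, t_s, dq=1e-3):
    """Perturb humidity one layer at a time and difference the greenhouse
    effect G = sigma*Ts^4 - OLR (a finite-difference radiative perturbation)."""
    g0 = SIGMA * t_s**4 - olr(q, t_s)
    sens = np.empty_like(q)
    for k in range(q.size):
        qp = q.copy()
        qp[k] += dq
        sens[k] = ((SIGMA * t_s**4 - olr(qp, t_s)) - g0) / dq
    return sens

profile = np.array([0.8, 0.5, 0.2, 0.05])  # illustrative layer humidities
print(greenhouse_sensitivity(profile, t_s=300.0))
```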

  6. Resilience through adaptation

    PubMed Central

    van Voorn, George A. K.; Ligtenberg, Arend; Molenaar, Jaap

    2017-01-01

    Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover’s distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system. PMID:28196372
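
    The distance measure at the heart of this method is available off the shelf: SciPy's wasserstein_distance computes the one-dimensional earth-mover's distance directly from samples. The sketch below, with synthetic stand-ins for ABM output at a single time point, shows the comparison the authors repeat per time step (illustrative data, not the paper's model).

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Stand-ins for an ABM output (e.g., remaining common-pool resource) sampled
# across replicate runs at one time point, with and without agent adaptation.
no_adaptation = rng.normal(loc=40.0, scale=12.0, size=500)
with_adaptation = rng.normal(loc=55.0, scale=6.0, size=500)

# Earth-mover's (Wasserstein) distance between the two output distributions;
# repeating this per time step yields a time-dependent sensitivity measure.
print(wasserstein_distance(no_adaptation, with_adaptation))
```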

  7. Resilience through adaptation.

    PubMed

    Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap

    2017-01-01

    Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.

  8. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Owing to the close structural connection between PDD and analysis of variance (ANOVA), PDD provides a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy for building a cheap meta-model by sparse-PDD, with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation of the model contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
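
    For readers unfamiliar with Sobol' indices, the sketch below estimates the same first-order indices that a PDD surrogate delivers analytically, but by brute-force Monte Carlo "pick-and-freeze" sampling on the Ishigami benchmark. This is a reference point, not the authors' method, and it is exactly the expensive approach a regression-based sparse-PDD meta-model is designed to avoid.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Classic sensitivity-analysis benchmark with known Sobol' indices."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(1)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))

fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]  # "pick-freeze": column i taken from B, the rest from A
    s_i = np.mean(fB * (ishigami(AB) - fA)) / var_y
    print(f"S_{i + 1} ~ {s_i:.3f}")  # known values: about 0.31, 0.44, 0.00
```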

  9. A Quantitative Study of Oxygen as a Metabolic Regulator

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; LaManna, Joseph C.; Cabrera, Marco E.

    1999-01-01

    An acute reduction in oxygen (O2) delivery to a tissue is generally associated with a decrease in phosphocreatine, increases in ADP, NADH/NAD, and inorganic phosphate, increased rates of glycolysis and lactate production, and reduced rates of pyruvate and fatty acid oxidation. However, given the complexity of the human bioenergetic system and its components, it is difficult to determine quantitatively how cellular metabolic processes interact to maintain ATP homeostasis during stress (e.g., hypoxia, ischemia, and exercise). Of special interest is the determination of mechanisms relating tissue oxygenation to observed metabolic responses at the tissue, organ, and whole body levels and the quantification of how changes in tissue O2 availability affect the pathways of ATP synthesis and the metabolites that control these pathways. In this study, we extend a previously developed mathematical model of human bioenergetics to provide a physicochemical framework that permits quantitative understanding of O2 as a metabolic regulator. Specifically, the enhancement permits studying the effects of variations in tissue oxygenation and in parameters controlling the rate of cellular respiration on glycolysis, lactate production, and pyruvate oxidation. The whole body is described as a bioenergetic system consisting of metabolically distinct tissue/organ subsystems that exchange materials with the blood. To study the dynamic response of each subsystem to stimuli, we solve the ordinary differential equations describing the temporal evolution of metabolite levels, given the initial concentrations. The solver used in the present study is the packaged code LSODE, as implemented in the NASA Lewis kinetics and sensitivity analysis code, LSENS. A major advantage of LSENS is its efficient procedures supporting systematic sensitivity analysis, which provide the basic methods for studying parameter sensitivities (i.e., changes in model behavior due to parameter variation). Sensitivity analysis establishes relationships between model predictions and problem parameters (i.e., initial concentrations, rate coefficients, etc.). It helps determine the effects of uncertainties or changes in these input parameters on the predictions, which ultimately are compared with experimental observations in order to validate the model. Sensitivity analysis can identify parameters that must be determined accurately because of their large effect on the model predictions and parameters that need not be known with great precision because they have little or no effect on the solution. This capability may prove to be important in optimizing the design of experiments, thereby reducing the use of animals. This approach can be applied to study the metabolic effects of reduced oxygen delivery to cardiac muscle due to local myocardial ischemia and the effects of acute hypoxia on brain metabolism. Other important applications of sensitivity analysis include identification of quantitatively relevant pathways and biochemical species within an overall mechanism, when examining the effects of a genetic anomaly or pathological state on energetic system components and whole system behavior.
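
    As a minimal illustration of what such a parameter sensitivity coefficient is (not the LSENS/LSODE implementation itself), the sketch below integrates a toy A -> B kinetics system with SciPy and differentiates the final product concentration with respect to the rate coefficient by central differences.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, k):
    """Toy two-species kinetics A -> B with rate coefficient k."""
    a, b = y
    return [-k * a, k * a]

def final_b(k, y0=(1.0, 0.0), t_end=2.0):
    sol = solve_ivp(rhs, (0.0, t_end), y0, args=(k,), rtol=1e-8, atol=1e-10)
    return sol.y[1, -1]

# Finite-difference sensitivity of [B](t_end) to the rate coefficient k,
# the kind of quantity a sensitivity analysis code reports systematically.
k0, dk = 1.5, 1e-4
sens = (final_b(k0 + dk) - final_b(k0 - dk)) / (2.0 * dk)
print(f"d[B]/dk at k = {k0}: {sens:.4f}")  # analytic value: t*exp(-k*t) ~ 0.0996
```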

  10. Assessment of Spatial Transferability of Process-Based Hydrological Model Parameters in Two Neighboring Catchments in the Himalayan Region

    NASA Astrophysics Data System (ADS)

    Nepal, S.

    2016-12-01

    The spatial transferability of the model parameters of the process-oriented distributed J2000 hydrological model was investigated in two glaciated sub-catchments of the Koshi river basin in eastern Nepal. The basins had a high degree of similarity with respect to their static landscape features. The model was first calibrated (1986-1991) and validated (1992-1997) in the Dudh Koshi sub-catchment. The calibrated and validated model parameters were then transferred to the nearby Tamor catchment (2001-2009). A sensitivity and uncertainty analysis was carried out for both sub-catchments to discover the sensitivity range of the parameters in the two catchments. The model represented the overall hydrograph well in both sub-catchments, including baseflow and medium-range flows (rising and recession limbs). The efficiency results according to both the Nash-Sutcliffe coefficient and the coefficient of determination were above 0.84 in both cases. The sensitivity analysis showed that the same parameter was most sensitive for the Nash-Sutcliffe (ENS) and Log Nash-Sutcliffe (LNS) efficiencies in both catchments. However, there were some differences in sensitivity to ENS and LNS for moderately and weakly sensitive parameters, although the majority (13 out of 16 for ENS and 16 out of 16 for LNS) had a sensitivity response in a similar range. The generalized likelihood uncertainty estimation (GLUE) results suggest that most of the time the observed runoff is within the parameter uncertainty range, although occasionally the values lie outside the uncertainty range, especially during flood peaks and more so in the Tamor. This may be due to the limited input data resulting from the small number of precipitation stations and the lack of representative stations in high-altitude areas, as well as to model structural uncertainty. The results indicate that transfer of the J2000 parameters to a neighboring catchment in the Himalayan region with similar physiographic landscape characteristics is viable. This suggests the possibility of applying the process-based J2000 model to ungauged catchments in the Himalayan region, which could provide important insights into the hydrological system dynamics and much needed information to support water resources planning and management.
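
    The two efficiency measures used in this record are one-liners; a minimal sketch (with illustrative runoff numbers, not the study's data) is given below. The log transform is what shifts the emphasis toward baseflow.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

def log_nash_sutcliffe(obs, sim, eps=1e-6):
    """Log-transformed NSE; emphasizes low-flow (baseflow) performance."""
    return nash_sutcliffe(np.log(np.asarray(obs) + eps), np.log(np.asarray(sim) + eps))

obs = [12.0, 30.0, 85.0, 40.0, 18.0, 9.0]  # illustrative daily runoff, m^3/s
sim = [10.0, 28.0, 90.0, 43.0, 20.0, 8.5]
print(nash_sutcliffe(obs, sim), log_nash_sutcliffe(obs, sim))
```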

  11. Sampling and analysis of airborne resin acids and solvent-soluble material derived from heated colophony (rosin) flux: a method to quantify exposure to sensitizing compounds liberated during electronics soldering.

    PubMed

    Smith, P A; Son, P S; Callaghan, P M; Jederberg, W W; Kuhlmann, K; Still, K R

    1996-07-17

    Components of colophony (rosin) resin acids are sensitizers through dermal and pulmonary exposure to heated and unheated material. Significant work in the literature identifies specific resin acids and their oxidation products as sensitizers. Pulmonary exposure to colophony sensitizers has been estimated indirectly through formaldehyde exposure. To assess pulmonary sensitization from airborne resin acids, direct measurement is desired, as the degree to which aldehyde exposure correlates with that of resin acids during colophony heating is undefined. Any analytical method proposed should be applicable to a range of compounds and should also identify specific compounds present in a breathing zone sample. This work adapts OSHA Sampling and Analytical Method 58, which is designed to provide airborne concentration data for coal tar pitch volatile solids by air filtration through a glass fiber filter, solvent extraction of the filter, and gravimetric analysis of the non-volatile extract residue. In addition to data regarding total soluble material captured, a portion of the extract may be subjected to compound-specific analysis. Levels of soluble solids found by personal breathing zone sampling during electronics soldering at a Naval Aviation Depot ranged from below the "reliable quantitation limit" reported in the method to 7.98 mg/m3. Colophony-spiked filters analyzed in accordance with the (modified) method produced a limit of detection for total solvent-soluble colophony solids of 10 micrograms/filter. High performance liquid chromatography was used to identify abietic acid present in a breathing zone sample.

  12. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Abstract Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972

  13. Diagnostic Accuracy of Memory Measures in Alzheimer's Dementia and Mild Cognitive Impairment: a Systematic Review and Meta-Analysis.

    PubMed

    Weissberger, Gali H; Strong, Jessica V; Stefanidis, Kayla B; Summers, Mathew J; Bondi, Mark W; Stricker, Nikki H

    2017-12-01

    With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer's dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer's disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers.

  14. Capacitive chemical sensor

    DOEpatents

    Manginell, Ronald P; Moorman, Matthew W; Wheeler, David R

    2014-05-27

    A microfabricated capacitive chemical sensor can be used as an autonomous chemical sensor or as an analyte-sensitive chemical preconcentrator in a larger microanalytical system. The capacitive chemical sensor detects changes in sensing film dielectric properties, such as the dielectric constant, conductivity, or dimensionality. These changes result from the interaction of a target analyte with the sensing film. This capability provides a low-power, self-heating chemical sensor suitable for remote and unattended sensing applications. The capacitive chemical sensor also enables a smart, analyte-sensitive chemical preconcentrator. After sorption of the sample by the sensing film, the film can be rapidly heated to release the sample for further analysis. Therefore, the capacitive chemical sensor can optimize the sample collection time prior to release to enable the rapid and accurate analysis of analytes by a microanalytical system.
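
    The sensing principle reduces to the parallel-plate relation C = eps0*eps_r*A/d: any analyte-induced change in the film's dielectric constant shows up proportionally in the measured capacitance. The numbers below are illustrative, not taken from the patent.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def film_capacitance(eps_r, area_m2=1e-6, thickness_m=1e-6):
    """Parallel-plate estimate of a sensing-film capacitor: C = eps0*eps_r*A/d."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Analyte sorption that nudges the film's dielectric constant from 3.0 to 3.2
# changes the measured capacitance proportionally (illustrative values).
c_base, c_loaded = film_capacitance(3.0), film_capacitance(3.2)
print(f"{c_base * 1e12:.2f} pF -> {c_loaded * 1e12:.2f} pF "
      f"({100 * (c_loaded / c_base - 1):.1f}% shift)")
```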

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hilfiker, James N.; Stadermann, Michael; Sun, Jianing

    It is a well-known challenge to determine refractive index (n) from ultra-thin films where the thickness is less than about 10 nm. In this paper, we discovered an interesting exception to this issue while characterizing spectroscopic ellipsometry (SE) data from isotropic, free-standing polymer films. Ellipsometry analysis shows that both thickness and refractive index can be independently determined for free-standing films as thin as 5 nm. Simulations further confirm an orthogonal separation between thickness and index effects on the experimental SE data. Effects of angle of incidence and wavelength on the data and sensitivity are discussed. Finally, while others have demonstrated methods to determine refractive index from ultra-thin films, our analysis provides the first results to demonstrate high sensitivity to the refractive index from ultra-thin layers.

  16. Prospective trial evaluating the sensitivity and specificity of 3,4-dihydroxy-6-[18F]-fluoro-L-phenylalanine (18F-DOPA) PET and MRI in patients with recurrent gliomas.

    PubMed

    Youland, Ryan S; Pafundi, Deanna H; Brinkmann, Debra H; Lowe, Val J; Morris, Jonathan M; Kemp, Bradley J; Hunt, Christopher H; Giannini, Caterina; Parney, Ian F; Laack, Nadia N

    2018-05-01

    Treatment-related changes can be difficult to differentiate from progressive glioma using MRI with contrast enhancement (CE). The purpose of this study is to compare the sensitivity and specificity of 18F-DOPA-PET and MRI in patients with recurrent glioma. Thirteen patients with MRI findings suspicious for recurrent glioma were prospectively enrolled and underwent 18F-DOPA-PET and MRI for neurosurgical planning. Stereotactic biopsies were obtained from regions of concordant and discordant PET and MRI CE, all within regions of T2/FLAIR signal hyperintensity. The sensitivity and specificity of 18F-DOPA-PET and CE were calculated based on histopathologic analysis. Receiver operating characteristic curve analysis revealed optimal tumor-to-normal (T/N) and SUVmax thresholds. In the 37 specimens obtained, 51% exhibited MRI contrast enhancement (M+) and 78% demonstrated 18F-DOPA-PET avidity (P+). Imaging characteristics included M-P- in 16%, M-P+ in 32%, M+P+ in 46%, and M+P- in 5%. Histopathologic review of biopsies revealed grade II components in 16%, grade III in 43%, grade IV in 30%, and no tumor in 11%. MRI CE sensitivity for recurrent tumor was 52% and specificity was 50%. PET sensitivity for tumor was 82% and specificity was 50%. A T/N threshold > 2.0 altered sensitivity to 76% and specificity to 100%, and SUVmax > 1.36 improved sensitivity and specificity to 94% and 75%, respectively. 18F-DOPA-PET can provide increased sensitivity and specificity compared with MRI CE for visualizing the spatial distribution of recurrent gliomas. Future studies will incorporate 18F-DOPA-PET into re-irradiation target volume delineation for RT planning.
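
    The threshold selection described here is a standard receiver operating characteristic exercise. The sketch below uses fabricated per-biopsy values (not the study's data) to show how a T/N cutoff can be chosen from an ROC curve, here via Youden's J; the study derived its thresholds from its own histopathology-verified specimens.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical per-biopsy data: 1 = tumor on histopathology, plus a PET
# tumor-to-normal (T/N) uptake ratio for each specimen (illustrative only).
truth = np.array([0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 0, 1])
t_n_ratio = np.array([1.1, 1.4, 2.1, 2.6, 3.0, 1.9, 2.4, 1.3, 2.9, 2.2, 1.6, 2.7])

fpr, tpr, thresholds = roc_curve(truth, t_n_ratio)
youden = tpr - fpr  # Youden's J statistic picks the ROC-optimal cutoff
best = np.argmax(youden)
print(f"optimal T/N > {thresholds[best]:.2f}: "
      f"sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%}")
```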

  17. Advanced Imaging Adds Little Value in the Diagnosis of Femoroacetabular Impingement Syndrome.

    PubMed

    Cunningham, Daniel J; Paranjape, Chinmay S; Harris, Joshua D; Nho, Shane J; Olson, Steven A; Mather, Richard C

    2017-12-20

    Femoroacetabular impingement (FAI) syndrome is an increasingly recognized source of hip pain and disability in young active adults. In order to confirm the diagnosis, providers often supplement physical examination maneuvers and radiographs with intra-articular hip injection, magnetic resonance imaging (MRI), or magnetic resonance arthrography (MRA). Since diagnostic imaging represents the fastest rising cost segment in U.S. health care, there is a need for value-driven diagnostic algorithms. The purpose of this study was to identify cost-effective diagnostic strategies for symptomatic FAI, comparing history and physical examination (H&P) alone (utilizing only radiographic imaging) with supplementation with injection, MRI, or MRA. A simple-chain decision model run as a cost-utility analysis was constructed to assess the diagnostic value of the MRI, MRA, and injection that are added to the H&P and radiographs in diagnosing symptomatic FAI. Strategies were compared using the incremental cost-utility ratio (ICUR) with a willingness to pay (WTP) of $100,000/QALY (quality-adjusted life year). Direct costs were measured using the Humana database (PearlDiver). Diagnostic test accuracy, treatment outcome probabilities, and utilities were extracted from the literature. H&P with and without supplemental diagnostic injection was the most cost-effective. Adjunct injection was preferred in situations with a WTP of >$60,000/QALY, low examination sensitivity, and high FAI prevalence. With low disease prevalence and low examination sensitivity, as may occur in a general practitioner's office, H&P with injection was the most cost-effective strategy, whereas in the reciprocal scenario, H&P with injection was only favored at exceptionally high WTP (∼$990,000). H&P and radiographs with supplemental diagnostic injection are preferred over advanced imaging, even with reasonable deviations from published values of disease prevalence, test sensitivity, and test specificity. Providers with low examination sensitivity in situations with low disease prevalence may benefit most from including injection in their diagnostic strategy. Providers with high examination sensitivity in situations with high disease prevalence may not benefit from including injection in their diagnostic strategy. Providers should not routinely rely on advanced imaging to diagnose FAI syndrome, although advanced imaging may have a role in challenging clinical scenarios. Economic and Decision Analysis Level IV. See Instructions for Authors for a complete description of levels of evidence.
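
    The decision rule in such a cost-utility analysis is compact enough to state in code. The sketch below uses hypothetical cost and QALY values (not the study's PearlDiver figures) to show how an incremental cost-utility ratio is compared against a willingness-to-pay threshold.

```python
def icur(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-utility ratio in $ per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative numbers only: adding a diagnostic injection to H&P.
ratio = icur(cost_new=2400.0, qaly_new=0.842, cost_old=1900.0, qaly_old=0.835)
WTP = 100_000.0  # willingness to pay, $/QALY
print(f"ICUR = ${ratio:,.0f}/QALY -> {'adopt' if ratio <= WTP else 'reject'}")
```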

  18. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhao; Gao, Kun; Chen, Jian

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered as one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered, due to its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.

  19. Comparison between evaporative light scattering detection and charged aerosol detection for the analysis of saikosaponins.

    PubMed

    Eom, Han Young; Park, So-Young; Kim, Min Kyung; Suh, Joon Hyuk; Yeom, Hyesun; Min, Jung Won; Kim, Unyong; Lee, Jeongmi; Youm, Jeong-Rok; Han, Sang Beom

    2010-06-25

    Saikosaponins are triterpene saponins derived from the roots of Bupleurum falcatum L. (Umbelliferae), which has been traditionally used to treat fever, inflammation, liver diseases, and nephritis. It is difficult to analyze saikosaponins using HPLC-UV due to the lack of chromophores, so evaporative light scattering detection (ELSD) is used as a valuable alternative to UV detection. More recently, a charged aerosol detection (CAD) method has been developed to improve the sensitivity and reproducibility of ELSD. In this study, we compared CAD and ELSD methods in the simultaneous analysis of 10 saikosaponins, including saikosaponins-A, -B(1), -B(2), -B(3), -B(4), -C, -D, -G, -H and -I. A mixture of the 10 saikosaponins was injected into the Ascentis Express C18 column (100 mm x 4.6 mm, 2.7 microm) with gradient elution and detection with CAD and ELSD by splitting. We examined various factors that could affect the sensitivity of the detectors, including various concentrations of additives, the pH and flow rate of the mobile phase, the purity of the nitrogen gas, and the CAD range. The sensitivity was determined based on the signal-to-noise ratio. The best sensitivity for CAD was achieved with 0.1 mM ammonium acetate at pH 4.0 in the mobile phase with a flow rate of 1.0 mL/min and the CAD range at 100 pA, whereas that for ELSD was achieved with 0.01% acetic acid in the mobile phase with a flow rate of 0.8 mL/min. The purity of the nitrogen gas had only minor effects on the sensitivities of both detectors. Finally, the sensitivity of CAD was two to six times better than that of ELSD. Taken together, these results suggest that CAD provides a more sensitive analysis of the 10 saikosaponins than does ELSD. Copyright 2010 Elsevier B.V. All rights reserved.

  20. [The role of endotracheal aspirate culture in the diagnosis of ventilator-associated pneumonia: a meta analysis].

    PubMed

    Wang, Fei; He, Bei

    2013-01-01

    To investigate the role of endotracheal aspirate (EA) culture in the diagnosis and antibiotic management of ventilator-associated pneumonia (VAP). We searched the CNKI, Wanfang, PUBMED and EMBASE databases for publications from January 1990 to December 2011, to find relevant literature on VAP microbiological diagnostic techniques including EA and bronchoalveolar lavage (BALF). The following key words were used: ventilator associated pneumonia, diagnosis and adult. Meta-analysis was performed and the sensitivity and specificity of EA for VAP diagnosis were calculated. Our literature search identified 1665 potential articles, 8 of which fulfilled our selection criteria, including 561 patients with paired cultures. Using BALF quantitative culture as the reference standard, the sensitivity and specificity of EA were 72% and 71%. When considering quantitative culture of EA only, the sensitivity and specificity improved to 90% and 65%, while the positive and negative predictive values were 68% and 89%, respectively. However, the sensitivity and specificity of semi-quantitative culture of EA were only 50% and 80%, with a positive predictive value of 77% and a negative predictive value of 58%, respectively. EA culture had relatively poor sensitivity and specificity, although quantitative culture of EA could improve the sensitivity. Initiating therapy on the basis of EA quantitative culture may still result in excessive antibiotic usage. Our data suggested that EA could provide some information for clinical decisions but could not replace the role of BALF quantitative culture in VAP diagnosis.
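
    The predictive values quoted in this record follow from sensitivity, specificity, and prevalence by Bayes' rule, which is why EA looks better at ruling out than ruling in VAP. A minimal sketch, using the meta-analysis' quantitative-culture figures and an assumed (illustrative) 40% prevalence:

```python
def predictive_values(sens, spec, prevalence):
    """PPV and NPV from sensitivity, specificity, and pre-test prevalence."""
    tp = sens * prevalence
    fp = (1 - spec) * (1 - prevalence)
    tn = spec * (1 - prevalence)
    fn = (1 - sens) * prevalence
    return tp / (tp + fp), tn / (tn + fn)

# Quantitative EA culture per the meta-analysis: sensitivity 90%, specificity 65%.
ppv, npv = predictive_values(0.90, 0.65, 0.40)  # 40% prevalence is an assumption
print(f"PPV {ppv:.0%}, NPV {npv:.0%}")
```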

  1. Quantitative sensory testing response patterns to capsaicin- and ultraviolet-B–induced local skin hypersensitization in healthy subjects: a machine-learned analysis

    PubMed Central

    Lötsch, Jörn; Geisslinger, Gerd; Heinemann, Sarah; Lerch, Florian; Oertel, Bruno G.; Ultsch, Alfred

    2018-01-01

    Abstract The comprehensive assessment of pain-related human phenotypes requires combinations of nociceptive measures that produce complex high-dimensional data, posing challenges to bioinformatic analysis. In this study, we assessed established experimental models of heat hyperalgesia of the skin, consisting of local ultraviolet-B (UV-B) irradiation or capsaicin application, in 82 healthy subjects using a variety of noxious stimuli. We extended the original heat stimulation by applying cold and mechanical stimuli and assessing the hypersensitization effects with a clinically established quantitative sensory testing (QST) battery (German Research Network on Neuropathic Pain). This study provided a 246 × 10-sized data matrix (82 subjects assessed at baseline, following UV-B application, and following capsaicin application) with respect to 10 QST parameters, which we analyzed using machine-learning techniques. We observed statistically significant effects of the hypersensitization treatments in 9 different QST parameters. Supervised machine-learned analysis implemented as random forests followed by ABC analysis pointed to heat pain thresholds as the most relevantly affected QST parameter. However, decision tree analysis indicated that UV-B additionally modulated sensitivity to cold. Unsupervised machine-learning techniques, implemented as emergent self-organizing maps, hinted at subgroups responding to topical application of capsaicin. The distinction among subgroups was based on sensitivity to pressure pain, which could be attributed to sex differences, with women being more sensitive than men. Thus, while UV-B and capsaicin share a major component of heat pain sensitization, they differ in their effects on QST parameter patterns in healthy subjects, suggesting a lack of redundancy between these models. PMID:28700537
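
    The supervised step of such an analysis is straightforward to reproduce in outline. The sketch below builds a random forest on a synthetic stand-in for the 246 × 10 QST matrix and ranks the parameters by impurity-based importance; it shows the mechanics only and does not reproduce the study's data or its ABC analysis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for the QST matrix: rows are assessments, columns are
# QST parameters; labels encode treatment (0 baseline, 1 UV-B, 2 capsaicin).
X = rng.normal(size=(246, 10))
y = rng.integers(0, 3, size=246)
X[y == 1, 0] -= 1.5  # pretend parameter 0 (e.g., heat pain threshold) drops after UV-B
X[y == 2, 0] -= 1.2  # ...and after capsaicin

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = np.argsort(forest.feature_importances_)[::-1]
print("QST parameters ranked by importance:", ranking[:3])
```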

  2. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    PubMed

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes.

  3. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    PubMed Central

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096

  4. Sensitivity of land surface modeling to parameters: An uncertainty quantification method applied to the Community Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, D. M.; Mei, R.; Mao, J.; Hoffman, F. M.; Kumar, J.

    2015-12-01

    Uncertainties in land parameters could have important impacts on simulated water and energy fluxes and land surface states, which will consequently affect atmospheric and biogeochemical processes. Therefore, quantification of such parameter uncertainties using a land surface model is the first step towards better understanding of predictive uncertainty in Earth system models. In this study, we applied a random-sampling, high-dimensional model representation (RS-HDMR) method to analyze the sensitivity of simulated photosynthesis, surface energy fluxes and surface hydrological components to selected land parameters in version 4.5 of the Community Land Model (CLM4.5). Because of the large computational expense of conducting ensembles of global gridded model simulations, we used the results of a previous cluster analysis to select one thousand representative land grid cells for simulation. Plant functional type (PFT)-specific uniform prior ranges for land parameters were determined using expert opinion and a literature survey, and samples were generated with a quasi-Monte Carlo approach (Sobol sequence). Preliminary analysis of 1024 simulations suggested that four PFT-dependent parameters (including the slope of the conductance-photosynthesis relationship, specific leaf area at canopy top, leaf C:N ratio and fraction of leaf N in RuBisCO) are the dominant sensitive parameters for photosynthesis, surface energy and water fluxes across most PFTs, but with varying importance rankings. On the other hand, for surface and sub-surface runoff, PFT-independent parameters, such as the depth-dependent decay factors for runoff, play more important roles than the previous four PFT-dependent parameters. Further analysis conditioning the results on different seasons and years is being conducted to provide guidance on how climate variability and change might affect such sensitivity. This is the first step toward coupled simulations including biogeochemical processes, atmospheric processes or both to determine the full range of sensitivity of Earth system modeling to land-surface parameters. This can facilitate sampling strategies in measurement campaigns targeted at reduction of climate modeling uncertainties and can also provide guidance on land parameter calibration for simulation optimization.
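
    The sampling step described here is reproducible with SciPy's quasi-Monte Carlo module. The sketch below draws 1024 Sobol-sequence points, matching the ensemble size above, and scales them onto prior ranges; the four parameter names come from the abstract, but the numeric ranges are hypothetical placeholders.

```python
from scipy.stats import qmc

# Hypothetical prior ranges for four PFT-dependent parameters (order: slope of
# the conductance-photosynthesis relationship, specific leaf area at canopy
# top, leaf C:N ratio, fraction of leaf N in RuBisCO). Values are placeholders.
lower = [4.0, 0.01, 20.0, 0.05]
upper = [9.0, 0.06, 40.0, 0.25]

sampler = qmc.Sobol(d=4, scramble=True, seed=7)
unit = sampler.random_base2(m=10)        # 2**10 = 1024 quasi-random points
samples = qmc.scale(unit, lower, upper)  # map [0,1)^4 onto the prior ranges
print(samples.shape)                     # (1024, 4) parameter sets to simulate
```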

  5. Evaluating the Minimal Specimens From Endoscopic Ultrasound-Guided Fine-Needle Aspiration in Pancreatic Masses

    PubMed Central

    Park, Joo Kyung; Kang, Ki Joo; Oh, Cho Rong; Lee, Jong Kyun; Lee, Kyu Taek; Jang, Kee Taek; Park, Sang-Mo; Lee, Kwang Hyuck

    2016-01-01

    Abstract Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has become one of the most useful modalities for the diagnosis of pancreatic masses. The aim of this study was to investigate the role of analyzing the minimal specimens obtained by EUS-FNA in the diagnosis of solid masses of the pancreas. This study consisted of retrospective and prospective analyses. The retrospective study was performed on 116 patients who underwent EUS-FNA of solid masses for cytological smear, histological analysis, and combined analysis including immunohistochemical (IHC) staining. In the prospective study, 79 patients were enrolled to evaluate the quality and accuracy of EUS-FNA histological analysis and the feasibility of IHC staining. The final diagnoses of all patients included pancreatic cancer (n = 126), nonpancreatic cancer (n = 21), other neoplasms (n = 27), and benign lesions (n = 21). In our retrospective study, the combined analysis was more sensitive than cytological analysis alone (P < 0.01). The overall sensitivity of cytology, histology, and combined analysis was 69.8%, 67.2%, and 81.8%, respectively. In the prospective analysis, 64.2% of all punctures were helpful for determining the diagnosis and 40.7% provided sufficient tissue for IHC staining. Histological analysis was helpful for diagnosis in 74.7% of patients. IHC staining was necessary for a definite diagnosis in 11.4% of patients, especially in cases of nonmalignant pancreatic masses. Histological analysis and IHC study of EUS-FNA specimens were useful for the accurate diagnosis of pancreatic and peripancreatic lesions. Combined analysis showed significantly higher sensitivity than cytology alone because IHC staining was helpful for diagnosis in some patients. PMID:27227937

  6. Antiresonant reflecting guidance mechanism in hollow-core fiber for gas pressure sensing.

    PubMed

    Hou, Maoxiang; Zhu, Feng; Wang, Ying; Wang, Yiping; Liao, Changrui; Liu, Shen; Lu, Peixiang

    2016-11-28

    A gas pressure sensor based on an antiresonant reflecting guidance mechanism in a hollow-core fiber (HCF) with an open microchannel is experimentally demonstrated. The microchannel was created on the ring cladding of the HCF by femtosecond laser drilling to provide an air-core pressure equivalent to the external environment. The HCF cladding functions as an antiresonant reflecting waveguide, which induces sharp periodic lossy dips in the transmission spectrum. The proposed sensor exhibits a high pressure sensitivity of 3.592 nm/MPa and a low temperature cross-sensitivity of 7.5 kPa/°C. Theoretical analysis indicates that the observed high gas pressure sensitivity originates from the pressure-induced refractive index change of the air in the hollow core. The good operation durability and fabrication simplicity make the device an attractive candidate for reliable and highly sensitive gas pressure measurement in harsh environments.
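
    For orientation, the antiresonant reflecting optical waveguide (ARROW) picture from the general hollow-core fiber literature (quoted generally, not from this paper) places the lossy transmission dips at the cladding resonances:

```latex
% ARROW condition: transmission dips at the cladding-resonance wavelengths,
% where t is the ring-cladding wall thickness, n_1 the cladding index, and
% n_0 the core (air) index
\lambda_m = \frac{2t}{m}\sqrt{n_1^{2}-n_0^{2}}, \qquad m = 1, 2, \dots
```

    Because gas pressure slightly raises the air index n_0 in the core, the dip wavelengths shift, which is consistent with the pressure-induced refractive index mechanism the abstract identifies.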

  7. Structural Sensitivity of a Prokaryotic Pentameric Ligand-gated Ion Channel to Its Membrane Environment*

    PubMed Central

    Labriola, Jonathan M.; Pandhare, Akash; Jansen, Michaela; Blanton, Michael P.; Corringer, Pierre-Jean; Baenziger, John E.

    2013-01-01

    Although the activity of the nicotinic acetylcholine receptor (nAChR) is exquisitely sensitive to its membrane environment, the underlying mechanisms remain poorly defined. The homologous prokaryotic pentameric ligand-gated ion channel, the Gloeobacter ligand-gated ion channel (GLIC), represents an excellent model for probing the molecular basis of nAChR sensitivity because of its high structural homology, relative ease of expression, and amenability to crystallographic analysis. We show here that membrane-reconstituted GLIC exhibits structural and biophysical properties similar to those of the membrane-reconstituted nAChR, although GLIC is substantially more thermally stable. GLIC, however, does not possess the same exquisite lipid sensitivity. In particular, GLIC does not exhibit the same propensity to adopt an uncoupled conformation where agonist binding is uncoupled from channel gating. Structural comparisons provide insight into the chemical features that may predispose the nAChR to the formation of an uncoupled state. PMID:23463505

  8. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, including the largest gap method, and exhaustive brute force searching technique, are applied to find the optimal sensors providing reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of PEM fuel cell can be predicted with good quality.
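
    The exhaustive search mentioned here can be sketched compactly. In the hypothetical example below, a random matrix stands in for the model-derived sensitivity matrix, and subsets are scored by the smallest singular value of their sub-matrix, one plausible way to combine sensitivity with noise resistance; the paper's exact scoring criterion may differ.

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
S = rng.normal(size=(8, 4))  # stand-in sensitivity matrix: 8 sensors x 4 health parameters

def best_sensor_set(S, k):
    """Exhaustive (brute-force) search: pick the k sensors whose sensitivity
    sub-matrix is best conditioned (largest smallest singular value)."""
    best, best_score = None, -np.inf
    for subset in itertools.combinations(range(S.shape[0]), k):
        score = np.linalg.svd(S[list(subset)], compute_uv=False)[-1]
        if score > best_score:
            best, best_score = subset, score
    return best, best_score

print(best_sensor_set(S, k=4))
```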

  9. Heart rate variability and pain: associations of two interrelated homeostatic processes.

    PubMed

    Appelhans, Bradley M; Luecken, Linda J

    2008-02-01

    Between-person variability in pain sensitivity remains poorly understood. Given a conceptualization of pain as a homeostatic emotion, we hypothesized inverse associations between measures of resting heart rate variability (HRV), an index of autonomic regulation of heart rate that has been linked to emotionality, and sensitivity to subsequently administered thermal pain. Resting electrocardiography was collected, and frequency-domain measures of HRV were derived through spectral analysis. Fifty-nine right-handed participants provided ratings of pain intensity and unpleasantness following exposure to 4 degrees C thermal pain stimulation, and indicated their thresholds for barely noticeable and moderate pain during three exposures to decreasing temperature. Greater low-frequency HRV was associated with lower ratings of 4 degrees C pain unpleasantness and higher thresholds for barely noticeable and moderate pain. High-frequency HRV was unrelated to measures of pain sensitivity. Findings suggest pain sensitivity is influenced by characteristics of a central homeostatic system also involved in emotion.
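
    The frequency-domain HRV measures referred to here are band powers of the (evenly resampled) RR-interval series. The sketch below builds a synthetic tachogram with known LF and HF oscillations and integrates a Welch spectrum over the conventional bands (illustrative signal, not study data).

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 4.0  # Hz; the RR tachogram is assumed already resampled to an even grid
t = np.arange(0, 300, 1 / fs)
rng = np.random.default_rng(5)
# Synthetic tachogram: LF (0.10 Hz) and HF (0.25 Hz) oscillations plus noise.
rr = (0.8 + 0.03 * np.sin(2 * np.pi * 0.10 * t)
          + 0.02 * np.sin(2 * np.pi * 0.25 * t)
          + 0.005 * rng.standard_normal(t.size))

f, pxx = welch(rr, fs=fs, nperseg=1024)

def band_power(f, pxx, lo, hi):
    mask = (f >= lo) & (f < hi)
    return trapezoid(pxx[mask], f[mask])

lf = band_power(f, pxx, 0.04, 0.15)  # low-frequency HRV power
hf = band_power(f, pxx, 0.15, 0.40)  # high-frequency (vagal) HRV power
print(f"LF {lf:.2e}, HF {hf:.2e}, LF/HF {lf / hf:.2f}")
```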

  10. Exploring the Relationship between Reward and Punishment Sensitivity and Gambling Disorder in a Clinical Sample: A Path Modeling Analysis.

    PubMed

    Jiménez-Murcia, Susana; Fernández-Aranda, Fernando; Mestre-Bach, Gemma; Granero, Roser; Tárrega, Salomé; Torrubia, Rafael; Aymamí, Neus; Gómez-Peña, Mónica; Soriano-Mas, Carles; Steward, Trevor; Moragas, Laura; Baño, Marta; Del Pino-Gutiérrez, Amparo; Menchón, José M

    2017-06-01

    Most individuals will gamble during their lifetime, yet only a select few will develop gambling disorder. Gray's Reinforcement Sensitivity Theory holds promise for providing insight into gambling disorder etiology and symptomatology as it ascertains that neurobiological differences in reward and punishment sensitivity play a crucial role in determining an individual's affect and motives. The aim of the study was to assess a mediational pathway, which included patients' sex, personality traits, reward and punishment sensitivity, and gambling-severity variables. The Sensitivity to Punishment and Sensitivity to Reward Questionnaire, the South Oaks Gambling Screen, the Symptom Checklist-Revised, and the Temperament and Character Inventory-Revised were administered to a sample of gambling disorder outpatients (N = 831), diagnosed according to DSM-5 criteria, attending a specialized outpatient unit. Sociodemographic variables were also recorded. A structural equation model found that both reward and punishment sensitivity were positively and directly associated with increased gambling severity, sociodemographic variables, and certain personality traits while also revealing a complex mediational role for these dimensions. To this end, our findings suggest that the Sensitivity to Punishment and Sensitivity to Reward Questionnaire could be a useful tool for gaining a better understanding of different gambling disorder phenotypes and developing tailored interventions.

  11. ON THE USE OF SHOT NOISE FOR PHOTON COUNTING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zmuidzinas, Jonas, E-mail: jonas@caltech.edu

    Lieu et al. have recently claimed that it is possible to substantially improve the sensitivity of radio-astronomical observations. In essence, their proposal is to make use of the intensity of the photon shot noise as a measure of the photon arrival rate. Lieu et al. provide a detailed quantum-mechanical calculation of a proposed measurement scheme that uses two detectors and conclude that this scheme avoids the sensitivity degradation that is associated with photon bunching. If correct, this result could have a profound impact on radio astronomy. Here I present a detailed analysis of the sensitivity attainable using shot-noise measurement schemes that use either one or two detectors, and demonstrate that neither scheme can avoid the photon bunching penalty. I perform both semiclassical and fully quantum calculations of the sensitivity, obtaining consistent results, and provide a formal proof of the equivalence of these two approaches. These direct calculations are furthermore shown to be consistent with an indirect argument based on a correlation method that establishes an independent limit to the sensitivity of shot-noise measurement schemes. Furthermore, these calculations are directly applicable to the regime of interest identified by Lieu et al. Collectively, these results conclusively demonstrate that the photon-bunching sensitivity penalty applies to shot-noise measurement schemes just as it does to ordinary photon counting, in contradiction to the fundamental claim made by Lieu et al. The source of this contradiction is traced to a logical fallacy in their argument.

  12. New technologies for solar energy silicon - Cost analysis of BCL process

    NASA Technical Reports Server (NTRS)

    Yaws, C. L.; Li, K.-Y.; Fang, C. S.; Lutwack, R.; Hsu, G.; Leven, H.

    1980-01-01

    New technologies for producing polysilicon are being developed to provide lower cost material for solar cells which convert sunlight into electricity. This article presents results for the BCL Process, which produces the solar-cell silicon by reduction of silicon tetrachloride with zinc vapor. Cost, sensitivity, and profitability analysis results are presented based on a preliminary process design of a plant to produce 1000 metric tons/year of silicon by the BCL Process. Profitability analysis indicates a sales price of $12.1-19.4 per kg of silicon (1980 dollars) at a 0-25 per cent DCF rate of return on investment after taxes. These results indicate good potential for meeting the goal of providing lower cost material for silicon solar cells.
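
    The profitability logic can be sketched as a simple discounted-cash-flow screen: find the selling price at which discounted revenue just covers capital plus discounted operating cost at the target rate of return. All figures below are hypothetical placeholders, not the study's cost model.

```python
def dcf_price(capital, annual_opex, output_kg, rate, years=10):
    """Selling price ($/kg) at which the project breaks even at the target
    DCF rate of return (a simplified screen; figures are hypothetical)."""
    annuity = sum(1.0 / (1.0 + rate)**n for n in range(1, years + 1))
    return (capital + annual_opex * annuity) / (output_kg * annuity)

# Hypothetical 1000 metric ton/yr plant: $25M capital, $8M/yr operating cost.
for r in (0.0, 0.25):
    print(f"DCF return {r:.0%}: ${dcf_price(25e6, 8e6, 1e6, r):.2f}/kg")
```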

  13. Advancing Cell Biology Through Proteomics in Space and Time (PROSPECTS)*

    PubMed Central

    Lamond, Angus I.; Uhlen, Mathias; Horning, Stevan; Makarov, Alexander; Robinson, Carol V.; Serrano, Luis; Hartl, F. Ulrich; Baumeister, Wolfgang; Werenskiold, Anne Katrin; Andersen, Jens S.; Vorm, Ole; Linial, Michal; Aebersold, Ruedi; Mann, Matthias

    2012-01-01

    The term “proteomics” encompasses the large-scale detection and analysis of proteins and their post-translational modifications. Driven by major improvements in mass spectrometric instrumentation, methodology, and data analysis, the proteomics field has burgeoned in recent years. It now provides a range of sensitive and quantitative approaches for measuring protein structures and dynamics that promise to revolutionize our understanding of cell biology and molecular mechanisms in both human cells and model organisms. The Proteomics Specification in Time and Space (PROSPECTS) Network is a unique EU-funded project that brings together leading European research groups, spanning from instrumentation to biomedicine, in a collaborative five-year initiative to develop new methods and applications for the functional analysis of cellular proteins. This special issue of Molecular and Cellular Proteomics presents 16 research papers reporting major recent progress by the PROSPECTS groups, including improvements to the resolution and sensitivity of the Orbitrap family of mass spectrometers, systematic detection of proteins using highly characterized antibody collections, and new methods for absolute as well as relative quantification of protein levels. Manuscripts in this issue exemplify approaches for performing quantitative measurements of cell proteomes and for studying their dynamic responses to perturbation, both during normal cellular responses and in disease mechanisms. Here we present a perspective on how the proteomics field is moving beyond simply identifying proteins with high sensitivity toward providing a powerful and versatile set of assay systems for characterizing proteome dynamics and thereby creating a new “third generation” proteomics strategy that offers an indispensable tool for cell biology and molecular medicine. PMID:22311636

  14. Laser-induced breakdown spectroscopy (LIBS), part II: review of instrumental and methodological approaches to material analysis and applications to different fields.

    PubMed

    Hahn, David W; Omenetto, Nicoló

    2012-04-01

    The first part of this two-part review focused on the fundamental and diagnostics aspects of laser-induced plasmas, only touching briefly upon concepts such as sensitivity and detection limits and largely omitting any discussion of the vast panorama of the practical applications of the technique. Clearly a true LIBS community has emerged, which promises to quicken the pace of LIBS developments, applications, and implementations. With this second part, a more applied flavor is taken, and its intended goal is summarizing the current state-of-the-art of analytical LIBS, providing a contemporary snapshot of LIBS applications, and highlighting new directions in laser-induced breakdown spectroscopy, such as novel approaches, instrumental developments, and advanced use of chemometric tools. More specifically, we discuss instrumental and analytical approaches (e.g., double- and multi-pulse LIBS to improve the sensitivity), calibration-free approaches, hyphenated approaches in which techniques such as Raman and fluorescence are coupled with LIBS to increase sensitivity and information power, resonantly enhanced LIBS approaches, signal processing and optimization (e.g., signal-to-noise analysis), and finally applications. An attempt is made to provide an updated view of the role played by LIBS in the various fields, with emphasis on applications considered to be unique. We finally try to assess where LIBS is going as an analytical field, where in our opinion it should go, and what should still be done for consolidating the technique as a mature method of chemical analysis. © 2012 Society for Applied Spectroscopy

  15. Cost-utility analysis of an advanced pressure ulcer management protocol followed by trained wound, ostomy, and continence nurses.

    PubMed

    Kaitani, Toshiko; Nakagami, Gojiro; Iizaka, Shinji; Fukuda, Takashi; Oe, Makoto; Igarashi, Ataru; Mori, Taketoshi; Takemura, Yukie; Mizokami, Yuko; Sugama, Junko; Sanada, Hiromi

    2015-01-01

    The high prevalence of severe pressure ulcers (PUs) is an important issue that needs to be highlighted in Japan. In a previous study, we devised an advanced PU management protocol to enable early detection of and intervention for deep tissue injury and critical colonization. This protocol was effective for preventing more severe PUs. The present study aimed to compare the cost-effectiveness of care provided using the advanced PU management protocol, from a medical provider's perspective, implemented by trained wound, ostomy, and continence nurses (WOCNs), with that of conventional care provided by a control group of WOCNs. A Markov model was constructed for a 1-year time horizon to determine the incremental cost-effectiveness ratio of advanced PU management compared with conventional care. The number of quality-adjusted life-years (QALYs) gained and the cost in Japanese yen (¥) ($US1 = ¥120; 2015) were used as the outcomes. Model inputs for clinical probabilities and related costs were based on our previous clinical trial results. Univariate sensitivity analyses were performed. Furthermore, a Bayesian multivariate probability sensitivity analysis was performed using Monte Carlo simulations with advanced PU management. Two different models were created for initial cohort distribution. For both models, the expected effectiveness for the intervention group using advanced PU management techniques was high, with a low expected cost value. The sensitivity analyses suggested that the results were robust. Intervention by WOCNs using advanced PU management techniques was more effective and cost-effective than conventional care. © 2015 by the Wound Healing Society.
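
    The engine of such an evaluation is a small Markov cohort model: a state-occupancy vector is pushed through a monthly transition matrix while costs and quality-adjusted life-years accumulate. All transition probabilities, costs, and utilities below are illustrative placeholders, not the study's inputs.

```python
import numpy as np

# Simplified three-state monthly Markov model: healed / ulcerated / dead.
P = np.array([[0.92, 0.05, 0.03],   # from healed
              [0.25, 0.70, 0.05],   # from ulcerated
              [0.00, 0.00, 1.00]])  # death is absorbing
monthly_cost = np.array([100.0, 900.0, 0.0])  # per state, illustrative
utility = np.array([0.85, 0.60, 0.0])         # quality weights, illustrative

state = np.array([0.0, 1.0, 0.0])  # cohort starts ulcerated
cost = qaly = 0.0
for _ in range(12):                 # 1-year horizon, as in the study
    cost += state @ monthly_cost
    qaly += state @ utility / 12.0  # convert monthly utility to QALYs
    state = state @ P               # advance the cohort one month
print(f"expected cost {cost:,.0f}, QALYs {qaly:.3f}")
```

    Running the model twice, once with each protocol's inputs, and differencing the resulting costs and QALYs yields the incremental cost-effectiveness ratio this kind of study reports.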

  16. The effects of abdominal lipectomy in metabolic syndrome components and insulin sensitivity in females: A systematic review and meta-analysis.

    PubMed

    Seretis, Konstantinos; Goulis, Dimitrios G; Koliakos, Georgios; Demiri, Efterpi

    2015-12-01

    Adipose tissue is an endocrine organ implicated in the pathogenesis of obesity, metabolic syndrome and diabetes. Lipectomy offers a unique opportunity to permanently reduce the absolute number of fat cells, though its functional role remains unclear. This systematic review and meta-analysis aims to assess the effect of abdominal lipectomy on metabolic syndrome components and insulin sensitivity in women. A predetermined protocol, established according to the Cochrane Handbook's recommendations, was used. An electronic search of the MEDLINE, Scopus, Cochrane Library and CENTRAL databases was conducted from inception to May 14, 2015. This search was supplemented by a review of the reference lists of potentially eligible studies and a manual search of key journals in the field of plastic surgery. Eligible studies were prospective studies with ≥1 month of follow-up that included only females who underwent abdominal lipectomy and reported on parameters of metabolic syndrome and insulin sensitivity. The systematic review included 11 studies with a total of 271 individuals. Conflicting results were revealed, though most studies showed no significant metabolic effects after lipectomy. The meta-analysis included 4 studies with 140 subjects. No significant changes were revealed between lipectomy and control groups. This meta-analysis provides evidence that abdominal lipectomy in females does not significantly affect the components of metabolic syndrome and insulin sensitivity. Further high-quality studies are needed to elucidate the potential metabolic effects of abdominal lipectomy. Systematic review registration: PROSPERO CRD42015017564 (www.crd.york.ac.uk/PROSPERO). Copyright © 2015 Elsevier Inc. All rights reserved.

  17. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static systems; steady, one-dimensional, inviscid flow; reaction behind an incident shock wave, including boundary layer correction; and the perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.
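
    As a reader's aside, the notion of a kinetic sensitivity coefficient can be illustrated in a few lines of Python with a toy A → B reaction; this brute-force central-difference estimate is only a schematic of what a code like LSENS computes by far more efficient internal means, and the function names and tolerances are arbitrary.

        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y, k):
            """Toy first-order reaction A -> B with rate constant k."""
            a, b = y
            return [-k * a, k * a]

        def conc_A(k, t_eval, y0=(1.0, 0.0)):
            sol = solve_ivp(rhs, (0.0, t_eval[-1]), y0, t_eval=t_eval,
                            args=(k,), rtol=1e-10, atol=1e-12)
            return sol.y[0]

        t = np.linspace(0.0, 2.0, 5)
        k, dk = 1.5, 1e-6
        # Central-difference sensitivity coefficient d[A](t)/dk.
        sens = (conc_A(k + dk, t) - conc_A(k - dk, t)) / (2 * dk)
        # Analytic check: [A] = exp(-k t), so d[A]/dk = -t exp(-k t).
        print(np.allclose(sens, -t * np.exp(-k * t), atol=1e-6))   # True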

  18. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamic and structural analysis capability. An interface code is written to convert the output of one analysis into the input of the other, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program is developed to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives. The aerodynamic analysis package FAST, using a lifting surface theory, and a structural package, ELAPS, implementing Giles' equivalent plate model, are used.
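
    For readers unfamiliar with the GSE idea, it can be reduced to a small linear-algebra sketch: for two coupled analyses Y1 = A1(X, Y2) and Y2 = A2(X, Y1), the local partial sensitivities are assembled into one linear system whose solution is the set of total derivatives dY/dX. The matrices below are invented numbers purely for illustration, not the wing model of the paper.

        import numpy as np

        # Hypothetical local sensitivities for two coupled disciplines,
        # each with 2 outputs and 1 shared design variable X.
        dA1_dY2 = np.array([[0.3, 0.1], [0.0, 0.2]])   # dA1/dY2
        dA2_dY1 = np.array([[0.4, 0.0], [0.1, 0.3]])   # dA2/dY1
        dA1_dX  = np.array([[1.0], [0.5]])
        dA2_dX  = np.array([[0.2], [0.8]])

        I = np.eye(2)
        # Global Sensitivity Equations:
        #   [ I        -dA1/dY2 ] [ dY1/dX ]   [ dA1/dX ]
        #   [ -dA2/dY1  I       ] [ dY2/dX ] = [ dA2/dX ]
        lhs = np.block([[I, -dA1_dY2], [-dA2_dY1, I]])
        rhs = np.vstack([dA1_dX, dA2_dX])
        total = np.linalg.solve(lhs, rhs)
        print("dY1/dX:", total[:2].ravel(), " dY2/dX:", total[2:].ravel())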

  19. Direct trace-elemental analysis of urine samples by laser ablation-inductively coupled plasma mass spectrometry after sample deposition on clinical filter papers.

    PubMed

    Aramendía, Maite; Rello, Luis; Vanhaecke, Frank; Resano, Martín

    2012-10-16

    Collection of biological fluids on clinical filter papers shows important advantages from a logistic point of view, although analysis of these specimens is far from straightforward. Concerning urine analysis, particularly when direct trace-elemental analysis by laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS) is the goal, several problems arise, such as lack of sensitivity or uneven distribution of the analytes on the filter paper, making it quite difficult to obtain reliable quantitative results. In this paper, a novel approach for urine collection is proposed, which circumvents many of these problems. This methodology consists of the use of precut filter paper discs onto which large amounts of sample can be retained upon a single deposition. This provides higher amounts of the target analytes and, thus, sufficient sensitivity, and allows addition of an adequate internal standard at the clinical lab prior to analysis, therefore making it suitable for a strategy based on unsupervised sample collection and later analysis at referral centers. On the basis of this sampling methodology, an analytical method was developed for the direct determination of several elements in urine (Be, Bi, Cd, Co, Cu, Ni, Sb, Sn, Tl, Pb, and V) at the low μg L(-1) level by means of LA-ICPMS. The method developed provides good results in terms of accuracy and LODs (≤1 μg L(-1) for most of the analytes tested), with a precision in the range of 15%, fit for purpose for clinical control analysis.

  20. The impact of missing trauma data on predicting massive transfusion

    PubMed Central

    Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.

    2013-01-01

    INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated using an observational trauma dataset with missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 to October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24 h after hospital admission. Subjects who received ≥10 RBC units within 24 h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). The percentage of missing values ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, the upper-lower bound ranges of correct classification per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models in the presence of missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms. Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided results similar to complete case analysis in this study. PMID:23778514
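
    The bounding step lends itself to a compact sketch. The fragment below uses fabricated data and an invented single-variable prediction rule (nothing like the actual MT models): subjects with the predictor observed are scored normally, and the bounds assume every subject with a missing predictor is classified correctly (upper bound) or incorrectly (lower bound).

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        sbp = rng.normal(110.0, 25.0, n)          # toy systolic blood pressure
        sbp[rng.random(n) < 0.3] = np.nan         # ~30% missing
        mt = rng.random(n) < 0.25                 # toy massive-transfusion outcome

        obs = ~np.isnan(sbp)
        pred_obs = sbp[obs] < 90.0                # invented rule: predict MT if SBP < 90
        n_correct_obs = np.sum(pred_obs == mt[obs])
        n_miss = np.count_nonzero(~obs)

        # Upper bound: every missing case classified correctly; lower: all wrong.
        upper = (n_correct_obs + n_miss) / n
        lower = n_correct_obs / n
        print(f"correct classification between {lower:.1%} and {upper:.1%}")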

  1. Costs of a school-based dental mobile service in South Africa.

    PubMed

    Molete, M P; Chola, L; Hofman, K J

    2016-10-19

    The burden of untreated tooth decay remains high and oral healthcare utilisation is low for the majority of children in South Africa. There is a need for alternative methods of improving access to low-cost oral healthcare. The mobile dental unit of the University of the Witwatersrand (Wits) has been operational for over 25 years, providing alternative oral healthcare to children and adults who would otherwise not have access. The aim of this study was to conduct a cost analysis of a school-based oral healthcare program in the Wits mobile dental unit. The objectives were to estimate the general costs of the school-based program, the costs of oral healthcare per patient, and the economic implications of providing services at scale. In 2012, the Wits mobile dental unit embarked on a 5-month project to provide oral healthcare in four schools located around Johannesburg. Cost and service use data were retrospectively collected from the program records for the cost analysis, which was undertaken from a provider perspective. The costs considered included both financial and economic costs. Capital costs were annualised and discounted at 6%. One-way sensitivity tests were conducted for uncertain parameters. The total economic costs were R813,701 (US$76,048). The costs of screening and treatment per patient were R331 (US$31) and R743 (US$69), respectively. Furthermore, fissure sealants cost the least of the treatments provided. The sensitivity analysis indicated that the Wits mobile dental unit was cost-efficient at 25% allocation of staff time and that a dental-therapy-led service could save costs by 9.1%. Expanding the services to a wider population of children and utilising dental therapists as key personnel could improve the efficiency of mobile dental healthcare provision.

  2. AKAP150-mediated TRPV1 sensitization is disrupted by calcium/calmodulin

    PubMed Central

    2011-01-01

    Background The transient receptor potential vanilloid type 1 (TRPV1) is expressed in nociceptive sensory neurons and is sensitive to phosphorylation. A-Kinase Anchoring Protein 79/150 (AKAP150) mediates phosphorylation of TRPV1 by Protein Kinases A and C, modulating channel activity. However, few studies have focused on the regulatory mechanisms that control AKAP150 association with TRPV1. In the present study, we identify a role for calcium/calmodulin in controlling AKAP150 association with, and sensitization of, TRPV1. Results In trigeminal neurons, intracellular accumulation of calcium reduced AKAP150 association with TRPV1 in a manner sensitive to calmodulin antagonism. This was also observed in transfected Chinese hamster ovary (CHO) cells, providing a model for conducting molecular analysis of the association. In CHO cells, deletion of the C-terminal calmodulin-binding site of TRPV1 resulted in greater association with AKAP150 and increased channel activity. Furthermore, co-expression of wild-type calmodulin in CHO cells significantly reduced TRPV1 association with AKAP150, as evidenced by total internal reflection fluorescence-fluorescence resonance energy transfer (TIRF-FRET) analysis and electrophysiology. Finally, dominant-negative calmodulin co-expression increased TRPV1 association with AKAP150 and increased basal and PKA-sensitized channel activity. Conclusions The results from these studies indicate that calcium/calmodulin interferes with the association of AKAP150 with TRPV1, potentially extending resensitization of the channel. PMID:21569553

  3. A comparison of standard serological tests for the diagnosis of bovine brucellosis in Canada.

    PubMed Central

    Stemshorn, B W; Forbes, L B; Eaglesome, M D; Nielsen, K H; Robertson, F J; Samagh, B S

    1985-01-01

    Six agglutination tests and two complement fixation tests were compared with respect to specificity, sensitivity and relative sensitivity for the serodiagnosis of bovine brucellosis. Based on 1051 sera from brucellosis-free herds, the specificity of the tests was 98.9% for the buffered plate antigen test (BPAT), 99.2% and 99.3% for the standard tube and plate agglutination tests (STAT and SPAT), respectively, and 99.8% for the 2-mercaptoethanol test (2MET). On this small sample, the rose bengal plate test (RBPT), card test (CARD) and complement fixation test (CFT) correctly classified all sera as negative. On a sample of 167 culture-positive cattle, the sensitivities of the tests were CFT: 79.0%, BPAT: 75.4%, RBPT: 74.9%, CARD: 74.3%, SPAT: 73.1%, STAT: 68.9%, and 2MET: 59.9%. All tests combined detected only 82% of these infected cattle. Analysis of the relative sensitivity of the six agglutination tests gave the following ranking: BPAT > RBPT > CARD > SPAT > STAT. The 2MET ranked between the BPAT and RBPT or between the RBPT and CARD, depending on the analysis used. The use of the BPAT as a screening test is recommended provided that a test of high specificity and sensitivity, such as the CFT, is used to confirm screening test reactions. PMID:4075239

  4. Health impact assessment of a skin sensitizer: Analysis of potential policy measures aimed at reducing geraniol concentrations in personal care products and household cleaning products.

    PubMed

    Jongeneel, W P; Delmaar, J E; Bokkers, B G H

    2018-06-08

    A methodology to assess the health impact of skin sensitizers is introduced, consisting of a comparison of the probabilistic aggregated exposure with a probabilistic (individual) human sensitization or elicitation induction dose. The health impact of potential policy measures aimed at reducing the concentration of a fragrance allergen, geraniol, in consumer products is analysed in a simulated population derived from multiple product use surveys. Our analysis shows that current dermal exposure to geraniol from personal care and household cleaning products leads to new cases of contact allergy and induces clinical symptoms in those already sensitized. We estimate that this exposure results in 34 new cases of geraniol contact allergy per million consumers per year in Western and Northern Europe, mainly due to exposure to household cleaning products. About twice as many consumers (60 per million) are projected to suffer from clinical symptoms due to re-exposure to geraniol. Policy measures restricting geraniol concentrations to <0.01% would noticeably reduce new cases of sensitization and decrease both the number of people with clinical symptoms and the frequency of occurrence of these clinical symptoms. The estimated numbers should be interpreted with caution and provide only a rough indication of the health impact. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.
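
    The core comparison of the methodology can be emulated in a few lines: draw, for each simulated consumer, an aggregate dermal exposure and an individual induction dose, and count how often exposure exceeds the dose. Both lognormal distributions below are invented for illustration; the study derived its inputs from product use surveys and dose-response data.

        import numpy as np

        rng = np.random.default_rng(6)
        n = 1_000_000                              # simulated consumers

        # Hypothetical lognormal distributions (arbitrary dose units).
        exposure = rng.lognormal(mean=-6.0, sigma=1.2, size=n)   # aggregate exposure
        hsid     = rng.lognormal(mean=-1.0, sigma=0.8, size=n)   # individual induction dose

        cases_per_million = np.sum(exposure > hsid) / n * 1e6
        print(f"~{cases_per_million:.0f} new sensitizations per million consumers")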

  5. AKAP150-mediated TRPV1 sensitization is disrupted by calcium/calmodulin.

    PubMed

    Chaudhury, Sraboni; Bal, Manjot; Belugin, Sergei; Shapiro, Mark S; Jeske, Nathaniel A

    2011-05-14

    The transient receptor potential vanilloid type 1 (TRPV1) is expressed in nociceptive sensory neurons and is sensitive to phosphorylation. A-Kinase Anchoring Protein 79/150 (AKAP150) mediates phosphorylation of TRPV1 by Protein Kinases A and C, modulating channel activity. However, few studies have focused on the regulatory mechanisms that control AKAP150 association with TRPV1. In the present study, we identify a role for calcium/calmodulin in controlling AKAP150 association with, and sensitization of, TRPV1. In trigeminal neurons, intracellular accumulation of calcium reduced AKAP150 association with TRPV1 in a manner sensitive to calmodulin antagonism. This was also observed in transfected Chinese hamster ovary (CHO) cells, providing a model for conducting molecular analysis of the association. In CHO cells, deletion of the C-terminal calmodulin-binding site of TRPV1 resulted in greater association with AKAP150 and increased channel activity. Furthermore, co-expression of wild-type calmodulin in CHO cells significantly reduced TRPV1 association with AKAP150, as evidenced by total internal reflection fluorescence-fluorescence resonance energy transfer (TIRF-FRET) analysis and electrophysiology. Finally, dominant-negative calmodulin co-expression increased TRPV1 association with AKAP150 and increased basal and PKA-sensitized channel activity. The results from these studies indicate that calcium/calmodulin interferes with the association of AKAP150 with TRPV1, potentially extending resensitization of the channel.

  6. Sensitivity and accuracy of hybrid fluorescence-mediated tomography in deep tissue regions.

    PubMed

    Rosenhain, Stefanie; Al Rawashdeh, Wa'el; Kiessling, Fabian; Gremse, Felix

    2017-09-01

    Fluorescence-mediated tomography (FMT) enables noninvasive assessment of the three-dimensional distribution of near-infrared fluorescence in mice. The combination with micro-computed tomography (µCT) provides anatomical data, enabling improved fluorescence reconstruction and image analysis. The aim of our study was to assess sensitivity and accuracy of µCT-FMT under realistic in vivo conditions in deeply-seated regions. Accordingly, we acquired fluorescence reflectance images (FRI) and µCT-FMT scans of mice which were prepared with rectal insertions with different amounts of fluorescent dye. Default and high-sensitivity scans were acquired and background signal was analyzed for three FMT channels (670 nm, 745 nm, and 790 nm). Analysis was performed for the original and an improved FMT reconstruction using the µCT data. While FRI and the original FMT reconstruction could detect 100 pmol, the improved FMT reconstruction could detect 10 pmol and significantly improved signal localization. By using a finer sampling grid and increasing the exposure time, the sensitivity could be further improved to detect 0.5 pmol. Background signal was highest in the 670 nm channel and most prominent in the gastro-intestinal tract and in organs with high relative amounts of blood. In conclusion, we show that µCT-FMT allows sensitive and accurate assessment of fluorescence in deep tissue regions. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. TASI Lectures on Cosmological Observables and String Theory

    NASA Astrophysics Data System (ADS)

    Silverstein, Eva

    These lectures provide an updated pedagogical treatment of the theoretical structure and phenomenology of some basic mechanisms for inflation, along with an overview of the structure of cosmological uplifts of holographic duality. A full treatment of the problem requires 'ultraviolet completion' because of the sensitivity of inflation to quantum gravity effects, including back reaction and non-adiabatic production of heavy degrees of freedom. Cosmological observations imply accelerated expansion of the late universe, and provide increasingly precise constraints and discovery potential on the amplitude and shape of primordial tensor and scalar perturbations, and some of their correlation functions. Most backgrounds of string theory have positive potential energy, with a rich but still highly constrained landscape of solutions. The theory contains novel mechanisms for inflation, some subject to significant observational tests, with highly UV-sensitive tensor mode measurements being a prime example along with certain shapes of primordial correlation functions. Although the detailed ultraviolet completion is not accessible experimentally, some of these mechanisms directly stimulate a more systematic analysis of the space of low energy theories and signatures relevant for analysis of data, which is sensitive to physics orders of magnitude above the energy scale of inflation as a result of long time evolution (dangerous irrelevance) and the substantial amount of data (allowing constraints on quantities with low signal-to-noise per mode). Portions of these lectures appeared previously in Les Houches 2013, "Post-Planck Cosmology".

  8. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.
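
    The advantage of analytic derivatives over finite differencing can be seen in miniature with a generic Python comparison (unrelated to Pycycle's internals): the finite-difference error depends delicately on the step size, whereas a hand-coded derivative is exact to round-off.

        import numpy as np

        f = lambda x: np.exp(x) * np.sin(x)                  # stand-in model output
        df = lambda x: np.exp(x) * (np.sin(x) + np.cos(x))   # analytic derivative

        x0 = 1.3
        for h in (1e-2, 1e-6, 1e-12):
            fd = (f(x0 + h) - f(x0)) / h                     # forward difference
            print(f"h={h:.0e}  FD error = {abs(fd - df(x0)):.2e}")
        # Large h -> truncation error dominates; tiny h -> round-off dominates.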

  9. Importance of optimizing chromatographic conditions and mass spectrometric parameters for supercritical fluid chromatography/mass spectrometry.

    PubMed

    Fujito, Yuka; Hayakawa, Yoshihiro; Izumi, Yoshihiro; Bamba, Takeshi

    2017-07-28

    Supercritical fluid chromatography/mass spectrometry (SFC/MS) has great potential for high-throughput, simultaneous analysis of a wide variety of compounds, and it has been widely used in recent years. The use of MS for detection provides the advantages of high sensitivity and high selectivity. However, the sensitivity of MS detection depends on the chromatographic conditions and MS parameters. Thus, optimization of the MS parameters corresponding to the SFC conditions is mandatory for maximizing performance when connecting SFC to MS. The aim of this study was to reveal a way to decide the optimum composition of the mobile phase and the flow rate of the make-up solvent for MS detection over a wide range of compounds. Additionally, we also show the basic concept for determining the optimum values of the MS parameters, focusing on MS detection sensitivity in SFC/MS analysis. To verify the versatility of these findings, a total of 441 pesticides with a wide range of polarity (logPow from -4.21 to 7.70) and pKa (acidic, neutral and basic) were analyzed. In this study, a new SFC-MS interface was used, which can transfer the entire volume of eluate into the MS by directly coupling the SFC with the MS. This enabled us to compare the sensitivity and optimum MS parameters between LC/MS and SFC/MS for the same sample volume introduced into the MS. As a result, it was found that the optimum values of some MS parameters were completely different from those for LC/MS, and that SFC/MS-specific optimization of the analytical conditions is required. Lastly, we evaluated the sensitivity of SFC/MS using fully optimized analytical conditions. We confirmed that SFC/MS showed much higher sensitivity than LC/MS when the analytical conditions were fully optimized for SFC/MS; the higher sensitivity also increased the number of compounds that could be detected with good repeatability in real sample analysis. This result indicates that SFC/MS has potential for practical use in the multiresidue analysis of a wide range of compounds requiring high sensitivity. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for neutron scattering angular distributions [2] and constrained Chi sensitivities for prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.
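
    For orientation, the Watt spectrum mentioned above has the well-known form χ(E) ∝ exp(−E/a)·sinh(√(bE)). The sketch below evaluates it with parameter values commonly quoted for thermal-neutron fission of 235U (a ≈ 0.988 MeV, b ≈ 2.249 MeV⁻¹) and normalizes it numerically; how NDaST itself parameterizes and perturbs such expressions is a matter for the tool's own documentation.

        import numpy as np

        def watt(E, a=0.988, b=2.249):
            """Unnormalized Watt fission spectrum; E, a in MeV, b in 1/MeV."""
            return np.exp(-E / a) * np.sinh(np.sqrt(b * E))

        E = np.linspace(1e-4, 20.0, 20000)        # energy grid (MeV)
        chi = watt(E)
        chi /= np.trapz(chi, E)                   # normalize to unit area
        print(f"mean fission-neutron energy ~ {np.trapz(E * chi, E):.2f} MeV")  # ~2 MeV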

  11. Experimental demonstration and theoretical explanation of the efficiency of the nano-structured silicon as the transducer for optical immune biosensors

    NASA Astrophysics Data System (ADS)

    Starodub, Nickolaj F.; Slyshyk, Nelya F.; Shavanova, Kateryna E.; Karpyuk, Andrij; Mel'nichenko, Mykola M.; Zherdev, Anatolij V.; Dzantiev, Boris B.

    2014-10-01

    We present experimental results on the efficiency of structured nano-porous silicon (sNPS) as a transducer in immune biosensors designed for the control of retroviral bovine leucosis (RBL) and for determining the levels of the mycotoxins T2 and patulin in environmental objects. An arsenal of traditional immunological methods exists for the biochemical diagnosis of the above diseases and for toxin control, but these methods are deeply routine and cannot meet practical requirements for express analysis, low cost, and simplicity. To meet these practical demands we earlier developed immune biosensors based on SPR, TIRE, and thermistors. To find a simpler assay variant, we studied the efficiency of sNPS as a transducer in an immune biosensor. Specific signals were registered by measuring the level of chemiluminescence (ChL) or the photocurrent. For both registration variants, the sensitivity of the biosensor in determining T2 and patulin was about 10-20 ng/ml. The sensitivity of RBL analysis with these immune biosensors exceeds that of traditionally used approaches, including the ELISA method. The optimal dilution of blood serum for leukemia screening should be no less than 1:100, or even 1:500. The immune biosensor may also be applied for express screening of leucosis through analysis of milk; in this case the optimal dilution of milk should be about 1:20. The total time of analysis including all steps (immobilization of specific antibodies or antigens on the transducer surface and measurements) was about 40 min, and it may decline sharply if the above-mentioned sensitive elements are immobilized prior to measurements. It is concluded that the proposed type of transducer for immune biosensors is effective for the analysis of mycotoxins in a screening regime.

  12. Cost of providing injectable contraceptives through a community-based social marketing program in Tigray, Ethiopia.

    PubMed

    Prata, Ndola; Downing, Janelle; Bell, Suzanne; Weidert, Karen; Godefay, Hagos; Gessessew, Amanuel

    2016-06-01

    To provide a cost analysis of an injectable contraceptive program combining community-based distribution and social marketing in Tigray, Ethiopia. We conducted a cost analysis, modeling the costs and programmatic outcomes of the program's initial implementation in 3 districts of Tigray, Ethiopia. Costs were estimated from a review of program expense records, invoices, and interviews with health workers. Programmatic outcomes included the number of injections and couple-years of protection (CYP) provided. We performed a sensitivity analysis on the average number of injections provided per month by community health workers (CHWs), the cost of the commodity, and the number of CHWs trained. The average programmatic cost per CYP was US$17.91 for all districts, with a substantial range from US$15.48 to US$38.09 per CYP across districts. Direct service cost was estimated at US$2.96 per CYP. The cost per CYP was slightly sensitive to the commodity cost of the injectable contraceptives and the number of CHWs. The capacity of each CHW, measured by the number of injections sold, was a key input driving the cost per CYP of this model. With a direct service cost of US$2.96 per CYP, this study demonstrates the potential cost of community-based social marketing programs for injectable contraceptives. The findings suggest that the cost of social marketing of contraceptives in rural communities is comparable to other delivery mechanisms with regard to CYP, but further research is needed to determine the full impact and cost-effectiveness for women and communities beyond what is measured in CYP. Copyright © 2016 Elsevier Inc. All rights reserved.
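
    The headline metric is easy to reproduce: for a 3-month injectable, couple-years of protection are conventionally computed as injections divided by four, and cost per CYP divides programme cost by that figure. The totals below are invented, not the Tigray data.

        # Hypothetical programme totals (NOT the study's figures).
        total_cost_usd = 12_000.0    # programme cost over the period
        injections = 2_700           # 3-month injectables administered

        cyp = injections / 4.0       # standard factor: 4 injections = 1 CYP
        print(f"cost per CYP: US${total_cost_usd / cyp:.2f}")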

  13. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models of increasing complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: convergence of the values of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the values of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical sample sizes reported in the literature can be well below those that actually ensure convergence of ranking and screening.
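
    A minimal sketch of the bootstrap convergence check, using a one-at-a-time elementary-effects screening of a toy function instead of any of the hydrological models above: the sensitivity measure is recomputed on resampled subsets, and the width of the resulting confidence interval indicates whether the sample size is adequate.

        import numpy as np

        rng = np.random.default_rng(2)

        def model(x):
            """Toy model: factor 0 strong, factor 1 weak, factor 2 inactive."""
            return 5.0 * x[:, 0] + 0.5 * x[:, 1] ** 2

        n_base, n_fac, delta = 100, 3, 0.1
        base = rng.random((n_base, n_fac))
        f0 = model(base)
        ee = np.empty((n_base, n_fac))            # elementary effects
        for j in range(n_fac):
            pert = base.copy()
            pert[:, j] += delta
            ee[:, j] = (model(pert) - f0) / delta

        mu_star = np.abs(ee).mean(axis=0)         # Morris-style mean |EE|
        # Bootstrap over base points to judge convergence of mu_star.
        boot = np.array([np.abs(ee[rng.integers(0, n_base, n_base)]).mean(axis=0)
                         for _ in range(1000)])
        lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
        for j in range(n_fac):
            print(f"factor {j}: mu* = {mu_star[j]:.3f}  95% CI [{lo[j]:.3f}, {hi[j]:.3f}]")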

  14. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944

  15. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
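
    To make the scoring-rule step tangible, the toy Python fragment below computes a mean logarithmic score for count data: the posterior predictive probability of each observed count is the Monte Carlo average of the Poisson pmf over posterior draws of that subject's rate, and the score averages −log of these probabilities. The draws and counts are simulated, and the leave-one-out refinement that INLA provides is deliberately ignored here.

        import numpy as np
        from scipy.stats import poisson

        rng = np.random.default_rng(3)
        n_subj, n_draws = 50, 2000

        # Toy posterior draws of each subject's Poisson rate (e.g., from a GLMM).
        lam = rng.gamma(shape=3.0, scale=1.0, size=(n_draws, n_subj))
        y_obs = rng.poisson(3.0, size=n_subj)     # observed counts

        pred_prob = poisson.pmf(y_obs, lam).mean(axis=0)   # posterior predictive p(y_i)
        mean_log_score = -np.mean(np.log(pred_prob))
        print(f"mean logarithmic score: {mean_log_score:.3f}")   # lower is better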

  16. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  17. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.
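
    A crude stand-in for the probabilistic response calculation (plain Monte Carlo on a toy stress model, not PFEM, PAAM, or PBEM) shows how an exceedance probability and a rough input-sensitivity indicator fall out of the same samples; every number below is invented.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 100_000

        load = rng.normal(1000.0, 150.0, n)       # applied load (N), hypothetical
        area = rng.normal(2.0e-4, 1.0e-5, n)      # cross-section (m^2), hypothetical
        stress = load / area                      # toy response (Pa)

        threshold = 6.0e6
        print(f"P(stress > {threshold:.1e} Pa) = {np.mean(stress > threshold):.4f}")
        # Sample correlations as a quick-and-dirty sensitivity ranking of the inputs.
        print("corr(load, stress) =", round(np.corrcoef(load, stress)[0, 1], 2))
        print("corr(area, stress) =", round(np.corrcoef(area, stress)[0, 1], 2))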

  18. Constitutive BAK activation as a determinant of drug sensitivity in malignant lymphohematopoietic cells.

    PubMed

    Dai, Haiming; Ding, Husheng; Meng, X Wei; Peterson, Kevin L; Schneider, Paula A; Karp, Judith E; Kaufmann, Scott H

    2015-10-15

    Mitochondrial outer membrane permeabilization (MOMP), a key step in the intrinsic apoptotic pathway, is incompletely understood. Current models emphasize the role of BH3-only BCL2 family members in BAX and BAK activation. Here we demonstrate concentration-dependent BAK autoactivation under cell-free conditions and provide evidence that this autoactivation plays a key role in regulating the intrinsic apoptotic pathway in intact cells. In particular, we show that up to 80% of BAK (but not BAX) in lymphohematopoietic cell lines is oligomerized and bound to anti-apoptotic BCL2 family members in the absence of exogenous death stimuli. The extent of this constitutive BAK oligomerization is diminished by BAK knockdown and unaffected by BIM or PUMA down-regulation. Further analysis indicates that sensitivity of cells to BH3 mimetics reflects the identity of the anti-apoptotic proteins to which BAK is constitutively bound, with extensive BCLXL•BAK complexes predicting navitoclax sensitivity, and extensive MCL1•BAK complexes predicting A1210477 sensitivity. Moreover, high BAK expression correlates with sensitivity of clinical acute myelogenous leukemia to chemotherapy, whereas low BAK levels correlate with resistance and relapse. Collectively, these results inform current understanding of MOMP and provide new insight into the ability of BH3 mimetics to induce apoptosis without directly activating BAX or BAK. © 2015 Dai et al.; Published by Cold Spring Harbor Laboratory Press.

  19. Chemical Analysis through CL-Detection Assisted by Periodate Oxidation

    PubMed Central

    Evmiridis, Nicholaos P.; Vlessidis, Athanasios G.; Thanasoulias, Nicholas C.

    2007-01-01

    The progress of the authors' research in the field of CL emission generated by pyrogallol oxidation, and its application to the direct determination of periodate and the indirect or direct determination of other compounds through a flow-injection manifold/CL-detection setup, is described. The instrumentation used for these studies was a simple flow-injection manifold that provides good reproducibility, coupled to a red-sensitive photomultiplier that gives sensitive CL detection. In addition, recent reports on studies and analytical methods based on CL emission generated by periodate oxidation by other authors are included. PMID:17611611

  20. Smart zwitterionic membranes with on/off behavior for protein transport.

    PubMed

    Su, Yanlei; Zheng, Lili; Li, Chao; Jiang, Zhongyi

    2008-09-25

    Poly(acrylonitrile) (PAN)-based zwitterionic membranes, composed of PAN and a poly(N,N-dimethyl-N-methacryloxyethyl-N-(3-sulfopropyl)) copolymer, are electrolyte-sensitive smart membranes. The hydrophilicity of the membranes increased and protein adsorption decreased remarkably in response to environmental stimuli. FTIR spectroscopic analysis directly provided molecular-level observation of the enhanced dissociation and hydration of zwitterionic sulfobetaine dipoles at higher electrolyte concentrations. The smart PAN-based zwitterionic membranes can close or open channels for protein transport under different NaCl concentrations. The electrolyte-sensitive on/off switching of protein transport is reversible.

  1. Quantifying the bending of bilayer temperature-sensitive hydrogels

    NASA Astrophysics Data System (ADS)

    Dong, Chenling; Chen, Bin

    2017-04-01

    Stimuli-responsive hydrogels can serve as manipulators, including grippers, sensors, etc., where structures can undergo significant bending. Here, a finite-deformation theory is developed to quantify the evolution of the curvature of bilayer temperature-sensitive hydrogels when subjected to a temperature change. Analysis of the theory indicates that there is an optimal thickness ratio to acquire the largest curvature in the bilayer and also suggests that the sign or the magnitude of the curvature can be significantly affected by pre-stretches or small pores in the bilayer. This study may provide important guidelines in fabricating temperature-responsive bilayers with desirable mechanical performance.
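
    The existence of an optimal thickness ratio can be illustrated with the classical small-strain bilayer (Timoshenko) curvature formula, which is only an analogy here; the paper's finite-deformation theory is more general. The mismatch strain, modulus ratio, and total thickness below are all hypothetical.

        import numpy as np

        def curvature(m, n, eps, h):
            """Timoshenko bilayer curvature: m = t1/t2 thickness ratio,
            n = E1/E2 modulus ratio, eps = mismatch strain, h = total thickness."""
            num = 6.0 * eps * (1.0 + m) ** 2
            den = h * (3.0 * (1.0 + m) ** 2
                       + (1.0 + m * n) * (m ** 2 + 1.0 / (m * n)))
            return num / den

        m = np.linspace(0.05, 5.0, 500)
        kappa = curvature(m, n=2.0, eps=0.05, h=1e-3)   # invented values
        print(f"optimal thickness ratio ~ {m[np.argmax(kappa)]:.2f}, "
              f"max curvature ~ {kappa.max():.1f} 1/m")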

  2. In-line microfluidic refractometer based on C-shaped fiber assisted photonic crystal fiber Sagnac interferometer.

    PubMed

    Wu, Chuang; Tse, Ming-Leung Vincent; Liu, Zhengyong; Guan, Bai-Ou; Lu, Chao; Tam, Hwa-Yaw

    2013-09-01

    We propose and demonstrate a highly sensitive in-line photonic crystal fiber (PCF) microfluidic refractometer. Ultrathin C-shaped fibers are spliced in-between the PCF and standard single-mode fibers. The C-shaped fibers provide openings for liquid to flow in and out of the PCF. Based on a Sagnac interferometer, the refractive index (RI) response of the device is investigated theoretically and experimentally. A high sensitivity of 6621 nm/RIU for liquid RI from 1.330 to 1.333 is achieved in the experiment, which agrees well with the theoretical analysis.
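
    The quoted figure of merit is simply the slope of the dip-wavelength shift against refractive index; the short fragment below shows that calculation on invented calibration points chosen to land near the reported sensitivity.

        import numpy as np

        ri = np.array([1.330, 1.331, 1.332, 1.333])              # liquid RI
        wl = np.array([1550.0, 1556.6, 1563.2, 1569.9])          # invented dips (nm)
        slope, _ = np.polyfit(ri, wl, 1)
        print(f"sensitivity ~ {slope:.0f} nm/RIU")               # ~6600 nm/RIU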

  3. Sensitivity of Narrative Organization Measures Using Narrative Retells Produced by Young School-Age Children

    ERIC Educational Resources Information Center

    Heilmann, John; Miller, Jon F.; Nockerts, Ann

    2010-01-01

    Analysis of children's productions of oral narratives provides a rich description of children's oral language skills. However, measures of narrative organization can be directly affected by both developmental and task-based performance constraints, which can make a measure insensitive and inappropriate for a particular population and/or sampling…

  4. 75 FR 28616 - Agilent Technologies, Inc.; Analysis of the Agreement Containing Consent Order to Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-21

    ... equipment used to test cell phones and communications equipment, machines that determine the contents of... employ various analytical techniques to test samples of many types, are used by academic researchers... require the sensitivity provided by ICP-MS, and because many customers perform tests pursuant to...

  5. Colorimetric analysis of saliva–alcohol test strips by smartphone-based instruments using machine-learning algorithms

    USDA-ARS?s Scientific Manuscript database

    Strip lateral flow assays, similar to a home pregnancy test, are used widely in food safety applications to provide rapid and accurate tests for the presence of specific foodborne pathogens or other contaminants. Though these tests are very rapid, they are not very sensitive, are not quantitative, a...

  6. Guidance for Organisational Strategy on Knowledge to Action from Conceptual Frameworks and Practice

    ERIC Educational Resources Information Center

    Willis, Cameron; Riley, Barbara; Lewis, Mary; Stockton, Lisa; Yessis, Jennifer

    2017-01-01

    This paper aims to provide public health organisations involved in chronic disease prevention with conceptual and practical guidance for developing contextually sensitive knowledge-to-action (KTA) strategies. Methods involve an analysis of 13 relevant conceptual KTA frameworks, and a review of three case examples of organisations with active KTA…

  7. Methods of recording and analysing cough sounds.

    PubMed

    Subburaj, S; Parvez, L; Rajagopalan, T G

    1996-01-01

    Efforts have been directed toward developing a computerized system for the acquisition and multi-dimensional analysis of cough sounds. The system consists of a PC-AT486 computer with an ADC board having 12-bit resolution. The audio cough sound is acquired in the computer using a sensitive miniature microphone at a sampling rate of 8 kHz and simultaneously recorded in real time using a digital audio tape recorder, which also serves as a backup. Analysis of the cough sound is done in the time and frequency domains using the digitized data, which provide numerical values for key parameters such as cough counts, bouts, their intensity and latency. In addition, the duration of each event and the cough patterns provide a unique tool that allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what is perhaps the first integrated approach for the objective evaluation of cough.
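
    A bare-bones version of the time-domain step, counting cough events from a digitized recording, might look like the following sketch; the signal, window length, and threshold are all invented, and the real system adds calibrated intensity, latency, and frequency-domain measures on top.

        import numpy as np

        fs = 8000                                  # sampling rate used above (Hz)
        rng = np.random.default_rng(4)

        # Toy 5 s signal: low-level noise plus three short "cough" bursts.
        x = 0.01 * rng.standard_normal(5 * fs)
        for onset in (0.8, 2.1, 3.9):
            i = int(onset * fs)
            x[i:i + 1200] += 0.5 * rng.standard_normal(1200)

        # Short-time energy envelope (50 ms window), then threshold crossings.
        win = int(0.05 * fs)
        env = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))
        active = env > 0.1                         # invented threshold
        onsets = np.flatnonzero(active[1:] & ~active[:-1])   # rising edges
        print(f"cough count: {onsets.size}, onsets (s): {np.round(onsets / fs, 2)}")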

  8. Field-Effect Biosensors for On-Site Detection: Recent Advances and Promising Targets.

    PubMed

    Choi, Jaebin; Seong, Tae Wha; Jeun, Minhong; Lee, Kwan Hyi

    2017-10-01

    There is an explosive interest in the immediate and cost-effective analysis of field-collected biological samples, as many advanced biodetection tools are highly sensitive, yet immobile. On-site biosensors are portable and convenient sensors that provide detection results at the point of care. They are designed to secure precision in highly ionic and heterogeneous solutions with minimal hardware. Among various methods that are capable of such analysis, field-effect biosensors are promising candidates due to their unique sensitivity, manufacturing scalability, and integrability with computational circuitry. Recent developments in nanotechnological surface modification show promising results in sensing from blood, serum, and urine. This report gives a particular emphasis on the on-site efficacy of recently published field-effect biosensors, specifically, detection limits in physiological solutions, response times, and scalability. The survey of the properties and existing detection methods of four promising biotargets, exosomes, bacteria, viruses, and metabolites, aims at providing a roadmap for future field-effect and other on-site biosensors. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Characterization of Colloidal Quantum Dot Ligand Exchange by X-ray Photoelectron Spectroscopy

    NASA Astrophysics Data System (ADS)

    Atewologun, Ayomide; Ge, Wangyao; Stiff-Roberts, Adrienne D.

    2013-05-01

    Colloidal quantum dots (CQDs) are chemically synthesized semiconductor nanoparticles with size-dependent wavelength tunability. Chemical synthesis of CQDs involves the attachment of long organic surface ligands to prevent aggregation; however, these ligands also impede charge transport. Therefore, it is beneficial to exchange longer surface ligands for shorter ones for optoelectronic devices. Typical characterization techniques used to analyze surface ligand exchange include Fourier-transform infrared spectroscopy, x-ray diffraction, transmission electron microscopy, and nuclear magnetic resonance spectroscopy, yet these techniques do not provide a simultaneously direct, quantitative, and sensitive method for evaluating surface ligands on CQDs. In contrast, x-ray photoelectron spectroscopy (XPS) can provide nanoscale sensitivity for quantitative analysis of CQD surface ligand exchange. A unique aspect of this work is that a fingerprint is identified for shorter surface ligands by resolving the regional XPS spectrum corresponding to different types of carbon bonds. In addition, a deposition technique known as resonant infrared matrix-assisted pulsed laser evaporation is used to improve the CQD film uniformity such that stronger XPS signals are obtained, enabling more accurate analysis of the ligand exchange process.

  10. In vivo optical microscopy of peripheral nerve myelination with polarization sensitive-optical coherence tomography

    PubMed Central

    Henry, Francis P.; Wang, Yan; Rodriguez, Carissa L. R.; Randolph, Mark A.; Rust, Esther A. Z.; Winograd, Jonathan M.; de Boer, Johannes F.; Park, B. Hyle

    2015-01-01

    Assessing nerve integrity and myelination after injury is necessary to provide insight for treatment strategies aimed at restoring neuromuscular function. Currently, this is largely done with electrical analysis, which lacks direct quantitative information. In vivo optical imaging with sufficient imaging depth and resolution could be used to assess the nerve microarchitecture. In this study, we examine the use of polarization sensitive-optical coherence tomography (PS-OCT) to quantitatively assess the sciatic nerve microenvironment through measurements of birefringence after applying a nerve crush injury in a rat model. Initial loss of function and subsequent recovery were demonstrated by calculating the sciatic function index (SFI). We found that the PS-OCT phase retardation slope, which is proportional to birefringence, increased monotonically with the SFI. Additionally, histomorphometric analysis of the myelin thickness and g-ratio shows that the PS-OCT slope is a good indicator of myelin health and recovery after injury. These results demonstrate that PS-OCT is capable of providing nondestructive and quantitative assessment of nerve health after injury and shows promise for continued use both clinically and experimentally in neuroscience. PMID:25858593

  11. Ellipsoidal analysis of coordination polyhedra

    PubMed Central

    Cumby, James; Attfield, J. Paul

    2017-01-01

    The idea of the coordination polyhedron is essential to understanding chemical structure. Simple polyhedra in crystalline compounds are often deformed due to structural complexity or electronic instabilities, so distortion analysis methods are useful. Here we demonstrate that analysis of the minimum bounding ellipsoid of a coordination polyhedron provides a general method for studying distortion, yielding parameters that are sensitive to various orders in metal oxide examples. Ellipsoidal analysis leads to discovery of a general switching of polyhedral distortions at symmetry-disallowed transitions in perovskites that may evidence underlying coordination bistability, and reveals a weak off-centre 'd5 effect' for Fe3+ ions that could be exploited in multiferroics. Separating electronic distortions from intrinsic deformations within the low temperature superstructure of magnetite provides new insights into the charge and trimeron orders. Ellipsoidal analysis can be useful for exploring local structure in many materials such as coordination complexes and frameworks, organometallics and organic molecules. PMID:28146146

  12. 13C labeling analysis of sugars by high resolution-mass spectrometry for metabolic flux analysis.

    PubMed

    Acket, Sébastien; Degournay, Anthony; Merlier, Franck; Thomasset, Brigitte

    2017-06-15

    Metabolic flux analysis is particularly complex in plant cells because of their highly compartmented metabolism. Analysis of free sugars is of interest because it provides data to define fluxes around the hexose, pentose, and triose phosphate pools in different compartments. In this work, we present a method to analyze the isotopomer distribution of free sugars labeled with carbon-13 using liquid chromatography-high resolution mass spectrometry, without a derivatization procedure, adapted for metabolic flux analysis. Our results showed good sensitivity and reproducibility, and better accuracy in determining isotopic enrichments of free sugars, compared with our previous methods [5, 6]. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Development of IR Contrast Data Analysis Application for Characterizing Delaminations in Graphite-Epoxy Structures

    NASA Technical Reports Server (NTRS)

    Havican, Marie

    2012-01-01

    Objective: Develop an infrared (IR) flash thermography application based on use of a calibration standard for inspecting graphite-epoxy laminated/honeycomb structures. Background: Graphite/epoxy composites (laminated and honeycomb) are widely used on NASA programs. Composite materials are susceptible to impact damage that is not readily detected by visual inspection. IR inspection can provide the sensitivity required to detect surface damage in composites during manufacturing and during service. IR contrast analysis can characterize the depth, size, and gap thickness of impact damage. Benefits/Payoffs: The research provides an empirical method of calibrating the flash thermography response in nondestructive evaluation. A physical calibration standard with artificial flaws, such as flat-bottom holes of desired diameter and depth in a desired material, is used in calibration. The research devises several probability of detection (POD) analysis approaches to enable a cost-effective POD study that meets program requirements.

  14. Analyzing Feedback Control Systems

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Downing, John P.

    1987-01-01

    Interactive controls analysis (INCA) program developed to provide user-friendly environment for design and analysis of linear control systems, primarily feedback control. Designed for use with both small- and large-order systems. Using interactive-graphics capability, INCA user quickly plots root locus, frequency response, or time response of either continuous-time system or sampled-data system. Configuration and parameters easily changed, allowing user to design compensation networks and perform sensitivity analyses in very convenient manner. Written in Pascal and FORTRAN.

  15. The rise of multiple imputation: a review of the reporting and implementation of the method in medical research.

    PubMed

    Hayati Rezvan, Panteha; Lee, Katherine J; Simpson, Julie A

    2015-04-07

    Missing data are common in medical research, which can lead to a loss in statistical power and potentially biased results if not handled appropriately. Multiple imputation (MI) is a statistical method, widely adopted in practice, for dealing with missing data. Many academic journals now emphasise the importance of reporting information regarding missing data and proposed guidelines for documenting the application of MI have been published. This review evaluated the reporting of missing data, the application of MI including the details provided regarding the imputation model, and the frequency of sensitivity analyses within the MI framework in medical research articles. A systematic review of articles published in the Lancet and New England Journal of Medicine between January 2008 and December 2013 in which MI was implemented was carried out. We identified 103 papers that used MI, with the number of papers increasing from 11 in 2008 to 26 in 2013. Nearly half of the papers specified the proportion of complete cases or the proportion with missing data by each variable. In the majority of the articles (86%) the imputed variables were specified. Of the 38 papers (37%) that stated the method of imputation, 20 used chained equations, 8 used multivariate normal imputation, and 10 used alternative methods. Very few articles (9%) detailed how they handled non-normally distributed variables during imputation. Thirty-nine papers (38%) stated the variables included in the imputation model. Less than half of the papers (46%) reported the number of imputations, and only two papers compared the distribution of imputed and observed data. Sixty-six papers presented the results from MI as a secondary analysis. Only three articles carried out a sensitivity analysis following MI to assess departures from the missing at random assumption, with details of the sensitivity analyses only provided by one article. This review outlined deficiencies in the documenting of missing data and the details provided about imputation. Furthermore, only a few articles performed sensitivity analyses following MI even though this is strongly recommended in guidelines. Authors are encouraged to follow the available guidelines and provide information on missing data and the imputation process.
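
    For readers unfamiliar with the workflow the review audits, the sketch below shows multiple imputation followed by Rubin's-rules pooling; it uses scikit-learn's IterativeImputer as a stand-in for a chained-equations imputer, and the data, estimand, and number of imputations are hypothetical.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 3))
        X[rng.random(X.shape) < 0.2] = np.nan  # ~20% of values missing at random

        m = 20  # number of imputations
        estimates, variances = [], []
        for i in range(m):
            Xi = IterativeImputer(sample_posterior=True, random_state=i).fit_transform(X)
            estimates.append(Xi[:, 0].mean())                     # example estimand
            variances.append(Xi[:, 0].var(ddof=1) / Xi.shape[0])  # its sampling variance

        # Rubin's rules: pool the estimate; total variance = within + (1 + 1/m) * between
        qbar = np.mean(estimates)
        W, B = np.mean(variances), np.var(estimates, ddof=1)
        T = W + (1 + 1 / m) * B
        print(f"pooled mean {qbar:.3f}, total variance {T:.4f}")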

  16. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States.

    PubMed

    Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon

    2018-05-18

    We studied the weather variables to which consequence analysis is sensitive when modeling chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used the OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed by dummy regression analysis using the Statistical Package for the Social Sciences (SPSS) program. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than to wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to introduce differences in impact distance through input errors in weather variables, the most sensitive of which is atmospheric stability.
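
    The dummy regression idea can be sketched in a few lines; the example below uses Python rather than SPSS, and the data frame, variable names, and Pasquill stability classes are illustrative assumptions, not the study's data.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical stand-in for a batch of OCA tool runs
        rng = np.random.default_rng(2)
        df = pd.DataFrame({
            "distance_m": rng.normal(500, 80, 120),        # modeled impact distance
            "temp_c": rng.uniform(-5, 35, 120),
            "wind_ms": rng.uniform(1, 8, 120),
            "stability": rng.choice(list("ABCDEF"), 120),  # Pasquill classes
        })

        # C(stability) dummy-codes the categorical variable; coefficient magnitudes
        # indicate how sensitive the modeled impact distance is to each input.
        model = smf.ols("distance_m ~ temp_c + wind_ms + C(stability)", data=df).fit()
        print(model.params)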

  17. Comparison of the levonorgestrel-releasing intrauterine system, hysterectomy, and endometrial ablation for heavy menstrual bleeding in a decision analysis model.

    PubMed

    Louie, Michelle; Spencer, Jennifer; Wheeler, Stephanie; Ellis, Victoria; Toubia, Tarek; Schiff, Lauren D; Siedhoff, Matthew T; Moulder, Janelle K

    2017-11-01

    A better understanding of the relative risks and benefits of common treatment options for abnormal uterine bleeding (AUB) can help providers and patients make balanced, evidence-based decisions. To provide comparative estimates of clinical outcomes after placement of a levonorgestrel-releasing intrauterine system (LNG-IUS), ablation, or hysterectomy for AUB. A PubMed search was done using combinations of search terms related to abnormal uterine bleeding, LNG-IUS, hysterectomy, endometrial ablation, cost-benefit analysis, cost-effectiveness, and quality-adjusted life years. Full articles published in 2006-2016, available in English, comparing at least two treatment modalities of interest among women of reproductive age with AUB were included. A decision tree was generated to compare clinical outcomes in a hypothetical cohort of 100 000 premenopausal women with nonmalignant AUB. We evaluated complications, mortality, and treatment outcomes over a 5-year period, calculated cumulative quality-adjusted life years (QALYs), and conducted probabilistic sensitivity analysis. The levonorgestrel-releasing intrauterine system had the highest number of QALYs (406 920), followed by hysterectomy (403 466), non-resectoscopic ablation (399 244), and resectoscopic ablation (395 827). Ablation had more treatment failures and complications than LNG-IUS and hysterectomy. Findings were robust in probabilistic sensitivity analysis. The levonorgestrel-releasing intrauterine system and hysterectomy outperformed endometrial ablation for the treatment of AUB. © 2017 International Federation of Gynecology and Obstetrics.
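
    The decision-analytic machinery can be sketched compactly: expected QALYs per strategy under uncertain inputs, re-sampled for probabilistic sensitivity analysis. The utilities, failure probabilities, and beta distributions below are hypothetical placeholders, not the study's inputs.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sim, cohort = 10_000, 100_000

        # Hypothetical 5-year QALYs per woman with and without treatment failure
        qaly_success = {"LNG-IUS": 4.1, "hysterectomy": 4.0}
        qaly_failure = {"LNG-IUS": 3.2, "hysterectomy": 3.4}
        # Beta draws of failure probability implement the probabilistic sensitivity analysis
        p_failure = {"LNG-IUS": rng.beta(20, 80, n_sim),
                     "hysterectomy": rng.beta(5, 95, n_sim)}

        for option in qaly_success:
            p = p_failure[option]
            expected = (1 - p) * qaly_success[option] + p * qaly_failure[option]
            total = cohort * expected  # cumulative cohort QALYs, one value per simulation
            lo, hi = np.percentile(total, [2.5, 97.5])
            print(f"{option}: mean {total.mean():,.0f} QALYs (95% interval {lo:,.0f}-{hi:,.0f})")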

  18. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, greatly reducing the number of parameters from eighty-three to sixteen. The sixteen parameters are then analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. Results show that high values of the efficiency criteria did not indicate excellent performance on the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well, but the minimum and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated; nevertheless, a number of samples still performed well on all three of these signatures. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of the model simulation.
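
    The second step can be sketched with the SALib package; the parameter names and bounds below are illustrative, a toy function stands in for actual DHSVM runs, and the module layout assumes SALib's classic interface.

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        # Three illustrative soil parameters with hypothetical ranges
        problem = {
            "num_vars": 3,
            "names": ["lateral_conductivity", "porosity", "field_capacity"],
            "bounds": [[1e-4, 1e-2], [0.3, 0.6], [0.1, 0.4]],
        }

        X = saltelli.sample(problem, 1024)  # Saltelli sampling design
        Y = np.array([x[0] * 1e3 + x[1] ** 2 / x[2] for x in X])  # stand-in for model output

        Si = sobol.analyze(problem, Y)  # first-order (S1) and total (ST) indices
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: first-order {s1:.2f}, total {st:.2f}")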

  19. MOVES sensitivity analysis update : Transportation Research Board Summer Meeting 2012 : ADC-20 Air Quality Committee

    DOT National Transportation Integrated Search

    2012-01-01

    OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study

  1. MOVES regional level sensitivity analysis

    DOT National Transportation Integrated Search

    2012-01-01

    The MOVES Regional Level Sensitivity Analysis was conducted to increase understanding of the operations of the MOVES Model in regional emissions analysis and to highlight the following: : the relative sensitivity of selected MOVES Model input paramet...

  2. Rapid and accurate detection of urinary pathogens by mobile IMS-based electronic nose: a proof-of-principle study.

    PubMed

    Roine, Antti; Saviauk, Taavi; Kumpulainen, Pekka; Karjalainen, Markus; Tuokko, Antti; Aittoniemi, Janne; Vuento, Risto; Lekkala, Jukka; Lehtimäki, Terho; Tammela, Teuvo L; Oksala, Niku K J

    2014-01-01

    Urinary tract infection (UTI) is a common disease with significant morbidity and economic burden, accounting for a significant part of the workload in clinical microbiology laboratories. Current clinical chemistry point-of-care diagnostics rely on imperfect dipstick analysis, which only provides indirect and insensitive evidence of urinary bacterial pathogens. An electronic nose (eNose) is a handheld device mimicking mammalian olfaction that potentially offers affordable and rapid analysis of samples without preparation at atmospheric pressure. In this study we demonstrate the ability of an ion mobility spectrometry (IMS)-based eNose to discriminate the most common UTI pathogens from the gaseous headspace of culture plates rapidly and without sample preparation. We gathered a total of 101 culture samples containing the four most common UTI bacteria (E. coli, S. saprophyticus, E. faecalis, and Klebsiella spp.) and sterile culture plates. The samples were analyzed using the ChemPro 100i device, consisting of an IMS cell and six semiconductor sensors. Data analysis was conducted by linear discriminant analysis (LDA) and logistic regression (LR). The results were validated by leave-one-out and 5-fold cross-validation analysis. In discriminating sterile from bacterial samples, a sensitivity of 95% and a specificity of 97% were achieved. The bacterial species were identified with a sensitivity of 95% and a specificity of 96% using the eNose, as compared with urine bacterial cultures. These findings strongly demonstrate the ability of our eNose to discriminate bacterial cultures and provide a proof of principle for using this method in urinalysis of UTI.
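
    The validation scheme described (LDA with leave-one-out cross-validation) takes a few lines in scikit-learn; the feature matrix and labels below are random placeholders for the real sensor data.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import LeaveOneOut, cross_val_score

        # Hypothetical eNose data: 101 samples x 6 sensor channels,
        # labels encode four species plus sterile plates (5 classes)
        rng = np.random.default_rng(4)
        X = rng.normal(size=(101, 6))
        y = rng.integers(0, 5, 101)

        acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
        print(f"leave-one-out accuracy: {acc.mean():.2f}")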

  3. Spatially Resolved Chemical Imaging for Biosignature Analysis: Terrestrial and Extraterrestrial Examples

    NASA Astrophysics Data System (ADS)

    Bhartia, R.; Wanger, G.; Orphan, V. J.; Fries, M.; Rowe, A. R.; Nealson, K. H.; Abbey, W. J.; DeFlores, L. P.; Beegle, L. W.

    2014-12-01

    Detection of in situ biosignatures on terrestrial and planetary missions is becoming increasingly important. Missions that target the Earth's deep biosphere, Mars, the moons of Jupiter (including Europa), the moons of Saturn (Titan and Enceladus), and small bodies such as asteroids or comets require methods that both enable in-situ analysis preserving context and provide a means to select high-priority samples for return to Earth. In situ instrumentation for biosignature detection spans a wide range of analytical and spectroscopic methods that capitalize on amino acid distribution, chirality, lipid composition, isotopic fractionation, or textures that persist in the environment. Many of the existing analytical instruments are bulk analysis methods and, while highly sensitive, require sample acquisition and sample processing. By combining them with triaging spectroscopic methods, however, biosignatures can be targeted on a surface while preserving spatial context (including mineralogy, textures, and organic distribution). To provide spatially correlated chemical analysis at multiple spatial scales (meters to microns), we have employed a dual spectroscopic approach that capitalizes on high-sensitivity deep UV native fluorescence detection and high-specificity deep UV Raman analysis. Recently selected as a payload on the Mars 2020 mission, SHERLOC incorporates these optical methods for potential biosignature detection on Mars. We present data from Earth analogs, which serve as our only known examples of biosignatures, and from meteorite samples, which provide an example of abiotic organic formation, and demonstrate how provenance affects the spatial distribution and composition of organics.

  4. Spatial pattern recognition of seismic events in South West Colombia

    NASA Astrophysics Data System (ADS)

    Benítez, Hernán D.; Flórez, Juan F.; Duque, Diana P.; Benavides, Alberto; Lucía Baquero, Olga; Quintero, Jiber

    2013-09-01

    Recognition of seismogenic zones in geographical regions supports seismic hazard studies. This recognition is usually based on visual, qualitative, and subjective analysis of data. Spatial pattern recognition provides a well-founded means to obtain relevant information from large amounts of data. The purpose of this work is to identify and classify spatial patterns in instrumental data from the South West Colombian seismic database. In this research, a clustering tendency analysis validates whether the seismic database possesses a clustering structure, and a non-supervised fuzzy clustering algorithm creates groups of seismic events. Given the sensitivity of fuzzy clustering algorithms to the initial positions of centroids, we propose a centroid-initialization methodology that generates partitions stable with respect to initialization. As a result of this work, a public software tool provides the user with the routines developed for the clustering methodology. The analysis of the seismogenic zones obtained reveals meaningful spatial patterns in South West Colombia. The clustering analysis provides a quantitative location and dispersion of seismogenic zones that facilitates seismological interpretation of seismic activity in South West Colombia.
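
    The core of the methodology, fuzzy c-means with seeded centroid initialization, can be sketched generically in NumPy; this is not the authors' released tool, and the event coordinates below are synthetic.

        import numpy as np

        def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
            """Minimal fuzzy c-means; a fixed seed makes the centroid
            initialization, and hence the partition, reproducible."""
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(len(X), c, replace=False)]  # init on sampled events
            for _ in range(n_iter):
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
                u = 1.0 / d ** (2.0 / (m - 1.0))   # membership degrees...
                u /= u.sum(axis=1, keepdims=True)  # ...normalized over clusters
                um = u ** m
                centroids = (um.T @ X) / um.sum(axis=0)[:, None]
            return centroids, u

        # Synthetic epicenters (longitude, latitude) in a box around SW Colombia
        events = np.random.default_rng(5).uniform([-78.0, 0.0], [-75.0, 4.0], (300, 2))
        centers, memberships = fuzzy_cmeans(events, c=4)
        print(np.round(centers, 2))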

  5. Reliability of Waveform Analysis as an Adjunct to Loss of Resistance for Thoracic Epidural Blocks.

    PubMed

    Leurcharusmee, Prangmalee; Arnuntasupakul, Vanlapa; Chora De La Garza, Daniel; Vijitpavan, Amorn; Ah-Kye, Sonia; Saelao, Abhidej; Tiyaprasertkul, Worakamol; Finlayson, Roderick J; Tran, De Q H

    2015-01-01

    The epidural space is most commonly identified with loss of resistance (LOR). Although sensitive, LOR lacks specificity, as cysts in interspinous ligaments, gaps in the ligamentum flavum, paravertebral muscles, thoracic paravertebral spaces, and intermuscular planes can yield nonepidural LOR. Epidural waveform analysis (EWA) provides a simple confirmatory adjunct for LOR: when the needle is correctly positioned inside the epidural space, measurement of the pressure at its tip results in a pulsatile waveform. In this observational study, we set out to assess the sensitivity, specificity, and positive and negative predictive values of EWA for thoracic epidural blocks. We enrolled a convenience sample of 160 patients undergoing thoracic epidural blocks for thoracic surgery, abdominal surgery, or rib fractures. The choice of patient position (sitting or lateral decubitus), approach (midline or paramedian), and LOR medium (air or normal saline) was left to the operator (attending anesthesiologist, fellow, or resident). After obtaining a satisfactory LOR, the operator injected 5 mL of normal saline through the epidural needle. Sterile tubing, connected to a pressure transducer, was attached to the needle to measure the pressure at the needle tip. A 4-mL bolus of lidocaine 2% with epinephrine 5 μg/mL was then administered and, after 10 minutes, the patient was assessed for sensory blockade to ice. The failure rate (incorrect identification of the epidural space with LOR) was 23.1%. Of these 37 failed epidural blocks, 27 provided no sensory anesthesia at 10 minutes; in 10 subjects, the operator was unable to thread the catheter through the needle. When compared with the ice test, the sensitivity, specificity, and positive and negative predictive values of EWA were 91.1%, 83.8%, 94.9%, and 73.8%, respectively. Epidural waveform analysis (with pressure transduction through the needle) provides a simple adjunct to LOR for thoracic epidural blocks. Although its use was devoid of complications, further confirmatory studies are required before its routine implementation in clinical practice.
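
    The four reported values follow from the standard 2x2-table definitions; the sketch below uses hypothetical counts chosen only to be consistent with the reported percentages, not the authors' raw table.

        def diagnostic_metrics(tp, fp, fn, tn):
            """Standard definitions for a test (here EWA) against a reference (the ice test)."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Hypothetical counts: 123 blocks with sensory anesthesia, 37 without
        print(diagnostic_metrics(tp=112, fp=6, fn=11, tn=31))
        # -> sensitivity 0.911, specificity 0.838, ppv 0.949, npv 0.738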

  6. Spectrotemporal Modulation Sensitivity as a Predictor of Speech Intelligibility for Hearing-Impaired Listeners

    PubMed Central

    Bernstein, Joshua G.W.; Mehraei, Golbarg; Shamma, Shihab; Gallun, Frederick J.; Theodoroff, Sarah M.; Leek, Marjorie R.

    2014-01-01

    Background A model that can accurately predict speech intelligibility for a given hearing-impaired (HI) listener would be an important tool for hearing-aid fitting or hearing-aid algorithm development. Existing speech-intelligibility models do not incorporate variability in suprathreshold deficits that are not well predicted by classical audiometric measures. One possible approach to the incorporation of such deficits is to base intelligibility predictions on sensitivity to simultaneously spectrally and temporally modulated signals. Purpose The likelihood of success of this approach was evaluated by comparing estimates of spectrotemporal modulation (STM) sensitivity to speech intelligibility and to psychoacoustic estimates of frequency selectivity and temporal fine-structure (TFS) sensitivity across a group of HI listeners. Research Design The minimum modulation depth required to detect STM applied to an 86 dB SPL four-octave noise carrier was measured for combinations of temporal modulation rate (4, 12, or 32 Hz) and spectral modulation density (0.5, 1, 2, or 4 cycles/octave). STM sensitivity estimates for individual HI listeners were compared to estimates of frequency selectivity (measured using the notched-noise method at 500, 1000, 2000, and 4000 Hz), TFS processing ability (2 Hz frequency-modulation detection thresholds for 500, 1000, 2000, and 4000 Hz carriers) and sentence intelligibility in noise (at a 0 dB signal-to-noise ratio) that were measured for the same listeners in a separate study. Study Sample Eight normal-hearing (NH) listeners and 12 listeners with a diagnosis of bilateral sensorineural hearing loss participated. Data Collection and Analysis STM sensitivity was compared between NH and HI listener groups using a repeated-measures analysis of variance. A stepwise regression analysis compared STM sensitivity for individual HI listeners to audiometric thresholds, age, and measures of frequency selectivity and TFS processing ability. A second stepwise regression analysis compared speech intelligibility to STM sensitivity and the audiogram-based Speech Intelligibility Index. Results STM detection thresholds were elevated for the HI listeners, but only for low rates and high densities. STM sensitivity for individual HI listeners was well predicted by a combination of estimates of frequency selectivity at 4000 Hz and TFS sensitivity at 500 Hz but was unrelated to audiometric thresholds. STM sensitivity accounted for an additional 40% of the variance in speech intelligibility beyond the 40% accounted for by the audibility-based Speech Intelligibility Index. Conclusions Impaired STM sensitivity likely results from a combination of a reduced ability to resolve spectral peaks and a reduced ability to use TFS information to follow spectral-peak movements. Combining STM sensitivity estimates with audiometric threshold measures for individual HI listeners provided a more accurate prediction of speech intelligibility than audiometric measures alone. These results suggest a significant likelihood of success for an STM-based model of speech intelligibility for HI listeners. PMID:23636210

  7. [Analysis on the antimicrobial resistance of lactic acid bacteria isolated from the yogurt sold in China].

    PubMed

    Fan, Qin; Liu, Shuliang; Li, Juan; Huang, Tingting

    2012-05-01

    To analyze the antimicrobial susceptibility of lactic acid bacteria (LAB) from yogurt, and to provide references for evaluating the safety of LAB and screening safe strains. The sensitivity of 43 LAB strains, including 14 strains of Streptococcus thermophilus, 12 strains of Lactobacillus acidophilus, 9 strains of Lactobacillus bulgaricus, and 8 strains of Bifidobacterium, to 22 antibiotics was tested by the agar plate dilution method. All 43 LAB strains were resistant to trimethoprim, nalidixic acid, ciprofloxacin, lomefloxacin, danofloxacin, and polymyxin E. Their resistance to kanamycin, tetracycline, clindamycin, doxycycline, and cephalothin varied, and they were sensitive or moderately sensitive to the other antibiotics. All isolates were multidrug-resistant. The antimicrobial resistance of the tested LAB strains was comparatively serious; continuous monitoring of their antimicrobial resistance and evaluation of their safety should be strengthened.

  8. Novel approach based on one-tube nested PCR and a lateral flow strip for highly sensitive diagnosis of tuberculous meningitis.

    PubMed

    Sun, Yajuan; Chen, Jiajun; Li, Jia; Xu, Yawei; Jin, Hui; Xu, Na; Yin, Rui; Hu, Guohua

    2017-01-01

    Rapid and sensitive detection of Mycobacterium tuberculosis (M. Tb) in cerebrospinal fluid is crucial in the diagnosis of tuberculous meningitis (TBM), but conventional diagnostic technologies have limited sensitivity and specificity or are time-consuming. In this work, a novel, highly sensitive molecular diagnostic method, the one-tube nested PCR-lateral flow strip test (OTNPCR-LFST), was developed for detecting M. tuberculosis. This one-tube nested PCR maintains the sensitivity of conventional two-step nested PCR while reducing both the chance of cross-contamination and the time required for analysis. The PCR product was detected by a lateral flow strip assay, which provides a basis for migrating the test to a point-of-care (POC) microfluidic format. The developed assay had improved sensitivity compared with traditional PCR, with a limit of detection down to 1 fg of DNA isolated from M. tuberculosis. The assay was also specific for M. tuberculosis, and no cross-reactions were found with other non-target bacteria. The application of this technique to clinical samples was successfully evaluated, and OTNPCR-LFST showed 89% overall sensitivity and 100% specificity for TBM patients. This one-tube nested PCR-lateral flow strip assay is useful for detecting M. tuberculosis in TBM owing to its rapidity, high sensitivity, and simple manipulation.

  9. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
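
    A minimal sketch of the Fourier amplitude sensitivity test using the SALib package follows; the parameter names, bounds, and toy output function are illustrative assumptions, not PRMS itself.

        import numpy as np
        from SALib.sample import fast_sampler
        from SALib.analyze import fast

        # Two illustrative PRMS-like parameters with hypothetical ranges
        problem = {"num_vars": 2,
                   "names": ["soil_moist_max", "snow_melt_coef"],
                   "bounds": [[1.0, 10.0], [0.01, 0.1]]}

        X = fast_sampler.sample(problem, 1000)                 # FAST sampling design
        Y = np.array([np.sin(x[0]) + 10.0 * x[1] for x in X])  # stand-in for a model output

        Si = fast.analyze(problem, Y)  # first-order (S1) and total (ST) indices
        print(dict(zip(problem["names"], np.round(Si["S1"], 2))))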

  10. The contribution of reinforcement sensitivity to the personality-psychopathology hierarchical structure in childhood and adolescence.

    PubMed

    Slobodskaya, Helena R

    2016-11-01

    This study examined the contribution of reinforcement sensitivity to the hierarchical structure of child personality and common psychopathology in community samples of parent reports of children aged 2-18 (N = 968) and self-reports of adolescents aged 10-18 (N = 1,543) using the Inventory of Child Individual Differences-Short version (ICID-S), the Strengths and Difficulties Questionnaire (SDQ), and the Sensitivity to Punishment and Sensitivity to Reward Questionnaire (SPSRQ). A joint higher-order factor analysis of the ICID-S and SDQ scales suggested a 4-factor solution; congruence coefficients indicated replicability of the factors across the 2 samples at all levels of the personality-psychopathology hierarchy. The canonical correlation analyses indicated that reinforcement sensitivity and personality-psychopathology dimensions shared much of their variance. The main contribution of reinforcement sensitivity was through opposing effects of reward and punishment sensitivities. The superordinate factors Beta and Internalizing were best predicted by reinforcement sensitivity, followed by the Externalizing and Positive personality factors. These findings provide evidence for consistency of the hierarchical structure of personality and common psychopathology across informants and highlight the role of reinforcement systems in the development of normal and abnormal patterns of behavior and affect. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. Modern Material Analysis Instruments Add a New Dimension to Materials Characterization and Failure Analysis

    NASA Technical Reports Server (NTRS)

    Panda, Binayak

    2009-01-01

    Modern analytical tools can yield invaluable results during materials characterization and failure analysis. Scanning electron microscopes (SEMs) provide significant analytical capabilities, including angstrom-level resolution. These systems can be equipped with a silicon drift detector (SDD) for very fast yet precise analytical mapping of phases, as well as electron back-scattered diffraction (EBSD) units to map grain orientations, chambers that admit large samples, variable pressure for wet samples, and quantitative analysis software to examine phases. Advanced solid-state electronics have also improved surface and bulk analysis instruments: Secondary ion mass spectrometry (SIMS) can quantitatively determine and map light elements such as hydrogen, lithium, and boron, with their isotopes; its high sensitivity detects impurities at parts-per-billion (ppb) levels. X-ray photoelectron spectroscopy (XPS) can determine oxidation states of elements, as well as identifying polymers and measuring film thicknesses on coated composites; this technique is also known as electron spectroscopy for chemical analysis (ESCA). Scanning Auger microscopy (SAM) combines surface sensitivity, lateral spatial resolution (10 nm), and depth profiling capabilities to describe elemental compositions of near- and below-surface regions down to the chemical state of an atom.

  12. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure, and velocity, is calculated, and the single-parameter sensitivity curves are obtained. From the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
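
    The two ingredients, a quadratic response surface and a single-parameter sensitivity sweep, can be sketched as follows; the parameter ranges mirror the abstract, but the data and response function are synthetic assumptions.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        # Hypothetical experiments: [temperature C, tension N, pressure N, velocity m/s]
        rng = np.random.default_rng(6)
        X = rng.uniform([80, 200, 500, 0.1], [180, 450, 2000, 0.6], (60, 4))
        ilss = (40.0 - 0.002 * (X[:, 0] - 130.0) ** 2 + 0.005 * X[:, 1] / 40.0
                + rng.normal(0.0, 0.5, 60))  # synthetic interlaminar shear strength

        # Quadratic (degree-2) response surface fitted to the experiments
        poly = PolynomialFeatures(degree=2)
        model = LinearRegression().fit(poly.fit_transform(X), ilss)

        # Single-parameter sensitivity curve: sweep temperature, hold others at midpoints
        sweep = np.tile(X.mean(axis=0), (50, 1))
        sweep[:, 0] = np.linspace(80, 180, 50)
        pred = model.predict(poly.transform(sweep))
        print(f"predicted ILSS peaks near {sweep[np.argmax(pred), 0]:.0f} C")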

  13. Identification of suitable genes contributes to lung adenocarcinoma clustering by multiple meta-analysis methods.

    PubMed

    Yang, Ze-Hui; Zheng, Rui; Gao, Yuan; Zhang, Qiang

    2016-09-01

    With the widespread application of high-throughput technology, numerous meta-analysis methods have been proposed for differential expression profiling across multiple studies. We identified the suitable differentially expressed (DE) genes that contributed to lung adenocarcinoma (ADC) clustering based on seven popular meta-analysis methods. Seven microarray expression profiles of ADC and normal controls were extracted from the ArrayExpress database, and Bioconductor was used for preliminary data preprocessing. Then, DE genes across multiple studies were identified. Hierarchical clustering was applied to compare classification performance on the microarray data samples, and classification efficiency was compared on the basis of accuracy, sensitivity, and specificity. Across the seven datasets, 573 ADC cases and 222 normal controls were collected. After filtering out unexpressed and noninformative genes, 3688 genes remained for further analysis. The classification efficiency analysis showed that DE genes identified by the sum-of-ranks method separated ADC from normal controls with the best accuracy, sensitivity, and specificity of 0.953, 0.969, and 0.932, respectively. The gene set with the highest classification accuracy mainly participated in the regulation of response to external stimulus (P = 7.97E-04), cyclic nucleotide-mediated signaling (P = 0.01), regulation of cell morphogenesis (P = 0.01), and regulation of cell proliferation (P = 0.01). Evaluating the classification efficiency of DE genes identified by different meta-analysis methods provides a new perspective on choosing a suitable method for a given application. Different meta-analysis methods present different strengths, so they should be weighed together when selecting a method for particular research. © 2015 John Wiley & Sons Ltd.
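
    The sum-of-ranks aggregation that performed best here is simple to state; in the sketch below the per-study scores are hypothetical, and SciPy's rankdata does the per-study ranking.

        import numpy as np
        from scipy.stats import rankdata

        # Hypothetical evidence scores (e.g., |t|-statistics): 5 genes x 3 studies
        scores = np.array([[4.2, 3.9, 5.1],
                           [1.1, 0.8, 1.5],
                           [3.3, 4.4, 2.9],
                           [0.5, 0.9, 0.4],
                           [2.8, 2.5, 3.6]])

        # Rank within each study (largest score = rank 1), then sum across studies;
        # small sums mark genes consistently near the top in every study.
        ranks = rankdata(-scores, axis=0)
        sum_of_ranks = ranks.sum(axis=1)
        print("genes, most to least differentially expressed:", np.argsort(sum_of_ranks))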

  14. Systematic analysis of Ca2+ homeostasis in Saccharomyces cerevisiae based on chemical-genetic interaction profiles

    PubMed Central

    Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu

    2017-01-01

    We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553

  15. Margin and sensitivity methods for security analysis of electric power systems

    NASA Astrophysics Data System (ADS)

    Greene, Scott L.

    Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
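
    The general flavor of such margin sensitivity formulas can be stated compactly. As a sketch (a standard first-order result for parameter-dependent limit surfaces, not necessarily the thesis's exact notation): if the limiting event is characterized by F(x, lambda, p) = 0 with state x, loading lambda, and parameter p, and w is a nonzero row vector satisfying w F_x = 0 at the limiting point, then

        % first-order sensitivity of the loading margin lambda* to a parameter p,
        % evaluated at the limiting point (x*, lambda*, p0)
        \[
          \frac{\partial \lambda^{*}}{\partial p}
            = -\,\frac{w\,F_{p}}{w\,F_{\lambda}}
          \bigg|_{(x^{*},\,\lambda^{*},\,p_{0})}
        \]

    so a single linearization at the limiting point yields the margin's sensitivity to every parameter at once, which is what makes the extremely fast contingency ranking described above feasible.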

  16. [Cost-benefit analysis of the active screening of pulmonary tuberculosis in a recluse population entering prison].

    PubMed

    Martín, V; Domínguez, A; Alcaide, J

    1997-01-01

    In Spanish prisons, tuberculosis is a serious public health problem that health authorities do not take seriously. The aim was to assess the efficiency of pulmonary tuberculosis case-finding on arrival at prison, in order to justify allocating resources to this activity. A cost-benefit analysis of a case-finding program was performed, compared with waiting for the illness to be diagnosed. The sensitivity of the test was fixed at 80% and the specificity at 99.99%. Costs were based on market prices. Sensitivity analyses were performed on every variable, with three-dimensional analyses on the most influential ones. Case-finding was efficient at tuberculosis prevalences above 5 per 1000. Its efficiency was hardly affected by social discount rates or by the sensitivity of the diagnostic tests. The prevalence of illness, the cost of diagnostic activities, the success of treatment, and the specificity of the diagnostic tests used had an influence on the efficiency model. The three-dimensional analysis showed that pulmonary tuberculosis case-finding remains efficient at low prevalences (1 per 1000), provided the number of people cured is 5% higher than with the alternative and the case-finding costs are below 1,000 pesetas per subject. Pulmonary tuberculosis case-finding on arrival at prison is highly efficient. In a cost-opportunity situation (allocation of available resources, within and outside prisons), the program is very efficacious given the higher prevalence of pulmonary tuberculosis in this population.

  17. Financial analysis of technology acquisition using fractionated lasers as a model.

    PubMed

    Jutkowitz, Eric; Carniol, Paul J; Carniol, Alan R

    2010-08-01

    Ablative fractional lasers are among the most advanced and costly devices on the market. Yet, there is a dearth of published literature on the cost and potential return on investment (ROI) of such devices. The objective of this study was to provide a methodological framework for physicians to evaluate ROI. To facilitate this analysis, we conducted a case study on the potential ROI of eight ablative fractional lasers. In the base case analysis, a 5-year lease and a 3-year lease were assumed as the purchase options, with a $0 down payment and a 3-month payment deferral. In addition to lease payments, service contracts, labor cost, and disposables were included in the total cost estimate. Revenue was estimated as the price per procedure multiplied by the total number of procedures in a year. Sensitivity analyses were performed to account for variability in model assumptions. Based on the assumptions of the model, all lasers had higher ROI under the 5-year lease agreement than under the 3-year lease agreement. When comparing results between lasers, those with lower operating and purchase costs delivered a higher ROI. Sensitivity analysis indicates that the model is most sensitive to purchase method: if physicians opt to purchase the device rather than lease, they can significantly enhance ROI. ROI analysis is an important tool for physicians who are considering making an expensive device acquisition. However, physicians should not rely solely on ROI and must also consider the clinical benefits of a laser. (c) Thieme Medical Publishers.
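
    The base-case arithmetic (lease payments plus service, labor, and disposables against procedure revenue) is easy to reproduce; all figures below are hypothetical placeholders, not the study's numbers.

        # Hypothetical 5-year-lease scenario for a fractionated laser
        monthly_lease = 4500.0          # lease payment
        annual_service = 12000.0        # service contract
        per_procedure_cost = 150.0      # disposables + labor
        price_per_procedure = 1000.0
        procedures_per_year = 180

        annual_revenue = price_per_procedure * procedures_per_year
        annual_cost = (12 * monthly_lease + annual_service
                       + per_procedure_cost * procedures_per_year)
        roi = (annual_revenue - annual_cost) / annual_cost
        print(f"annual ROI: {roi:.1%}")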

  18. The child health exposure analysis resource as a vehicle to measure environment in the environmental influences on child health outcomes program.

    PubMed

    Wright, Robert O; Teitelbaum, Susan; Thompson, Claudia; Balshaw, David

    2018-04-01

    Demonstrate the role of environment as a predictor of child health. The Children's Health Exposure Analysis Resource (CHEAR) assists the Environmental influences on Child Health Outcomes (ECHO) program in understanding the time-sensitive and dynamic influence of the perinatal and childhood environment on developmental trajectories by providing a central infrastructure for the analysis of biological samples from the ECHO cohort awards. CHEAR will assist ECHO cohorts in defining the critical or sensitive periods for effects associated with environmental exposures. Effective incorporation of these principles into multiple existing cohorts requires extensive multidisciplinary expertise, creativity, and flexibility. The pursuit of life course-informed research within the CHEAR/ECHO structure represents a shift in focus from single-exposure inquiries to one that addresses multiple environmental risk factors linked through shared vulnerabilities. CHEAR provides ECHO with both targeted analyses of inorganic and organic toxicants, nutrients, and social-stress markers and untargeted analyses to assess the exposome and discover exposure-outcome relationships. Utilization of CHEAR as a single site for characterization of environmental exposures within the ECHO cohorts will not only support the investigation of the influence of environment on children's health but also support the harmonization of data across the disparate cohorts that comprise ECHO.

  19. Nuclear Thermal Propulsion Mars Mission Systems Analysis and Requirements Definition

    NASA Technical Reports Server (NTRS)

    Mulqueen, Jack; Chiroux, Robert C.; Thomas, Dan; Crane, Tracie

    2007-01-01

    This paper describes the Mars transportation vehicle design concepts developed by the Marshall Space Flight Center (MSFC) Advanced Concepts Office. These vehicle design concepts provide an indication of the most demanding and least demanding potential requirements for nuclear thermal propulsion systems for human Mars exploration missions from 2025 to 2035. Vehicle concept options vary from large "all-up" vehicle configurations that would transport all of the elements for a Mars mission on one vehicle, to "split" mission vehicle configurations consisting of separate smaller vehicles that would transport the cargo elements and the human crew elements to Mars separately. Parametric trades and sensitivity studies show NTP stage and engine design options that provide the best-balanced set of metrics based on safety, reliability, performance, cost, and mission objectives. Trade studies include the sensitivity of vehicle performance to nuclear engine characteristics such as thrust, specific impulse, and nuclear reactor type. The associated system requirements are aligned with the NASA Exploration Systems Mission Directorate (ESMD) reference Mars mission as described in the Exploration Systems Architecture Study (ESAS) report. The focused trade studies include a detailed analysis of nuclear engine radiation shield requirements for human missions and an analysis of nuclear thermal engine design options for the ESAS reference mission.

  20. FHR patterns that become significant in connection with ST waveform changes and metabolic acidosis at birth.

    PubMed

    Rosén, Karl G; Norén, Håkan; Carlsson, Ann

    2018-04-18

    Recent developments have produced new CTG classification systems, and the question is to what extent these may affect the model of FHR + ST interpretation. The two new systems (FIGO2015 and SSOG2017) classify FHR + ST events differently from the current CTG classification system used in the STAN interpretation algorithm (STAN2007). The aims were to identify the predominant FHR patterns in connection with ST events in cases of cord artery metabolic acidosis missed by the different CTG classification systems, to indicate to what extent STAN clinical guidelines could be modified to enhance sensitivity, and to provide a pathophysiological rationale. Forty-four cases with umbilical cord artery metabolic acidosis were retrieved from a European multicenter database. Significant FHR + ST events were evaluated post hoc in consensus by an expert panel. Eighteen cases were not identified as in need of intervention and were regarded as negative in the sensitivity analysis. In 12 cases, ST changes occurred but the CTG was regarded as reassuring. Visual analysis of the FHR + ST tracings revealed specific FHR patterns. In conclusion, these findings indicate that FHR + ST analysis may be undertaken regardless of the CTG classification system, provided there is a more physiologically oriented approach to FHR assessment in connection with an ST event.
