Sample records for performing sensitivity analysis

  1. Multidisciplinary design optimization using multiobjective formulation techniques

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Pagaldipti, Narayanan S.

    1995-01-01

    This report addresses the development of a multidisciplinary optimization procedure using an efficient semi-analytical sensitivity analysis technique and multilevel decomposition for the design of aerospace vehicles. A semi-analytical sensitivity analysis procedure is developed for calculating computational grid sensitivities and aerodynamic design sensitivities. The accuracy and efficiency of the sensitivity analysis procedure are established through comparison of the results with those obtained using a finite difference technique. The developed sensitivity analysis technique is then used within a multidisciplinary optimization procedure for designing aerospace vehicles. The optimization problem, with the integration of aerodynamics and structures, is decomposed into two levels. Optimization is performed for improved aerodynamic performance at the first level and improved structural performance at the second level. Aerodynamic analysis is performed by solving the three-dimensional parabolized Navier-Stokes equations. A nonlinear programming technique and an approximate analysis procedure are used for optimization. The procedure developed is applied to design the wing of a high speed aircraft. Results obtained show significant improvements in the aircraft aerodynamic and structural performance when compared to a reference or baseline configuration. The use of the semi-analytical sensitivity technique provides significant computational savings.

  2. The Diagnostic Performance of Stool DNA Testing for Colorectal Cancer: A Systematic Review and Meta-Analysis.

    PubMed

    Zhai, Rong-Lin; Xu, Fei; Zhang, Pei; Zhang, Wan-Li; Wang, Hui; Wang, Ji-Liang; Cai, Kai-Lin; Long, Yue-Ping; Lu, Xiao-Ming; Tao, Kai-Xiong; Wang, Guo-Bin

    2016-02-01

    This meta-analysis was designed to evaluate the diagnostic performance of stool DNA testing for colorectal cancer (CRC) and compare the performance between single-gene and multiple-gene tests. MEDLINE, Cochrane, and EMBASE databases were searched using the keywords colorectal cancers, stool/fecal, sensitivity, specificity, DNA, and screening. Sensitivity analysis, quality assessments, and performance bias were performed for the included studies. Fifty-three studies were included in the analysis with a total sample size of 7524 patients. The studies were heterogeneous with regard to the genes being analyzed for fecal genetic biomarkers of CRC, as well as the laboratory methods being used for each assay. The sensitivity of the different assays ranged from 2% to 100% and the specificity ranged from 81% to 100%. The meta-analysis found that the pooled sensitivities for single- and multigene assays were 48.0% and 77.8%, respectively, while the pooled specificities were 97.0% and 92.7%. Receiver operator curves and diagnostic odds ratios showed no significant difference between the two tests with regard to sensitivity or specificity. This meta-analysis revealed that using assays that evaluated multiple genes compared with single-gene assays did not increase the sensitivity or specificity of stool DNA testing in detecting CRC.
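
    A minimal sketch of the basic calculation behind such pooling: per-study sensitivity and specificity from 2x2 counts, combined by simple inverse-variance pooling on the logit scale. The counts below are invented, and real stool-DNA meta-analyses typically use bivariate random-effects models rather than this simplified univariate approach.

    ```python
    import numpy as np

    # Invented per-study 2x2 counts: (true positives, false negatives, true negatives, false positives).
    studies = [(45, 15, 190, 10), (30, 35, 140, 5), (60, 8, 210, 20)]

    def logit(p):
        return np.log(p / (1 - p))

    def pool(events, totals):
        """Simple inverse-variance, fixed-effect pooling on the logit scale."""
        p = (events + 0.5) / (totals + 1.0)          # continuity correction
        var = 1.0 / (totals * p * (1 - p))           # approximate variance of logit(p)
        w = 1.0 / var
        pooled_logit = np.sum(w * logit(p)) / np.sum(w)
        return 1.0 / (1.0 + np.exp(-pooled_logit))

    tp, fn, tn, fp = (np.array(v, dtype=float) for v in zip(*studies))
    print("pooled sensitivity:", round(pool(tp, tp + fn), 3))
    print("pooled specificity:", round(pool(tn, tn + fp), 3))
    ```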

  3. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
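
    The record above concerns ADIFOR, which differentiates Fortran source. As a language-neutral illustration of the same underlying idea, forward-mode automatic differentiation, here is a minimal dual-number sketch in Python, checked against a finite difference; the bar-deflection response is an invented stand-in, not the paper's finite element model.

    ```python
    class Dual:
        """Minimal forward-mode AD value: carries a number and its derivative together."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
        __rmul__ = __mul__
        def __truediv__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val / o.val,
                        (self.dot * o.val - self.val * o.dot) / o.val**2)
        def __rtruediv__(self, o):
            return Dual(o) / self

    def tip_deflection(area):
        """Invented stand-in response: axial deflection of a bar, d = P*L / (E*A)."""
        P, L, E = 1.0e4, 2.0, 2.1e11
        return P * L / (E * area)

    a = 1.0e-4
    ad = tip_deflection(Dual(a, 1.0))                        # derivative propagated automatically
    h = 1.0e-10
    fd = (tip_deflection(a + h) - tip_deflection(a)) / h     # finite-difference check
    print("AD derivative:", ad.dot, " FD derivative:", fd)
    ```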

  4. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via analysis of variance, which greatly reduces the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based analysis. The results reveal that only a few model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance, and root zone depths of croplands. Finally, several hydrological signatures are used to investigate the performance of DHSVM. The results show that high values of the efficiency criteria did not guarantee good performance on the hydrological signatures. For most samples from the Sobol' analysis, water yield was simulated very well; however, the lowest and highest annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still occurs in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while large floods are slightly underestimated. This work supports further multi-objective calibration of the DHSVM model and indicates where the reliability and credibility of the model simulation can be improved.
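
    A minimal sketch of the second, variance-based step described above, assuming the open-source SALib package is available; the three-parameter toy function merely stands in for DHSVM, whose runs are expensive enough to motivate the parallel computing mentioned in the abstract.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Toy stand-in for the hydrological model: maps three parameters to one signature value.
    problem = {
        "num_vars": 3,
        "names": ["lateral_conductivity", "porosity", "field_capacity"],
        "bounds": [[0.001, 0.1], [0.3, 0.6], [0.1, 0.4]],
    }

    def model(x):
        k, phi, fc = x
        return np.log(k) * 0.8 + 5.0 * phi + 0.2 * phi * fc   # arbitrary test response

    X = saltelli.sample(problem, 1024)        # Saltelli sampling scheme for Sobol' indices
    Y = np.apply_along_axis(model, 1, X)
    Si = sobol.analyze(problem, Y)

    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name:22s} first-order={s1:6.3f} total={st:6.3f}")
    ```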

  5. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of the effects of thrust subsystem hardware tolerances on mission performance is included, together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  6. Stability, performance and sensitivity analysis of I.I.D. jump linear systems

    NASA Astrophysics Data System (ADS)

    Chávez Fuentes, Jorge R.; González, Oscar R.; Gray, W. Steven

    2018-06-01

    This paper presents a symmetric Kronecker product analysis of independent and identically distributed jump linear systems to develop equations for the stability and performance analysis of this type of system that are of lower dimension than those currently available. In addition, new closed form expressions characterising multi-parameter relative sensitivity functions for performance metrics are introduced. The analysis technique is illustrated with a distributed fault-tolerant flight control example where the communication links are allowed to fail randomly.

  7. A Sensitivity Analysis of the Rigid Pavement Life-Cycle Cost Analysis Program

    DOT National Transportation Integrated Search

    2000-12-01

    Original Report Date: September 1999. This report describes the sensitivity analysis performed on the Rigid Pavement Life-Cycle Cost Analysis program, a computer program developed by the Center for Transportation Research for the Texas Department of ...

  8. Sensitivity analysis, approximate analysis, and design optimization for internal and external viscous flows

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.

    1991-01-01

    A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.

  9. Performance sensitivity analysis of Department of Energy-Chrysler upgraded automotive gas turbine engine, S/N 5-4

    NASA Technical Reports Server (NTRS)

    Johnsen, R. L.

    1979-01-01

    The performance sensitivity of a two-shaft automotive gas turbine engine to changes in component performance and cycle operating parameters was examined. Sensitivities were determined for changes in turbomachinery efficiency, compressor inlet temperature, power turbine discharge temperature, regenerator effectiveness, regenerator pressure drop, and several gas flow and heat leaks. Compressor efficiency was found to have the greatest effect on system performance.

  10. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). In the sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  11. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.

  12. On the sensitivity analysis of porous material models

    NASA Astrophysics Data System (ADS)

    Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel

    2012-11-01

    Porous materials are used in many vibroacoustic applications. Different available models describe their behaviors according to materials' intrinsic characteristics. For instance, in the case of porous material with rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for sensitivity analysis. A strong parametric frequency dependent hierarchy is shown. Sensitivity investigations confirm that resistivity is the most influential parameter when acoustic absorption and surface impedance of porous materials with rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of the reduction of the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters including mechanical effects of the frame and conclusions are drawn through numerical simulations.

  13. SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements

    NASA Technical Reports Server (NTRS)

    Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.

    1977-01-01

    A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.

  14. Automated Sensitivity Analysis of Interplanetary Trajectories

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno

    2017-01-01

    This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.
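
    PEATSA's internals are not described beyond this abstract, so the following is only a hypothetical sketch of the general pattern such a tool automates: wrap the call to an external trajectory optimizer, perturb one input at a time, re-run, and tabulate the change in a figure of merit. The `optimize_trajectory` function and its configuration keys are placeholders, not the EMTG or PEATSA API.

    ```python
    import copy

    def optimize_trajectory(config):
        """Placeholder for a call into external trajectory-optimization software."""
        # A real driver would launch the external tool; here we fake a figure of merit.
        return 1000.0 - 5.0 * config["launch_c3"] + 0.1 * config["flyby_altitude_km"]

    baseline = {"launch_c3": 12.0, "flyby_altitude_km": 300.0}
    perturbations = {"launch_c3": 1.0, "flyby_altitude_km": 50.0}

    base_mass = optimize_trajectory(baseline)
    for key, delta in perturbations.items():
        case = copy.deepcopy(baseline)
        case[key] += delta
        mass = optimize_trajectory(case)
        print(f"{key:18s} d_input={delta:6.1f}  d_delivered_mass={mass - base_mass:8.2f}")
    ```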

  15. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  16. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
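
    A minimal sketch of the general approach (Monte Carlo probabilistic sensitivity analysis in which parameter values are generated by bootstrap resampling of patient-level data rather than drawn from assumed theoretical distributions). The cost and cure-rate data are invented, and the decision model is reduced to a single incremental cost-effectiveness ratio.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Invented patient-level data for two eradication strategies: per-patient cost and cure (1/0).
    cost_a = rng.normal(220.0, 40.0, size=150);  cure_a = rng.binomial(1, 0.85, size=150)
    cost_b = rng.normal(150.0, 30.0, size=150);  cure_b = rng.binomial(1, 0.72, size=150)

    def bootstrap_mean(x):
        """One bootstrap replicate of the sample mean."""
        return rng.choice(x, size=len(x), replace=True).mean()

    icers = []
    for _ in range(5000):
        d_cost = bootstrap_mean(cost_a) - bootstrap_mean(cost_b)
        d_eff = bootstrap_mean(cure_a) - bootstrap_mean(cure_b)
        if d_eff > 0:
            icers.append(d_cost / d_eff)    # incremental cost per additional cure

    icers = np.array(icers)
    print("median ICER:", np.median(icers))
    print("95% interval:", np.percentile(icers, [2.5, 97.5]))
    ```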

  17. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.

  18. Modeling Canadian Quality Control Test Program for Steroid Hormone Receptors in Breast Cancer: Diagnostic Accuracy Study.

    PubMed

    Pérez, Teresa; Makrestsov, Nikita; Garatt, John; Torlakovic, Emina; Gilks, C Blake; Mallett, Susan

    The Canadian Immunohistochemistry Quality Control program monitors clinical laboratory performance for estrogen receptor and progesterone receptor tests used in breast cancer treatment management in Canada. Current methods assess sensitivity and specificity at each time point, compared with a reference standard. We investigate alternative performance analysis methods to enhance the quality assessment. We used 3 methods of analysis: meta-analysis of sensitivity and specificity of each laboratory across all time points; sensitivity and specificity at each time point for each laboratory; and fitting models for repeated measurements to examine differences between laboratories adjusted by test and time point. Results show 88 laboratories participated in quality control at up to 13 time points using typically 37 to 54 histology samples. In meta-analysis across all time points no laboratories have sensitivity or specificity below 80%. Current methods, presenting sensitivity and specificity separately for each run, result in wide 95% confidence intervals, typically spanning 15% to 30%. Models of a single diagnostic outcome demonstrated that 82% to 100% of laboratories had no difference to reference standard for estrogen receptor and 75% to 100% for progesterone receptor, with the exception of 1 progesterone receptor run. Laboratories with significant differences to reference standard identified with Generalized Estimating Equation modeling also have reduced performance by meta-analysis across all time points. The Canadian Immunohistochemistry Quality Control program has a good design, and with this modeling approach has sufficient precision to measure performance at each time point and allow laboratories with a significantly lower performance to be targeted for advice.

  19. Automated Sensitivity Analysis of Interplanetary Trajectories for Optimal Mission Design

    NASA Technical Reports Server (NTRS)

    Knittel, Jeremy; Hughes, Kyle; Englander, Jacob; Sarli, Bruno

    2017-01-01

    This work describes a suite of Python tools known as the Python EMTG Automated Trade Study Application (PEATSA). PEATSA was written to automate the operation of trajectory optimization software, simplify the process of performing sensitivity analysis, and was ultimately found to out-perform a human trajectory designer in unexpected ways. These benefits will be discussed and demonstrated on sample mission designs.

  20. Preliminary Thermal-Mechanical Sizing of Metallic TPS: Process Development and Sensitivity Studies

    NASA Technical Reports Server (NTRS)

    Poteet, Carl C.; Abu-Khajeel, Hasan; Hsu, Su-Yuen

    2002-01-01

    The purpose of this research was to perform sensitivity studies and develop a process to perform thermal and structural analysis and sizing of the latest Metallic Thermal Protection System (TPS) developed at NASA LaRC (Langley Research Center). Metallic TPS is a key technology for reducing the cost of reusable launch vehicles (RLV), offering the combination of increased durability and competitive weights when compared to other systems. Accurate sizing of metallic TPS requires combined thermal and structural analysis. Initial sensitivity studies were conducted using transient one-dimensional finite element thermal analysis to determine the influence of various TPS and analysis parameters on TPS weight. The thermal analysis model was then used in combination with static deflection and failure mode analysis of the sandwich panel outer surface of the TPS to obtain minimum weight TPS configurations at three vehicle stations on the windward centerline of a representative RLV. The coupled nature of the analysis requires an iterative analysis process, which will be described herein. Findings from the sensitivity analysis are reported, along with TPS designs at the three RLV vehicle stations considered.

  1. 3.8 Proposed approach to uncertainty quantification and sensitivity analysis in the next PA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, Greg; Wohlwend, Jen

    2017-10-02

    This memorandum builds upon Section 3.8 of SRNL (2016) and Flach (2017) by defining key error analysis, uncertainty quantification, and sensitivity analysis concepts and terms, in preparation for the next E-Area Performance Assessment (WSRC 2008) revision.

  2. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
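
    One topic in this list, selection of finite difference step sizes, lends itself to a short sketch: too large a step incurs truncation error, too small a step incurs round-off error from subtractive cancellation, so the total error is minimized at an intermediate step size. The response function below is a generic smooth stand-in, not a structural model.

    ```python
    import numpy as np

    def response(x):
        """Generic smooth stand-in for a structural response quantity."""
        return np.exp(x) * np.sin(3.0 * x)

    def d_response(x):
        """Exact derivative, used as the reference."""
        return np.exp(x) * (np.sin(3.0 * x) + 3.0 * np.cos(3.0 * x))

    x0 = 0.7
    exact = d_response(x0)
    for h in [10.0**(-k) for k in range(1, 13)]:
        forward = (response(x0 + h) - response(x0)) / h
        central = (response(x0 + h) - response(x0 - h)) / (2.0 * h)
        print(f"h={h:8.0e}  forward err={abs(forward - exact):9.2e}  central err={abs(central - exact):9.2e}")
    ```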

  3. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing.

  4. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different object values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding processing. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding products manufacturing. PMID:29385048

  5. Sensitivity analyses of stopping distance for connected vehicles at active highway-rail grade crossings.

    PubMed

    Hsu, Chung-Jen; Jones, Elizabeth G

    2017-02-01

    This paper performs sensitivity analyses of stopping distance for connected vehicles (CVs) at active highway-rail grade crossings (HRGCs). Stopping distance is the major safety factor at active HRGCs. A sensitivity analysis is performed for each variable in the function of stopping distance. The formulation of stopping distance treats each variable as a probability density function for implementing Monte Carlo simulations. The result of the sensitivity analysis shows that the initial speed is the most sensitive factor to stopping distances of CVs and non-CVs. The safety of CVs can be further improved by the early provision of onboard train information and warnings to reduce the initial speeds. Copyright © 2016 Elsevier Ltd. All rights reserved.
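
    A minimal sketch of this kind of Monte Carlo study, using the standard stopping-distance relation d = v*t_r + v^2/(2*a) (perception-reaction distance plus braking distance) and rank correlations as a crude sensitivity ranking; the input distributions are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    N = 100_000

    # Assumed input distributions (illustrative only).
    v = rng.normal(25.0, 3.0, N)                  # initial speed, m/s
    t_r = rng.lognormal(np.log(1.0), 0.3, N)      # perception-reaction time, s
    a = np.clip(rng.normal(4.5, 0.6, N), 2.0, None)   # braking deceleration, m/s^2

    d = v * t_r + v**2 / (2.0 * a)                # stopping distance, m

    # Crude sensitivity ranking: rank correlation of each input with the output.
    for name, x in [("initial speed", v), ("reaction time", t_r), ("deceleration", a)]:
        rho, _ = spearmanr(x, d)
        print(f"{name:14s} Spearman rho = {rho:+.2f}")
    print("mean stopping distance:", d.mean())
    ```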

  6. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier Stokes equations solver. Aerodynamic design sensitivities for high speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
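
    The Kreisselmeier-Steinhauser (KS) function named above aggregates several objective or constraint values into one smooth, conservative envelope. A minimal sketch of its numerically stable form, KS(g) = g_max + (1/rho)*ln(sum_i exp(rho*(g_i - g_max))), with illustrative values only:

    ```python
    import numpy as np

    def ks(g, rho=50.0):
        """Kreisselmeier-Steinhauser envelope of the values in g (smooth maximum for rho > 0)."""
        g = np.asarray(g, dtype=float)
        g_max = g.max()
        return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

    # Illustrative normalized objective values (e.g., drag, sonic-boom overpressure, stress ratio).
    g = [0.82, 0.95, 0.60]
    for rho in (5.0, 50.0, 500.0):
        print(f"rho={rho:6.1f}  KS={ks(g, rho):.4f}  (true max = {max(g):.4f})")
    ```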

  7. Quantitative performance targets by using balanced scorecard system: application to waste management and public administration.

    PubMed

    Mendes, Paula; Nunes, Luis Miguel; Teixeira, Margarida Ribau

    2014-09-01

    This article demonstrates how decision-makers can be guided in the process of defining performance target values in the balanced scorecard system. We apply a method based on sensitivity analysis with Monte Carlo simulation to the municipal solid waste management system in Loulé Municipality (Portugal). The method includes two steps: sensitivity analysis of performance indicators to identify those performance indicators with the highest impact on the balanced scorecard model outcomes; and sensitivity analysis of the target values for the previously identified performance indicators. Sensitivity analysis shows that four strategic objectives (IPP1: Comply with the national waste strategy; IPP4: Reduce nonrenewable resources and greenhouse gases; IPP5: Optimize the life-cycle of waste; and FP1: Meet and optimize the budget) alone contribute 99.7% of the variability in overall balanced scorecard value. Thus, these strategic objectives had a much stronger impact on the estimated balanced scorecard outcome than did others, with the IPP1 and the IPP4 accounting for over 55% and 22% of the variance in overall balanced scorecard value, respectively. The remaining performance indicators contribute only marginally. In addition, a change in the value of a single indicator's target value made the overall balanced scorecard value change by as much as 18%. This may lead to involuntarily biased decisions by organizations regarding performance target-setting, if not prevented with the help of methods such as that proposed and applied in this study. © The Author(s) 2014.

  8. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multi-disciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  9. Shape design sensitivity analysis and optimal design of structural systems

    NASA Technical Reports Server (NTRS)

    Choi, Kyung K.

    1987-01-01

    The material derivative concept of continuum mechanics and an adjoint variable method of design sensitivity analysis are used to relate variations in structural shape to measures of structural performance. A domain method of shape design sensitivity analysis is used to best utilize the basic character of the finite element method that gives accurate information not on the boundary but in the domain. Implementation of shape design sensitivity analysis using finite element computer codes is discussed. Recent numerical results are used to demonstrate the accuracy obtainable using the method. Results of the design sensitivity analysis are used to carry out design optimization of a built-up structure.
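
    A minimal numerical sketch of the adjoint variable idea for a discretized structure: with K(p)u = f and performance measure J = c^T u, a single adjoint solve K^T lambda = c yields dJ/dp = -lambda^T (dK/dp) u for any number of design parameters. The two-degree-of-freedom stiffness matrix below is invented for illustration.

    ```python
    import numpy as np

    def K(p):
        """Invented 2-DOF stiffness matrix depending on a design parameter p."""
        return np.array([[p + 2.0, -p],
                         [-p,      p + 1.0]])

    def dK_dp(p):
        return np.array([[1.0, -1.0],
                         [-1.0, 1.0]])

    f = np.array([1.0, 0.0])    # load vector
    c = np.array([0.0, 1.0])    # J = c^T u picks out the second displacement
    p = 3.0

    u = np.linalg.solve(K(p), f)        # state (displacement) solution
    lam = np.linalg.solve(K(p).T, c)    # adjoint solution
    dJ_dp = -lam @ dK_dp(p) @ u         # adjoint sensitivity

    h = 1e-6                            # finite-difference check
    u_h = np.linalg.solve(K(p + h), f)
    print("adjoint:", dJ_dp, " finite difference:", (c @ u_h - c @ u) / h)
    ```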

  10. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080

  11. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable, software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses in addition to its ability to handle systems with discontinuous events and intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.

  12. Integrated Sensitivity Analysis Workflow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman-Hill, Ernest J.; Hoffman, Edward L.; Gibson, Marcus J.

    2014-08-01

    Sensitivity analysis is a crucial element of rigorous engineering analysis, but performing such an analysis on a complex model is difficult and time consuming. The mission of the DART Workbench team at Sandia National Laboratories is to lower the barriers to adoption of advanced analysis tools through software integration. The integrated environment guides the engineer in the use of these integrated tools and greatly reduces the cycle time for engineering analysis.

  13. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.

  14. Sensitivity of VIIRS Polarization Measurements

    NASA Technical Reports Server (NTRS)

    Waluschka, Eugene

    2010-01-01

    The design of an optical system typically involves a sensitivity analysis where the various lens parameters, such as lens spacing and curvatures, to name two parameters, are (slightly) varied to see what, if any, effect this has on the performance and to establish manufacturing tolerances. A similar analysis was performed for the VIIRS instrument's polarization measurements to see how real-world departures from perfectly linearly polarized light entering VIIRS affect the polarization measurement. The methodology and a few of the results of this polarization sensitivity analysis are presented and applied to the construction of a single polarizer which will cover the VIIRS VIS/NIR spectral range. Keywords: VIIRS, polarization, ray trace; polarizers, Bolder Vision, MOXTEK

  15. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis program and the theoretical foundation in structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be imbedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.

  16. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    DTIC Science & Technology

    2015-03-16

    sensitivity value was the maximum uncertainty in that value estimated by the Sobol method. 2.4. Global Sensitivity Analysis of the Reduced Order Coagulation...sensitivity analysis, using the variance-based method of Sobol, to estimate which parameters controlled the performance of the reduced order model [69]. We...Environment. Comput. Sci. Eng. 2007, 9, 90–95. 69. Sobol, I. Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates

  17. Parameter sensitivity analysis for pesticide impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and linear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed that simulate hive population trajectories, taking into account queen strength, foraging success, weather, colo...

  18. Sobol’ sensitivity analysis for stressor impacts on honeybee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather...

  19. Receiver operating characteristic analysis of age-related changes in lineup performance.

    PubMed

    Humphries, Joyce E; Flowe, Heather D

    2015-04-01

    In the basic face memory literature, support has been found for the late maturation hypothesis, which holds that face recognition ability is not fully developed until at least adolescence. Support for the late maturation hypothesis in the criminal lineup identification literature, however, has been equivocal because of the analytic approach that has been used to examine age-related changes in identification performance. Recently, receiver operator characteristic (ROC) analysis was applied for the first time in the adult eyewitness memory literature to examine whether memory sensitivity differs across different types of lineup tests. ROC analysis allows for the separation of memory sensitivity from response bias in the analysis of recognition data. Here, we have made the first ROC-based comparison of adults' and children's (5- and 6-year-olds and 9- and 10-year-olds) memory performance on lineups by reanalyzing data from Humphries, Holliday, and Flowe (2012). In line with the late maturation hypothesis, memory sensitivity was significantly greater for adults compared with young children. Memory sensitivity for older children was similar to that for adults. The results indicate that the late maturation hypothesis can be generalized to account for age-related performance differences on an eyewitness memory task. The implications for developmental eyewitness memory research are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Sensitivity Analysis of ProSEDS (Propulsive Small Expendable Deployer System) Data Communication System

    NASA Technical Reports Server (NTRS)

    Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.

    1999-01-01

    This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate aim of achieving a reliable spacecraft data communication system. The sensitivity analysis approach for the memory system on ProSEDS (Propulsive Small Expendable Deployer System), as part of its data communication system, will be investigated. Also, general issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will be shown. The performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay, and reliability, will be introduced and discussed.

  1. The Miniaturization and Reproducibilty of the Cylinder Expansion Test

    DTIC Science & Technology

    2011-10-01

    new miniaturized and the standard one-inch test has been performed using the liquid explosive PLX (nitromethane sensitized with ethylene diamine). The...explosive PLX (nitromethane sensitized with ethylene diamine). The resulting velocity and displacement profiles obtained from the streak records...performing a measurement systems analysis on both the half- and one-inch tests using the liquid explosive PLX (nitromethane sensitized with 5% (by wt

  2. [Analysis and experimental verification of sensitivity and SNR of laser warning receiver].

    PubMed

    Zhang, Ji-Long; Wang, Ming; Tian, Er-Ming; Li, Xiao; Wang, Zhi-Bin; Zhang, Yue

    2009-01-01

    In order to counter the increasingly serious threat from hostile lasers in modern warfare, it is urgent to conduct research on laser warning technology and systems; the sensitivity and signal-to-noise ratio (SNR) are two important performance parameters of a laser warning system. In the present paper, based on statistical signal detection theory, a method for calculating the sensitivity and SNR of a coherent-detection laser warning receiver (LWR) is proposed. Firstly, the probabilities of the laser signal and receiver noise were analyzed. Secondly, based on threshold detection theory and the Neyman-Pearson criterion, the signal current equation was established by introducing detection probability and false alarm rate factors; then the mathematical expressions for sensitivity and SNR were deduced. Finally, using this method, the sensitivity and SNR of the sinusoidal grating laser warning receiver developed by our group were analyzed, and the theoretical calculations and experimental results indicate that the SNR analysis method is feasible and can be used in the performance analysis of an LWR.
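
    A minimal sketch of the Neyman-Pearson threshold relations the abstract refers to, under an assumed additive Gaussian noise model: the threshold follows from the required false-alarm probability, and detection probability (hence effective sensitivity) then follows from the signal level. The numerical values are illustrative only.

    ```python
    import numpy as np
    from scipy.stats import norm

    sigma = 1.0e-9            # receiver noise standard deviation (illustrative units)
    p_fa = 1.0e-6             # required false-alarm probability

    threshold = sigma * norm.isf(p_fa)    # Neyman-Pearson threshold for Gaussian noise

    for signal in (4e-9, 6e-9, 8e-9):     # candidate signal amplitudes
        p_d = norm.sf((threshold - signal) / sigma)   # detection probability
        print(f"signal={signal:.1e}  SNR={signal / sigma:4.1f}  P_d={p_d:.4f}")

    print("threshold:", threshold)
    ```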

  3. Variational Methods in Sensitivity Analysis and Optimization for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Ibrahim, A. H.; Hou, G. J.-W.; Tiwari, S. N. (Principal Investigator)

    1996-01-01

    Variational methods (VM) sensitivity analysis, the continuous alternative to discrete sensitivity analysis, is employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solutions of the state and costate equations are integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The determination of the sensitivity derivatives of the performance index or functional entails the coupled solutions of the state and costate equations. As the stable and converged numerical solution of the costate equations with their boundary conditions is a priori unknown, numerical stability analysis is performed on both the state and costate equations. Thereafter, based on the amplification factors obtained by solving the generalized eigenvalue equations, the stability behavior of the costate equations is discussed and compared with the state (Euler) equations. The stability analysis of the costate equations suggests that the converged and stable solution of the costate equation is possible only if the computational domain of the costate equations is transformed to take into account the reverse flow nature of the costate equations. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods offer a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite difference sensitivity analysis.

  4. Sensitivity analysis of a ground-water-flow model

    USGS Publications Warehouse

    Torak, Lynn J.; ,

    1991-01-01

    A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.

  5. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
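
    A minimal sketch of the sampling-based correlation measures named above (Pearson and Spearman coefficients of each input against a response) on a toy problem of the same flavor; the Sobol' indices used in the study require a structured sampling design and are omitted here. All input ranges and the response function are invented.

    ```python
    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(1)
    n = 300                                    # same order as the 300-sample study design

    # Invented input factors standing in for boundary conditions, tolerances, and fuel properties.
    gap = rng.uniform(80e-6, 120e-6, n)        # fuel-clad gap, m
    power = rng.uniform(15e3, 25e3, n)         # linear power, W/m
    conductivity = rng.uniform(2.5, 4.5, n)    # fuel thermal conductivity, W/m-K

    # Toy response standing in for fuel centerline temperature.
    T = 600.0 + 0.02 * power + 3.0e6 * gap - 40.0 * conductivity + rng.normal(0, 5, n)

    for name, x in [("gap", gap), ("power", power), ("conductivity", conductivity)]:
        r, _ = pearsonr(x, T)
        rho, _ = spearmanr(x, T)
        print(f"{name:13s} Pearson={r:+.2f}  Spearman={rho:+.2f}")
    ```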

  6. Tonic blood pressure modulates the relationship between baroreceptor cardiac reflex sensitivity and cognitive performance.

    PubMed

    Del Paso, Gustavo A Reyes; González, M Isabel; Hernández, José Antonio; Duschek, Stefan; Gutiérrez, Nicolás

    2009-09-01

    This study explored the effects of tonic blood pressure on the association between baroreceptor cardiac reflex sensitivity and cognitive performance. Sixty female participants completed a mental arithmetic task. Baroreceptor reflex sensitivity was assessed using sequence analysis. An interaction was found, indicating that the relationship between baroreceptor reflex sensitivity and cognitive performance is modulated by blood pressure levels. Reflex sensitivity was inversely associated to performance indices in the subgroup of participants with systolic blood pressure above the mean, whereas the association was positive in participants with systolic values below the mean. These results are in accordance with the findings in the field of pain perception and suggest that tonic blood pressure modulates the inhibitory effects of baroreceptor stimulation on high central nervous functions.

  7. Calibration of a complex activated sludge model for the full-scale wastewater treatment plant.

    PubMed

    Liwarska-Bizukojc, Ewa; Olejnik, Dorota; Biernacki, Rafal; Ledakowicz, Stanislaw

    2011-08-01

    In this study, the results of the calibration of a complex activated sludge model implemented in BioWin software for a full-scale wastewater treatment plant are presented. As part of the calibration, a sensitivity analysis of the model parameters and of the fractions of carbonaceous substrate was performed. In both the steady-state and dynamic calibrations, good agreement between the measured and simulated values of the output variables was achieved. The sensitivity analysis, based on the normalized sensitivity coefficient (S(i,j)), revealed that 17 (steady state) or 19 (dynamic conditions) kinetic and stoichiometric parameters are sensitive; most of them are associated with the growth and decay of ordinary heterotrophic organisms and phosphorus-accumulating organisms. The rankings of the ten most sensitive parameters, established from the mean square sensitivity measure (δ(msqr)j), indicate that the sensitivity of the parameters agrees irrespective of whether the steady-state or dynamic calibration was performed.

  8. Development of Solution Algorithm and Sensitivity Analysis for Random Fuzzy Portfolio Selection Model

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Katagiri, Hideki

    2010-10-01

    This paper proposes a portfolio selection problem that accounts for an investor's subjectivity, together with a sensitivity analysis for changes in that subjectivity. Since the proposed problem is formulated as a random fuzzy programming problem, owing to both randomness and subjectivity represented by fuzzy numbers, it is not well defined. Therefore, by introducing the Sharpe ratio, one of the most important performance measures for portfolio models, the main problem is transformed into a standard fuzzy programming problem. Furthermore, using the sensitivity analysis for fuzziness, the analytical optimal portfolio with the sensitivity factor is obtained.
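
    The Sharpe ratio invoked above is the usual excess-return-per-unit-risk measure; the minimal sketch below computes it from a return series. The sample returns and risk-free rate are illustrative values, not data from the paper.

        # Minimal sketch of the Sharpe ratio used as the portfolio performance measure:
        # (expected portfolio return - risk-free rate) / standard deviation of excess return.
        import numpy as np

        def sharpe_ratio(returns, risk_free_rate=0.0):
            excess = np.asarray(returns, dtype=float) - risk_free_rate
            return excess.mean() / excess.std(ddof=1)

        # illustrative monthly returns of a candidate portfolio
        print(sharpe_ratio([0.02, -0.01, 0.03, 0.015, 0.005], risk_free_rate=0.001))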

  9. Analysis and design of optical systems by use of sensitivity analysis of skew ray tracing

    NASA Astrophysics Data System (ADS)

    Lin, Psang Dain; Lu, Chia-Hung

    2004-02-01

    Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by the application of a cat's eye retroreflector to the design and by the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.

  11. Colonic lesion characterization in inflammatory bowel disease: A systematic review and meta-analysis

    PubMed Central

    Lord, Richard; Burr, Nicholas E; Mohammed, Noor; Subramanian, Venkataraman

    2018-01-01

    AIM To perform a systematic review and meta-analysis for the diagnostic accuracy of in vivo lesion characterization in colonic inflammatory bowel disease (IBD), using optical imaging techniques, including virtual chromoendoscopy (VCE), dye-based chromoendoscopy (DBC), magnification endoscopy and confocal laser endomicroscopy (CLE). METHODS We searched Medline, Embase and the Cochrane library. We performed a bivariate meta-analysis to calculate the pooled estimate sensitivities, specificities, positive and negative likelihood ratios (+LHR, -LHR), diagnostic odds ratios (DOR), and area under the SROC curve (AUSROC) for each technology group. A subgroup analysis was performed to investigate differences in real-time non-magnified Kudo pit patterns (with VCE and DBC) and real-time CLE. RESULTS We included 22 studies [1491 patients; 4674 polyps, of which 539 (11.5%) were neoplastic]. Real-time CLE had a pooled sensitivity of 91% (95%CI: 66%-98%), specificity of 97% (95%CI: 94%-98%), and an AUSROC of 0.98 (95%CI: 0.97-0.99). Magnification endoscopy had a pooled sensitivity of 90% (95%CI: 77%-96%) and specificity of 87% (95%CI: 81%-91%). VCE had a pooled sensitivity of 86% (95%CI: 62%-95%) and specificity of 87% (95%CI: 72%-95%). DBC had a pooled sensitivity of 67% (95%CI: 44%-84%) and specificity of 86% (95%CI: 72%-94%). CONCLUSION Real-time CLE is a highly accurate technology for differentiating neoplastic from non-neoplastic lesions in patients with colonic IBD. However, most CLE studies were performed by single expert users within tertiary centres, potentially confounding these results. PMID:29563760

  12. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, the parameter range, the model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or inputs as the most influential and 2) how coherent the methods are in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
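
    The pattern of such a comparison can be sketched with the SALib Python package, which provides Saltelli/Sobol' and FAST samplers and analyzers; the toy Ishigami-type function below stands in for the SAC-SMA model, whose parameters and forcing are not reproduced here.

        # Generic sketch comparing Sobol' and FAST indices with SALib on a toy test
        # function; this is the pattern only, not the SAC-SMA setup from the study.
        import numpy as np
        from SALib.sample import saltelli, fast_sampler
        from SALib.analyze import sobol, fast

        problem = {
            "num_vars": 3,
            "names": ["x1", "x2", "x3"],
            "bounds": [[-np.pi, np.pi]] * 3,
        }

        def ishigami(X, a=7.0, b=0.1):
            return np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2 + b * X[:, 2] ** 4 * np.sin(X[:, 0])

        # Sobol': Saltelli sampling, variance-based first-order and total indices
        X_s = saltelli.sample(problem, 1024)
        S_sobol = sobol.analyze(problem, ishigami(X_s))

        # FAST: Fourier Amplitude Sensitivity Test on its own sampling scheme
        X_f = fast_sampler.sample(problem, 1024)
        S_fast = fast.analyze(problem, ishigami(X_f))

        print("Sobol' S1:", np.round(S_sobol["S1"], 2), " FAST S1:", np.round(S_fast["S1"], 2))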

  13. Systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for osteoporosis or low bone density

    PubMed Central

    Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.

    2015-01-01

    Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
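
    For context, the OST score referred to above is usually computed as 0.2 x (weight in kg - age in years), truncated to an integer, with a population-specific cutoff (such as the <1 threshold quoted for US postmenopausal women) used to flag patients for DXA referral. The sketch below applies that formula; the patient values and the truncation convention should be treated as assumptions.

        # Sketch of the Osteoporosis Self-Assessment Tool (OST) score discussed above.
        # Formula assumed: 0.2 * (weight_kg - age_years), truncated toward zero, then
        # compared against a population-specific cutoff. Patient values are illustrative.
        import math

        def ost_score(weight_kg: float, age_years: float) -> int:
            return math.trunc(0.2 * (weight_kg - age_years))

        def refer_for_dxa(weight_kg: float, age_years: float, cutoff: int = 1) -> bool:
            return ost_score(weight_kg, age_years) < cutoff

        print(ost_score(58.0, 72.0), refer_for_dxa(58.0, 72.0))   # -2, True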

  14. Differential impairments underlying decision making in anorexia nervosa and bulimia nervosa: a cognitive modeling analysis.

    PubMed

    Chan, Trista Wai Sze; Ahn, Woo-Young; Bates, John E; Busemeyer, Jerome R; Guillaume, Sebastien; Redgrave, Graham W; Danner, Unna N; Courtet, Philippe

    2014-03-01

    This study examined the underlying processes of decision-making impairments in individuals with anorexia nervosa (AN) and bulimia nervosa (BN). We deconstructed their performance on the widely used decision task, the Iowa Gambling Task (IGT) into cognitive, motivational, and response processes using cognitive modeling analysis. We hypothesized that IGT performance would be characterized by impaired memory functions and heightened punishment sensitivity in AN, and by elevated sensitivity to reward as opposed to punishment in BN. We analyzed trial-by-trial data of IGT obtained from 224 individuals: 94 individuals with AN, 63 with BN, and 67 healthy comparison individuals (HC). The prospect valence learning model was used to assess cognitive, motivational, and response processes underlying IGT performance. Individuals with AN showed marginally impaired IGT performance compared to HC. Their performance was characterized by impairments in memory functions. Individuals with BN showed significantly impaired IGT performance compared to HC. They showed greater relative sensitivity to gains as opposed to losses than HC. Memory functions in AN were positively correlated with body mass index. This study identified differential impairments underlying IGT performance in AN and BN. Findings suggest that impaired decision making in AN might involve impaired memory functions. Impaired decision making in BN might involve altered reward and punishment sensitivity. Copyright © 2013 Wiley Periodicals, Inc.
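
    As a hedged sketch of the kind of cognitive modeling referred to above, the code below implements one common variant of the prospect valence learning model (PVL with a delta learning rule): a power utility with a loss-aversion weight, a delta update of deck expectancies, and a softmax choice rule. The parameter values and trial data are illustrative, not the fitted estimates from the study.

        # Hedged sketch of a PVL-Delta style model decomposing IGT play into valuation
        # (alpha, lambda_loss), learning (a), and response consistency (c). Illustrative only.
        import numpy as np

        def pvl_delta_choice_probs(net_outcomes, choices, alpha=0.5, lambda_loss=2.0,
                                   a=0.2, c=1.0, n_decks=4):
            theta = 3.0 ** c - 1.0                # response sensitivity
            ev = np.zeros(n_decks)                # expected valence per deck
            probs = []
            for deck, x in zip(choices, net_outcomes):
                p = np.exp(theta * ev) / np.exp(theta * ev).sum()
                probs.append(p[deck])             # model probability of the observed choice
                u = x ** alpha if x >= 0 else -lambda_loss * (abs(x) ** alpha)
                ev[deck] += a * (u - ev[deck])    # delta learning rule
            return np.array(probs)

        # two illustrative trials: deck 2 won 100, deck 0 lost 250 (net outcomes)
        print(pvl_delta_choice_probs([100.0, -250.0], [2, 0]))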

  15. Sobol' sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration, and injection rates at four wells to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables that affect remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of these two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol' sensitivity analysis results demonstrated that remediation duration was the most important variable influencing remediation efficiency, followed by the injection rates at wells 1 and 3, while the injection rates at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and through interactions) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
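
    The surrogate-plus-Sobol' workflow can be sketched generically as below: fit a cheap radial basis function surrogate to a limited number of runs of an expensive simulator, then compute Sobol' indices on the surrogate (here with the SALib package and scipy's RBFInterpolator). The toy model and variable names are placeholders for the multi-phase flow simulator.

        # Sketch of surrogate-based Sobol' analysis: train a cheap RBF surrogate on a
        # handful of "expensive" runs, then analyze the surrogate. Illustrative only.
        import numpy as np
        from scipy.interpolate import RBFInterpolator
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        rng = np.random.default_rng(1)
        expensive_model = lambda X: X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * X[:, 2] * X[:, 0]

        # 1) train the surrogate on a small design of "expensive" runs
        X_train = rng.uniform(0.0, 1.0, size=(80, 3))
        surrogate = RBFInterpolator(X_train, expensive_model(X_train), kernel="thin_plate_spline")

        # 2) run Sobol' analysis on the cheap surrogate instead of the simulator
        problem = {"num_vars": 3, "names": ["duration", "conc", "rate1"], "bounds": [[0, 1]] * 3}
        X_sobol = saltelli.sample(problem, 1024)
        S = sobol.analyze(problem, surrogate(X_sobol))
        print(np.round(S["S1"], 2), np.round(S["ST"], 2))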

  16. Accurate evaluation of sensitivity for calibration between a LiDAR and a panoramic camera used for remote sensing

    NASA Astrophysics Data System (ADS)

    García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier

    2016-04-01

    Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the usage of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity analysis for each variable involved in the calibration was computed by the Sobol method, which is based on the analysis of the variance breakdown, and the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
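
    The two sampling schemes compared above can be illustrated with scipy.stats.qmc: plain Monte Carlo draws each point independently, while a Latin hypercube design stratifies every dimension. Bounds, dimensions, and sample counts below are illustrative.

        # Minimal sketch contrasting plain Monte Carlo with Latin hypercube sampling
        # for a few calibration parameters; bounds are illustrative placeholders.
        import numpy as np
        from scipy.stats import qmc

        n_samples, lower, upper = 200, [0.0, -5.0, 10.0], [1.0, 5.0, 50.0]

        # plain Monte Carlo: independent uniform draws in the unit cube, then scaled
        rng = np.random.default_rng(42)
        mc = qmc.scale(rng.uniform(size=(n_samples, 3)), lower, upper)

        # Latin hypercube: one sample per equal-probability stratum in each dimension
        lhs = qmc.scale(qmc.LatinHypercube(d=3, seed=42).random(n_samples), lower, upper)

        print(mc.mean(axis=0), lhs.mean(axis=0))   # LHS means typically sit closer to the midpoints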

  17. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
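
    The adjoint idea described above can be sketched for a discretized linear steady-state problem A(k) h = q with a scalar performance measure P = g^T h: a single adjoint solve yields the derivative of P with respect to every parameter. The small dense system below is an illustrative placeholder, not the Paradox Basin model.

        # Minimal sketch of adjoint sensitivity for A(k) h = q with P = g^T h.
        # One adjoint solve gives dP/dk_i for all parameters. Matrices are illustrative.
        import numpy as np

        def adjoint_sensitivities(A, dA_dk, q, g):
            """dP/dk_i = -lambda^T (dA/dk_i) h, where A^T lambda = g."""
            h = np.linalg.solve(A, q)              # primary (flow) solve
            lam = np.linalg.solve(A.T, g)          # single adjoint solve
            return np.array([-lam @ (dAi @ h) for dAi in dA_dk])

        # toy 3-node system: A = A0 + k1*B1 + k2*B2 evaluated at some k
        A0 = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
        B1 = np.diag([1.0, 0.0, 0.0]); B2 = np.diag([0.0, 0.0, 1.0])
        k = np.array([0.5, 0.2])
        A = A0 + k[0] * B1 + k[1] * B2
        q = np.array([1.0, 0.0, 0.0])              # recharge-like source term
        g = np.array([0.0, 0.0, 1.0])              # P = head at node 3

        print(adjoint_sensitivities(A, [B1, B2], q, g))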

  18. Sensitivity Analysis of Weather Variables on Offsite Consequence Analysis Tools in South Korea and the United States.

    PubMed

    Kim, Min-Uk; Moon, Kyong Whan; Sohn, Jong-Ryeul; Byeon, Sang-Hoon

    2018-05-18

    We studied sensitive weather variables for consequence analysis, in the case of chemical leaks on the user side of offsite consequence analysis (OCA) tools. We used OCA tools Korea Offsite Risk Assessment (KORA) and Areal Location of Hazardous Atmospheres (ALOHA) in South Korea and the United States, respectively. The chemicals used for this analysis were 28% ammonia (NH₃), 35% hydrogen chloride (HCl), 50% hydrofluoric acid (HF), and 69% nitric acid (HNO₃). The accident scenarios were based on leakage accidents in storage tanks. The weather variables were air temperature, wind speed, humidity, and atmospheric stability. Sensitivity analysis was performed using the Statistical Package for the Social Sciences (SPSS) program for dummy regression analysis. Sensitivity analysis showed that impact distance was not sensitive to humidity. Impact distance was most sensitive to atmospheric stability, and was also more sensitive to air temperature than wind speed, according to both the KORA and ALOHA tools. Moreover, the weather variables were more sensitive in rural conditions than in urban conditions, with the ALOHA tool being more influenced by weather variables than the KORA tool. Therefore, if using the ALOHA tool instead of the KORA tool in rural conditions, users should be careful not to cause any differences in impact distance due to input errors of weather variables, with the most sensitive one being atmospheric stability.

  19. [Meta-analysis of diagnostic capability of frequency-doubling technology (FDT) for primary glaucoma].

    PubMed

    Liu, Ting; He, Xiang-ge

    2006-05-01

    To evaluate the overall diagnostic capabilities of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS, Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia and a manual search in related textbooks, journals, congress articles and their references were performed to identify relevant English and Chinese language articles. Criteria for adaptability were established according to validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. Quality of the included articles was assessed and relevant materials were extracted for studying. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested, which was used to select an appropriate effect model to calculate pooled weighted sensitivity and specificity. A Summary Receiver Operating Characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT with 95% confidence intervals (95% CI) after correction for standard error were 0.86 (0.80 - 0.90) and 0.87 (0.81 - 0.91), respectively. The AUC of SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. The included articles are of good quality and FDT can be a highly efficient diagnostic test for primary glaucoma based on meta-analysis. However, a high-quality prospective study is still required for further analysis.

  20. Phase 1 of the near term hybrid passenger vehicle development program. Appendix D: Sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Traversi, M.

    1979-01-01

    Data are presented on the sensitivity of: (1) mission analysis results to the boundary values given for number of passenger cars and average annual vehicle miles traveled per car; (2) vehicle characteristics and performance to specifications; and (3) tradeoff study results to the expected parameters.

  1. Design sensitivity analysis using EAL. Part 1: Conventional design parameters

    NASA Technical Reports Server (NTRS)

    Dopker, B.; Choi, Kyung K.; Lee, J.

    1986-01-01

    A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program and a separate database. Conventional (sizing) design parameters such as cross-sectional area of beams or thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.

  2. Parameterization of the InVEST Crop Pollination Model to spatially predict abundance of wild blueberry (Vaccinium angustifolium Aiton) native bee pollinators in Maine, USA

    USGS Publications Warehouse

    Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.

    2016-01-01

    Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not result in the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.

  3. First- and Second-Order Sensitivity Analysis of a P-Version Finite Element Equation Via Automatic Differentiation

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    1998-01-01

    Sensitivity analysis is a technique for determining derivatives of system responses with respect to design parameters. Among many methods available for sensitivity analysis, automatic differentiation has been proven through many applications in fluid dynamics and structural mechanics to be an accurate and easy method for obtaining derivatives. Nevertheless, the method can be computationally expensive and can require substantial memory. This project will apply an automatic differentiation tool, ADIFOR, to a p-version finite element code to obtain first- and second-order derivatives, respectively. The focus of the study is on the implementation process and the performance of the ADIFOR-enhanced codes for sensitivity analysis in terms of memory requirement, computational efficiency, and accuracy.
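
    The sketch below illustrates the idea that source-level tools such as ADIFOR automate, namely forward-mode automatic differentiation: a value and its derivative are propagated together through every arithmetic operation, so first derivatives come out exact to machine precision. The toy dual-number class is only a conceptual stand-in for ADIFOR.

        # Conceptual sketch of forward-mode automatic differentiation via dual numbers.
        import math

        class Dual:
            def __init__(self, val, der=0.0):
                self.val, self.der = val, der
            def __add__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val + o.val, self.der + o.der)
            __radd__ = __add__
            def __mul__(self, o):
                o = o if isinstance(o, Dual) else Dual(o)
                return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
            __rmul__ = __mul__

        def sin(x):
            return Dual(math.sin(x.val), math.cos(x.val) * x.der) if isinstance(x, Dual) else math.sin(x)

        # d/dx [x*sin(x) + 3x] at x = 1.2, seeded with derivative 1
        x = Dual(1.2, 1.0)
        y = x * sin(x) + 3 * x
        print(y.val, y.der)   # derivative equals sin(1.2) + 1.2*cos(1.2) + 3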

  4. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    NASA Astrophysics Data System (ADS)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method of the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined by comparison with those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that slender front net cables, thick tension ties, relatively slender boundary cables, and a high tension level in particular can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on antenna performance.

  5. Imaging modalities for characterising focal pancreatic lesions.

    PubMed

    Best, Lawrence Mj; Rawji, Vishal; Pereira, Stephen P; Davidson, Brian R; Gurusamy, Kurinchi Selvan

    2017-04-17

    Increasing numbers of incidental pancreatic lesions are being detected each year. Accurate characterisation of pancreatic lesions into benign, precancerous, and cancer masses is crucial in deciding whether to use treatment or surveillance. Distinguishing benign lesions from precancerous and cancerous lesions can prevent patients from undergoing unnecessary major surgery. Despite the importance of accurately classifying pancreatic lesions, there is no clear algorithm for management of focal pancreatic lesions. To determine and compare the diagnostic accuracy of various imaging modalities in detecting cancerous and precancerous lesions in people with focal pancreatic lesions. We searched the CENTRAL, MEDLINE, Embase, and Science Citation Index until 19 July 2016. We searched the references of included studies to identify further studies. We did not restrict studies based on language or publication status, or whether data were collected prospectively or retrospectively. We planned to include studies reporting cross-sectional information on the index test (CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), EUS (endoscopic ultrasound), EUS elastography, and EUS-guided biopsy or FNA (fine-needle aspiration)) and reference standard (confirmation of the nature of the lesion was obtained by histopathological examination of the entire lesion by surgical excision, or histopathological examination for confirmation of precancer or cancer by biopsy and clinical follow-up of at least six months in people with negative index tests) in people with pancreatic lesions irrespective of language or publication status or whether the data were collected prospectively or retrospectively. Two review authors independently searched the references to identify relevant studies and extracted the data. We planned to use the bivariate analysis to calculate the summary sensitivity and specificity with their 95% confidence intervals and the hierarchical summary receiver operating characteristic (HSROC) to compare the tests and assess heterogeneity, but used simpler models (such as univariate random-effects model and univariate fixed-effect model) for combining studies when appropriate because of the sparse data. We were unable to compare the diagnostic performance of the tests using formal statistical methods because of sparse data. We included 54 studies involving a total of 3,196 participants evaluating the diagnostic accuracy of various index tests. In these 54 studies, eight different target conditions were identified with different final diagnoses constituting benign, precancerous, and cancerous lesions. None of the studies was of high methodological quality. None of the comparisons in which single studies were included was of sufficiently high methodological quality to warrant highlighting of the results. For differentiation of cancerous lesions from benign or precancerous lesions, we identified only one study per index test. The second analysis, of studies differentiating cancerous versus benign lesions, provided three tests in which meta-analysis could be performed. The sensitivities and specificities for diagnosing cancer were: EUS-FNA: sensitivity 0.79 (95% confidence interval (CI) 0.07 to 1.00), specificity 1.00 (95% CI 0.91 to 1.00); EUS: sensitivity 0.95 (95% CI 0.84 to 0.99), specificity 0.53 (95% CI 0.31 to 0.74); PET: sensitivity 0.92 (95% CI 0.80 to 0.97), specificity 0.65 (95% CI 0.39 to 0.84). 
The third analysis, of studies differentiating precancerous or cancerous lesions from benign lesions, only provided one test (EUS-FNA) in which meta-analysis was performed. EUS-FNA had moderate sensitivity for diagnosing precancerous or cancerous lesions (sensitivity 0.73 (95% CI 0.01 to 1.00) and high specificity 0.94 (95% CI 0.15 to 1.00), the extremely wide confidence intervals reflecting the heterogeneity between the studies). The fourth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (dysplasia) provided three tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing invasive carcinoma were: CT: sensitivity 0.72 (95% CI 0.50 to 0.87), specificity 0.92 (95% CI 0.81 to 0.97); EUS: sensitivity 0.78 (95% CI 0.44 to 0.94), specificity 0.91 (95% CI 0.61 to 0.98); EUS-FNA: sensitivity 0.66 (95% CI 0.03 to 0.99), specificity 0.92 (95% CI 0.73 to 0.98). The fifth analysis, of studies differentiating cancerous (high-grade dysplasia or invasive carcinoma) versus precancerous (low- or intermediate-grade dysplasia) provided six tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing cancer (high-grade dysplasia or invasive carcinoma) were: CT: sensitivity 0.87 (95% CI 0.00 to 1.00), specificity 0.96 (95% CI 0.00 to 1.00); EUS: sensitivity 0.86 (95% CI 0.74 to 0.92), specificity 0.91 (95% CI 0.83 to 0.96); EUS-FNA: sensitivity 0.47 (95% CI 0.24 to 0.70), specificity 0.91 (95% CI 0.32 to 1.00); EUS-FNA carcinoembryonic antigen 200 ng/mL: sensitivity 0.58 (95% CI 0.28 to 0.83), specificity 0.51 (95% CI 0.19 to 0.81); MRI: sensitivity 0.69 (95% CI 0.44 to 0.86), specificity 0.93 (95% CI 0.43 to 1.00); PET: sensitivity 0.90 (95% CI 0.79 to 0.96), specificity 0.94 (95% CI 0.81 to 0.99). The sixth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (low-grade dysplasia) provided no tests in which meta-analysis was performed. The seventh analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) provided two tests in which meta-analysis was performed. The sensitivity and specificity for diagnosing cancer were: CT: sensitivity 0.83 (95% CI 0.68 to 0.92), specificity 0.83 (95% CI 0.64 to 0.93) and MRI: sensitivity 0.80 (95% CI 0.58 to 0.92), specificity 0.81 (95% CI 0.53 to 0.95), respectively. The eighth analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) or benign lesions provided no test in which meta-analysis was performed. There were no major alterations in the subgroup analysis of cystic pancreatic focal lesions (42 studies; 2086 participants). None of the included studies evaluated EUS elastography or sequential testing. We were unable to arrive at any firm conclusions because of the differences in the way that study authors classified focal pancreatic lesions into cancerous, precancerous, and benign lesions; the inclusion of few studies with wide confidence intervals for each comparison; poor methodological quality in the studies; and heterogeneity in the estimates within comparisons.

  6. Relative performance of academic departments using DEA with sensitivity analysis.

    PubMed

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S P

    2009-05-01

    The process of liberalization and globalization of the Indian economy has brought new opportunities and challenges in all areas of human endeavor, including education. Educational institutions have to adopt new strategies to make the best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of academic programs based on multiple criteria. Keeping this in view, this paper attempts to evaluate the performance efficiencies of 19 academic departments of IIT Roorkee (India) through the data envelopment analysis (DEA) technique. The technique has been used to assess the performance of academic institutions in a number of countries such as the USA, the UK, and Australia. To the best of our knowledge, however, this is the first time it has been used in the Indian context. Applying DEA models, we calculate technical, pure technical, and scale efficiencies and identify the reference sets for inefficient departments. Input and output projections are also suggested for inefficient departments to reach the frontier. Overall performance, research performance, and teaching performance are assessed separately using sensitivity analysis.
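
    The DEA calculation itself reduces to one small linear program per department; the sketch below solves an input-oriented CCR envelopment model with scipy.optimize.linprog. The two-input, one-output data are made up for illustration and are not the IIT Roorkee figures.

        # Hedged sketch of input-oriented CCR DEA: each department (DMU) gets an
        # efficiency score theta in (0, 1]. Data below are invented for illustration.
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[5.0, 8.0, 7.0, 4.0],      # input 1 (e.g., faculty) per DMU
                      [3.0, 6.0, 5.0, 2.0]])     # input 2 (e.g., budget)  per DMU
        Y = np.array([[9.0, 10.0, 11.0, 5.0]])   # output  (e.g., publications) per DMU
        n_dmu = X.shape[1]

        def ccr_efficiency(o):
            c = np.r_[1.0, np.zeros(n_dmu)]                       # minimize theta
            A_in = np.hstack([-X[:, [o]], X])                     # X @ lam - theta*x_o <= 0
            A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y])    # -Y @ lam <= -y_o
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None)] * (1 + n_dmu), method="highs")
            return res.fun

        print([round(ccr_efficiency(o), 3) for o in range(n_dmu)])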

  7. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.

  8. A Quad-Cantilevered Plate micro-sensor for intracranial pressure measurement.

    PubMed

    Lalkov, Vasko; Qasaimeh, Mohammad A

    2017-07-01

    This paper proposes a new design for a pressure-sensing micro-plate platform to bring higher sensitivity to a pressure sensor based on a piezoresistive MEMS sensing mechanism. The proposed design is composed of a suspended plate having four stepped cantilever beams connected to its corners, and is thus defined as a Quad-Cantilevered Plate (QCP). Finite element analysis was performed to determine the optimal design for sensitivity and structural stability under a range of applied forces. Furthermore, a piezoresistive analysis was performed to calculate sensor sensitivity. Both the maximum stress and the change in resistance of the piezoresistor associated with the QCP were found to be higher compared to previously published designs, and linearly related to the applied pressure as desired. Therefore, the QCP demonstrates greater sensitivity, and could be potentially used as an efficient pressure sensor for intracranial pressure measurement.

  9. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan : part 2 - evaluation of rehabilitation fixes (part 2).

    DOT National Transportation Integrated Search

    2013-08-01

    The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...

  10. SENSITIVITY ANALYSIS OF THE USEPA WINS PM 2.5 SEPARATOR

    EPA Science Inventory

    Factors affecting the performance of the US EPA WINS PM2.5 separator have been systematically evaluated. In conjunction with the separator's laboratory calibrated penetration curve, analysis of the governing equation that describes conventional impactor performance was used to ...

  11. Acceleration and sensitivity analysis of lattice kinetic Monte Carlo simulations using parallel processing and rate constant rescaling

    NASA Astrophysics Data System (ADS)

    Núñez, M.; Robie, T.; Vlachos, D. G.

    2017-10-01

    Kinetic Monte Carlo (KMC) simulation provides insights into catalytic reactions unobtainable with either experiments or mean-field microkinetic models. Sensitivity analysis of KMC models assesses the robustness of the predictions to parametric perturbations and identifies rate determining steps in a chemical reaction network. Stiffness in the chemical reaction network, a ubiquitous feature, demands lengthy run times for KMC models and renders efficient sensitivity analysis based on the likelihood ratio method unusable. We address the challenge of efficiently conducting KMC simulations and performing accurate sensitivity analysis in systems with unknown time scales by employing two acceleration techniques: rate constant rescaling and parallel processing. We develop statistical criteria that ensure sufficient sampling of non-equilibrium steady state conditions. Our approach provides the twofold benefit of accelerating the simulation itself and enabling likelihood ratio sensitivity analysis, which provides further speedup relative to finite difference sensitivity analysis. As a result, the likelihood ratio method can be applied to real chemistry. We apply our methodology to the water-gas shift reaction on Pt(111).

  12. Performance of screening questionnaires for obstructive sleep apnea during pregnancy: A systematic review and meta-analysis.

    PubMed

    Tantrakul, Visasiri; Numthavaj, Pawin; Guilleminault, Christian; McEvoy, Mark; Panburana, Panyu; Khaing, Win; Attia, John; Thakkinstian, Ammarin

    2017-12-01

    This review aims to evaluate the performance of obstructive sleep apnea (OSA) screening questionnaires during pregnancy. A systematic review and meta-analysis was performed using MEDLINE, Scopus, CINAHL, and the Cochrane library. A bivariate meta-analysis was applied for pooling of diagnostic parameters. Six of the total 4719 articles met the inclusion criteria. The Berlin questionnaire (BQ, N = 604) and Epworth sleepiness scale (ESS, N = 420) were the most frequently used screening tools during pregnancy. The pooled prevalence of OSA during pregnancy was 26.7% (95%CI: 16.9%, 34.4%; I² = 83.15%). BQ performance was poor to fair with pooled sensitivity and specificity of 0.66 (95%CI: 0.45, 0.83; I² = 78.65%) and 0.62 (95%CI: 0.48, 0.75; I² = 81.55%), respectively. BQ performance was heterogeneous depending on type of reference test and pregnancy. Sensitivity increased if diagnosis was based on polysomnography (0.90), and respiratory disturbance index (0.90). However, sensitivity decreased if screening was performed in early pregnancy (≤20 weeks gestation: 0.47), and high-risk pregnancy (0.44). Performance of ESS was poor with pooled sensitivity and specificity of 0.44 (95%CI: 0.33, 0.56; I² = 32.8%) and 0.62 (95%CI: 0.48, 0.75; I² = 81.55%), respectively. In conclusion, BQ and ESS showed poor performance during pregnancy, hence a new OSA screening questionnaire is needed. Registration: PROSPERO registration CRD42015025848. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Diagnostic performance of FDG PET or PET/CT in prosthetic infection after arthroplasty: a meta-analysis.

    PubMed

    Jin, H; Yuan, L; Li, C; Kan, Y; Hao, R; Yang, J

    2014-03-01

    The purpose of this study was to systematically review and perform a meta-analysis of published data regarding the diagnostic performance of positron emission tomography (PET) or PET/computed tomography (PET/CT) in prosthetic infection after arthroplasty. A comprehensive computer literature search of studies published through May 31, 2012 regarding PET or PET/CT in patients with suspected prosthetic infection was performed in PubMed/MEDLINE, Embase and Scopus databases. Pooled sensitivity and specificity of PET or PET/CT in patients with suspected prosthetic infection on a per prosthesis-based analysis were calculated. The area under the receiver-operating characteristic (ROC) curve was calculated to measure the accuracy of PET or PET/CT in patients with suspected prosthetic infection. Fourteen studies comprising 838 prostheses with suspected prosthetic infection after arthroplasty were included in this meta-analysis. The pooled sensitivity of PET or PET/CT in detecting prosthetic infection was 86% (95% confidence interval [CI] 82-90%) on a per prosthesis-based analysis. The pooled specificity of PET or PET/CT in detecting prosthetic infection was 86% (95% CI 83-89%) on a per prosthesis-based analysis. The area under the ROC curve was 0.93 on a per prosthesis-based analysis. In patients with suspected prosthetic infection, FDG PET or PET/CT demonstrated high sensitivity and specificity. FDG PET or PET/CT are accurate methods in this setting. Nevertheless, possible sources of false-positive results and influencing factors should be kept in mind.

  14. Allergen Sensitization Pattern by Sex: A Cluster Analysis in Korea.

    PubMed

    Ohn, Jungyoon; Paik, Seung Hwan; Doh, Eun Jin; Park, Hyun-Sun; Yoon, Hyun-Sun; Cho, Soyun

    2017-12-01

    Allergens tend to sensitize simultaneously. The etiology of this phenomenon has been suggested to be allergen cross-reactivity or concurrent exposure. However, little is known about specific allergen sensitization patterns. To investigate the allergen sensitization characteristics according to gender. The multiple allergen simultaneous test (MAST) is widely used as a screening tool for detecting allergen sensitization in dermatologic clinics. We retrospectively reviewed the medical records of patients with MAST results between 2008 and 2014 in our Department of Dermatology. A cluster analysis was performed to elucidate the allergen-specific immunoglobulin (Ig)E cluster pattern. The results of MAST (39 allergen-specific IgEs) from 4,360 cases were analyzed. By cluster analysis, the 39 items were grouped into 8 clusters. Each cluster had characteristic features. When compared with the female group, the male group tended to be sensitized more frequently to all tested allergens, except for the fungus allergen cluster. The cluster and comparative analysis results demonstrate that allergen sensitization is clustered, manifesting allergen similarity or co-exposure. Only the fungus cluster allergens tend to sensitize the female group more frequently than the male group.

  15. Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines

    NASA Astrophysics Data System (ADS)

    Massa, Luca

    A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, by rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed. Two implementations of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to the ones required by the FD approach.
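
    The CTSE mentioned above is better known as the complex-step derivative: for a real-analytic function, f'(x) = Im(f(x + ih))/h + O(h^2), which avoids the subtractive cancellation that limits finite differences. The sketch below contrasts the two on a smooth test function of its own choosing; it is a generic illustration, not the turbine solver's implementation.

        # Sketch of the complex Taylor series expansion (complex-step) derivative:
        # accurate to machine precision even for tiny h, since no subtraction occurs.
        import numpy as np

        def complex_step_derivative(f, x, h=1e-30):
            return np.imag(f(x + 1j * h)) / h

        def finite_difference_derivative(f, x, h=1e-6):
            return (f(x + h) - f(x - h)) / (2.0 * h)

        f = lambda x: np.exp(x) * np.sin(x) / np.sqrt(x)   # smooth test function
        x0 = 1.5
        print(complex_step_derivative(f, x0), finite_difference_derivative(f, x0))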

  16. Navigation Design and Analysis for the Orion Cislunar Exploration Missions

    NASA Technical Reports Server (NTRS)

    D'Souza, Christopher; Holt, Greg; Gay, Robert; Zanetti, Renato

    2014-01-01

    This paper details the design and analysis of the cislunar optical navigation system being proposed for the Orion Earth-Moon (EM) missions. In particular, it presents the mathematics of the navigation filter. It also presents the sensitivity analysis that has been performed to understand the performance of the proposed system, with particular attention paid to entry flight path angle constraints and the DELTA V performance.

  17. Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.

    PubMed

    Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael

    2012-07-01

    To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both a higher sensitivity and specificity when compared to 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided the context and direction for the diagnosis of early-stage ovarian cancer. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Diagnostic performance of matrix-assisted laser desorption ionisation time-of-flight mass spectrometry in blood bacterial infections: a systematic review and meta-analysis.

    PubMed

    Scott, Jamie S; Sterling, Sarah A; To, Harrison; Seals, Samantha R; Jones, Alan E

    2016-07-01

    Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) has shown promise in decreasing time to identification of causative organisms compared to traditional methods; however, the utility of MALDI-TOF MS in a heterogeneous clinical setting is uncertain. To perform a systematic review on the operational performance of the Bruker MALDI-TOF MS system and evaluate published cut-off values compared to traditional blood cultures. A comprehensive literature search was performed. Studies were included if they performed direct MALDI-TOF MS analysis of blood culture specimens in human patients with suspected bacterial infections using the Bruker Biotyper software. Sensitivities and specificities of the combined studies were estimated using a hierarchical random effects linear model (REML) incorporating cut-off scores of ≥1.7 and ≥2.0. Fifty publications were identified, with 11 studies included after final review. The estimated sensitivity utilising a cut-off of ≥2.0 from the combined studies was 74.6% (95% CI = 67.9-89.3%), with an estimated specificity of 88.0% (95% CI = 74.8-94.7%). When assessing a cut-off of ≥1.7, the combined sensitivity increases to 92.8% (95% CI = 87.4-96.0%), but the estimated specificity decreased to 81.2% (95% CI = 61.9-96.6%). In this analysis, MALDI-TOF MS showed acceptable sensitivity and specificity in bacterial speciation with the current recommended cut-off point compared to blood cultures; however, lowering the cut-off point from ≥2.0 to ≥1.7 would increase the sensitivity of the test without significant detrimental effect on the specificity, which could improve clinician confidence in their results.

  19. Selection of optimal sensors for predicting performance of polymer electrolyte membrane fuel cell

    NASA Astrophysics Data System (ADS)

    Mao, Lei; Jackson, Lisa

    2016-10-01

    In this paper, sensor selection algorithms are investigated based on a sensitivity analysis, and the capability of optimal sensors in predicting PEM fuel cell performance is also studied using test data. The fuel cell model is developed for generating the sensitivity matrix relating sensor measurements and fuel cell health parameters. From the sensitivity matrix, two sensor selection approaches, the largest gap method and an exhaustive brute-force search technique, are applied to find the optimal sensors that provide reliable predictions. Based on the results, a sensor selection approach considering both sensor sensitivity and noise resistance is proposed to find the optimal sensor set with minimum size. Furthermore, the performance of the optimal sensor set is studied to predict fuel cell performance using test data from a PEM fuel cell system. Results demonstrate that with optimal sensors, the performance of the PEM fuel cell can be predicted with good quality.
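
    An exhaustive (brute-force) sensor-subset search over a sensitivity matrix can be sketched as below. The selection criterion used here, the smallest singular value of the selected rows (a common identifiability proxy), is an assumption standing in for the paper's own ranking metric, and the matrix is random.

        # Hedged sketch of exhaustive sensor-subset selection from a sensitivity matrix
        # S (rows = candidate sensors, columns = health parameters). Criterion assumed.
        import itertools
        import numpy as np

        rng = np.random.default_rng(3)
        S = rng.normal(size=(8, 3))          # 8 candidate sensors, 3 health parameters

        def best_subset(S, k):
            best, best_score = None, -np.inf
            for rows in itertools.combinations(range(S.shape[0]), k):
                # smallest singular value of the selected rows: larger is better conditioned
                score = np.linalg.svd(S[list(rows), :], compute_uv=False)[-1]
                if score > best_score:
                    best, best_score = rows, score
            return best, best_score

        print(best_subset(S, k=4))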

  20. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, the residual vibration reduction is more and more important as the machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be simplified greatly with the Lyapunov equation. Several sensitivity analysis approaches for the performance index were developed based on the assumption that the initial excitations of residual vibration were given and independent of structural design. Since the resulting excitations by the impact load often depend on structural design, this paper aims to propose a new efficient sensitivity analysis method for residual vibration of structures subject to impacts to consider the dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of initial excitations on structural design variables may strongly affect the accuracy of sensitivities.
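
    The Lyapunov-equation shortcut mentioned above works as follows for a stable linear system xdot = A x with impact-induced initial state x0: the integrated quadratic index J = integral of x^T Q x dt equals x0^T P x0, where P solves A^T P + P A + Q = 0. The small system below is illustrative, not the paper's structural model.

        # Sketch of the Lyapunov-equation evaluation of the integrated quadratic
        # residual-vibration index for a stable system xdot = A x, x(0) = x0.
        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        A = np.array([[0.0, 1.0],
                      [-4.0, -0.4]])          # lightly damped oscillator (stable)
        Q = np.eye(2)                         # weight on the residual vibration states
        x0 = np.array([1.0, 0.0])             # initial excitation left by the impact

        # solve_continuous_lyapunov(a, q) solves a P + P a^H = q, so pass A^T and -Q
        P = solve_continuous_lyapunov(A.T, -Q)
        J = x0 @ P @ x0
        print(J)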

  1. Glaucoma progression detection: agreement, sensitivity, and specificity of expert visual field evaluation, event analysis, and trend analysis.

    PubMed

    Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier

    2013-01-01

    To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation to detect glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and performed reliable visual fields every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, glaucoma/guided progression analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. Kappa agreement coefficient between methods and sensitivity and specificity for each method using expert opinion as gold standard were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years but only 3 cases showed progression with all 3 methods. Kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. In a period of 3 years, both methods of GPA analysis offered high specificity, event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
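
    The kappa agreement coefficient used above compares observed agreement with chance agreement; the sketch below computes it for two hypothetical sets of progression calls (1 = progressing, 0 = stable). The labels are made up, not the study's data.

        # Minimal sketch of Cohen's kappa between two raters/methods on binary calls.
        import numpy as np

        def cohens_kappa(a, b):
            a, b = np.asarray(a), np.asarray(b)
            p_observed = np.mean(a == b)
            labels = np.union1d(a, b)
            p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in labels)
            return (p_observed - p_chance) / (1.0 - p_chance)

        expert = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]      # hypothetical expert calls
        gpa_event = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]   # hypothetical GPA event-analysis calls
        print(round(cohens_kappa(expert, gpa_event), 2))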

  2. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of a HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. A deterministic structural analysis at the mean values of the variables was performed, and the resulting stress and displacement contours are presented. This is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement and maximum tensile and compressive stresses of the facesheet in x and y directions and maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned.

  3. Performance of Polymerase Chain Reaction Analysis of the Amniotic Fluid of Pregnant Women for Diagnosis of Congenital Toxoplasmosis: A Systematic Review and Meta-Analysis.

    PubMed

    de Oliveira Azevedo, Christianne Terra; do Brasil, Pedro Emmanuel A A; Guida, Letícia; Lopes Moreira, Maria Elizabeth

    2016-01-01

    Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. A total of 1,269 summaries were obtained from the electronic database and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests' use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests' sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and specificity was 99%. The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I²). The tests show low evidence of heterogeneity with a sensitivity of 87% and specificity of 99% when performed up to five weeks after maternal diagnosis. The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis, when there is suspicion of fetal toxoplasmosis.

  4. Performance of Polymerase Chain Reaction Analysis of the Amniotic Fluid of Pregnant Women for Diagnosis of Congenital Toxoplasmosis: A Systematic Review and Meta-Analysis

    PubMed Central

    2016-01-01

    Introduction Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. Goal To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. Method A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. Results A total of 1,269 summaries were obtained from the electronic database and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests’ use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests’ sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and specificity was 99%. Conclusion The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I²). The tests show low evidence of heterogeneity with a sensitivity of 87% and specificity of 99% when performed up to five weeks after maternal diagnosis. The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis, when there is suspicion of fetal toxoplasmosis. PMID:27055272

  5. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing levels of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
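
    A minimal sketch of the bootstrap idea described above is given below: a toy model and a simple correlation-based sensitivity proxy are used to check whether the index values and the parameter ranking are stable across bootstrap resamples. The model, the sensitivity measure and the convergence criteria are stand-ins, not the paper's hydrological models or indices.

      # Bootstrap check of whether sensitivity estimates (and the parameter ranking)
      # have converged at the available sample size. Toy model and toy measure only.
      import numpy as np

      rng = np.random.default_rng(1)

      def model(x):
          # Ishigami-like test function of three parameters.
          return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

      def indices(x, y):
          # Simple sensitivity proxy: squared correlation of each input with the output.
          return np.array([np.corrcoef(x[:, i], y)[0, 1] ** 2 for i in range(x.shape[1])])

      n, d, n_boot = 2000, 3, 500
      x = rng.uniform(-np.pi, np.pi, size=(n, d))
      y = model(x)

      boot = np.empty((n_boot, d))
      for b in range(n_boot):
          idx = rng.integers(0, n, n)                 # resample runs with replacement
          boot[b] = indices(x[idx], y[idx])

      lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
      ranks = np.argsort(-boot, axis=1)               # parameter ranking per replicate
      print("95% interval width per index:", np.round(hi - lo, 3))
      print("ranking identical across replicates:", bool((ranks == ranks[0]).all()))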

  6. The countermovement jump to monitor neuromuscular status: A meta-analysis.

    PubMed

    Claudino, João Gustavo; Cronin, John; Mezêncio, Bruno; McMaster, Daniel Travis; McGuigan, Michael; Tricoli, Valmor; Amadio, Alberto Carlos; Serrão, Julio Cerca

    2017-04-01

    The primary objective of this meta-analysis was to compare countermovement jump (CMJ) performance in studies that reported the highest value as opposed to average value for the purposes of monitoring neuromuscular status (i.e., fatigue and supercompensation). The secondary aim was to determine the sensitivity of the dependent variables. Systematic review with meta-analysis. The meta-analysis was conducted on the highest or average of a number of CMJ variables. Multiple literature searches were undertaken in Pubmed, Scopus, and Web of Science to identify articles utilizing CMJ to monitor training status. Effect sizes (ES) with 95% confidence interval (95% CI) were calculated using the mean and standard deviation of the pre- and post-testing data. The coefficient of variation (CV) with 95% CI was also calculated to assess the level of instability of each variable. Heterogeneity was assessed using a random-effects model. 151 articles were included providing a total of 531 ESs for the meta-analyses; 85.4% of articles used highest CMJ height, 13.2% used average and 1.3% used both when reporting changes in CMJ performance. Based on the meta-analysis, average CMJ height was more sensitive than highest CMJ height in detecting CMJ fatigue and supercompensation. Furthermore, other CMJ variables such as peak power, mean power, peak velocity, peak force, mean impulse, and power were sensitive in tracking the supercompensation effects of training. The average CMJ height was more sensitive than highest CMJ height in monitoring neuromuscular status; however, further investigation is needed to determine the sensitivity of other CMJ performance variables. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
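
    The effect-size and variability statistics referred to above can be illustrated with a short sketch that computes Cohen's d with an approximate 95% confidence interval and a coefficient of variation from invented pre/post jump-height data; the approximate standard-error formula and the data are illustrative only.

      # Pre/post effect size (Cohen's d with an approximate 95% CI) and coefficient of
      # variation for a jump-height variable. Data are invented for illustration.
      import numpy as np

      rng = np.random.default_rng(2)
      pre = rng.normal(40.0, 4.0, 12)     # CMJ height (cm) before a training block
      post = rng.normal(41.5, 4.0, 12)    # CMJ height (cm) after the training block

      n1, n2 = len(pre), len(post)
      sd_pooled = np.sqrt(((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1)) / (n1 + n2 - 2))
      d = (post.mean() - pre.mean()) / sd_pooled
      se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))   # approximate SE of d
      cv = 100.0 * pre.std(ddof=1) / pre.mean()                          # variability of the measure

      print(f"effect size d = {d:.2f}, 95% CI = ({d - 1.96 * se_d:.2f}, {d + 1.96 * se_d:.2f})")
      print(f"coefficient of variation = {cv:.1f}%")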

  7. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The three-dimensional quasi-analytical sensitivity analysis code and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three-dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package that determines, for comparison purposes, sensitivity coefficients via the finite difference approach; and (5) a graphics package.
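
    The cross-check between quasi-analytical and finite difference sensitivity coefficients (item 4 above) can be illustrated on a toy function: compare an analytic derivative with a central-difference estimate at several step sizes. The function below is an invented stand-in, not the ZEBRA wing analysis code.

      # Cross-check an analytically derived sensitivity coefficient against a central
      # finite-difference estimate (toy function only, not the wing analysis code).
      import numpy as np

      def cp(alpha, mach):
          # Invented "pressure coefficient" depending on angle of attack and Mach number.
          return 2.0 * np.sin(alpha) ** 2 / np.sqrt(abs(1.0 - mach ** 2))

      def dcp_dalpha(alpha, mach):
          # Analytic derivative of the same expression with respect to alpha.
          return 4.0 * np.sin(alpha) * np.cos(alpha) / np.sqrt(abs(1.0 - mach ** 2))

      alpha, mach = np.radians(4.0), 0.6
      exact = dcp_dalpha(alpha, mach)
      for h in (1e-2, 1e-4, 1e-6):
          fd = (cp(alpha + h, mach) - cp(alpha - h, mach)) / (2.0 * h)   # central difference
          print(f"step {h:.0e}: finite difference {fd:.6f}   analytic {exact:.6f}")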

  8. Dynamic sensitivity analysis of biological systems

    PubMed Central

    Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang

    2008-01-01

    Background A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., the fed-batch fermentation systems, the system admissible input (corresponding to independent variables of the system) can be time-dependent. The main difficulty for investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. The classical dynamic sensitivity analysis does not take into account this case for the dynamic log gains. Results We present an algorithm with an adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods in computing dynamic sensitivities of an ODE system, the step size determined by model equations can be used on the computations of the time profile and dynamic sensitivities with moderate accuracy even when sensitivity equations are stiffer than model equations. To show this algorithm can perform the dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm and from the direct method with Rosenbrock stiff integrator based on the indirect method. The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion By combining the accuracy we show with the efficiency of being a decoupled direct method, our algorithm is an excellent method for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
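
    The core idea of integrating a model ODE together with its parameter sensitivities can be sketched in a few lines with a standard stiff integrator; this illustrates forward sensitivity equations and dynamic log gains in general, not the adaptive decoupled algorithm of the paper. The model, parameter value and time span are invented.

      # Integrate a model ODE together with its parameter-sensitivity equation.
      # This shows the general idea only, not the adaptive decoupled algorithm of the paper.
      import numpy as np
      from scipy.integrate import solve_ivp

      k = 2.0  # rate constant, the uncertain parameter

      def rhs(t, z):
          x, s = z                  # state x and its sensitivity s = dx/dk
          dx = -k * x
          ds = -x - k * s           # d/dt (dx/dk) = d/dk (dx/dt)
          return [dx, ds]

      sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], method="BDF", dense_output=True)
      for t in np.linspace(0.5, 3.0, 6):
          x, s = sol.sol(t)
          print(f"t = {t:.1f}   d ln x / d ln k = {s * k / x:+.3f}")   # dynamic log gain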

  9. Relationship between Grooming Performance and Motor and Cognitive Functions in Stroke Patients with Receiver Operating Characteristic Analysis.

    PubMed

    Fujita, Takaaki; Sato, Atsushi; Tsuchiya, Kenji; Ohashi, Takuro; Yamane, Kazuhiro; Yamamoto, Yuichi; Iokawa, Kazuaki; Ohira, Yoko; Otsuki, Koji; Tozato, Fusae

    2017-12-01

    This study aimed to elucidate the relationship between grooming performance of stroke patients and various motor and cognitive functions and to examine the cognitive and physical functional standards required for grooming independence. We retrospectively analyzed the data of 96 hospitalized patients with first stroke in a rehabilitation hospital ward. Logistic regression analysis and receiver operating characteristic curves were used to investigate the cognitive and motor functions related to grooming performance and to calculate the cutoff values for independence and supervision levels in grooming. For analysis between the independent and supervision-dependent groups, the only item with an area under the curve (AUC) of .9 or higher was the Berg Balance Scale, and the calculated cutoff value was 41/40 (sensitivity, 83.6%; specificity, 87.8%). For analysis between the independent-supervision and dependent groups, the items with an AUC of .9 or higher were the Simple Test for Evaluating Hand Function (STEF) on the nonaffected side, Vitality Index (VI), and FIM® cognition. The cutoff values were 68/67 for the STEF (sensitivity, 100%; specificity, 72.2%), 9/8 points for the VI (sensitivity, 92.3%; specificity, 88.9%), and 23/22 points for FIM® cognition (sensitivity, 91.0%; specificity, 88.9%). Our results suggest that upper-extremity functions on the nonaffected side, motivation, and cognitive functions are particularly important to achieve the supervision level and that balance is important to reach the independence level. Grooming performance can be improved effectively by targeting therapeutic or compensatory interventions at functions that have not reached these cutoff values. Copyright © 2017 National Stroke Association. Published by Elsevier Inc. All rights reserved.
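
    The cutoff-value derivation described above can be illustrated with a small ROC sketch using scikit-learn: compute the curve, the AUC and a Youden-index cutoff with its sensitivity and specificity. The synthetic scores below stand in for clinical scale data and are not the study's measurements.

      # Derive an AUC and a Youden-index cutoff from a ROC curve.
      # Synthetic scores stand in for the clinical scale data (e.g. a balance score).
      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(3)
      independent = rng.normal(48.0, 6.0, 60)   # scores of the independent group (invented)
      dependent = rng.normal(36.0, 7.0, 36)     # scores of the supervision/dependent group (invented)

      scores = np.concatenate([independent, dependent])
      labels = np.concatenate([np.ones(len(independent)), np.zeros(len(dependent))])  # 1 = independent

      fpr, tpr, thresholds = roc_curve(labels, scores)
      best = np.argmax(tpr - fpr)               # Youden index J = sensitivity + specificity - 1
      print(f"AUC = {roc_auc_score(labels, scores):.2f}")
      print(f"cutoff = {thresholds[best]:.1f}   sensitivity = {tpr[best]:.1%}   specificity = {1 - fpr[best]:.1%}")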

  10. Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.

    PubMed

    Villa, E; Aja, B; de la Fuente, L; Artal, E

    2016-01-01

    This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines operating at cryogenic temperature (15 K). The design analysis of a microwave detector, based on a planar gallium-arsenide low effective Schottky barrier height diode, is reported, which aims at achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity, as well as a good return loss in a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A sensitivity of 1000 mV/mW and an input return loss better than 12 dB have been achieved when the circuit works as a zero-bias Schottky diode detector at room temperature; at cryogenic temperature the sensitivity increases to at least 2200 mV/mW, although a DC bias current is then required.

  11. Value of high-sensitivity C-reactive protein assays in predicting atrial fibrillation recurrence: a systematic review and meta-analysis.

    PubMed

    Yo, Chia-Hung; Lee, Si-Huei; Chang, Shy-Shin; Lee, Matthew Chien-Hung; Lee, Chien-Chang

    2014-02-20

    We performed a systematic review and meta-analysis of studies on high-sensitivity C-reactive protein (hs-CRP) assays to see whether these tests are predictive of atrial fibrillation (AF) recurrence after cardioversion. Systematic review and meta-analysis. PubMed, EMBASE and Cochrane databases as well as a hand search of the reference lists in the retrieved articles from inception to December 2013. This review selected observational studies in which the measurements of serum CRP were used to predict AF recurrence. An hs-CRP assay was defined as any CRP test capable of measuring serum CRP to below 0.6 mg/dL. We summarised test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic curves and bivariate random effects models. Meta-regression analysis was performed to explore the source of heterogeneity. We included nine qualifying studies comprising a total of 347 patients with AF recurrence and 335 controls. A CRP level higher than the optimal cut-off point was an independent predictor of AF recurrence after cardioversion (summary adjusted OR: 3.33; 95% CI 2.10 to 5.28). The estimated pooled sensitivity and specificity for hs-CRP was 71.0% (95% CI 63% to 78%) and 72.0% (61% to 81%), respectively. Most studies used a CRP cut-off point of 1.9 mg/L to predict long-term AF recurrence (77% sensitivity, 65% specificity), and 3 mg/L to predict short-term AF recurrence (73% sensitivity, 71% specificity). hs-CRP assays are moderately accurate in predicting AF recurrence after successful cardioversion.
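
    A rough illustration of pooling per-study sensitivity and specificity is shown below using simple inverse-variance pooling on the logit scale; the review itself used bivariate random-effects and hierarchical summary ROC models, which are more involved. The 2x2 counts are invented.

      # Rough pooling of per-study sensitivity and specificity on the logit scale using
      # inverse-variance (fixed-effect) weights; the review itself used bivariate
      # random-effects models, which are more involved. The 2x2 counts are invented.
      import numpy as np

      studies = [(30, 10, 12, 28), (25, 12, 9, 40), (41, 15, 20, 55), (18, 7, 6, 22)]  # (TP, FN, FP, TN)

      def pooled_proportion(events, nonevents):
          events = np.asarray(events, dtype=float) + 0.5        # continuity correction
          nonevents = np.asarray(nonevents, dtype=float) + 0.5
          logit = np.log(events / nonevents)
          weights = 1.0 / (1.0 / events + 1.0 / nonevents)      # inverse variance of the logit
          pooled_logit = np.sum(weights * logit) / np.sum(weights)
          return 1.0 / (1.0 + np.exp(-pooled_logit))

      tp, fn, fp, tn = (np.array(col) for col in zip(*studies))
      print(f"pooled sensitivity = {pooled_proportion(tp, fn):.1%}")
      print(f"pooled specificity = {pooled_proportion(tn, fp):.1%}")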

  12. Performance of Spectrogram-Based Seizure Identification of Adult EEGs by Critical Care Nurses and Neurophysiologists.

    PubMed

    Amorim, Edilberto; Williamson, Craig A; Moura, Lidia M V R; Shafi, Mouhsin M; Gaspard, Nicolas; Rosenthal, Eric S; Guanci, Mary M; Rajajee, Venkatakrishna; Westover, M Brandon

    2017-07-01

    Continuous EEG screening using spectrograms or compressed spectral arrays (CSAs) by neurophysiologists has shorter review times with minimal loss of sensitivity for seizure detection when compared with visual analysis of raw EEG. Limited data are available on the performance characteristics of CSA-based seizure detection by neurocritical care nurses. This is a prospective cross-sectional study that was conducted in two academic neurocritical care units and involved 33 neurointensive care unit nurses and four neurophysiologists. All nurses underwent a brief training session before testing. Forty 2-hour CSA segments of continuous EEG were reviewed and rated for the presence of seizures. Two experienced clinical neurophysiologists masked to the CSA data performed conventional visual analysis of the raw EEG and served as the gold standard. The overall accuracy was 55.7% among nurses and 67.5% among neurophysiologists. Nurse seizure detection sensitivity was 73.8%, and the false-positive rate was 1 per 3.2 hours. Sensitivity and false-alarm rate for the neurophysiologists were 66.3% and 1 per 6.4 hours, respectively. Interrater agreement for seizure screening was fair for nurses (Gwet AC1 statistic: 43.4%) and neurophysiologists (AC1: 46.3%). Training nurses to perform seizure screening utilizing continuous EEG CSA displays is feasible and associated with moderate sensitivity. Nurses and neurophysiologists had comparable sensitivities, but nurses had a higher false-positive rate. Further work is needed to improve sensitivity and reduce false-alarm rates.

  13. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
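
    One simple way to flag dispersed inputs that influence a pass/fail requirement in Monte Carlo output is to compare success rates between the lower and upper terciles of each input, as sketched below. This mimics the spirit of the tool described above, not its actual algorithms; the inputs, the toy metric and the requirement limit are assumptions.

      # Screen dispersed Monte Carlo inputs for influence on a pass/fail requirement by
      # comparing success rates in the lower and upper terciles of each input.
      import numpy as np

      rng = np.random.default_rng(4)
      n = 20000
      inputs = {
          "mass_dispersion": rng.normal(0.0, 1.0, n),
          "thrust_dispersion": rng.normal(0.0, 1.0, n),
          "wind_bias": rng.normal(0.0, 1.0, n),
      }
      # Toy performance metric and requirement (e.g. touchdown miss distance below a limit).
      miss = (1.0 + 0.8 * inputs["thrust_dispersion"] + 0.1 * inputs["wind_bias"]
              + 0.3 * rng.normal(0.0, 1.0, n))
      success = miss < 1.5

      for name, x in inputs.items():
          lo, hi = np.quantile(x, [1 / 3, 2 / 3])
          p_low, p_high = success[x <= lo].mean(), success[x >= hi].mean()
          print(f"{name:18s} success rate: lower tercile {p_low:.2f}, upper tercile {p_high:.2f}, "
                f"delta {abs(p_high - p_low):.2f}")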

  14. A Sensitivity Analysis of a Map of Habitat Quality for the California Spotted Owl (Strix occidentalis occidentalis) in southern California

    Treesearch

    Ellen M. Hines; Janet Franklin

    1997-01-01

    Using a Geographic Information System (GIS), a sensitivity analysis was performed on estimated mapping errors in vegetation type, forest canopy cover percentage, and tree crown size to determine the possible effects error in these data might have on delineating suitable habitat for the California Spotted Owl (Strix occidentalis occidentalis) in...

  15. Evaluation and construction of diagnostic criteria for inclusion body myositis

    PubMed Central

    Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee

    2014-01-01

    Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859

  16. Temporal diagnostic analysis of the SWAT model to detect dominant periods of poor model performance

    NASA Astrophysics Data System (ADS)

    Guse, Björn; Reusser, Dominik E.; Fohrer, Nicola

    2013-04-01

    Hydrological models generally include thresholds and non-linearities, such as snow-rain-temperature thresholds, non-linear reservoirs, infiltration thresholds and the like. When relating observed variables to modelling results, formal methods often calculate performance metrics over long periods, reporting model performance with only a few numbers. Such approaches are not well suited to compare dominating processes between reality and model and to better understand when thresholds and non-linearities are driving model results. We present a combination of two temporally resolved model diagnostic tools to answer when a model is performing (not so) well and what the dominant processes are during these periods. We look at the temporal dynamics of parameter sensitivities and model performance to answer this question. For this, the eco-hydrological SWAT model is applied in the Treene lowland catchment in Northern Germany. As a first step, temporal dynamics of parameter sensitivities are analyzed using the Fourier Amplitude Sensitivity Test (FAST). The sensitivities of the eight model parameters investigated show strong temporal variations. High sensitivities were detected for two groundwater parameters (GW_DELAY, ALPHA_BF) and one evaporation parameter (ESCO) most of the time. The periods of high parameter sensitivity can be related to different phases of the hydrograph, with dominance of the groundwater parameters in the recession phases and of ESCO in baseflow and resaturation periods. Surface runoff parameters show high parameter sensitivities in phases of a precipitation event in combination with high soil water contents. The dominant parameters give an indication of the controlling processes during a given period for the hydrological catchment. The second step included the temporal analysis of model performance. For each time step, model performance was characterized with a "fingerprint" consisting of a large set of performance measures. These fingerprints were clustered into four recurring patterns of typical model performance, which can be related to different phases of the hydrograph. Overall, the baseflow cluster has the lowest performance. By combining the periods of poor model performance with the dominant model components during these phases, the groundwater module was detected as the model part with the highest potential for model improvements. The detection of dominant processes in periods of poor model performance enhances the understanding of the SWAT model. Based on this, concepts for improving the SWAT model structure for application in German lowland catchments are derived.
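
    Time-resolved sensitivities of the kind described above can be approximated from an ensemble of model runs by scoring, at each time step, how strongly each sampled parameter correlates with the ensemble output. The sketch below uses this simple proxy on a toy recession-like model; FAST itself is a frequency-based method and is not reproduced here, and all parameter names and ranges are illustrative.

      # Time-resolved parameter sensitivity from an ensemble of model runs: at each time
      # step, score each parameter by its squared rank correlation with the ensemble output.
      # (FAST itself is frequency based; this is only a simple stand-in for the idea.)
      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(5)
      n_runs, n_steps = 300, 120
      params = {
          "gw_delay": rng.uniform(1.0, 60.0, n_runs),
          "alpha_bf": rng.uniform(0.01, 1.0, n_runs),
          "esco": rng.uniform(0.1, 1.0, n_runs),
      }

      t = np.arange(1, n_steps + 1)
      w = t / n_steps                                    # weight shifting the dominant process over time
      runs = np.array([(1 - w) * np.exp(-t / params["gw_delay"][i])
                       + w * (0.5 * params["esco"][i] + 0.02 * params["alpha_bf"][i] * t)
                       for i in range(n_runs)])          # toy recession-like model output

      for name, values in params.items():
          s = np.array([spearmanr(values, runs[:, j])[0] ** 2 for j in range(n_steps)])
          print(f"{name:9s} mean sensitivity: early {s[:40].mean():.2f}   late {s[-40:].mean():.2f}")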

  17. Integrated Modeling Activities for the James Webb Space Telescope (JWST): Structural-Thermal-Optical Analysis

    NASA Technical Reports Server (NTRS)

    Johnston, John D.; Parrish, Keith; Howard, Joseph M.; Mosier, Gary E.; McGinnis, Mark; Bluth, Marcel; Kim, Kevin; Ha, Hong Q.

    2004-01-01

    This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as "STOP", analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. The paper begins with an overview of multi-disciplinary engineering analysis, or integrated modeling, which is a critical element of the JWST mission. The STOP analysis process is then described. This process consists of the following steps: thermal analysis, structural analysis, and optical analysis. Temperatures predicted using geometric and thermal math models are mapped to the structural finite element model in order to predict thermally-induced deformations. Motions and deformations at optical surfaces are input to optical models and optical performance is predicted using either an optical ray trace or WFE estimation techniques based on prior ray traces or first order optics. Following the discussion of the analysis process, results are presented based on models representing the design at the time of the System Requirements Review. In addition to baseline performance predictions, sensitivity studies are performed to assess modeling uncertainties. Of particular interest is the sensitivity of optical performance to uncertainties in temperature predictions and variations in metal properties. The paper concludes with a discussion of modeling uncertainty as it pertains to STOP analysis.

  18. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

    Abstract Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  19. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

    Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/ . teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  20. OECD/NEA expert group on uncertainty analysis for criticality safety assessment: Results of benchmark on sensitivity calculation (phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, T.; Laville, C.; Dyrda, J.

    2012-07-01

    The sensitivities of the k_eff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods. (authors)

  1. Assessment of energy and economic performance of office building models: a case study

    NASA Astrophysics Data System (ADS)

    Song, X. Y.; Ye, C. T.; Li, H. S.; Wang, X. L.; Ma, W. B.

    2016-08-01

    Building energy consumption accounts for more than 37.3% of total energy consumption in China, while the proportion of energy-saving buildings is just 5%. In this paper, in order to identify energy-saving potential, an office building in Southern China was selected as a test case for studying energy consumption characteristics. The base building model was developed with the TRNSYS software and validated against data recorded during six days of field work in August-September 2013. Sensitivity analysis was conducted for the energy performance of building envelope retrofitting; five envelope parameters were analyzed to assess the thermal responses. Results indicated that the key sensitivity factors were the heat-transfer coefficient of the exterior walls (U-wall), the infiltration rate and the shading coefficient (SC), which together accounted for about 89.32% of the total sensitivity. In addition, the results were evaluated in terms of energy and economic analysis. The sensitivity results were consistent with important findings of previous studies, and the cost-effectiveness analysis improved the efficiency of investment management in building energy.

  2. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
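
    The parametric part of such a variance decomposition can be illustrated with plain-numpy Sobol index estimators (Saltelli/Jansen pick-freeze) on a deterministic test function, as below; the paper's additional decomposition over stochastic reaction channels is not reproduced. The Ishigami function and sample sizes are conventional choices, not taken from the paper.

      # Plain-numpy Sobol first-order (S) and total-effect (S_T) indices for a deterministic
      # test function, using the Saltelli/Jansen pick-freeze estimators.
      import numpy as np

      rng = np.random.default_rng(6)

      def f(x):
          # Ishigami function, a standard sensitivity-analysis test case.
          return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

      n, d = 20000, 3
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))
      fA, fB = f(A), f(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                            # A with column i taken from B
          fABi = f(ABi)
          S = np.mean(fB * (fABi - fA)) / var            # first-order index (Saltelli)
          ST = 0.5 * np.mean((fA - fABi) ** 2) / var     # total-effect index (Jansen)
          print(f"x{i + 1}: S = {S:.2f}   S_T = {ST:.2f}")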

  3. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    DOE PAGES

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-23

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol’s decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  4. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    NASA Astrophysics Data System (ADS)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  5. Uncertainty quantification and sensitivity analysis with CASL Core Simulator VERA-CS

    DOE PAGES

    Brown, C. S.; Zhang, Hongbin

    2016-05-24

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
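
    The three correlation-based sensitivity measures named above can be computed as in the sketch below, where the partial correlation is obtained by correlating residuals after regressing out the other inputs. The input distributions and the toy figure of merit are invented and do not come from VERA-CS.

      # Pearson, Spearman and partial correlation coefficients as sampling-based sensitivity
      # measures, computed on invented data (not VERA-CS output).
      import numpy as np
      from scipy.stats import pearsonr, spearmanr

      rng = np.random.default_rng(7)
      n = 2000
      X = np.column_stack([
          rng.normal(565.0, 2.0, n),    # coolant inlet temperature (K), assumed spread
          rng.normal(1.0, 0.02, n),     # power uncertainty factor, assumed
          rng.normal(1.0, 0.05, n),     # flow uncertainty factor, assumed
      ])
      # Toy figure of merit loosely standing in for MDNBR.
      y = (2.0 - 0.02 * (X[:, 0] - 565.0) - 0.5 * (X[:, 1] - 1.0) + 0.3 * (X[:, 2] - 1.0)
           + rng.normal(0.0, 0.01, n))

      def partial_corr(X, y, i):
          # Correlate the residuals of X[:, i] and y after regressing out the other inputs.
          others = np.column_stack([np.delete(X, i, axis=1), np.ones(len(y))])
          rx = X[:, i] - others @ np.linalg.lstsq(others, X[:, i], rcond=None)[0]
          ry = y - others @ np.linalg.lstsq(others, y, rcond=None)[0]
          return pearsonr(rx, ry)[0]

      for i, name in enumerate(["inlet_temp", "power_factor", "flow_factor"]):
          print(f"{name:12s} pearson {pearsonr(X[:, i], y)[0]:+.2f}   "
                f"spearman {spearmanr(X[:, i], y)[0]:+.2f}   partial {partial_corr(X, y, i):+.2f}")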

  6. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurately remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  7. Metrics for evaluating performance and uncertainty of Bayesian network models

    Treesearch

    Bruce G. Marcot

    2012-01-01

    This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...

  8. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
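
    One simple publication-bias sensitivity diagnostic, a precision-sorted cumulative meta-analysis of correlations, is sketched below; the paper applied a broader suite of methods. The study correlations and sample sizes are invented.

      # Precision-sorted cumulative meta-analysis of correlations: a simple sensitivity
      # diagnostic for small-study (publication-bias-like) effects.
      import numpy as np

      studies = [(0.28, 60), (0.25, 90), (0.19, 250), (0.31, 45), (0.16, 400), (0.22, 150)]  # (r, n)

      r = np.array([s[0] for s in studies])
      n = np.array([s[1] for s in studies])
      z = np.arctanh(r)                      # Fisher z transform of the correlations
      var = 1.0 / (n - 3)                    # sampling variance of z
      order = np.argsort(var)                # most precise studies first

      for k in range(1, len(studies) + 1):
          idx = order[:k]
          w = 1.0 / var[idx]
          pooled = np.tanh(np.sum(w * z[idx]) / np.sum(w))
          print(f"after the {k} most precise studies: pooled r = {pooled:.3f}")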

  9. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  10. The diagnostic performance of shear wave elastography for malignant cervical lymph nodes: A systematic review and meta-analysis.

    PubMed

    Suh, Chong Hyun; Choi, Young Jun; Baek, Jung Hwan; Lee, Jeong Hyun

    2017-01-01

    To evaluate the diagnostic performance of shear wave elastography for malignant cervical lymph nodes. We searched the Ovid-MEDLINE and EMBASE databases for published studies regarding the use of shear wave elastography for diagnosing malignant cervical lymph nodes. The diagnostic performance of shear wave elastography was assessed using bivariate modelling and hierarchical summary receiver operating characteristic modelling. Meta-regression analysis and subgroup analysis according to acoustic radiation force impulse imaging (ARFI) and Supersonic shear imaging (SSI) were also performed. Eight eligible studies, comprising 481 patients with 647 cervical lymph nodes, were included. Shear wave elastography showed a summary sensitivity of 81 % (95 % CI: 72-88 %) and specificity of 85 % (95 % CI: 70-93 %). The results of meta-regression analysis revealed that the prevalence of malignant lymph nodes was a significant factor affecting study heterogeneity (p < .01). According to the subgroup analysis, the summary estimates of the sensitivity and specificity did not differ between ARFI and SSI (p = .93). Shear wave elastography is an acceptable imaging modality for diagnosing malignant cervical lymph nodes. We believe that both ARFI and SSI may have a complementary role for diagnosing malignant cervical lymph nodes. • Shear wave elastography is an acceptable modality for diagnosing malignant cervical lymph nodes. • Shear wave elastography demonstrated summary sensitivity of 81 % and specificity of 85 %. • ARFI and SSI have complementary roles for diagnosing malignant cervical lymph nodes.

  11. ADC as a useful diagnostic tool for differentiating benign and malignant vertebral bone marrow lesions and compression fractures: a systematic review and meta-analysis.

    PubMed

    Suh, Chong Hyun; Yun, Seong Jong; Jin, Wook; Lee, Sun Hwa; Park, So Young; Ryu, Chang-Woo

    2018-07-01

    To assess the sensitivity and specificity of quantitative assessment of the apparent diffusion coefficient (ADC) for differentiating benign and malignant vertebral bone marrow lesions (BMLs) and compression fractures (CFs). An electronic literature search of MEDLINE and EMBASE was conducted. Bivariate modelling and hierarchical summary receiver operating characteristic modelling were performed to evaluate the diagnostic performance of ADC for differentiating vertebral BMLs. Subgroup analysis was performed for differentiating benign and malignant vertebral CFs. Meta-regression analyses according to subject, study and diffusion-weighted imaging (DWI) characteristics were performed. Twelve eligible studies (748 lesions, 661 patients) were included. The ADC exhibited a pooled sensitivity of 0.89 (95% confidence interval [CI] 0.80-0.94) and a pooled specificity of 0.87 (95% CI 0.78-0.93) for differentiating benign and malignant vertebral BMLs. In addition, the pooled sensitivity and specificity for differentiating benign and malignant CFs were 0.92 (95% CI 0.82-0.97) and 0.91 (95% CI 0.87-0.94), respectively. In the meta-regression analysis, the DWI slice thickness was a significant factor affecting heterogeneity (p < 0.01); thinner slice thickness (< 5 mm) showed higher specificity (95%) than thicker slice thickness (81%). Quantitative assessment of ADC is a useful diagnostic tool for differentiating benign and malignant vertebral BMLs and CFs. • Quantitative assessment of ADC is useful in differentiating vertebral BMLs. • Quantitative ADC assessment for BMLs had sensitivity of 89%, specificity of 87%. • Quantitative ADC assessment for CFs had sensitivity of 92%, specificity of 91%. • The specificity is highest (95%) with thinner (< 5 mm) DWI slice thickness.

  12. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  13. Modeling and Analysis of Actinide Diffusion Behavior in Irradiated Metal Fuel

    NASA Astrophysics Data System (ADS)

    Edelmann, Paul G.

    There have been numerous attempts to model fast reactor fuel behavior in the last 40 years. The US currently does not have a fully reliable tool to simulate the behavior of metal fuels in fast reactors. The experimental database necessary to validate the codes is also very limited. The DOE-sponsored Advanced Fuels Campaign (AFC) has performed various experiments that are ready for analysis. Current metal fuel performance codes are either not available to the AFC or have limitations and deficiencies in predicting AFC fuel performance. A modified version of a new fuel performance code, FEAST-Metal, was employed in this investigation with useful results. This work explores the modeling and analysis of AFC metallic fuels using FEAST-Metal, particularly in the area of constituent actinide diffusion behavior. The FEAST-Metal code calculations for this work were conducted at Los Alamos National Laboratory (LANL) in support of on-going activities related to sensitivity analysis of fuel performance codes. A sensitivity analysis of FEAST-Metal was completed to identify important macroscopic parameters of interest to modeling and simulation of metallic fuel performance. A modification was made to the FEAST-Metal constituent redistribution model to enable accommodation of newer AFC metal fuel compositions with verified results. Applicability of this modified model for sodium fast reactor metal fuel design is demonstrated.

  14. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference-case cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in the better-seeing eye and ERM surgery in the worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. In sensitivity analysis, varying the utility values changed the result from $6,245 to $3,746/QALY gained, varying medical costs changed it from $3,510 to $5,850/QALY gained, and a higher ERM recurrence rate increased it to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Varying utility values gave a range of $21,520 to $12,916/QALY, and a higher ERM recurrence rate increased the figure to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
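
    The discounting and one-way sensitivity arithmetic behind a cost-per-QALY figure can be illustrated with a short calculation; the cost, annual utility gain and time horizon below are assumptions for illustration, not the study's model inputs.

      # Discounted QALY gain and cost per QALY, with a one-way sensitivity sweep on the
      # annual utility gain. All numbers are illustrative assumptions, not the study's inputs.

      def discounted_qalys(utility_gain, years, rate=0.03):
          # Sum of the annual utility gain discounted at `rate` per year.
          return sum(utility_gain / (1.0 + rate) ** t for t in range(1, years + 1))

      def cost_per_qaly(cost, utility_gain, years, rate=0.03):
          return cost / discounted_qalys(utility_gain, years, rate)

      base_cost = 3500.0    # direct medical cost of surgery ($), assumed
      base_gain = 0.06      # annual utility gain from improved acuity, assumed
      horizon = 15          # remaining time horizon (years), assumed

      print(f"base case: {discounted_qalys(base_gain, horizon):.2f} QALYs gained, "
            f"${cost_per_qaly(base_cost, base_gain, horizon):,.0f}/QALY")
      for gain in (0.04, 0.05, 0.07, 0.08):   # one-way sensitivity on the utility gain
          print(f"utility gain {gain:.2f}: ${cost_per_qaly(base_cost, gain, horizon):,.0f}/QALY")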

  15. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake.

    PubMed

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-09-02

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of the parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the most influential parameters were Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance for TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.

  16. Critical processes and parameters in the development of accident tolerant fuels drop-in capsule irradiation tests

    DOE PAGES

    Barrett, K. E.; Ellis, K. D.; Glass, C. R.; ...

    2015-12-01

    The goal of the Accident Tolerant Fuel (ATF) program is to develop the next generation of Light Water Reactor (LWR) fuels with improved performance, reliability, and safety characteristics during normal operations and accident conditions and with reduced waste generation. An irradiation test series has been defined to assess the performance of proposed ATF concepts under normal LWR operating conditions. The Phase I ATF irradiation test series is planned to be performed as a series of drop-in capsule tests to be irradiated in the Advanced Test Reactor (ATR) operated by the Idaho National Laboratory (INL). Design, analysis, and fabrication processes for ATR drop-in capsule experiment preparation are presented in this paper to demonstrate the importance of special design considerations, parameter sensitivity analysis, and precise fabrication and inspection techniques for the innovative materials used in ATF experiment assemblies. A Taylor Series Method sensitivity analysis approach was used to identify the most critical variables in cladding and rodlet stress, temperature, and pressure calculations for design analyses. The results showed that internal rodlet pressure calculations are most sensitive to the fission gas release rate uncertainty, while temperature calculations are most sensitive to cladding I.D. and O.D. dimensional uncertainty. The analysis showed that stress calculations are most sensitive to rodlet internal pressure uncertainties; however, the results also indicated that uncertainties in the inside radius, outside radius, and internal pressure are all magnified as they propagate through the stress equation. This study demonstrates the importance for ATF concept development teams to provide the fabricators with as much information as possible about the material properties and behavior observed in prototype testing, mock-up fabrication and assembly, and chemical and mechanical testing of the materials that may have been performed in the concept development phase. Special handling, machining, welding, and inspection requirements of the materials, if known, should also be communicated to the experiment fabrication and inspection team.
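    The Taylor Series Method referred to above propagates input uncertainties through a design equation via first-order partial derivatives. The sketch below (Python) shows the generic recipe applied to a thin-wall hoop-stress formula; the stress model, dimensions, and uncertainties are illustrative assumptions, not the actual ATF rodlet equations or data.

        import numpy as np

        # First-order Taylor-series uncertainty propagation (illustrative model and numbers).
        def hoop_stress(p_internal, r_inside, thickness):
            """Thin-wall hoop stress sigma = P * r / t (placeholder design equation)."""
            return p_internal * r_inside / thickness

        nominal = {"p_internal": 5.0e6, "r_inside": 4.0e-3, "thickness": 0.5e-3}   # Pa, m, m
        sigma_u = {"p_internal": 0.2e6, "r_inside": 0.05e-3, "thickness": 0.02e-3} # 1-sigma uncertainties

        def taylor_variance(func, nominal, sigma_u, rel_step=1e-6):
            """Estimate each input's variance contribution via central-difference partials."""
            contributions = {}
            for name in nominal:
                hi, lo = dict(nominal), dict(nominal)
                h = rel_step * abs(nominal[name])
                hi[name] += h
                lo[name] -= h
                dfdx = (func(**hi) - func(**lo)) / (2.0 * h)
                contributions[name] = (dfdx * sigma_u[name]) ** 2
            return contributions

        contrib = taylor_variance(hoop_stress, nominal, sigma_u)
        total_sd = np.sqrt(sum(contrib.values()))
        for name, var in sorted(contrib.items(), key=lambda kv: -kv[1]):
            print(f"{name:12s} share of variance: {var / sum(contrib.values()):.2%}")
        print(f"stress = {hoop_stress(**nominal):.3e} +/- {total_sd:.3e} Pa (1-sigma)")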

  17. Value of high-sensitivity C-reactive protein assays in predicting atrial fibrillation recurrence: a systematic review and meta-analysis

    PubMed Central

    Yo, Chia-Hung; Lee, Si-Huei; Chang, Shy-Shin; Lee, Matthew Chien-Hung; Lee, Chien-Chang

    2014-01-01

    Objectives We performed a systematic review and meta-analysis of studies on high-sensitivity C-reactive protein (hs-CRP) assays to see whether these tests are predictive of atrial fibrillation (AF) recurrence after cardioversion. Design Systematic review and meta-analysis. Data sources PubMed, EMBASE and Cochrane databases as well as a hand search of the reference lists in the retrieved articles from inception to December 2013. Study eligibility criteria This review selected observational studies in which the measurements of serum CRP were used to predict AF recurrence. An hs-CRP assay was defined as any CRP test capable of measuring serum CRP to below 0.6 mg/dL. Primary and secondary outcome measures We summarised test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic curves and bivariate random effects models. Meta-regression analysis was performed to explore the source of heterogeneity. Results We included nine qualifying studies comprising a total of 347 patients with AF recurrence and 335 controls. A CRP level higher than the optimal cut-off point was an independent predictor of AF recurrence after cardioversion (summary adjusted OR: 3.33; 95% CI 2.10 to 5.28). The estimated pooled sensitivity and specificity for hs-CRP was 71.0% (95% CI 63% to 78%) and 72.0% (61% to 81%), respectively. Most studies used a CRP cut-off point of 1.9 mg/L to predict long-term AF recurrence (77% sensitivity, 65% specificity), and 3 mg/L to predict short-term AF recurrence (73% sensitivity, 71% specificity). Conclusions hs-CRP assays are moderately accurate in predicting AF recurrence after successful cardioversion. PMID:24556243

  18. Sensitivity, Specificity, and Posttest Probability of Parotid Fine-Needle Aspiration: A Systematic Review and Meta-analysis.

    PubMed

    Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J

    2016-01-01

    (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I² statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I² point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.
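    The Bayesian nomogram step mentioned above is equivalent to converting pooled sensitivity and specificity into likelihood ratios and updating a pretest probability. A minimal sketch (Python), with the 20% pretest probability as an illustrative assumption:

        # Posttest probability from pooled sensitivity/specificity (prospective subgroup).
        def posttest_probability(pretest_p, likelihood_ratio):
            pretest_odds = pretest_p / (1.0 - pretest_p)
            posttest_odds = pretest_odds * likelihood_ratio
            return posttest_odds / (1.0 + posttest_odds)

        sens, spec = 0.882, 0.995                 # pooled estimates quoted above
        lr_pos = sens / (1.0 - spec)              # positive likelihood ratio (~176)
        lr_neg = (1.0 - sens) / spec              # negative likelihood ratio (~0.12)

        pretest = 0.20                            # assumed 20% pretest probability of malignancy
        print(f"LR+ = {lr_pos:.1f}, LR- = {lr_neg:.3f}")
        print(f"posttest probability after a malignant FNA: {posttest_probability(pretest, lr_pos):.2%}")
        print(f"posttest probability after a benign FNA:    {posttest_probability(pretest, lr_neg):.2%}")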

  19. Sensitivity, Specificity, and Posttest Probability of Parotid Fine-Needle Aspiration: A Systematic Review and Meta-analysis

    PubMed Central

    Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.

    2016-01-01

    Objectives (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476

  20. NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.

    2009-02-28

    The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher resolution, largely deterministic, analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes development of the PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.

  1. Effect of dye extracting solvents and sensitization time on photovoltaic performance of natural dye sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Hossain, Md. Khalid; Pervez, M. Firoz; Mia, M. N. H.; Mortuza, A. A.; Rahaman, M. S.; Karim, M. R.; Islam, Jahid M. M.; Ahmed, Farid; Khan, Mubarak A.

    In this study, natural dye sensitizer based solar cells were successfully fabricated and their photovoltaic performance was measured. Sensitizer (turmeric) sources, the dye extraction process, and the photoanode sensitization time of the fabricated cells were analyzed and optimized. Dry turmeric, verdant turmeric, and powder turmeric were used as dye sources. Five distinct types of solvents were used for extraction of natural dye from turmeric. Dyes were characterized by UV-Vis spectrophotometric analysis. The extracted turmeric dye was used as a sensitizer in the dye sensitized solar cell (DSSC) photoanode assembly. Nano-crystalline TiO2 was used as the film-coating semiconductor material of the photoanode. TiO2 films on ITO glass substrates were prepared by a simple doctor blade technique. The photovoltaic characteristics of the DSSCs were analyzed in terms of the parameters VOC, JSC, power density, FF, and η (%). The best energy conversion performance was obtained for a 2 h adsorption time of the dye on the TiO2 nano-porous surface with ethanol-extracted dye from dry turmeric.
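    For readers unfamiliar with the photovoltaic parameters listed above, the sketch below (Python) shows how FF and η follow from a measured J-V curve; the voltage and current values are illustrative assumptions, not the measured data for the turmeric cells.

        # Fill factor and efficiency from illustrative J-V quantities (not the paper's data).
        def fill_factor(v_oc, j_sc, v_mp, j_mp):
            """FF = maximum power point divided by the VOC * JSC product."""
            return (v_mp * j_mp) / (v_oc * j_sc)

        def efficiency(v_oc, j_sc, ff, p_in=100.0):
            """eta (%) for input irradiance p_in in mW/cm^2 (AM1.5G ~ 100 mW/cm^2)."""
            return 100.0 * (v_oc * j_sc * ff) / p_in

        v_oc, j_sc = 0.52, 1.1          # V and mA/cm^2, assumed values for a natural-dye cell
        v_mp, j_mp = 0.38, 0.85         # maximum power point, assumed
        ff = fill_factor(v_oc, j_sc, v_mp, j_mp)
        print(f"FF = {ff:.2f}, eta = {efficiency(v_oc, j_sc, ff):.3f} %")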

  2. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...

  3. Dynamic Modeling of the Human Coagulation Cascade Using Reduced Order Effective Kinetic Models (Open Access)

    DTIC Science & Technology

    2015-03-16

    We conducted a global sensitivity analysis, using the variance-based method of Sobol', to estimate which parameters controlled the performance of the reduced-order coagulation models; the shaded region around each total sensitivity value was the maximum uncertainty in that value as estimated by the Sobol' method.

  4. Sensitivity analysis in a Lassa fever deterministic mathematical model

    NASA Astrophysics Data System (ADS)

    Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman

    2015-05-01

    Lassa virus, which causes Lassa fever, is on the list of potential bio-weapon agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.

  5. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates and their sensitivities are presented for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST). Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of the price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
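    The sketch below (Python) illustrates the kind of one-at-a-time sensitivity scan described above; the cost function and coefficients are placeholders standing in for the IPEG formulation, not the actual IPEG coefficients or JPL cost data.

        # One-at-a-time price sensitivity scan over a placeholder cost model.
        def add_on_price(equipment, space, labor, materials, utilities, wafers_per_year):
            annualized_cost = 0.3 * equipment + 100.0 * space + 2.0 * labor + materials + utilities
            return annualized_cost / wafers_per_year   # $ per wafer

        base = dict(equipment=250_000, space=30.0, labor=60_000,
                    materials=40_000, utilities=15_000, wafers_per_year=200_000)

        base_price = add_on_price(**base)
        for name in base:
            for factor in (0.8, 1.2):                  # +/- 20% perturbation, one input at a time
                perturbed = dict(base, **{name: base[name] * factor})
                delta = add_on_price(**perturbed) - base_price
                print(f"{name:16s} x{factor}: price change {delta:+.4f} $/wafer")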

  6. Diagnostic value of 18F-FDG-PET/CT for the evaluation of solitary pulmonary nodules: a systematic review and meta-analysis.

    PubMed

    Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian

    2017-01-01

    To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data of eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I²=81.1%) and specificity (I²=89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.

  7. Sensitivity and Uncertainty Analysis for Streamflow Prediction Using Different Objective Functions and Optimization Algorithms: San Joaquin California

    NASA Astrophysics Data System (ADS)

    Paul, M.; Negahban-Azar, M.

    2017-12-01

    Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through a careful model calibration. However, a large number of parameters must be fitted in the model because field measurements of them are not available. Therefore, it is difficult to calibrate the model for a large number of potentially uncertain model parameters. This becomes even more challenging if the model is for a large watershed with multiple land uses and various geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect the calibrated model performance. There are many different calibration and uncertainty analysis algorithms which can be performed with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm in improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied in the San Joaquin Watershed in California, covering 19704 km2, to calibrate the daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought and depleting groundwater for agricultural irrigation. Therefore, given the uncertainties inherent in hydrologic modeling, it is important to perform a proper uncertainty analysis to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow. To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). The preliminary results showed that using the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., R2 and NSE of 0.52 and 0.47, respectively, for daily streamflow calibration).
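    The two objective functions that drove the best calibrations above, NSE and KGE, can be computed as in the sketch below (Python); the observed and simulated flow series are illustrative placeholders.

        import numpy as np

        def nse(obs, sim):
            """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def kge(obs, sim):
            """Kling-Gupta efficiency from correlation r, variability ratio alpha, bias ratio beta."""
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            r = np.corrcoef(obs, sim)[0, 1]
            alpha = sim.std() / obs.std()
            beta = sim.mean() / obs.mean()
            return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

        obs = [12.0, 18.5, 30.2, 22.1, 15.4, 9.8]    # observed daily flows (m^3/s), illustrative
        sim = [10.5, 20.1, 27.8, 24.0, 14.2, 11.0]   # simulated flows, illustrative
        print(f"NSE = {nse(obs, sim):.2f}, KGE = {kge(obs, sim):.2f}")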

  8. Performance of pfHRP2 versus pLDH antigen rapid diagnostic tests for the detection of Plasmodium falciparum: a systematic review and meta-analysis.

    PubMed

    Li, Bo; Sun, Zhiqiang; Li, Xiaohan; Li, Xiaoxi; Wang, Han; Chen, Weijiao; Chen, Peng; Qiao, Mengran; Mao, Yuanli

    2017-04-01

    There have been many inconsistent reports about the performance of histidine-rich protein 2 (HRP2) and lactate dehydrogenase (LDH) antigens as rapid diagnostic tests (RDTs) for the diagnosis of past Plasmodium falciparum infections. This meta-analysis was performed to determine the performance of pfHRP2 versus pLDH antigen RDTs in the detection of P. falciparum. After a systematic review of related studies, Meta-DiSc 1.4 software was used to calculate the pooled sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR). Forest plots and summary receiver operating characteristic curve (SROC) analysis were used to summarize the overall test performance. Fourteen studies which met the inclusion criteria were included in the meta-analysis. The summary performances for pfHRP2- and pLDH-based tests in the diagnosis of P. falciparum infections were as follows: pooled sensitivity, 96.3% (95.8-96.7%) vs. 82.6% (81.7-83.5%); specificity, 86.1% (85.3-86.8%) vs. 95.9% (95.4-96.3%); diagnostic odds ratio (DOR), 243.31 (97.679-606.08) vs. 230.59 (114.98-462.42); and area under ROCs, 0.9822 versus 0.9849 (all p < 0.001). The two RDTs performed satisfactorily for the diagnosis of P. falciparum, but the pLDH tests had higher specificity, whereas the pfHRP2 tests had better sensitivity. The pfHRP2 tests had slightly greater accuracy compared to the pLDH tests. A combination of both antigens might be a more reliable approach for the diagnosis of malaria.
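    All of the summary statistics quoted above derive from 2x2 diagnostic counts. A minimal sketch (Python) with illustrative counts, not the meta-analysis data:

        # Diagnostic accuracy metrics from a 2x2 table (illustrative counts).
        def diagnostic_metrics(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            plr = sens / (1.0 - spec)
            nlr = (1.0 - sens) / spec
            dor = plr / nlr                       # equivalently (tp * tn) / (fp * fn)
            return sens, spec, plr, nlr, dor

        sens, spec, plr, nlr, dor = diagnostic_metrics(tp=480, fp=70, fn=20, tn=430)
        print(f"sens={sens:.3f} spec={spec:.3f} PLR={plr:.2f} NLR={nlr:.3f} DOR={dor:.1f}")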

  9. Accelerated Sensitivity Analysis in High-Dimensional Stochastic Reaction Networks

    PubMed Central

    Arampatzis, Georgios; Katsoulakis, Markos A.; Pantazis, Yannis

    2015-01-01

    Existing sensitivity analysis approaches are not able to efficiently handle stochastic reaction networks with a large number of parameters and species, which are typical in the modeling and simulation of complex biochemical phenomena. In this paper, a two-step strategy for parametric sensitivity analysis for such systems is proposed, exploiting advantages and synergies between two recently proposed sensitivity analysis methodologies for stochastic dynamics. The first method performs sensitivity analysis of the stochastic dynamics by means of the Fisher Information Matrix on the underlying distribution of the trajectories; the second method is a reduced-variance, finite-difference, gradient-type sensitivity approach relying on stochastic coupling techniques for variance reduction. Here we demonstrate that these two methods can be combined and deployed together by means of a new sensitivity bound which incorporates the variance of the quantity of interest as well as the Fisher Information Matrix estimated from the first method. The first step of the proposed strategy labels sensitivities using the bound and screens out the insensitive parameters in a controlled manner. In the second step, a finite-difference method is applied only for the sensitivity estimation of the (potentially) sensitive parameters that have not been screened out in the first step. Results on an epidermal growth factor network with fifty parameters and on a protein homeostasis model with eighty parameters demonstrate that the proposed strategy is able to quickly discover and discard the insensitive parameters and, for the remaining potentially sensitive parameters, to accurately estimate the sensitivities. The new sensitivity strategy can be several times faster than current state-of-the-art approaches that test all parameters, especially in “sloppy” systems. In particular, the computational acceleration is quantified by the ratio between the total number of parameters and the number of sensitive parameters. PMID:26161544

  10. Development of a standardized battery of performance tests for the assessment of noise stress effects

    NASA Technical Reports Server (NTRS)

    Theologus, G. C.; Wheaton, G. R.; Mirabella, A.; Brahlek, R. E.

    1973-01-01

    A set of 36 relatively independent categories of human performance were identified. These categories encompass human performance in the cognitive, perceptual, and psychomotor areas, and include diagnostic measures and sensitive performance metrics. Then a prototype standardized test battery was constructed, and research was conducted to obtain information on the sensitivity of the tests to stress, the sensitivity of selected categories of performance degradation, the time course of stress effects on each of the selected tests, and the learning curves associated with each test. A research project utilizing a three factor partially repeated analysis of covariance design was conducted in which 60 male subjects were exposed to variations in noise level and quality during performance testing. Effects of randomly intermittent noise on performance of the reaction time tests were observed, but most of the other performance tests showed consistent stability. The results of 14 analyses of covariance of the data taken from the performance of the 60 subjects on the prototype standardized test battery provided information which will enable the final development and test of a standardized test battery and the associated development of differential sensitivity metrics and diagnostic classificatory system.

  11. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol’, to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
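    The sketch below (Python) shows a generic variance-based first-order Sobol' estimator of the kind used above (a Saltelli-style pick-freeze scheme); the toy model and its inputs are illustrative stand-ins, not VarroaPop itself.

        import numpy as np

        def model(x):
            """Toy colony-response model; columns might stand for queen strength,
            forager life span and pesticide toxicity (illustrative only)."""
            return 4.0 * x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 2] + 0.3 * x[:, 0] * x[:, 2]

        rng = np.random.default_rng(0)
        n, d = 50_000, 3
        A = rng.uniform(0.0, 1.0, (n, d))
        B = rng.uniform(0.0, 1.0, (n, d))
        yA, yB = model(A), model(B)
        var_y = np.var(np.concatenate([yA, yB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                          # "freeze" every input except x_i
            yABi = model(ABi)
            s_first = np.mean(yB * (yABi - yA)) / var_y  # Saltelli-style first-order estimator
            print(f"S{i+1} ~= {s_first:.3f}")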

  12. BAYESIAN ANALYSIS TO EVALUATE TESTS FOR THE DETECTION OF MYCOBACTERIUM BOVIS INFECTION IN FREE-RANGING WILD BISON (BISON BISON ATHABASCAE) IN THE ABSENCE OF A GOLD STANDARD.

    PubMed

    Chapinal, Núria; Schumaker, Brant A; Joly, Damien O; Elkin, Brett T; Stephen, Craig

    2015-07-01

    We estimated the sensitivity and specificity of the caudal-fold skin test (CFT), the fluorescent polarization assay (FPA), and the rapid lateral-flow test (RT) for the detection of Mycobacterium bovis in free-ranging wild wood bison (Bison bison athabascae), in the absence of a gold standard, by using Bayesian analysis, and then used those estimates to forecast the performance of a pairwise combination of tests in parallel. In 1998-99, 212 wood bison from Wood Buffalo National Park (Canada) were tested for M. bovis infection using CFT and two serologic tests (FPA and RT). The sensitivity and specificity of each test were estimated using a three-test, one-population, Bayesian model allowing for conditional dependence between FPA and RT. The sensitivity and specificity of the combination of CFT and each serologic test in parallel were calculated assuming conditional independence. The test performance estimates were influenced by the prior values chosen. However, the rank of tests and combinations of tests based on those estimates remained constant. The CFT was the most sensitive test and the FPA was the least sensitive, whereas RT was the most specific test and CFT was the least specific. In conclusion, given the fact that gold standards for the detection of M. bovis are imperfect and difficult to obtain in the field, Bayesian analysis holds promise as a tool to rank tests and combinations of tests based on their performance. Combining a skin test with an animal-side serologic test, such as RT, increases sensitivity in the detection of M. bovis and is a good approach to enhance disease eradication or control in wild bison.
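    The parallel-combination step described above has a simple closed form under conditional independence, shown in the sketch below (Python); the input sensitivities and specificities are illustrative values, not the study's Bayesian posterior estimates.

        # Parallel combination ("positive if either test is positive") under conditional independence:
        # sensitivity rises while specificity falls.
        def parallel_combination(se1, sp1, se2, sp2):
            se_parallel = 1.0 - (1.0 - se1) * (1.0 - se2)   # misses only if both tests miss
            sp_parallel = sp1 * sp2                          # false positive if either test is
            return se_parallel, sp_parallel

        # Illustrative values for a skin test plus an animal-side serologic test.
        se, sp = parallel_combination(se1=0.80, sp1=0.90, se2=0.70, sp2=0.98)
        print(f"parallel sensitivity = {se:.3f}, parallel specificity = {sp:.3f}")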

  13. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
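    As a rough illustration of the screening step above, the sketch below (Python) computes elementary-effects statistics (a simplified radial variant of the Morris method) for a toy model; the model and parameter ranges are illustrative assumptions, not the cocoa land-use LCA system.

        import numpy as np

        def lca_model(x):
            """Placeholder impact model with four parameters on [0, 1]."""
            return 3.0 * x[0] + 0.1 * x[1] + 2.0 * x[2] ** 2 + 0.05 * x[3]

        def elementary_effects(model, n_params, n_trajectories=200, delta=0.1, seed=1):
            rng = np.random.default_rng(seed)
            effects = [[] for _ in range(n_params)]
            for _ in range(n_trajectories):
                x = rng.uniform(0.0, 1.0 - delta, n_params)
                base = model(x)
                for i in range(n_params):
                    x_step = x.copy()
                    x_step[i] += delta
                    effects[i].append((model(x_step) - base) / delta)
            # mu* (mean absolute elementary effect) is the usual screening statistic
            return [np.mean(np.abs(e)) for e in effects]

        mu_star = elementary_effects(lca_model, n_params=4)
        for i, m in enumerate(mu_star):
            print(f"parameter {i}: mu* = {m:.3f}")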

  14. Performance of the ASAS classification criteria for axial and peripheral spondyloarthritis: a systematic literature review and meta-analysis.

    PubMed

    Sepriano, Alexandre; Rubio, Roxana; Ramiro, Sofia; Landewé, Robert; van der Heijde, Désirée

    2017-05-01

    To summarise the evidence on the performance of the Assessment of SpondyloArthritis international Society (ASAS) classification criteria for axial spondyloarthritis (axSpA) (also imaging and clinical arm separately), peripheral (p)SpA and the entire set, when tested against the rheumatologist's diagnosis ('reference standard'). A systematic literature review was performed to identify eligible studies. Raw data on SpA diagnosis and classification were extracted or, if necessary, obtained from the authors of the selected publications. A meta-analysis was performed to obtain pooled estimates for sensitivity, specificity, positive and negative likelihood ratios, by fitting random effects models. Nine papers fulfilled the inclusion criteria (N=5739 patients). The entire set of the ASAS SpA criteria yielded a high pooled sensitivity (73%) and specificity (88%). Similarly, good results were found for the axSpA criteria (sensitivity: 82%; specificity: 88%). Splitting the axSpA criteria in 'imaging arm only' and 'clinical arm only' resulted in much lower sensitivity (30% and 23% respectively), but very high specificity was retained (97% and 94% respectively). The pSpA criteria were less often tested than the axSpA criteria and showed a similarly high pooled specificity (87%) but lower sensitivity (63%). Accumulated evidence from studies with more than 5500 patients confirms the good performance of the various ASAS SpA criteria as tested against the rheumatologist's diagnosis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  15. Material and morphology parameter sensitivity analysis in particulate composite materials

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyu; Oskay, Caglar

    2017-12-01

    This manuscript presents a novel parameter sensitivity analysis framework for damage and failure modeling of particulate composite materials subjected to dynamic loading. The proposed framework employs global sensitivity analysis to study the variance in the failure response as a function of model parameters. In view of the computational complexity of performing thousands of detailed microstructural simulations to characterize sensitivities, Gaussian process (GP) surrogate modeling is incorporated into the framework. In order to capture the discontinuity in response surfaces, the GP models are integrated with a support vector machine classification algorithm that identifies the discontinuities within response surfaces. The proposed framework is employed to quantify variability and sensitivities in the failure response of polymer bonded particulate energetic materials under dynamic loads to material properties and morphological parameters that define the material microstructure. Particular emphasis is placed on the identification of sensitivity to interfaces between the polymer binder and the energetic particles. The proposed framework has been demonstrated to identify the most consequential material and morphological parameters under vibrational and impact loads.
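    The sketch below (Python, assuming scikit-learn is available) illustrates the general surrogate idea described above: a classifier locates the discontinuity and a separate Gaussian-process regressor is fitted on each side. The one-dimensional response and the way labels are assigned are illustrative assumptions, not the particulate-composite simulations.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, (200, 1))
        y = np.where(X[:, 0] < 0.5, 2.0 * X[:, 0], 5.0 + X[:, 0] ** 2)   # jump at x = 0.5

        labels = (X[:, 0] >= 0.5).astype(int)        # in practice labels would come from the simulations
        clf = SVC(kernel="rbf").fit(X, labels)       # SVM classifier marks the discontinuity

        # One GP surrogate per smooth region of the response surface.
        gps = {k: GaussianProcessRegressor().fit(X[labels == k], y[labels == k]) for k in (0, 1)}

        X_test = np.array([[0.2], [0.49], [0.51], [0.9]])
        side = clf.predict(X_test)
        y_pred = np.array([gps[s].predict(x.reshape(1, -1))[0] for s, x in zip(side, X_test)])
        print(np.round(y_pred, 3))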

  16. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of EFT-1 driving factors that the tool found.

  17. Design sensitivity analysis of boundary element substructures

    NASA Technical Reports Server (NTRS)

    Kane, James H.; Saigal, Sunil; Gallagher, Richard H.

    1989-01-01

    The ability to reduce or condense a three-dimensional model exactly, and then iterate on this reduced-size model representing the parts of the design that are allowed to change in an optimization loop, is discussed. The discussion presents the results obtained from an ongoing research effort to exploit the concept of substructuring within the structural shape optimization context using a Boundary Element Analysis (BEA) formulation. The first part contains a formulation for the exact condensation of portions of the overall boundary element model designated as substructures. The use of reduced boundary element models in shape optimization requires that structural sensitivity analysis can be performed. A reduced sensitivity analysis formulation is then presented that allows for the calculation of structural response sensitivities of both the substructured (reduced) and unsubstructured parts of the model. It is shown that this approach produces significant computational economy in the design sensitivity analysis and reanalysis process by facilitating the block triangular factorization and forward reduction and backward substitution of smaller matrices. The implementation of this formulation is discussed, and timings and accuracies of representative test cases are presented.

  18. Bayesian Sensitivity Analysis of Statistical Models with Missing Data

    PubMed Central

    ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG

    2013-01-01

    Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718

  19. Sensitivity analysis on the effect of key parameters on the performance of parabolic trough solar collectors

    NASA Astrophysics Data System (ADS)

    Muhlen, Luis S. W.; Najafi, Behzad; Rinaldi, Fabio; Marchesi, Renzo

    2014-04-01

    Solar troughs are amongst the most commonly used technologies for collecting solar thermal energy and any attempt to increase the performance of these systems is welcomed. In the present study a parabolic solar trough is simulated using a one dimensional finite element model in which the energy balances for the fluid, the absorber and the envelope in each element are performed. The developed model is then validated using the available experimental data. A sensitivity analysis is performed in the next step in order to study the effect of changing the type of the working fluid and the corresponding Reynolds number on the overall performance of the system. The potential improvement due to the addition of a shield on the upper half of the annulus and enhancing the convection coefficient of the heat transfer fluid is also studied.

  20. Maternal sensitivity: a concept analysis.

    PubMed

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  1. Instrument performance of a radon measuring system with the alpha-track detection technique.

    PubMed

    Tokonami, S; Zhuo, W; Ryuo, H; Yonehara, H; Yamada, Y; Shimo, M

    2003-01-01

    An instrument performance test has been carried out for a radon measuring system made in Hungary. The system measures radon using the alpha-track detection technique. It consists of three parts: the passive detector, the etching unit and the evaluation unit. A CR-39 detector is used as the radiation detector. Alpha-track reading and data analysis are carried out after chemical etching. The following subjects were examined in the present study: (1) radon sensitivity, (2) performance of etching and evaluation processes and (3) thoron sensitivity. The radon sensitivity of 6.9 × 10⁻⁴ mm⁻² (Bq m⁻³ d)⁻¹ was acceptable for practical application. The thoron sensitivity was estimated to be as low as 3.3 × 10⁻⁵ mm⁻² (Bq m⁻³ d)⁻¹ from the experimental study.

  2. Parameter Estimation and Sensitivity Analysis of an Urban Surface Energy Balance Parameterization at a Tropical Suburban Site

    NASA Astrophysics Data System (ADS)

    Harshan, S.; Roth, M.; Velasco, E.

    2014-12-01

    Forecasting of urban weather and climate is of great importance as our cities become more populated, and considering the combined effects of global warming and local land use changes, which make urban inhabitants more vulnerable to e.g. heat waves and flash floods. In meso/global scale models, urban parameterization schemes are used to represent the urban effects. However, these schemes require a large set of input parameters related to urban morphological and thermal properties. Obtaining all these parameters through direct measurements is usually not feasible. A number of studies have reported on parameter estimation and sensitivity analysis to adjust and determine the most influential parameters for land surface schemes in non-urban areas. Similar work for urban areas is scarce; in particular, studies on urban parameterization schemes in tropical cities have so far not been reported. In order to address the above issues, the town energy balance (TEB) urban parameterization scheme (part of the SURFEX land surface modeling system) was subjected to a sensitivity and optimization/parameter estimation experiment at a suburban site in tropical Singapore. The sensitivity analysis was carried out as a screening test to identify the most sensitive or influential parameters. Thereafter, an optimization/parameter estimation experiment was performed to calibrate the input parameters. The sensitivity experiment was based on the improved Sobol' global variance decomposition method. The analysis showed that parameters related to roads, roofs and soil moisture have a significant influence on the performance of the model. The optimization/parameter estimation experiment was performed using the AMALGAM (a multi-algorithm genetically adaptive multi-objective method) evolutionary algorithm. The experiment showed a remarkable improvement compared to the simulations using the default parameter set. The calibrated parameters from this optimization experiment can be used for further model validation studies to identify inherent deficiencies in model physics.

  3. Benchmark On Sensitivity Calculation (Phase III)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ivanova, Tatiana; Laville, Cedric; Dyrda, James

    2012-01-01

    The sensitivities of the keff eigenvalue to neutron cross sections have become commonly used in similarity studies and as part of the validation algorithm for criticality safety assessments. To test calculations of the sensitivity coefficients, a benchmark study (Phase III) has been established by the OECD-NEA/WPNCS/EG UACSA (Expert Group on Uncertainty Analysis for Criticality Safety Assessment). This paper presents some sensitivity results generated by the benchmark participants using various computational tools based upon different computational methods: SCALE/TSUNAMI-3D and -1D, MONK, APOLLO2-MORET 5, DRAGON-SUSD3D and MMKKENO. The study demonstrates the performance of the tools. It also illustrates how model simplifications impact the sensitivity results and demonstrates the importance of 'implicit' (self-shielding) sensitivities. This work has been a useful step towards verification of the existing and developed sensitivity analysis methods.

  4. Gender Differences in Performance of Script Analysis by Older Adults

    ERIC Educational Resources Information Center

    Helmes, E.; Bush, J. D.; Pike, D. L.; Drake, D. G.

    2006-01-01

    Script analysis as a test of executive functions is presumed sensitive to cognitive changes seen with increasing age. Two studies evaluated if gender differences exist in performance on scripts for familiar and unfamiliar tasks in groups of cognitively intact older adults. In Study 1, 26 older adults completed male and female stereotypical…

  5. Longitudinal study of factors affecting taste sense decline in old-old individuals.

    PubMed

    Ogawa, T; Uota, M; Ikebe, K; Arai, Y; Kamide, K; Gondo, Y; Masui, Y; Ishizaki, T; Inomata, C; Takeshita, H; Mihara, Y; Hatta, K; Maeda, Y

    2017-01-01

    The sense of taste plays a pivotal role in personal assessment of the nutritional value, safety and quality of foods. Although it is commonly recognised that taste sensitivity decreases with age, alterations in that sensitivity over time in an old-old population have not been previously reported. Furthermore, no known studies have utilised comprehensive variables regarding taste changes and related factors for assessments. Here, we report novel findings from a 3-year longitudinal study aimed at elucidating taste sensitivity decline and its related factors in old-old individuals. We utilised 621 subjects aged 79-81 years who participated in the Septuagenarians, Octogenarians, Nonagenarians Investigation with Centenarians Study for baseline assessments performed in 2011 and 2012, and then conducted follow-up assessments 3 years later in 328 of those. Assessment of general health, an oral examination and determination of taste sensitivity were performed for each. We also evaluated cognitive function using Montreal Cognitive Assessment findings, then excluded from analysis those with a score lower than 20 in order to secure the validity and reliability of the subjects' answers. Contributing variables were selected using univariate analysis, then analysed with multivariate logistic regression analysis. We found that males showed significantly greater declines in taste sensitivity for sweet and sour tastes than females. Additionally, subjects with lower cognitive scores showed a significantly greater decrease in sensitivity to the salty taste in multivariate analysis. In conclusion, our longitudinal study revealed that gender and cognitive status are major factors affecting taste sensitivity in geriatric individuals. © 2016 John Wiley & Sons Ltd.

  6. Sensitivity analysis of TRX-2 lattice parameters with emphasis on epithermal /sup 238/U capture. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; deSaussure, G.; Weisbin, C.R.

    1977-03-01

    The main purpose of the study is the determination of the sensitivity of TRX-2 thermal lattice performance parameters to nuclear cross section data, particularly the epithermal resonance capture cross section of /sup 238/U. An energy-dependent sensitivity profile was generated for each of the performance parameters, to the most important cross sections of the various isotopes in the lattice. Uncertainties in the calculated values of the performance parameters due to estimated uncertainties in the basic nuclear data, deduced in this study, were shown to be small compared to the uncertainties in the measured values of the performance parameters and compared to differences among calculations based upon the same data but with different methodologies.

  7. Sensitivity Analysis of Digital I&C Modules in Protection and Safety Systems

    NASA Astrophysics Data System (ADS)

    Khalil Ur, Rahman; Zubair, M.; Heo, G.

    2013-12-01

    This research was performed to examine the sensitivity of digital Instrumentation and Control (I&C) components and modules used in the regulating and protection system architectures of the nuclear industry. Fault Tree Analysis (FTA) was performed for four configurations of the RPS channel architecture. The channel unavailability was calculated using AIMS-PSA and comes out to 4.517E-03, 2.551E-03, 2.246E-03 and 2.761E-04 for architecture configurations I, II, III and IV, respectively. It is observed that unavailability decreases by 43.5% and 50.4% by inserting partial redundancy, whereas a maximum reduction of 93.9% in unavailability occurs when double redundancy is inserted into the architecture. Coincidence module output failure and bi-stable processor (BP) output failures are identified as sensitive failures by Risk Reduction Worth (RRW) and Fussell-Vesely (FV) importance. RRW highlights that the risk from coincidence processor output failure can be reduced by a factor of 48.83, and FV indicates that the BP output is sensitive, with an importance of 0.9796 (on a scale of 1).
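    The two importance measures quoted above have simple definitions in terms of the top-event unavailability Q. The sketch below (Python) evaluates them for a toy 2-out-of-3 channel with a shared output module; the fault-tree logic and probabilities are illustrative assumptions, not the AIMS-PSA model.

        # Risk Reduction Worth and Fussell-Vesely importance on a placeholder fault tree.
        def rrw(q_top, basic_events, index):
            """RRW: Q_base / Q with component `index` made perfectly reliable."""
            perfect = list(basic_events)
            perfect[index] = 0.0
            return q_top(basic_events) / q_top(perfect)

        def fussell_vesely(q_top, basic_events, index):
            """FV: fraction of top-event risk that involves component `index`."""
            perfect = list(basic_events)
            perfect[index] = 0.0
            return (q_top(basic_events) - q_top(perfect)) / q_top(basic_events)

        # Toy 2-out-of-3 voting channel with a shared output module (rare-event approximation).
        def q_top(q):
            q1, q2, q3, q_out = q
            return q1 * q2 + q1 * q3 + q2 * q3 + q_out

        q = [1.0e-3, 1.0e-3, 1.0e-3, 2.0e-4]
        for i, name in enumerate(["BP-A", "BP-B", "BP-C", "coincidence output"]):
            print(f"{name:20s} RRW = {rrw(q_top, q, i):6.2f}  FV = {fussell_vesely(q_top, q, i):.3f}")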

  8. Sensitivity analysis to assess the influence of the inertial properties of railway vehicle bodies on the vehicle's dynamic behaviour

    NASA Astrophysics Data System (ADS)

    Suarez, Berta; Felez, Jesus; Maroto, Joaquin; Rodriguez, Pablo

    2013-02-01

    A sensitivity analysis has been performed to assess the influence of the inertial properties of railway vehicles on their dynamic behaviour. To do this, 216 dynamic simulations were performed modifying, one at a time, the masses, moments of inertia and heights of the centre of gravity of the carbody, the bogie and the wheelset. Three values were assigned to each parameter, corresponding to the percentiles 10, 50 and 90 of a data set stored in a database of railway vehicles. After processing the results of these simulations, the analysed parameters were sorted by increasing influence. It was also found which of these parameters could be estimated with a lesser degree of accuracy for future simulations without appreciably affecting the simulation results. In general terms, it was concluded that the most sensitive inertial properties are the mass and the vertical moment of inertia, and the least sensitive ones the longitudinal and lateral moments of inertia.

  9. Performance analysis of higher mode spoof surface plasmon polariton for terahertz sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Haizi; Tu, Wanli; Zhong, Shuncong, E-mail: zhongshuncong@hotmail.com

    2015-04-07

    We investigated spoof surface plasmon polaritons (SSPPs) on a 1D grooved metal surface for terahertz sensing of the refractive index of the filling analyte through a prism-coupling attenuated total reflection setup. From the dispersion relation analysis and the finite element method-based simulation, we revealed that the dispersion curve of the SSPP is suppressed as the filling refractive index increases, which causes the coupling resonance frequency to redshift in the reflection spectrum. The simulated results for various refractive indexes demonstrated that the incident angle of the terahertz radiation has a great effect on the sensing performance. A smaller incident angle results in more sensitive sensing with a narrower detection range; likewise, the higher-order-mode SSPP-based sensing has a higher sensitivity with a narrower detection range. The maximum sensitivity is 2.57 THz/RIU for the second-order mode sensing at a 45° internal incident angle. The proposed SSPP-based method has great potential for highly sensitive terahertz sensing.

  10. Performance of the 2015 American College of Rheumatology/European League Against Rheumatism gout classification criteria in Thai patients.

    PubMed

    Louthrenoo, Worawit; Jatuworapruk, Kanon; Lhakum, Panomkorn; Pattamapaspong, Nuttaya

    2017-05-01

    To evaluate the sensitivity and specificity of the 2015 American College of Rheumatology/European League Against Rheumatism (ACR/EULAR) gout classification criteria in Thai patients presenting with acute arthritis in a real-life setting. Data were analyzed on consecutive patients presenting with arthritis of less than 2 weeks duration. Sensitivity and specificity were calculated by using the presence of monosodium urate (MSU) crystals in the synovial fluid or tissue aspirate as gold standard for gout diagnosis. Subgroup analysis was performed in patients with early disease (≤2 years), established disease (>2 years), and those without tophus. Additional analysis also was performed in non-tophaceous gout patients, and patients with acute calcium pyrophosphate dihydrate crystal arthritis were used as controls. One hundred and nine gout and 74 non-gout patients participated in this study. Full ACR/EULAR classification criteria had sensitivity and specificity of 90.2 and 90.0%, respectively; and 90.2 and 85.0%, respectively, when synovial fluid microscopy was excluded. Clinical-only criteria yielded sensitivity and specificity of 79.8 and 87.8%, respectively. The criteria performed well among patients with early and non-tophaceous disease, but had lower specificity in patients with established disease. The variation of serum uric acid level was a major limitation of the classification criteria. The ACR/EULAR classification criteria had high sensitivity and specificity in Thai patients presenting with acute arthritis, even when clinical criteria alone were used.

  11. Sensitivity-Based Guided Model Calibration

    NASA Astrophysics Data System (ADS)

    Semnani, M.; Asadzadeh, M.

    2017-12-01

    A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to the global optimization in order to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables for perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS for different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds nearly the same solutions as the original DDS, but in significantly fewer solution evaluations.
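    The sketch below (Python) illustrates the general idea of biasing DDS-style perturbation toward sensitive decision variables; the objective function, the sensitivity scores, and the weighting scheme are illustrative assumptions rather than the algorithm used in the study.

        import numpy as np

        def objective(x):
            return np.sum((x - 0.3) ** 2)                 # toy calibration error to minimize

        def sensitivity_weighted_dds(objective, lower, upper, sensitivity, n_eval=500, r=0.2, seed=0):
            rng = np.random.default_rng(seed)
            d = len(lower)
            weights = np.asarray(sensitivity, float) / np.sum(sensitivity)
            x_best = rng.uniform(lower, upper)
            f_best = objective(x_best)
            for i in range(1, n_eval):
                p_select = max(1.0 - np.log(i) / np.log(n_eval), 1.0 / d)  # DDS-style schedule
                mask = rng.random(d) < p_select * d * weights              # sensitivity-biased choice
                if not mask.any():
                    mask[rng.choice(d, p=weights)] = True
                x_new = x_best.copy()
                step = r * (upper - lower) * rng.standard_normal(d)
                x_new[mask] = np.clip(x_best[mask] + step[mask], lower[mask], upper[mask])
                f_new = objective(x_new)
                if f_new <= f_best:
                    x_best, f_best = x_new, f_new
            return x_best, f_best

        lower, upper = np.zeros(5), np.ones(5)
        sens_scores = [0.40, 0.25, 0.20, 0.10, 0.05]      # assumed pre-computed sensitivities
        x_opt, f_opt = sensitivity_weighted_dds(objective, lower, upper, sens_scores)
        print(np.round(x_opt, 3), round(f_opt, 5))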

  12. Diagnostic Performance of DNA Hypermethylation Markers in Peripheral Blood for the Detection of Colorectal Cancer: A Meta-Analysis and Systematic Review

    PubMed Central

    Li, Bingsheng; Gan, Aihua; Chen, Xiaolong; Wang, Xinying; He, Weifeng; Zhang, Xiaohui; Huang, Renxiang; Zhou, Shuzhu; Song, Xiaoxiao; Xu, Angao

    2016-01-01

    DNA hypermethylation in blood is becoming an attractive candidate marker for colorectal cancer (CRC) detection. To assess the diagnostic accuracy of blood hypermethylation markers for CRC in different clinical settings, we conducted a meta-analysis of published reports. Of 485 publications obtained in the initial literature search, 39 studies were included in the meta-analysis. Hypermethylation markers in peripheral blood showed a high degree of accuracy for the detection of CRC. The summary sensitivity was 0.62 [95% confidence interval (CI), 0.56–0.67] and specificity was 0.91 (95% CI, 0.89–0.93). Subgroup analysis showed significantly greater sensitivity for the methylated Septin 9 gene (SEPT9) subgroup (0.75; 95% CI, 0.67–0.81) than for the non-methylated SEPT9 subgroup (0.58; 95% CI, 0.52–0.64). Sensitivity and specificity were not affected significantly by target gene number, CRC staging, study region, or methylation analysis method. These findings show that hypermethylation markers in blood are highly sensitive and specific for CRC detection, with methylated SEPT9 being particularly robust. The diagnostic performance of hypermethylation markers, which have varied across different studies, can be improved by marker optimization. Future research should examine variation in diagnostic accuracy according to non-neoplastic factors. PMID:27158984

  13. Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.

    PubMed

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses and have a significant influence on the reliability and light weight of structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is first put forward, and the modeling choices, including material constant selection, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity is established, which accounts for strength failure. Finally, the effect of load, geometric, and process parameters on the ultimate bearing capacity of the joints is analyzed with a global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braid angle α.

  14. Parametric sensitivity analysis of leachate transport simulations at landfills.

    PubMed

    Bou-Zeid, E; El-Fadel, M

    2004-01-01

    This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate ground water flow and contaminant transport away from the site. A comprehensive sensitivity analysis to leachate transport control parameters was also conducted. Sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.

  15. Efficient sensitivity analysis method for chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao

    2016-05-01

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term as a differential equation before applying the sensitivity analysis method. An additional constraint-based equation, which forms the augmented equations of motion, is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained by solving the augmented differential equations. Applying the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient that depends on the final state of the Lagrange multipliers. Using LU factorization to compute the Lagrange multipliers improves both convergence behavior and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches, and some short impulsive sensitivity coefficients are observed when the direct differentiation sensitivity analysis method is used.
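
    The core idea of recasting the time average as an extra differential equation can be shown on a small example. The sketch below augments the Lorenz system with the running integral of z, appends the tangent (direct differentiation) equations with respect to rho, and reads the sensitivity of the time-averaged z from the final state. The Lorenz system, the short averaging window, and all parameter values are assumptions chosen for illustration; over long horizons on chaotic systems the plain tangent approach degrades, which is what motivates the shadowing formulations in the paper.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0
    T = 10.0  # short averaging window; the plain tangent method degrades for long chaotic horizons

    def augmented(t, s):
        # state: x, y, z, J = integral of z; tangent: dx/drho, dy/drho, dz/drho, dJ/drho
        x, y, z, J, vx, vy, vz, vJ = s
        dx, dy, dz = sigma * (y - x), x * (rho - z) - y, x * y - beta * z
        dJ = z                                        # recast the time average as an ODE: J(T)/T = <z>
        dvx = sigma * (vy - vx)                       # tangent (direct differentiation) equations
        dvy = vx * (rho - z) + x * (1.0 - vz) - vy    # d/drho of x*(rho - z) - y
        dvz = vx * y + x * vy - beta * vz
        dvJ = vz
        return [dx, dy, dz, dJ, dvx, dvy, dvz, dvJ]

    s0 = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    sol = solve_ivp(augmented, (0.0, T), s0, rtol=1e-9, atol=1e-9)
    J_T, vJ_T = sol.y[3, -1], sol.y[7, -1]
    print("time-averaged z     :", J_T / T)
    print("d<z>/drho (tangent) :", vJ_T / T)
    ```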

  16. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. The authors considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
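
    As a small illustration of the rank-correlation step, the sketch below draws Monte Carlo samples of a few parameters of a hypothetical stand-in for the fate model, computes a predicted concentration, and scores each parameter by the absolute Spearman rank correlation with the output. The toy model and parameter ranges are invented for the example and are not those of PCHEPM.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n = 2000

    # Hypothetical stand-in for a PCB fate model: three influential parameters, one inert.
    params = {
        "partition_coeff": rng.lognormal(0.0, 0.5, n),
        "source_strength": rng.uniform(0.5, 2.0, n),
        "hydraulic_cond":  rng.lognormal(-1.0, 0.3, n),
        "inert_parameter": rng.uniform(0.0, 1.0, n),
    }
    conc = (params["source_strength"] / params["partition_coeff"]
            * np.exp(-params["hydraulic_cond"]) + 0.01 * rng.standard_normal(n))

    # Rank-correlation screening: score each parameter by |Spearman rho| with the output.
    scores = {name: abs(spearmanr(values, conc)[0]) for name, values in params.items()}
    for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{name:16s} |rho| = {score:.2f}")
    ```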

  17. Sensitive analysis of blonanserin, a novel antipsychotic agent, in human plasma by ultra-performance liquid chromatography-tandem mass spectrometry.

    PubMed

    Ogawa, Tadashi; Hattori, Hideki; Kaneko, Rina; Ito, Kenjiro; Iwai, Masayo; Mizutani, Yoko; Arinobu, Tetsuya; Ishii, Akira; Suzuki, Osamu; Seno, Hiroshi

    2010-01-01

    A rapid and sensitive method for the analysis of blonanserin in human plasma by ultra-performance liquid chromatography-tandem mass spectrometry is presented. After pretreatment of a plasma sample by solid-phase extraction, blonanserin was analyzed on a C18 column. The method gave satisfactory recovery rates, good reproducibility, and good linearity of the calibration curve over the range 0.01-10.0 ng/mL for quality control samples spiked with blonanserin. The detection limit was as low as 1 pg/mL. This method seems very useful in forensic and clinical toxicology and in pharmacokinetic studies.

  18. New infrastructure for studies of transmutation and fast systems concepts

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-09-01

    In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  19. A low power ADS for transmutation studies in fast systems

    NASA Astrophysics Data System (ADS)

    Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria

    2017-12-01

    In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.

  20. Cell death, perfusion and electrical parameters are critical in models of hepatic radiofrequency ablation

    PubMed Central

    Hall, Sheldon K.; Ooi, Ean H.; Payne, Stephen J.

    2015-01-01

    Purpose: A sensitivity analysis has been performed on a mathematical model of radiofrequency ablation (RFA) in the liver. The purpose of this is to identify the most important parameters in the model, defined as those that produce the largest changes in the prediction. This is important in understanding the role of uncertainty and when comparing the model predictions to experimental data. Materials and methods: The Morris method was chosen to perform the sensitivity analysis because it is ideal for models with many parameters or that take a significant length of time to obtain solutions. A comprehensive literature review was performed to obtain ranges over which the model parameters are expected to vary, crucial input information. Results: The most important parameters in predicting the ablation zone size in our model of RFA are those representing the blood perfusion, electrical conductivity and the cell death model. The size of the 50 °C isotherm is sensitive to the electrical properties of tissue while the heat source is active, and to the thermal parameters during cooling. Conclusions: The parameter ranges chosen for the sensitivity analysis are believed to represent all that is currently known about their values in combination. The Morris method is able to compute global parameter sensitivities taking into account the interaction of all parameters, something that has not been done before. Research is needed to better understand the uncertainties in the cell death, electrical conductivity and perfusion models, but the other parameters are only of second order, providing a significant simplification. PMID:26000972
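
    A minimal hand-rolled version of Morris (elementary effects) screening can be sketched as follows. The surrogate ablation response and the parameter bounds are invented for the example; the study applied the method to the full RFA model with literature-derived ranges.

    ```python
    import numpy as np

    def denorm(u, bounds):
        lo, hi = bounds[:, 0], bounds[:, 1]
        return lo + u * (hi - lo)

    def morris_screening(model, bounds, n_traj=20, seed=0):
        """Minimal Morris elementary-effects sketch: random one-at-a-time trajectories,
        returning mu* (mean absolute effect) and sigma (std of effects) per parameter."""
        rng = np.random.default_rng(seed)
        k = len(bounds)
        delta = 0.25                                    # step size in the unit hypercube
        effects = [[] for _ in range(k)]
        for _ in range(n_traj):
            x = rng.uniform(0.0, 1.0 - delta, k)        # base point in the unit cube
            y = model(denorm(x, bounds))
            for j in rng.permutation(k):                # perturb parameters one at a time
                x_new = x.copy()
                x_new[j] += delta
                y_new = model(denorm(x_new, bounds))
                effects[j].append((y_new - y) / delta)  # elementary effect of parameter j
                x, y = x_new, y_new
        mu_star = np.array([np.mean(np.abs(e)) for e in effects])
        sigma = np.array([np.std(e) for e in effects])
        return mu_star, sigma

    # Hypothetical surrogate for ablation-zone size vs. (perfusion, electrical conductivity,
    # cell-death rate, and one weak parameter); not the paper's model.
    def ablation_surrogate(p):
        perfusion, sigma_e, k_death, weak = p
        return 10.0 * sigma_e - 5.0 * perfusion + 2.0 * np.tanh(k_death) + 0.05 * weak

    bounds = np.array([[0.5, 2.0], [0.2, 0.6], [0.1, 5.0], [0.0, 1.0]])
    mu_star, sigma = morris_screening(ablation_surrogate, bounds)
    print("mu*  :", np.round(mu_star, 2))
    print("sigma:", np.round(sigma, 2))
    ```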

  1. Predicting Fluid Responsiveness by Passive Leg Raising: A Systematic Review and Meta-Analysis of 23 Clinical Trials.

    PubMed

    Cherpanath, Thomas G V; Hirsch, Alexander; Geerts, Bart F; Lagrand, Wim K; Leeflang, Mariska M; Schultz, Marcus J; Groeneveld, A B Johan

    2016-05-01

    Passive leg raising creates a reversible increase in venous return allowing for the prediction of fluid responsiveness. However, the amount of venous return may vary in various clinical settings potentially affecting the diagnostic performance of passive leg raising. Therefore we performed a systematic meta-analysis determining the diagnostic performance of passive leg raising in different clinical settings with exploration of patient characteristics, measurement techniques, and outcome variables. PubMed, EMBASE, the Cochrane Database of Systematic Reviews, and citation tracking of relevant articles. Clinical trials were selected when passive leg raising was performed in combination with a fluid challenge as gold standard to define fluid responders and non-responders. Trials were included if data were reported allowing the extraction of sensitivity, specificity, and area under the receiver operating characteristic curve. Twenty-three studies with a total of 1,013 patients and 1,034 fluid challenges were included. The analysis demonstrated a pooled sensitivity of 86% (95% CI, 79-92), pooled specificity of 92% (95% CI, 88-96), and a summary area under the receiver operating characteristic curve of 0.95 (95% CI, 0.92-0.98). Mode of ventilation, type of fluid used, passive leg raising starting position, and measurement technique did not affect the diagnostic performance of passive leg raising. The use of changes in pulse pressure on passive leg raising showed a lower diagnostic performance when compared with passive leg raising-induced changes in flow variables, such as cardiac output or its direct derivatives (sensitivity of 58% [95% CI, 44-70] and specificity of 83% [95% CI, 68-92] vs sensitivity of 85% [95% CI, 78-90] and specificity of 92% [95% CI, 87-94], respectively; p < 0.001). Passive leg raising retains a high diagnostic performance in various clinical settings and patient groups. The predictive value of a change in pulse pressure on passive leg raising is inferior to a passive leg raising-induced change in a flow variable.

  2. The performance of the SEPT9 gene methylation assay and a comparison with other CRC screening tests: A meta-analysis.

    PubMed

    Song, Lele; Jia, Jia; Peng, Xiumei; Xiao, Wenhua; Li, Yuemin

    2017-06-08

    The SEPT9 gene methylation assay is the first FDA-approved blood assay for colorectal cancer (CRC) screening. The fecal immunochemical test (FIT), FIT-DNA test and CEA assay are also in vitro diagnostic (IVD) tests used in CRC screening. This meta-analysis aims to review the SEPT9 assay performance and compare it with other IVD CRC screening tests. By searching the Ovid MEDLINE, EMBASE, CBMdisc and CJFD databases, 25 out of 180 studies were identified that report the SEPT9 assay performance. 2613 CRC cases and 6030 controls were included, and sensitivity and specificity were used to evaluate performance under the various calling algorithms. The 1/3 algorithm exhibited the best sensitivity, while the 2/3 and 1/1 algorithms exhibited the best balance between sensitivity and specificity. The performance of the blood SEPT9 assay is superior to that of the serum protein markers and the FIT test in the symptomatic population, while it appeared to be less potent than the FIT and FIT-DNA tests in the asymptomatic population. In conclusion, the 1/3 algorithm is recommended for CRC screening, and the 2/3 or 1/1 algorithms are suitable for early detection for diagnostic purposes. The SEPT9 assay exhibited better performance in the symptomatic population than in the asymptomatic population.
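
    The effect of the 1/3, 2/3, and 1/1 calling algorithms can be illustrated with a simple binomial calculation, assuming independent PCR replicates with a fixed per-replicate positive rate. The rates below are hypothetical, and real assay replicates need not satisfy the independence assumption.

    ```python
    from math import comb

    def call_rate(p_replicate, k_required, n_replicates):
        """Probability that at least k of n independent PCR replicates are positive."""
        return sum(comb(n_replicates, k) * p_replicate**k * (1 - p_replicate)**(n_replicates - k)
                   for k in range(k_required, n_replicates + 1))

    p_case, p_control = 0.55, 0.04   # hypothetical per-replicate positive rates (cancer vs. control)

    for name, k, n in [("1/3", 1, 3), ("2/3", 2, 3), ("1/1", 1, 1)]:
        sens = call_rate(p_case, k, n)          # assay-level sensitivity under this rule
        spec = 1.0 - call_rate(p_control, k, n) # assay-level specificity under this rule
        print(f"{name} algorithm: sensitivity ~ {sens:.2f}, specificity ~ {spec:.2f}")
    ```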

  3. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    USDA-ARS?s Scientific Manuscript database

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop+Pesticide model. Simulations are performed of hive population trajectories with and without pesticide exposure to determine the eff...

  4. Fish oil supplementation and insulin sensitivity: a systematic review and meta-analysis.

    PubMed

    Gao, Huanqing; Geng, Tingting; Huang, Tao; Zhao, Qinghua

    2017-07-03

    Fish oil supplementation has been shown to be associated with a lower risk of metabolic syndrome and to benefit a wide range of chronic diseases, such as cardiovascular disease, type 2 diabetes and several types of cancer. However, the evidence on the effect of fish oil supplementation on glucose metabolism and insulin sensitivity is still controversial. This meta-analysis summarized the existing evidence on the relationship between fish oil supplementation and insulin sensitivity and aimed to evaluate whether fish oil supplementation could improve insulin sensitivity. We searched the Cochrane Library, PubMed and Embase databases for relevant studies up to Dec 2016. Two researchers screened the literature independently using the selection and exclusion criteria. Studies were pooled using random effects models to estimate a pooled SMD and corresponding 95% CI. This meta-analysis was performed with Stata 13.1 software. A total of 17 studies with 672 participants were included in this meta-analysis after screening the 498 published articles found in the initial search. In the pooled analysis, fish oil supplementation had no effect on insulin sensitivity compared with placebo (SMD 0.17, 95% CI -0.15 to 0.48, p = 0.292). In subgroup analysis, fish oil supplementation benefited insulin sensitivity among people who were experiencing at least one symptom of metabolic disorders (SMD 0.53, 95% CI 0.17 to 0.88, p < 0.001). Similarly, there were no significant differences between subgroups by method of insulin sensitivity assessment, dose of omega-3 polyunsaturated fatty acids (n-3 PUFA) in the fish oil supplementation, or duration of the intervention. The sensitivity analysis indicated that the results were robust. Short-term fish oil supplementation is associated with increased insulin sensitivity among people with metabolic disorders.

  5. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
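
    A small, single-study illustration of why the within-study likelihood matters: for a small study with near-perfect sensitivity, a normal-approximation (Wald) interval on the logit scale differs noticeably from an exact binomial (Clopper-Pearson) interval. The counts are hypothetical, and the paper's comparisons involve full univariate and bivariate random-effects meta-analyses rather than a single study.

    ```python
    import numpy as np
    from scipy.stats import beta, norm

    tp, fn = 29, 1                     # hypothetical small study: observed sensitivity 29/30
    n, p_hat = tp + fn, tp / (tp + fn)

    # Normal approximation on the logit scale (Wald interval).
    logit = np.log(p_hat / (1 - p_hat))
    se = np.sqrt(1 / tp + 1 / fn)
    wald = 1 / (1 + np.exp(-(logit + np.array([-1, 1]) * norm.ppf(0.975) * se)))

    # Exact binomial (Clopper-Pearson) interval via beta quantiles.
    cp_lo = beta.ppf(0.025, tp, fn + 1)
    cp_hi = beta.ppf(0.975, tp + 1, fn)

    print(f"estimate {p_hat:.3f}")
    print(f"Wald (normal approx.) CI: {wald[0]:.3f}-{wald[1]:.3f}")
    print(f"Clopper-Pearson (exact) CI: {cp_lo:.3f}-{cp_hi:.3f}")
    ```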

  6. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the "global sensitivity" of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to "variogram analysis", that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
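
    The variogram idea underlying VARS can be sketched on a toy response surface: estimate gamma(h) = 0.5 E[(f(x + h e_i) - f(x))^2] along each parameter direction from paired samples. This is only the basic directional-variogram estimator, not the STAR-VARS sampling strategy or its integrated indices, and the two-parameter test function is an assumption made for the example.

    ```python
    import numpy as np

    def directional_variogram(model, n_dims, dim, h_values, n_pairs=200, seed=0):
        """Estimate gamma(h) = 0.5 * E[(f(x + h e_dim) - f(x))^2] along parameter `dim`,
        averaging over random base points in the unit hypercube."""
        rng = np.random.default_rng(seed)
        gammas = []
        for h in h_values:
            x = rng.uniform(0.0, 1.0 - h, size=(n_pairs, n_dims))
            x_shift = x.copy()
            x_shift[:, dim] += h
            d = np.array([model(a) - model(b) for a, b in zip(x_shift, x)])
            gammas.append(0.5 * np.mean(d**2))
        return np.array(gammas)

    # Hypothetical two-parameter response surface: smooth in x0, oscillatory in x1.
    model = lambda x: x[0]**2 + 0.3 * np.sin(6 * np.pi * x[1])

    h_values = [0.05, 0.1, 0.2, 0.4]
    for dim in (0, 1):
        g = directional_variogram(model, 2, dim, h_values)
        print(f"parameter x{dim}: gamma(h) =", np.round(g, 4))
    ```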

  7. Results from flight and simulator studies of a Mach 3 cruise longitudinal autopilot

    NASA Technical Reports Server (NTRS)

    Gilyard, G. B.; Smith, J. W.

    1978-01-01

    At Mach numbers of approximately 3.0 and altitudes greater than 21,300 meters, the original altitude and Mach hold modes of the YF-12 autopilot produced aircraft excursions that were erratic or divergent, or both. Flight data analysis and simulator studies showed that the sensitivity of the static pressure port to angle of attack had a detrimental effect on the performance of the altitude and Mach hold modes. Good altitude hold performance was obtained when high-passed pitch-rate feedback was added to compensate for the angle-of-attack sensitivity and the altitude error and integral altitude gains were reduced. Good Mach hold performance was obtained when the angle-of-attack sensitivity was removed; however, the ride qualities remained poor.

  8. High-performance liquid chromatography with fluorescence detection for the rapid analysis of pheophytins and pyropheophytins in virgin olive oil.

    PubMed

    Li, Xueqi; Woodman, Michael; Wang, Selina C

    2015-08-01

    Pheophytins and pyropheophytins are degradation products of chlorophyll pigments, and their ratios can be used as a sensitive indicator of stress during the manufacturing and storage of olive oil. They increase over time depending on the storage conditions and on whether the oil is exposed to heat treatment during the refining process. The traditional analysis method includes solvent- and time-consuming solid-phase extraction steps followed by analysis by high-performance liquid chromatography with ultraviolet detection. We developed an improved dilution/fluorescence method in which the multi-step sample preparation was replaced by a simple isopropanol dilution before the high-performance liquid chromatography injection. A quaternary solvent gradient method was used to include a fourth, strong solvent wash on a quaternary gradient pump, which avoided the need to premix any solvents and greatly reduced the oil residues left on the column from previous analyses. This new method not only reduces analysis cost and time but also shows reliability, repeatability, and improved sensitivity, which is especially important for low-level samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Improving multi-objective reservoir operation optimization with sensitivity-informed dimension reduction

    NASA Astrophysics Data System (ADS)

    Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.

    2015-08-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
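
    A minimal pick-freeze estimator of first-order Sobol indices, used here to screen out near-inert decision variables on a toy objective, is sketched below. The toy "operation rule" objective and the screening threshold are assumptions made for the example; the study applied Sobol's method to the full reservoir simulation.

    ```python
    import numpy as np

    def first_order_sobol(model, k, n=20000, seed=0):
        """Pick-freeze estimator of first-order Sobol indices for a model on [0,1]^k."""
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(n, k))
        B = rng.uniform(size=(n, k))
        yA, yB = model(A), model(B)
        var_y = np.var(np.concatenate([yA, yB]))
        S = np.empty(k)
        for i in range(k):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                       # column i taken from B, the rest from A
            yABi = model(ABi)
            S[i] = np.mean(yB * (yABi - yA)) / var_y  # common pick-freeze estimator of V_i / V
        return S

    # Toy "operation rule" objective: two dominant decision variables, six near-inert ones.
    def objective(X):
        return 5.0 * X[:, 0] + 3.0 * X[:, 1]**2 + 0.1 * X[:, 2:].sum(axis=1)

    S = first_order_sobol(objective, k=8)
    keep = np.where(S > 0.05)[0]                      # screen: keep variables above a threshold
    print("first-order indices:", np.round(S, 3))
    print("decision variables retained for optimization:", keep)
    ```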

  10. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.

  11. Using Publicly Reported Nursing-Sensitive Screening Indicators to Measure Hospital Performance: The Netherlands Experience in 2011.

    PubMed

    Stalpers, Dewi; van der Linden, Dimitri; Kaljouw, Marian J; Schuurmans, Marieke J

    2016-01-01

    Deliberate screening allows detection of health risks that are otherwise not noticeable and allows expedient intervention to minimize complications and optimize outcomes, especially during critical events like hospitalization. Little research has evaluated the usefulness of screening performance and outcome indicators as measures to differentiate nursing quality, although policymakers are using them to benchmark hospitals. The aims of this study were to examine hospital performance based on nursing-sensitive screening indicators and to assess associations with hospital characteristics and nursing-sensitive outcomes for patients. A secondary analysis of nursing-sensitive data from the Dutch Health Care Inspectorate was performed, covering the mandatory screening and outcome indicators related to delirium, malnutrition, pain and pressure ulcers. The sample consisted of all 93 hospitals in the Netherlands in 2011. High- and low-performing hospitals were determined based on the overall proportion of screened patients. Descriptive statistics and analysis of variance were used to examine screening performance in relation to hospital characteristics and nursing-sensitive outcomes. Across all hospitals, the average screening rates ranged from 59% (delirium) to 94% (pain). Organizational characteristics did not differ between high- and low-performing hospitals. The hospitals with the best overall screening performance had significantly better results regarding protein intake in malnourished patients (p < .01). For mortality, marginally significant effects did not remain after controlling for organizational structures. No associations were found with the prevalence of pressure ulcers or patient self-reported pain scores. Screening for patient risks is an important nursing task. Our findings suggest that nursing-sensitive screening indicators may be relevant measures for benchmarking nursing quality in hospitals. Time-trend studies are required to support our findings and to further investigate relations with nursing-sensitive outcomes.

  12. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.

  13. Designing Performance Measurement For Supply Chain's Actors And Regulator Using Scale Balanced Scorecard And Data Envelopment Analysis

    NASA Astrophysics Data System (ADS)

    Kusrini, Elisa; Subagyo; Aini Masruroh, Nur

    2016-01-01

    This research is a sequel to the authors' earlier work on designing integrated performance measurement for supply chain actors and the regulator. In the previous paper, the performance measurement was designed by combining the Balanced Scorecard - Supply Chain Operations Reference - Regulator Contribution model with Data Envelopment Analysis; this model is referred to as the B-S-Rc-DEA model. The combination has the disadvantage that all the performance variables have the same weight. This paper investigates whether weighting the performance variables produces a performance measurement that is more sensitive in detecting performance improvement. Therefore, this paper develops the B-S-Rc-DEA model by assigning weights to its performance variables; this model is referred to as the Scale B-S-Rc-DEA model. To illustrate the developed model, samples from small and medium enterprises in the leather craft industry supply chain in the province of Yogyakarta, Indonesia, are used in this research. It is found that the Scale B-S-Rc-DEA model is more sensitive in detecting performance improvement than the B-S-Rc-DEA model.

  14. The Deep Space Network: Noise temperature concepts, measurements, and performance

    NASA Technical Reports Server (NTRS)

    Stelzried, C. T.

    1982-01-01

    The use of higher operational frequencies is being investigated for improved performance of the Deep Space Network. Noise temperature and noise figure concepts are used to describe the noise performance of these receiving systems. The ultimate sensitivity of a linear receiving system is limited by the thermal noise of the source and the quantum noise of the receiver amplifier. The atmosphere, antenna and receiver amplifier of an Earth station receiving system are analyzed separately and as a system. Performance evaluation and error analysis techniques are investigated. System noise temperature and antenna gain parameters are combined to give an overall system figure of merit G/T. Radiometers are used to perform radio "star" antenna and system sensitivity calibrations. These are analyzed and the performance of several types compared to an idealized total power radiometer. The theory of radiative transfer is applicable to the analysis of transmission medium loss. A power series solution in terms of the transmission medium loss is given for the solution of the noise temperature contribution.
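
    As a small numerical illustration of the figure of merit mentioned above, the sketch below combines hypothetical atmosphere, antenna, and receiver noise-temperature contributions into a system noise temperature and computes G/T in dB/K; all values are invented for the example and are not Deep Space Network specifications.

    ```python
    import math

    # Hypothetical receiving-system budget (values for illustration only).
    T_atmosphere = 15.0   # K, noise contribution of the atmosphere referred to the antenna
    T_antenna    = 10.0   # K, antenna spillover/ohmic contribution
    T_receiver   = 12.0   # K, low-noise amplifier noise temperature
    G_dBi        = 61.5   # dBi, antenna gain at the operating frequency

    T_system = T_atmosphere + T_antenna + T_receiver    # total system noise temperature, K
    G_over_T = G_dBi - 10.0 * math.log10(T_system)      # figure of merit, dB/K
    print(f"T_sys = {T_system:.1f} K, G/T = {G_over_T:.1f} dB/K")
    ```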

  15. Heat-Energy Analysis for Solar Receivers

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1982-01-01

    Heat-energy analysis program (HEAP) solves general heat-transfer problems, with some specific features that are "custom made" for analyzing solar receivers. Can be utilized not only to predict receiver performance under varying solar flux, ambient temperature and local heat-transfer rates but also to detect locations of hotspots and metallurgical difficulties and to predict performance sensitivity of neighboring component parameters.

  16. Evaluation of the AnnAGNPS Model for Predicting Runoff and Nutrient Export in a Typical Small Watershed in the Hilly Region of Taihu Lake

    PubMed Central

    Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin

    2015-01-01

    The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, during both the calibration and validation processes. Additionally, the parameter sensitivity analysis showed that TN output was most sensitive to the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most influential. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poorer. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642

  17. Sensitive zone parameters and curvature radius evaluation for polymer optical fiber curvature sensors

    NASA Astrophysics Data System (ADS)

    Leal-Junior, Arnaldo G.; Frizera, Anselmo; José Pontes, Maria

    2018-03-01

    Polymer optical fibers (POFs) are suitable for sensing applications such as curvature, strain, temperature, and liquid level, among others. However, to enhance sensitivity, many polymer optical fiber curvature sensors based on intensity variation require a lateral section. The lateral section length, depth, and surface roughness have a great influence on the sensor sensitivity, hysteresis, and linearity. Moreover, the sensor curvature radius increases the stress on the fiber, which leads to variation in the sensor behavior. This paper presents an analysis relating the curvature radius and the lateral section length, depth, and surface roughness to the sensitivity, hysteresis, and linearity of a POF curvature sensor. Results show a strong correlation between the behavior of these design parameters and the performance of sensor applications based on intensity variation. Furthermore, there is a trade-off between the sensitive zone length, depth, surface roughness, and curvature radius and the desired sensor performance parameters, which are minimum hysteresis, maximum sensitivity, and maximum linearity. Optimization of these parameters yields a sensor with a sensitivity of 20.9 mV/°, a linearity of 0.9992, and hysteresis below 1%, which represents better performance than the sensor without the optimization.

  18. Deterministic Local Sensitivity Analysis of Augmented Systems - II: Applications to the QUENCH-04 Experiment Using the RELAP5/MOD3.2 Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ionescu-Bujor, Mihaela; Jin Xuezhou; Cacuci, Dan G.

    2005-09-15

    The adjoint sensitivity analysis procedure for augmented systems for application to the RELAP5/MOD3.2 code system is illustrated. Specifically, the adjoint sensitivity model corresponding to the heat structure models in RELAP5/MOD3.2 is derived and subsequently augmented to the two-fluid adjoint sensitivity model (ASM-REL/TF). The end product, called ASM-REL/TFH, comprises the complete adjoint sensitivity model for the coupled fluid dynamics/heat structure packages of the large-scale simulation code RELAP5/MOD3.2. The ASM-REL/TFH model is validated by computing sensitivities to the initial conditions for various time-dependent temperatures in the test bundle of the Quench-04 reactor safety experiment. This experiment simulates the reflooding with water of uncovered, degraded fuel rods, clad with material (Zircaloy-4) that has the same composition and size as that used in typical pressurized water reactors. The most important response for the Quench-04 experiment is the time evolution of the cladding temperature of heated fuel rods. The ASM-REL/TFH model is subsequently used to perform an illustrative sensitivity analysis of this and other time-dependent temperatures within the bundle. The results computed by using the augmented adjoint sensitivity system, ASM-REL/TFH, highlight the reliability, efficiency, and usefulness of the adjoint sensitivity analysis procedure for computing time-dependent sensitivities.
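
    The full ASM-REL/TFH adjoint model is far more involved, but the underlying adjoint principle for sensitivities to initial conditions can be sketched on a toy linear lumped-node heat model: integrate the adjoint system backward from the response weighting and read the sensitivities from the adjoint state at the initial time. All matrices and values below are hypothetical, and a finite-difference check is included.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy linear heat-structure model: dT/dt = A T, three lumped node temperatures.
    A = np.array([[-2.0,  1.0,  0.0],
                  [ 1.0, -2.0,  1.0],
                  [ 0.0,  1.0, -1.0]])
    T0 = np.array([600.0, 500.0, 400.0])
    tf = 2.0
    c = np.array([0.0, 0.0, 1.0])   # response R = T_3(tf), e.g. a cladding-like temperature

    # Adjoint system integrated backward in time: dlam/dt = -A^T lam, lam(tf) = c.
    # The sensitivities of R to the initial conditions are then dR/dT0 = lam(0).
    adj = solve_ivp(lambda t, lam: -A.T @ lam, (tf, 0.0), c, rtol=1e-10, atol=1e-12)
    dR_dT0 = adj.y[:, -1]

    # Brute-force check: perturb each initial condition and recompute the response.
    def response(T0):
        sol = solve_ivp(lambda t, T: A @ T, (0.0, tf), T0, rtol=1e-10, atol=1e-12)
        return c @ sol.y[:, -1]

    fd = np.array([(response(T0 + 1e-3 * e) - response(T0)) / 1e-3 for e in np.eye(3)])
    print("adjoint sensitivities  :", np.round(dR_dT0, 6))
    print("finite-difference check:", np.round(fd, 6))
    ```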

  19. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  20. Discrete analysis of spatial-sensitivity models

    NASA Technical Reports Server (NTRS)

    Nielsen, Kenneth R. K.; Wandell, Brian A.

    1988-01-01

    Procedures for reducing the computational burden of current models of spatial vision are described, the simplifications being consistent with the prediction of the complete model. A method for using pattern-sensitivity measurements to estimate the initial linear transformation is also proposed which is based on the assumption that detection performance is monotonic with the vector length of the sensor responses. It is shown how contrast-threshold data can be used to estimate the linear transformation needed to characterize threshold performance.
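
    The monotonicity assumption mentioned above implies a simple threshold prediction: if detection depends only on the vector length of the linearly transformed stimulus, the threshold contrast is the detection criterion divided by the response length of the unit-contrast pattern. The transformation matrix, test pattern, and criterion below are hypothetical and serve only to illustrate the calculation.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical initial linear transformation: rows are sensor weighting functions
    # applied to a discretized contrast pattern (e.g., a sampled grating).
    A = rng.standard_normal((8, 64)) * 0.1
    pattern = np.sin(2 * np.pi * 4 * np.linspace(0, 1, 64))   # unit-contrast test pattern
    criterion = 1.0                                           # response vector length at threshold

    # If detection depends only on ||A (c * pattern)|| = c * ||A pattern||,
    # the predicted threshold contrast is simply:
    c_threshold = criterion / np.linalg.norm(A @ pattern)
    print(f"predicted contrast threshold: {c_threshold:.3f}")
    ```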

  1. 3D Simulations of Void collapse in Energetic Materials

    NASA Astrophysics Data System (ADS)

    Rai, Nirmal Kumar; Udaykumar, H. S.

    2017-06-01

    Voids present in the microstructure of heterogeneous energetic materials affect their sensitivity towards ignition. It is established that the morphology of voids can play a significant role in sensitivity enhancement of energetic materials. Depending on the void shape, sensitivity can be either increased or decreased under given loading conditions. In the past, the effects of different void shapes, e.g. triangular, elliptical, and cylindrical, on the sensitivity of energetic materials have been analyzed. However, most of these studies were performed in 2D and are limited by the plane strain assumption. Axisymmetric studies have also been performed in the past to incorporate 3D effects; however, axisymmetric modeling is limited to certain geometries, e.g. spheres. This work analyzes the effects of various void shapes in three dimensions on the ignition behavior of HMX. Various void shapes are analyzed, including spheres and prolate and oblate spheroids at different orientations. Three-dimensional void collapse simulations are performed on a single void to quantify the effects of void morphology on initiation. A Cartesian grid based Eulerian solver, SCIMITAR3D, is used to perform the void collapse simulations. Various aspects of void morphology, i.e. size, thickness, elongation, and orientation, are considered to obtain a comprehensive analysis. Also, 2D plane strain calculations are compared with the three-dimensional analysis to evaluate the salient differences between 2D and 3D modeling.

  2. A parametric sensitivity study for single-stage-to-orbit hypersonic vehicles using trajectory optimization

    NASA Astrophysics Data System (ADS)

    Lovell, T. Alan; Schmidt, D. K.

    1994-03-01

    The class of hypersonic vehicle configurations with single stage-to-orbit (SSTO) capability reflects highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge; the vehicle's overall mission performance is a function of its subsystem efficiencies including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated, hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.

  3. A parametric sensitivity study for single-stage-to-orbit hypersonic vehicles using trajectory optimization

    NASA Technical Reports Server (NTRS)

    Lovell, T. Alan; Schmidt, D. K.

    1994-01-01

    The class of hypersonic vehicle configurations with single stage-to-orbit (SSTO) capability reflects highly integrated airframe and propulsion systems. These designs are also known to exhibit a large degree of interaction between the airframe and engine dynamics. Consequently, even simplified hypersonic models are characterized by tightly coupled nonlinear equations of motion. In addition, hypersonic SSTO vehicles present a major system design challenge; the vehicle's overall mission performance is a function of its subsystem efficiencies including structural, aerodynamic, propulsive, and operational. Further, all subsystem efficiencies are interrelated, hence, independent optimization of the subsystems is not likely to lead to an optimum design. Thus, it is desired to know the effect of various subsystem efficiencies on overall mission performance. For the purposes of this analysis, mission performance will be measured in terms of the payload weight inserted into orbit. In this report, a trajectory optimization problem is formulated for a generic hypersonic lifting body for a specified orbit-injection mission. A solution method is outlined, and results are detailed for the generic vehicle, referred to as the baseline model. After evaluating the performance of the baseline model, a sensitivity study is presented to determine the effect of various subsystem efficiencies on mission performance. This consists of performing a parametric analysis of the basic design parameters, generating a matrix of configurations, and determining the mission performance of each configuration. Also, the performance loss due to constraining the total head load experienced by the vehicle is evaluated. The key results from this analysis include the formulation of the sizing problem for this vehicle class using trajectory optimization, characteristics of the optimal trajectories, and the subsystem design sensitivities.

  4. HCIT Contrast Performance Sensitivity Studies: Simulation Versus Experiment

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Shaklan, Stuart; Krist, John; Cady, Eric J.; Kern, Brian; Balasubramanian, Kunjithapatham

    2013-01-01

    Using NASA's High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory, we have experimentally investigated the sensitivity of dark hole contrast in a Lyot coronagraph for the following factors: 1) Lateral and longitudinal translation of an occulting mask; 2) An opaque spot on the occulting mask; 3) Sizes of the controlled dark hole area. Also, we compared the measured results with simulations obtained using both MACOS (Modeling and Analysis for Controlled Optical Systems) and PROPER optical analysis programs with full three-dimensional near-field diffraction analysis to model HCIT's optical train and coronagraph.

  5. The diagnostic performance of shear-wave elastography for liver fibrosis in children and adolescents: A systematic review and diagnostic meta-analysis.

    PubMed

    Kim, Jeong Rye; Suh, Chong Hyun; Yoon, Hee Mang; Lee, Jin Seong; Cho, Young Ah; Jung, Ah Young

    2018-03-01

    To assess the diagnostic performance of shear-wave elastography for determining the severity of liver fibrosis in children and adolescents. An electronic literature search of PubMed and EMBASE was conducted. Bivariate modelling and hierarchical summary receiver-operating-characteristic modelling were performed to evaluate the diagnostic performance of shear-wave elastography. Meta-regression and subgroup analyses according to the modality of shear-wave imaging and the degree of liver fibrosis were also performed. Twelve eligible studies with 550 patients were included. Shear-wave elastography showed a summary sensitivity of 81 % (95 % CI: 71-88) and a specificity of 91 % (95 % CI: 83-96) for the prediction of significant liver fibrosis. The number of measurements of shear-wave elastography performed was a significant factor influencing study heterogeneity. Subgroup analysis revealed shear-wave elastography to have an excellent diagnostic performance according to each degree of liver fibrosis. Supersonic shear imaging (SSI) had a higher sensitivity (p<.01) and specificity (p<.01) than acoustic radiation force impulse imaging (ARFI). Shear-wave elastography is an excellent modality for the evaluation of the severity of liver fibrosis in children and adolescents. Compared with ARFI, SSI showed better diagnostic performance for prediction of significant liver fibrosis. • Shear-wave elastography is beneficial for determining liver fibrosis severity in children. • Shear-wave elastography showed summary sensitivity of 81 %, specificity of 91 %. • SSI showed better diagnostic performance than ARFI for significant liver fibrosis.

  6. Statistical sensitivity analysis of a simple nuclear waste repository model

    NASA Astrophysics Data System (ADS)

    Ronen, Y.; Lucius, J. L.; Blow, E. M.

    1980-06-01

    This work is a preliminary step in a comprehensive sensitivity analysis of the modeling of a nuclear waste repository. The purpose of the complete analysis is to determine which modeling parameters and physical data are most important in determining key design performance criteria and then to obtain the uncertainty in the design for safety considerations. The theory for a statistical screening design methodology is developed for later use in the overall program. The theory was applied to the test case of determining the relative importance of the sensitivity of the near-field temperature distribution in a single-level salt repository to modeling parameters. The exact values of the sensitivities to these physical and modeling parameters were then obtained using direct methods of recalculation. The sensitivity coefficients found to be important for the sample problem were the thermal loading, the distance between the spent fuel canisters, and the canister radius. Other important parameters were those related to salt properties at a point of interest in the repository.

  7. Systems Analysis Of Advanced Coal-Based Power Plants

    NASA Technical Reports Server (NTRS)

    Ferrall, Joseph F.; Jennings, Charles N.; Pappano, Alfred W.

    1988-01-01

    Report presents appraisal of integrated coal-gasification/fuel-cell power plants. Based on a study comparing fuel-cell technologies with each other and with coal-based alternatives, it recommends the most promising ones for research and development. Evaluates capital cost, cost of electricity, fuel consumption, and conformance with environmental standards. Analyzes sensitivity of cost of electricity to changes in fuel cost, to economic assumptions, and to level of technology. Recommends further evaluation of integrated coal-gasification/fuel-cell, integrated coal-gasification/combined-cycle, and pulverized-coal-fired plants. Concludes with appendixes detailing plant-performance models, subsystem-performance parameters, performance goals, cost bases, plant-cost data sheets, and plant sensitivity to fuel-cell performance.

  8. Radiological performance assessment for the E-Area Vaults Disposal Facility. Appendices A through M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, J.R.

    1994-04-15

    This document contains appendices A-M for the performance assessment. They are A: details of models and assumptions, B: computer codes, C: data tabulation, D: geochemical interactions, E: hydrogeology of the Savannah River Site, F: software QA plans, G: completeness review guide, H: performance assessment peer review panel recommendations, I: suspect soil performance analysis, J: sensitivity/uncertainty analysis, K: vault degradation study, L: description of naval reactor waste disposal, M: PORFLOW input file. (GHH)

  9. A comparison of computer-assisted detection (CAD) programs for the identification of colorectal polyps: performance and sensitivity analysis, current limitations and practical tips for radiologists.

    PubMed

    Bell, L T O; Gandhi, S

    2018-06-01

    To directly compare the accuracy and speed of analysis of two commercially available computer-assisted detection (CAD) programs in detecting colorectal polyps. In this retrospective single-centre study, patients who had colorectal polyps identified on computed tomography colonography (CTC) and subsequent lower gastrointestinal endoscopy were analysed using two commercially available CAD programs (CAD1 and CAD2). Results were compared against endoscopy to ascertain sensitivity and positive predictive value (PPV) for colorectal polyps. Time taken for CAD analysis was also calculated. CAD1 demonstrated a sensitivity of 89.8%, PPV of 17.6% and mean analysis time of 125.8 seconds. CAD2 demonstrated a sensitivity of 75.5%, PPV of 44.0% and mean analysis time of 84.6 seconds. The sensitivity and PPV for colorectal polyps and CAD analysis times can vary widely between current commercially available CAD programs. There is still room for improvement. Generally, there is a trade-off between sensitivity and PPV, and so further developments should aim to optimise both. Information on these factors should be made routinely available, so that an informed choice on their use can be made. This information could also potentially influence the radiologist's use of CAD results. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
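
    The per-program metrics are straightforward functions of the polyp-level detection counts, as in the sketch below. The counts are hypothetical values chosen only to reproduce the reported percentages, not data from the study.

    ```python
    def sensitivity_ppv(tp, fn, fp):
        """Polyp-level sensitivity and positive predictive value from detection counts."""
        sensitivity = tp / (tp + fn)
        ppv = tp / (tp + fp)
        return sensitivity, ppv

    # Hypothetical counts for two CAD programs evaluated against endoscopy.
    for name, tp, fn, fp in [("CAD1", 44, 5, 206), ("CAD2", 37, 12, 47)]:
        sens, ppv = sensitivity_ppv(tp, fn, fp)
        print(f"{name}: sensitivity = {sens:.1%}, PPV = {ppv:.1%}")
    ```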

  10. Modeling and sensitivity analysis of mass transfer in active multilayer polymeric film for food applications

    NASA Astrophysics Data System (ADS)

    Bedane, T.; Di Maio, L.; Scarfato, P.; Incarnato, L.; Marra, F.

    2015-12-01

    The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as the diffusion coefficient, solubility, concentration of scavenger loaded and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films. Also, knowledge of the parameter values is important to predict the performance. Inverse modeling and sensitivity analysis are the sole way to find reasonable values of poorly defined, unmeasured parameters and to analyze the most influential parameters. Thus, the objective of this work was to develop a model to predict the barrier properties of a multilayer film incorporating reactive layers and to analyze and characterize its performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a reactive core layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffusing into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of the physical parameters. The results have shown that the use of sensitivity analysis can provide physical understanding of the parameters which most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film. Multilayer films slightly modified the steady transport properties in comparison to neat PET, giving a small reduction in the permeability and oxygen transfer rate values. The scavenging capacity of the multilayer film increased linearly with the reactive layer thickness, and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.
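
    A minimal explicit finite-difference sketch of the kind of one-dimensional diffusion-reaction model described above is given below for a three-layer film with an oxygen-scavenging core. Layer thicknesses, diffusivities, the first-order reaction rate, and the boundary concentration are illustrative assumptions, and the spatially varying diffusivity is treated in a simplified, non-conservative way.

    ```python
    import numpy as np

    # Illustrative three-layer film: PET / reactive core / PET (thicknesses in metres).
    L1, L2, L3 = 50e-6, 20e-6, 50e-6
    N = 241
    x = np.linspace(0.0, L1 + L2 + L3, N)
    dx = x[1] - x[0]
    core = (x > L1) & (x < L1 + L2)

    D = np.full(N, 1e-13)        # oxygen diffusivity in the PET layers, m^2/s (illustrative)
    D[core] = 5e-14              # slower diffusion in the loaded core layer
    k_react = 1e-3               # first-order scavenging rate in the core, 1/s

    c = np.zeros(N)
    c_out = 8.6e-3               # oxygen concentration at the outer surfaces (illustrative units)
    c[0] = c[-1] = c_out

    dt = 0.4 * dx**2 / D.max()   # explicit stability limit for the diffusion term
    for _ in range(100_000):
        lap = (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2
        react = np.where(core[1:-1], k_react * c[1:-1], 0.0)   # scavenging only in the core
        c[1:-1] += dt * (D[1:-1] * lap - react)
        c[0] = c[-1] = c_out     # fixed outer-surface boundary condition

    flux_in = -D[1] * (c[1] - c[0]) / dx   # oxygen flux entering through one surface
    print(f"minimum oxygen concentration after simulated period: {c.min():.2e}")
    print(f"surface flux into the film: {flux_in:.2e}")
    ```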

  11. Modeling and sensitivity analysis of mass transfer in active multilayer polymeric film for food applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bedane, T.; Di Maio, L.; Scarfato, P.

    The barrier performance of multilayer polymeric films for food applications has been significantly improved by incorporating oxygen scavenging materials. The scavenging activity depends on parameters such as the diffusion coefficient, solubility, concentration of scavenger loaded and the number of available reactive sites. These parameters influence the barrier performance of the film in different ways. Virtualization of the process is useful to characterize, design and optimize the barrier performance based on the physical configuration of the films. Knowledge of the parameter values is also important for predicting performance. Inverse modeling and sensitivity analysis are the sole way to find reasonable values of poorly defined, unmeasured parameters and to identify the most influential ones. Thus, the objective of this work was to develop a model to predict the barrier properties of multilayer films incorporating reactive layers and to analyze and characterize their performance. A polymeric film based on three layers of polyethylene terephthalate (PET), with a core reactive layer, at different thickness configurations was considered in the model. A one-dimensional diffusion equation with reaction was solved numerically to predict the concentration of oxygen diffused into the polymer, taking into account the reactive ability of the core layer. The model was solved using commercial software for different film layer configurations, and sensitivity analysis based on inverse modeling was carried out to understand the effect of the physical parameters. The results show that sensitivity analysis can provide physical understanding of the parameters which most strongly affect gas permeation into the film. Solubility and the number of available reactive sites were the factors mainly influencing the barrier performance of the three-layered polymeric film. Multilayer films slightly modified the steady transport properties in comparison to neat PET, giving a small reduction in the permeability and oxygen transfer rate values. The scavenging capacity of the multilayer film increased linearly with the thickness of the reactive layer, and the oxygen absorption reaction at short times decreased proportionally with the thickness of the external PET layer.

  12. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized by process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the model complexity apparent to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many.
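
    The Fourier amplitude sensitivity test estimates, for each parameter, the share of output variance explained by that parameter alone. The sketch below computes the same first-order indices by simple binning of Monte Carlo samples on a toy runoff-like model (not PRMS); the parameter names, ranges and response function are all assumed for illustration.

    ```python
    import numpy as np

    # Minimal sketch (toy model, not PRMS): estimate first-order variance-based
    # sensitivity indices S_i = Var(E[Y|x_i]) / Var(Y) by binning Monte Carlo
    # samples on each parameter.  FAST computes the same indices via a Fourier
    # decomposition; plain binning is used here for brevity.

    rng = np.random.default_rng(0)
    n = 200_000

    snow_melt_rate = rng.uniform(1.0, 8.0, n)        # mm/degC/day (assumed range)
    soil_capacity = rng.uniform(50.0, 400.0, n)      # mm (assumed range)
    et_coeff = rng.uniform(0.5, 1.5, n)              # dimensionless (assumed range)

    # Toy annual-runoff-like response.
    runoff = (900.0 - 1.8 * soil_capacity + 30.0 * snow_melt_rate
              - 120.0 * et_coeff + rng.normal(0.0, 20.0, n))

    def first_order_index(x, y, bins=50):
        """Variance of bin-wise conditional means of y over x, normalised by Var(y)."""
        edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
        idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        return cond_means.var() / y.var()

    for name, x in [("snow_melt_rate", snow_melt_rate),
                    ("soil_capacity", soil_capacity),
                    ("et_coeff", et_coeff)]:
        print(f"S({name}) ~ {first_order_index(x, runoff):.2f}")
    ```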

  13. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Undisturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-19

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment for the Waste Isolation Pilot Plant are presented for two-phase flow in the vicinity of the repository under undisturbed conditions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformation are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure is potentially the most important due to its influence on spallings and direct brine releases, with the uncertainty in its value being dominated by the extent to which the microbial degradation of cellulose takes place, the rate at which the corrosion of steel takes place, and the amount of brine that drains from the surrounding disturbed rock zone into the repository.
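
    The sampling-and-regression machinery named above can be illustrated compactly: build a Latin hypercube design, rank-transform inputs and output, and compute partial rank correlation coefficients. The sketch below does this for a toy stand-in response; the variable names, ranges and response formula are assumptions, not the WIPP performance-assessment model.

    ```python
    import numpy as np

    # Minimal sketch (illustrative stand-in, not the WIPP PA model): Latin hypercube
    # sampling of three uncertain inputs, a toy "repository pressure" response, and
    # partial rank correlation coefficients (PRCCs) from rank-transformed data.

    rng = np.random.default_rng(1)

    def latin_hypercube(n, d):
        """n samples in [0,1]^d, one per equal-probability stratum per dimension."""
        u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
        for j in range(d):
            u[:, j] = rng.permutation(u[:, j])
        return u

    n = 300
    u = latin_hypercube(n, 3)
    microbial_frac = u[:, 0]                  # fraction of cellulose degraded (assumed)
    corrosion_rate = 1e-6 * 10 ** u[:, 1]     # log-uniform, assumed units
    brine_inflow = 10.0 + 90.0 * u[:, 2]      # m^3, assumed range

    # Toy response standing in for repository pressure.
    pressure = (5.0 + 6.0 * microbial_frac + 2.0 * np.log10(corrosion_rate / 1e-6)
                + 0.01 * brine_inflow + rng.normal(0.0, 0.5, n))

    def prcc(X, y):
        """Partial rank correlation of each column of X with y."""
        R = np.column_stack([np.argsort(np.argsort(c)) for c in X.T]).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        out = []
        for i in range(R.shape[1]):
            others = np.column_stack([np.ones(len(y)), np.delete(R, i, axis=1)])
            res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
            res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
            out.append(np.corrcoef(res_x, res_y)[0, 1])
        return out

    print(prcc(np.column_stack([microbial_frac, corrosion_rate, brine_inflow]), pressure))
    ```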

  14. Mid-L/D Lifting Body Entry Demise Analysis

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2017-01-01

    The mid-lift-to-drag ratio (mid-L/D) lifting body is a fully autonomous spacecraft under design at NASA for enabling a rapid return of scientific payloads from the International Space Station (ISS). For contingency planning and risk assessment for the Earth-return trajectory, an entry demise analysis was performed to examine three potential failure scenarios: (1) nominal entry interface conditions with loss of control, (2) controlled entry at maximum flight path angle, and (3) controlled entry at minimum flight path angle. The objectives of the analysis were to predict the spacecraft breakup sequence and timeline, determine debris survival, and calculate the debris dispersion footprint. Sensitivity analysis was also performed to determine the effect of the initial pitch rate on the spacecraft stability and breakup during the entry. This report describes the mid-L/D lifting body and presents the results of the entry demise and sensitivity analyses.

  15. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable for elaborating nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000 and 2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  16. Associations of water balance and thermal sensitivity of toads with macroclimatic characteristics of geographical distribution.

    PubMed

    Titon, Braz; Gomes, Fernando Ribeiro

    2017-06-01

    Interspecific variation in patterns of geographical distribution of phylogenetically related species of amphibians might be related to physiological adaptation to different climatic conditions. In this way, a comparative study of resistance to evaporative water loss, rehydration rates and sensitivity of locomotor performance to variations on hydration level and temperature was performed for five species of Bufonidae toads (Rhinella granulosa, R. jimi, R. ornata, R. schneideri and R. icterica) inhabiting different Brazilian biomes. The hypotheses tested were that, when compared to species inhabiting mesic environments, species living at hot and dry areas would show: (1) greater resistance to evaporative water loss, (2) higher rates of water uptake, (3) lower sensitivity of locomotor performance to dehydration and (4) lower sensitivity of locomotor performance at higher temperatures and higher sensitivity of locomotor performance at lower temperatures. This comparative analysis showed relations between body mass and interspecific variation in rehydration rates and resistance to evaporative water loss in opposite directions. These results might represent a functional compensation associated with relatively lower absorption areas in larger toads and higher evaporative areas in smaller ones. Moreover, species from the semi-arid Caatinga showed locomotor performance less sensitive to dehydration but highly affected by lower temperatures, as well as greater resistance to evaporative water loss, when compared to the other species from the mesic Atlantic Forest and the savannah-like area called Cerrado. These results suggest adaptation patterns to environmental conditions. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
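
    The idea is that a tolerance annotation can be recognised lexically, without a parser for each code's input format. The sketch below is a minimal illustration of that approach (not the NASA tool itself): a regular expression finds "value +/- tolerance" tokens anywhere in a file and substitutes a sampled value for each Monte Carlo realisation. The field names in the template are invented.

    ```python
    import random
    import re

    # Minimal sketch of the idea (not the NASA tool): find "value +/- tolerance"
    # tokens in an arbitrary input file and emit Monte Carlo perturbed copies,
    # without parsing the file format itself.  The field names are invented.

    TOL = re.compile(r"(-?\d+\.?\d*(?:[eE][-+]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][-+]?\d+)?)")

    def perturb(text: str, rng: random.Random) -> str:
        """Replace every 'value +/- tol' token with a value sampled in [value - tol, value + tol]."""
        def repl(match):
            nominal, tol = float(match.group(1)), float(match.group(2))
            return f"{rng.uniform(nominal - tol, nominal + tol):.6g}"
        return TOL.sub(repl, text)

    rng = random.Random(42)
    template = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.89 +/- 0.05\n"
    for realisation in range(3):
        print(f"# realisation {realisation}")
        print(perturb(template, rng), end="")
    ```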

  18. Sensitivity analysis of multi-objective optimization of CPG parameters for quadruped robot locomotion

    NASA Astrophysics Data System (ADS)

    Oliveira, Miguel; Santos, Cristina P.; Costa, Lino

    2012-09-01

    In this paper, a study based on sensitivity analysis is performed for a gait multi-objective optimization system that combines bio-inspired Central Patterns Generators (CPGs) and a multi-objective evolutionary algorithm based on NSGA-II. In this system, CPGs are modeled as autonomous differential equations that generate the necessary limb movement to perform the required walking gait. In order to optimize the walking gait, a multi-objective problem with three conflicting objectives is formulated: maximization of the velocity, the wide stability margin and the behavioral diversity. The experimental results highlight the effectiveness of this multi-objective approach and the importance of the objectives to find different walking gait solutions for the quadruped robot.
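
    A common way to realise a CPG as an autonomous differential equation is a Hopf oscillator, whose limit-cycle amplitude and frequency are exactly the kind of parameters a multi-objective optimizer such as NSGA-II can tune. The sketch below integrates one oscillator with forward Euler; it is a generic illustration, not the authors' specific CPG network.

    ```python
    import numpy as np

    # Minimal sketch (not the authors' exact CPG equations): a Hopf oscillator is a
    # common building block for limb CPGs.  Its limit cycle has amplitude sqrt(mu)
    # and angular frequency omega, the kind of parameters a multi-objective
    # optimizer such as NSGA-II would tune for velocity, stability and diversity.

    def hopf_step(x, y, mu, omega, dt):
        r2 = x * x + y * y
        dx = (mu - r2) * x - omega * y
        dy = (mu - r2) * y + omega * x
        return x + dt * dx, y + dt * dy

    def simulate(mu=1.0, omega=2.0 * np.pi * 1.5, dt=1e-3, t_end=5.0):
        x, y, xs = 0.1, 0.0, []
        for _ in range(int(t_end / dt)):
            x, y = hopf_step(x, y, mu, omega, dt)
            xs.append(x)
        return np.array(xs)

    traj = simulate()
    print("steady-state amplitude ~", np.abs(traj[-1000:]).max())   # approaches sqrt(mu) = 1.0
    ```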

  19. A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method

    NASA Astrophysics Data System (ADS)

    Chen, Leilei; Zheng, Changjun; Chen, Haibo

    2013-09-01

    This paper presents a wideband fast multipole boundary element method (FMBEM) for two dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM) formed by combining the original FMM and the diagonal form FMM is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for the wideband FMBEM design sensitivity analysis are obtained by observing the performances of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.

  20. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  1. Head-To-Head Comparison Between High- and Standard-b-Value DWI for Detecting Prostate Cancer: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Kim, Sang Youn; Cho, Jeong Yeon; Kim, Seung Hyup

    2018-01-01

    The purpose of this study was to perform a head-to-head comparison between high-b-value (>1000 s/mm²) and standard-b-value (800-1000 s/mm²) DWI regarding diagnostic performance in the detection of prostate cancer. The MEDLINE and EMBASE databases were searched up to April 1, 2017. The analysis included diagnostic accuracy studies in which high- and standard-b-value DWI were used for prostate cancer detection with histopathologic examination as the reference standard. Methodologic quality was assessed with the revised Quality Assessment of Diagnostic Accuracy Studies tool. Sensitivity and specificity of all studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Meta-regression and multiple-subgroup analyses were performed to compare the diagnostic performances of high- and standard-b-value DWI. Eleven studies (789 patients) were included. High-b-value DWI had greater pooled sensitivity (0.80 [95% CI, 0.70-0.87]) (p = 0.03) and specificity (0.92 [95% CI, 0.87-0.95]) (p = 0.01) than standard-b-value DWI (sensitivity, 0.78 [95% CI, 0.66-0.86]; specificity, 0.87 [95% CI, 0.77-0.93]) (p < 0.01). Multiple-subgroup analyses showed that specificity was consistently higher for high- than for standard-b-value DWI (p ≤ 0.05). Sensitivity was significantly higher for high- than for standard-b-value DWI only in the following subgroups: peripheral zone only, transition zone only, multiparametric protocol (DWI and T2-weighted imaging), visual assessment of DW images, and per-lesion analysis (p ≤ 0.04). In a head-to-head comparison, high-b-value DWI had significantly better sensitivity and specificity for detection of prostate cancer than did standard-b-value DWI. Multiple-subgroup analyses showed that specificity was consistently superior for high-b-value DWI.
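
    Pooling diagnostic accuracy across studies is usually done with a hierarchical bivariate model; a simpler way to see the mechanics is inverse-variance pooling of logit-transformed sensitivities, as sketched below. The per-study true-positive/false-negative counts are invented for illustration and are not the studies analysed in this meta-analysis.

    ```python
    import numpy as np

    # Minimal sketch: pool study sensitivities on the logit scale with
    # inverse-variance weights (a simpler stand-in for the hierarchical bivariate
    # model used in such meta-analyses).  The per-study TP/FN counts are invented.

    studies = [  # (true positives, false negatives) per study -- illustrative only
        (40, 10), (55, 12), (33, 9), (70, 20), (25, 4),
    ]

    logits, weights = [], []
    for tp, fn in studies:
        p = (tp + 0.5) / (tp + fn + 1.0)              # continuity-corrected sensitivity
        var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)     # variance of the logit
        logits.append(np.log(p / (1.0 - p)))
        weights.append(1.0 / var)

    pooled_logit = np.average(logits, weights=weights)
    pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
    print(f"pooled sensitivity ~ {pooled_sens:.3f}")
    ```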

  2. A Meta-analysis for the Diagnostic Performance of Transient Elastography for Clinically Significant Portal Hypertension.

    PubMed

    You, Myung-Won; Kim, Kyung Won; Pyo, Junhee; Huh, Jimi; Kim, Hyoung Jung; Lee, So Jung; Park, Seong Ho

    2017-01-01

    We aimed to evaluate the correlation between liver stiffness measurement using transient elastography (TE-LSM) and hepatic venous pressure gradient and the diagnostic performance of TE-LSM in assessing clinically significant portal hypertension through meta-analysis. Eleven studies were included from thorough literature research and selection processes. The summary correlation coefficient was 0.783 (95% confidence interval [CI], 0.737-0.823). Summary sensitivity, specificity and area under the hierarchical summary receiver operating characteristic curve (AUC) were 87.5% (95% CI, 75.8-93.9%), 85.3% (95% CI, 76.9-90.9%) and 0.9, respectively. The subgroup with low cut-off values of 13.6-18 kPa had better summary estimates (sensitivity 91.2%, specificity 81.3% and partial AUC 0.921) than the subgroup with high cut-off values of 21-25 kPa (sensitivity 71.2%, specificity 90.9% and partial AUC 0.769). In summary, TE-LSM correlated well with hepatic venous pressure gradient and represented good diagnostic performance in diagnosing clinically significant portal hypertension. For use as a sensitive screening tool, we propose using low cut-off values of 13.6-18 kPa in TE-LSM. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  3. Sensitivity and Uncertainty Analysis of the GFR MOX Fuel Subassembly

    NASA Astrophysics Data System (ADS)

    Lüley, J.; Vrban, B.; Čerba, Š.; Haščík, J.; Nečas, V.; Pelloni, S.

    2014-04-01

    We performed sensitivity and uncertainty analysis as well as benchmark similarity assessment of the MOX fuel subassembly designed for the Gas-Cooled Fast Reactor (GFR) as a representative material of the core. Material composition was defined for each assembly ring separately allowing us to decompose the sensitivities not only for isotopes and reactions but also for spatial regions. This approach was confirmed by direct perturbation calculations for chosen materials and isotopes. Similarity assessment identified only ten partly comparable benchmark experiments that can be utilized in the field of GFR development. Based on the determined uncertainties, we also identified main contributors to the calculation bias.

  4. An alternative respiratory sounds classification system utilizing artificial neural networks.

    PubMed

    Oweis, Rami J; Abdulhay, Enas W; Khayal, Amer; Awad, Areen

    2015-01-01

    Computerized lung sound analysis involves recording lung sound via an electronic device, followed by computer analysis and classification based on specific signal characteristics such as non-linearity and non-stationarity caused by air turbulence. An automatic analysis is necessary to avoid dependence on expert skills. This work revolves around exploiting autocorrelation in the feature extraction stage. All process stages were implemented in MATLAB. The classification process was performed comparatively using both artificial neural networks (ANNs) and adaptive neuro-fuzzy inference systems (ANFIS) toolboxes. The methods have been applied to 10 different respiratory sounds for classification. The ANN was superior to the ANFIS system, returning better performance parameters. Its accuracy, specificity, and sensitivity were 98.6%, 100%, and 97.8%, respectively. The obtained parameters showed superiority to many recent approaches. The proposed method is a promising, efficient and fast tool for the intended purpose, as manifested in the performance parameters, specifically, accuracy, specificity, and sensitivity. Furthermore, it may be added that utilizing the autocorrelation function in the feature extraction in such applications results in enhanced performance and avoids undesired computation complexities compared to other techniques.
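
    The feature-extraction step described above can be illustrated in a few lines: take the first normalised autocorrelation lags of each sound segment and feed them to a small neural network. The sketch below uses synthetic tonal ("wheeze"-like) and broadband ("normal"-like) signals and scikit-learn's MLPClassifier; it illustrates the idea, not the study's MATLAB ANN/ANFIS pipeline.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Minimal sketch of the idea (not the study's MATLAB pipeline): use the first
    # normalised autocorrelation lags of each sound segment as features for a small
    # neural network.  The "wheeze"-like and "normal"-like signals are synthetic.

    rng = np.random.default_rng(0)

    def autocorr_features(sig, n_lags=20):
        sig = sig - sig.mean()
        ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
        return ac[1:n_lags + 1] / ac[0]            # lags 1..n_lags, normalised

    def synth(kind, n=1000, fs=4000):
        t = np.arange(n) / fs
        noise = rng.normal(0.0, 1.0, n)
        if kind == "wheeze":                       # add a tonal component near 400 Hz
            return 0.8 * np.sin(2 * np.pi * 400 * t) + noise
        return noise                               # broadband "normal" breath sound

    X = np.array([autocorr_features(synth(k)) for k in ("wheeze", "normal") for _ in range(100)])
    y = np.array([1] * 100 + [0] * 100)            # 1 = wheeze-like, matching the order above

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(Xtr, ytr)
    print("held-out accuracy:", clf.score(Xte, yte))
    ```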

  5. Texture analysis of pulmonary parenchymateous changes related to pulmonary thromboembolism in dogs - a novel approach using quantitative methods.

    PubMed

    Marschner, C B; Kokla, M; Amigo, J M; Rozanski, E A; Wiinberg, B; McEvoy, F J

    2017-07-11

    Diagnosis of pulmonary thromboembolism (PTE) in dogs relies on computed tomography pulmonary angiography (CTPA), but detailed interpretation of CTPA images is demanding for the radiologist and only large vessels may be evaluated. New approaches for better detection of smaller thrombi include dual energy computed tomography (DECT) as well as computer assisted diagnosis (CAD) techniques. The purpose of this study was to investigate the performance of quantitative texture analysis for detecting dogs with PTE using grey-level co-occurrence matrices (GLCM) and multivariate statistical classification analyses. CT images from healthy (n = 6) and diseased (n = 29) dogs with and without PTE confirmed on CTPA were segmented so that only tissue with CT numbers between -1024 and -250 Hounsfield units (HU) was preserved. GLCM analysis and subsequent multivariate classification analyses were performed on texture parameters extracted from these images. Leave-one-dog-out cross validation and receiver operating characteristic (ROC) analysis showed that the models generated from the texture analysis were able to predict healthy dogs with optimal levels of performance. Partial Least Square Discriminant Analysis (PLS-DA) obtained a sensitivity of 94% and a specificity of 96%, while Support Vector Machines (SVM) yielded a sensitivity of 99% and a specificity of 100%. The models, however, performed worse in classifying the type of disease in the diseased dog group: in diseased dogs with PTE, sensitivities were 30% (PLS-DA) and 38% (SVM), and specificities were 80% (PLS-DA) and 89% (SVM). In diseased dogs without PTE the sensitivities of the models were 59% (PLS-DA) and 79% (SVM) and specificities were 79% (PLS-DA) and 82% (SVM). The results indicate that texture analysis of CTPA images using GLCM is an effective tool for distinguishing healthy from abnormal lung. Furthermore, the texture of pulmonary parenchyma in dogs with PTE is altered when compared to the texture of pulmonary parenchyma of healthy dogs. The models' poorer performance in classifying dogs within the diseased group may be related to the low number of dogs relative to the number of texture variables, an unbalanced number of dogs within each group, or a real lack of difference in the texture features among the diseased dogs.
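
    The GLCM step is straightforward to reproduce in outline, assuming scikit-image is available (graycomatrix/graycoprops). The sketch below extracts a handful of Haralick-style features from a quantised patch; the patch here is random data standing in for a segmented lung region, and the feature set and parameters are illustrative choices rather than the study's exact configuration.

    ```python
    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    # Minimal sketch (not the study's pipeline): grey-level co-occurrence matrix
    # (GLCM) texture features from a quantised patch.  The patch is random data
    # standing in for a segmented CT region (-1024 to -250 HU rescaled to 64 levels).

    rng = np.random.default_rng(0)
    patch = rng.integers(0, 64, size=(64, 64), dtype=np.uint8)

    glcm = graycomatrix(patch,
                        distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=64, symmetric=True, normed=True)

    features = {prop: graycoprops(glcm, prop).mean()          # average over distances/angles
                for prop in ["contrast", "correlation", "energy", "homogeneity"]}
    print(features)
    # Feature vectors like this, one per dog/region, would then feed a classifier
    # such as PLS-DA or an SVM.
    ```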

  6. Lunar Exploration Architecture Level Key Drivers and Sensitivities

    NASA Technical Reports Server (NTRS)

    Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher

    2009-01-01

    Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option are properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible but still capture those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks between these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has been previously applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analysis performed on lunar exploration architecture scenarios are also presented.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, C. S.; Zhang, Hongbin

    Uncertainty quantification and sensitivity analysis are important for nuclear reactor safety design and analysis. A 2x2 fuel assembly core design was developed and simulated by the Virtual Environment for Reactor Applications, Core Simulator (VERA-CS) coupled neutronics and thermal-hydraulics code under development by the Consortium for Advanced Simulation of Light Water Reactors (CASL). An approach to uncertainty quantification and sensitivity analysis with VERA-CS was developed and a new toolkit was created to perform uncertainty quantification and sensitivity analysis with fourteen uncertain input parameters. Furthermore, the minimum departure from nucleate boiling ratio (MDNBR), maximum fuel center-line temperature, and maximum outer clad surface temperature were chosen as the selected figures of merit. Pearson, Spearman, and partial correlation coefficients were considered for all of the figures of merit in the sensitivity analysis, and coolant inlet temperature was consistently the most influential parameter. Parameters used as inputs to the critical heat flux calculation with the W-3 correlation were shown to be the most influential on the MDNBR, maximum fuel center-line temperature, and maximum outer clad surface temperature.
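
    Ranking inputs by Pearson and Spearman correlation with a figure of merit is the core of the sensitivity step described above. The sketch below does this for a toy MDNBR surrogate; the input distributions and the response formula are assumptions (chosen so that inlet temperature dominates, consistent with the stated finding), not VERA-CS output.

    ```python
    import numpy as np
    from scipy import stats

    # Minimal sketch (toy surrogate, not VERA-CS): rank uncertain inputs by Pearson
    # and Spearman correlation with a figure of merit such as MDNBR.  Distributions
    # and the response formula are assumed for illustration.

    rng = np.random.default_rng(0)
    n = 500

    inlet_temp = rng.normal(565.0, 2.0, n)        # K
    inlet_flow = rng.normal(0.30, 0.01, n)        # kg/s per channel
    power_factor = rng.normal(1.00, 0.02, n)      # pin power peaking

    # Toy MDNBR response: hotter inlet and higher power lower it, more flow raises it.
    mdnbr = (2.1 - 0.040 * (inlet_temp - 565.0) + 3.0 * (inlet_flow - 0.30)
             - 0.9 * (power_factor - 1.0) + rng.normal(0.0, 0.01, n))

    for name, x in [("inlet_temp", inlet_temp),
                    ("inlet_flow", inlet_flow),
                    ("power_factor", power_factor)]:
        r_p = stats.pearsonr(x, mdnbr)[0]
        r_s = stats.spearmanr(x, mdnbr)[0]
        print(f"{name:13s}  Pearson {r_p:+.2f}   Spearman {r_s:+.2f}")
    ```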

  8. Validation of the colour difference plot scoring system analysis of the 103 hexagon multifocal electroretinogram in the evaluation of hydroxychloroquine retinal toxicity.

    PubMed

    Graves, Gabrielle S; Adam, Murtaza K; Stepien, Kimberly E; Han, Dennis P

    2014-08-01

    To evaluate sensitivity, specificity and reproducibility of colour difference plot analysis (CDPA) of 103 hexagon multifocal electroretinogram (mfERG) in detecting established hydroxychloroquine (HCQ) retinal toxicity. Twenty-three patients taking HCQ were divided into those with and without retinal toxicity and were compared with a control group without retinal disease and not taking HCQ. CDPA with two masked examiners was performed using age-corrected mfERG responses in the central ring (Rc; 0-5.5 degrees from fixation) and paracentral ring (Rp; 5.5-11 degrees from fixation). An abnormal ring was defined as containing any hexagons with a difference of two or more standard deviations from normal (colour blue or black). Categorical analysis (ring involvement or not) showed Rc had 83% sensitivity and 93% specificity. Rp had 89% sensitivity and 82% specificity. Requiring abnormal hexagons in both Rc and Rp yielded sensitivity and specificity of 83% and 95%, respectively. If required in only one ring, they were 89% and 80%, respectively. In this population, there was complete agreement in identifying toxicity when comparing CDPA using Rp with ring ratio analysis using R5/R4 P1 ring responses (89% sensitivity and 95% specificity). Continuous analysis of CDPA with receiver operating characteristic analysis showed optimized detection (83% sensitivity and 96% specificity) when ≥4 abnormal hexagons were present anywhere within the Rp ring outline. Intergrader agreement and reproducibility were good. Colour difference plot analysis had sensitivity and specificity that approached that of ring ratio analysis of R5/R4 P1 responses. Ease of implementation and reproducibility are notable advantages of CDPA. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  9. Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance

    NASA Technical Reports Server (NTRS)

    Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.

    2016-01-01

    Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.

  10. Assessment of bioethanol yield by S. cerevisiae grown on oil palm residues: Monte Carlo simulation and sensitivity analysis.

    PubMed

    Samsudin, Mohd Dinie Muhaimin; Mat Don, Mashitah

    2015-01-01

    Oil palm trunk (OPT) sap was utilized for growth and bioethanol production by Saccharomyces cerevisiae with addition of palm oil mill effluent (POME) as a nutrient supplier. A maximum yield (YP/S) of 0.464 g bioethanol/g glucose was attained with the glucose present in the OPT sap-POME-based media. However, OPT sap and POME are heterogeneous in properties, and fermentation performance might change if the process is repeated. The contribution of parametric uncertainty to bioethanol fermentation performance was then assessed using Monte Carlo simulation (stochastic variables) to determine probability distributions due to fluctuation and variation of the kinetic model parameters. Results showed that, based on 100,000 samples tested, the yield (YP/S) ranged from 0.423 to 0.501 g/g. Sensitivity analysis was also done to evaluate the impact of each kinetic parameter on the fermentation performance. Bioethanol fermentation was found to depend strongly on the growth of the tested yeast. Copyright © 2014 Elsevier Ltd. All rights reserved.
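
    The Monte Carlo step amounts to sampling the uncertain kinetic parameters, propagating each draw through a yield model, and summarising the resulting distribution. The sketch below does this for a crude Monod-type surrogate with assumed parameter means and spreads; it illustrates the workflow, not the paper's fitted kinetic model.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Minimal sketch (crude Monod-type surrogate, not the paper's fitted kinetic
    # model): propagate uncertainty in kinetic parameters to the bioethanol yield
    # Y_P/S by Monte Carlo sampling, then rank parameters by Spearman correlation.
    # All parameter means and spreads are assumed.

    rng = np.random.default_rng(0)
    n = 100_000

    mu_max = rng.normal(0.40, 0.04, n)     # maximum specific growth rate [1/h]
    Ks = rng.normal(2.0, 0.3, n)           # saturation constant [g/L]
    Yps = rng.normal(0.46, 0.02, n)        # intrinsic ethanol yield [g/g]

    S0, t = 50.0, 6.0                      # initial glucose [g/L], elapsed batch time [h]

    # Surrogate: fraction of substrate consumed saturates with mu_max * t.
    frac_consumed = 1.0 - np.exp(-mu_max * t * S0 / (Ks + S0))
    yield_ps = Yps * frac_consumed

    print(f"Y_P/S: mean {yield_ps.mean():.3f}, 5th-95th percentile "
          f"{np.percentile(yield_ps, 5):.3f}-{np.percentile(yield_ps, 95):.3f}")
    for name, p in [("mu_max", mu_max), ("Ks", Ks), ("Yps", Yps)]:
        print(f"rho({name}, Y_P/S) = {spearmanr(p, yield_ps)[0]:+.2f}")
    ```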

  11. Detection of somatic mutations by high-resolution DNA melting (HRM) analysis in multiple cancers.

    PubMed

    Gonzalez-Bosquet, Jesus; Calcei, Jacob; Wei, Jun S; Garcia-Closas, Montserrat; Sherman, Mark E; Hewitt, Stephen; Vockley, Joseph; Lissowska, Jolanta; Yang, Hannah P; Khan, Javed; Chanock, Stephen

    2011-01-17

    Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require a higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples.

  12. Detection of Somatic Mutations by High-Resolution DNA Melting (HRM) Analysis in Multiple Cancers

    PubMed Central

    Gonzalez-Bosquet, Jesus; Calcei, Jacob; Wei, Jun S.; Garcia-Closas, Montserrat; Sherman, Mark E.; Hewitt, Stephen; Vockley, Joseph; Lissowska, Jolanta; Yang, Hannah P.; Khan, Javed; Chanock, Stephen

    2011-01-01

    Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require a higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples. PMID:21264207

  13. Evaluation of quantitative image analysis criteria for the high-resolution microendoscopic detection of neoplasia in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2010-03-01

    Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best performing pair of quantitative features were chosen based on their ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.

  14. Accuracy of i-Scan for Optical Diagnosis of Colonic Polyps: A Meta-Analysis

    PubMed Central

    Guo, Chuan-Guo; Ji, Rui; Li, Yan-Qing

    2015-01-01

    Background i-Scan is a novel virtual chromoendoscopy system designed to enhance surface and vascular patterns to improve optical diagnostic performance. Numerous prospective studies have been done to evaluate the accuracy of i-Scan in differentiating colonic neoplasms from non-neoplasms. i-Scan could be an effective endoscopic technique for optical diagnosis of colonic polyps. Objective The aim of this study was to perform a meta-analysis of published data to establish the diagnostic accuracy of i-Scan for optical diagnosis of colonic polyps. Methods We searched PubMed, Medline, Elsevier ScienceDirect and Cochrane Library databases. We used a bivariate meta-analysis following a random effects model to summarize the data and plotted hierarchical summary receiver-operating characteristic (HSROC) curves. The area under the HSROC curve (AUC) serves as an indicator of the diagnostic accuracy. Results The meta-analysis included a total of 925 patients and 2312 polyps. For the overall studies, the area under the HSROC curve was 0.96. The summary sensitivity was 90.4% (95%CI 85%-94.1%) and specificity was 90.9% (95%CI 84.3%-94.9%). In 11 studies predicting polyps histology in real-time, the summary sensitivity and specificity was 91.5% (95%CI 85.7%-95.1%) and 92.1% (95%CI 84.5%-96.1%), respectively, with the AUC of 0.97. For three different diagnostic criteria (Kudo, NICE, others), the sensitivity was 86.3%, 93.0%, 85.0%, respectively and specificity was 84.8%, 94.4%, 91.8%, respectively. Conclusions Endoscopic diagnosis with i-Scan has accurate optical diagnostic performance to differentiate neoplastic from non-neoplastic polyps with an area under the HSROC curve exceeding 0.90. Both the sensitivity and specificity for diagnosing colonic polyps are over 90%. PMID:25978459

  15. Screening Performance Characteristic of Ultrasonography and Radiography in Detection of Pleural Effusion; a Meta-Analysis.

    PubMed

    Yousefifard, Mahmoud; Baikpour, Masoud; Ghelichkhani, Parisa; Asady, Hadi; Shahsavari Nia, Kavous; Moghadas Jafari, Ali; Hosseini, Mostafa; Safari, Saeed

    2016-01-01

    The role of ultrasonography in detection of pleural effusion has long been a subject of interest but controversial results have been reported. Accordingly, this study aims to conduct a systematic review of the available literature on diagnostic value of ultrasonography and radiography in detection of pleural effusion through a meta-analytic approach. An extended search was done in databases of Medline, EMBASE, ISI Web of Knowledge, Scopus, Cochrane Library, and ProQuest. Two reviewers independently extracted the data and assessed the quality of the articles. Meta-analysis was performed using a mixed-effects binary regression model. Finally, subgroup analysis was carried out in order to find the sources of heterogeneity between the included studies. 12 studies were included in this meta-analysis (1554 subjects, 58.6% male). Pooled sensitivity of ultrasonography in detection of pleural effusion was 0.94 (95% CI: 0.88-0.97; I2= 84.23, p<0.001) and its pooled specificity was calculated to be 0.98 (95% CI: 0.92-1.0; I2= 88.65, p<0.001), while sensitivity and specificity of chest radiography were 0.51 (95% CI: 0.33-0.68; I2= 91.76, p<0.001) and 0.91 (95% CI: 0.68-0.98; I2= 92.86, p<0.001), respectively. Sensitivity of ultrasonography was found to be higher when the procedure was carried out by an intensivist or a radiologist using 5-10 MHz transducers. Chest ultrasonography, as a screening tool, has a higher diagnostic accuracy in identification of pleural effusion compared to radiography. The sensitivity of this imaging modality was found to be higher when performed by a radiologist or an intensivist and using 5-10 MHz probes.

  16. A retrospective analysis of preoperative staging modalities for oral squamous cell carcinoma.

    PubMed

    Kähling, Ch; Langguth, T; Roller, F; Kroll, T; Krombach, G; Knitschke, M; Streckbein, Ph; Howaldt, H P; Wilbrand, J-F

    2016-12-01

    An accurate preoperative assessment of cervical lymph node status is a prerequisite for individually tailored cancer therapies in patients with oral squamous cell carcinoma. The detection of malignant spread and its treatment crucially influence the prognosis. The aim of the present study was to analyze the different staging modalities used among patients with a diagnosis of primary oral squamous cell carcinoma between 2008 and 2015. An analysis of preoperative staging findings, collected by clinical palpation, ultrasound, and computed tomography (CT), was performed. The results obtained were compared with the results of the final histopathological findings of the neck dissection specimens. A statistical analysis using McNemar's test was performed. The sensitivity of CT for the detection of malignant cervical tumor spread was 74.5%. The ultrasound obtained a sensitivity of 60.8%. Both CT and ultrasound demonstrated significantly enhanced sensitivity compared to the clinical palpation with a sensitivity of 37.1%. No significant difference was observed between CT and ultrasound. A combination of different staging modalities increased the sensitivity significantly compared with ultrasound staging alone. No significant difference in sensitivity was found between the combined use of different staging modalities and CT staging alone. The highest sensitivity, of 80.0%, was obtained by a combination of all three staging modalities: clinical palpation, ultrasound and CT. The present study indicates that CT has an essential role in the preoperative staging of patients with oral squamous cell carcinoma. Its use not only significantly increases the sensitivity of cervical lymph node metastasis detection but also offers a preoperative assessment of local tumor spread and resection borders. An additional non-invasive cervical lymph node examination increases the sensitivity of the tumor staging process and reduces the risk of occult metastasis. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
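
    McNemar's test compares two staging modalities applied to the same patients by looking only at the discordant pairs. The sketch below implements the exact two-sided version; the discordant counts are invented for illustration and are not the study's data.

    ```python
    from math import comb

    # Minimal sketch: McNemar's exact test for two paired staging modalities
    # (e.g. CT vs clinical palpation judged against histopathology on the same
    # necks).  The discordant counts below are invented for illustration.

    def mcnemar_exact(b: int, c: int) -> float:
        """Two-sided exact McNemar p-value from the discordant pair counts.

        b = cases modality A got right and modality B got wrong; c = the reverse.
        Under H0 the discordant pairs split as Binomial(b + c, 0.5).
        """
        n, k = b + c, min(b, c)
        p_one_sided = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
        return min(1.0, 2.0 * p_one_sided)

    # Illustrative: CT correct / palpation wrong in 26 necks, the reverse in 6.
    print(f"p = {mcnemar_exact(26, 6):.4f}")
    ```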

  17. Sensitivity and specificity of the American College of Rheumatology 1987 criteria for the diagnosis of rheumatoid arthritis according to disease duration: a systematic literature review and meta-analysis.

    PubMed

    Banal, F; Dougados, M; Combescure, C; Gossec, L

    2009-07-01

    To evaluate the ability of the widely used ACR set of criteria (both list and tree format) to diagnose RA compared with expert opinion according to disease duration. A systematic literature review was conducted in PubMed and Embase databases. All articles reporting the prevalence of RA according to ACR criteria and expert opinion in cohorts of early (<1 year duration) or established (>1 year) arthritis were analysed to calculate the sensitivity and specificity of ACR 1987 criteria against the "gold standard" (expert opinion). A meta-analysis using a summary receiver operating characteristic (SROC) curve was performed and pooled sensitivity and specificity were calculated with confidence intervals. Of 138 publications initially identified, 19 were analysable (total 7438 patients, 3883 RA). In early arthritis, pooled sensitivity and specificity of the ACR set of criteria were 77% (68% to 84%) and 77% (68% to 84%) in the list format versus 80% (72% to 88%) and 33% (24% to 43%) in the tree format. In established arthritis, sensitivity and specificity were respectively 79% (71% to 85%) and 90% (84% to 94%) versus 80% (71% to 85%) and 93% (86% to 97%). The SROC meta-analysis confirmed the statistically significant differences, suggesting that diagnostic performances of ACR list criteria are better in established arthritis. The specificity of ACR 1987 criteria in early RA is low, and these criteria should not be used as diagnostic tools. Sensitivity and specificity in established RA are higher, which reflects their use as classification criteria gold standard.

  18. Sensitive Amino Acid Composition and Chirality Analysis with the Mars Organic Analyzer (MOA)

    NASA Technical Reports Server (NTRS)

    Skelley, Alison M.; Scherer, James R.; Aubrey, Andrew D.; Grover, William H.; Ivester, Robin H. C.; Ehrenfreund, Pascale; Grunthaner, Frank J.; Bada, Jeffrey L.; Mathies, Richard A.

    2005-01-01

    Detection of life on Mars requires definition of a suitable biomarker and development of sensitive yet compact instrumentation capable of performing in situ analyses. Our studies are focused on amino acid analysis because amino acids are more resistant to decomposition than other biomolecules, and because amino acid chirality is a well-defined biomarker. Amino acid composition and chirality analysis has been previously demonstrated in the lab using microfabricated capillary electrophoresis (CE) chips. To analyze amino acids in the field, we have developed the Mars Organic Analyzer (MOA), a portable analysis system that consists of a compact instrument and a novel multi-layer CE microchip.

  19. True covariance simulation of the EUVE update filter

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, R. R.

    1989-01-01

    A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultraviolet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived, which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such an analysis are presented, indicating both the performance and the sensitivity of the OBC EKF.
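
    The essence of a true covariance analysis is that a filter built on a reduced-order design model reports a covariance that need not match the real estimation-error covariance produced by the truth model. The sketch below shows this with a toy scalar Kalman filter whose design model underestimates the process noise; it is a generic illustration of the concept, not the EUVE truth or design models.

    ```python
    import numpy as np

    # Minimal sketch of the idea behind a "true covariance" analysis (toy scalar
    # example, not the EUVE models): a Kalman filter whose design model
    # underestimates the process noise reports an optimistic covariance; Monte
    # Carlo runs against the truth model expose the actual error variance.

    rng = np.random.default_rng(0)
    Q_true, Q_filter, R = 0.05, 0.005, 1.0     # design model assumes less process noise
    n_steps, n_runs = 200, 2000

    # Filter-reported covariance (deterministic for a scalar KF).
    P, P_reported = 1.0, []
    for _ in range(n_steps):
        P += Q_filter
        K = P / (P + R)
        P *= (1.0 - K)
        P_reported.append(P)

    # Monte Carlo "true" covariance of the estimation error.
    err_final = []
    for _ in range(n_runs):
        x, xhat, P = 0.0, 0.0, 1.0
        for _ in range(n_steps):
            x += rng.normal(0.0, np.sqrt(Q_true))      # truth model: random walk
            z = x + rng.normal(0.0, np.sqrt(R))        # measurement
            P += Q_filter                              # filter uses its design model
            K = P / (P + R)
            xhat += K * (z - xhat)
            P *= (1.0 - K)
        err_final.append(xhat - x)

    print(f"filter-reported variance : {P_reported[-1]:.4f}")
    print(f"Monte Carlo true variance: {np.var(err_final):.4f}")   # noticeably larger
    ```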

  20. COBRA ATD minefield detection model initial performance analysis

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A statistical performance analysis of the USMC Coastal Battlefield Reconnaissance and Analysis (COBRA) Minefield Detection (MFD) Model has been performed in support of the COBRA ATD Program under execution by the Naval Surface Warfare Center/Dahlgren Division/Coastal Systems Station. This analysis uses the Veridian ERIM International MFD model from the COBRA Sensor Performance Evaluation and Computational Tools for Research Analysis modeling toolbox and a collection of multispectral mine detection algorithm response distributions for mines and minelike clutter objects. These mine detection response distributions were generated from actual COBRA ATD test missions over littoral zone minefields. This analysis serves to validate both the utility and effectiveness of the COBRA MFD Model as a predictive MFD performance tool. COBRA ATD minefield detection model algorithm performance results based on a simulated baseline minefield detection scenario are presented, as well as results of an MFD model algorithm parametric sensitivity study.

  1. Evaluation of bone marrow specimens with acute myelogenous leukemia for CD34, CD15, CD117, and myeloperoxidase.

    PubMed

    Dunphy, C H; Polski, J M; Evans, H L; Gardner, L J

    2001-08-01

    Immunophenotyping of bone marrow (BM) specimens with acute myelogenous leukemia (AML) may be performed by flow cytometric (FC) or immunohistochemical (IH) techniques. Some markers (CD34, CD15, and CD117) are available for both techniques. Myeloperoxidase (MPO) analysis may be performed by enzyme cytochemical (EC) or IH techniques. To determine the reliability of these markers and MPO by these techniques, we designed a study to compare the results of analyses of these markers and MPO by FC (CD34, CD15, and CD117), EC (MPO), and IH (CD34, CD15, CD117, and MPO) techniques. Twenty-nine AMLs formed the basis of the study. These AMLs all had been immunophenotyped previously by FC analysis; 27 also had had EC analysis performed. Of the AMLs, 29 had BM core biopsies and 26 had BM clots that could be evaluated. The paraffin blocks of the 29 BM core biopsies and 26 BM clots were stained for CD34, CD117, MPO, and CD15. These results were compared with results by FC analysis (CD34, CD15, and CD117) and EC analysis (MPO). Immunodetection of CD34 expression in AML had a similar sensitivity by FC and IH techniques. Immunodetection of CD15 and CD117 had a higher sensitivity by FC analysis than by IH analysis. Detection of MPO by IH analysis was more sensitive than by EC analysis. There was no correlation of French-American-British (FAB) subtype of AML with CD34 or CD117 expression. Expression of CD15 was associated with AMLs with a monocytic component. Myeloperoxidase reactivity by IH analysis was observed in AMLs originally FAB subtyped as M0. CD34 can be equally detected by FC and IH techniques. CD15 and CD117 are better detected by FC analysis and MPO is better detected by IH analysis.

  2. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  3. Interactive Controls Analysis (INCA)

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.

    1989-01-01

    Version 3.12 of INCA provides a user-friendly environment for design and analysis of linear control systems. System configuration and parameters are easily adjusted, enabling the INCA user to create compensation networks and perform sensitivity analysis in a convenient manner. A full complement of graphical routines makes output easy to understand. Written in Pascal and FORTRAN.

  4. Dynamic analysis of Apollo-Salyut/Soyuz docking

    NASA Technical Reports Server (NTRS)

    Schliesing, J. A.

    1972-01-01

    The use of a docking-system computer program in analyzing the dynamic environment produced by two impacting spacecraft and the attitude control systems is discussed. Performance studies were conducted to determine the mechanism load and capture sensitivity to parametric changes in the initial impact conditions. As indicated by the studies, capture latching is most sensitive to vehicle angular-alinement errors and is least sensitive to lateral-miss error. As proved by load-sensitivity studies, peak loads acting on the Apollo spacecraft are considerably lower than the Apollo design-limit loads.

  5. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis.

    PubMed

    Till, Kevin; Jones, Ben L; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B

    2016-01-01

    Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification.
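
    The general recipe is to standardise the measurements, orthogonalise them with an SVD, and run an ROC analysis on a component score to pick a classification threshold. The sketch below applies this to synthetic data loosely standing in for 60 m sprint and agility 505 times; it illustrates the approach, not the study's model or dataset.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve

    # Minimal sketch (synthetic data, not the study's players): standardise the
    # measurements, orthogonalise them with an SVD, then run an ROC analysis on
    # the score along a singular vector to choose a classification threshold.

    rng = np.random.default_rng(0)
    n_pro, n_rest = 40, 160

    # Two variables loosely standing in for 60 m sprint and agility 505 times [s].
    pro = rng.normal([8.0, 2.2], [0.30, 0.10], size=(n_pro, 2))
    rest = rng.normal([8.5, 2.4], [0.35, 0.12], size=(n_rest, 2))
    X = np.vstack([pro, rest])
    y = np.array([1] * n_pro + [0] * n_rest)       # 1 = future professional

    Xz = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise
    U, s, Vt = np.linalg.svd(Xz, full_matrices=False)
    score = Xz @ Vt[0]                             # projection on the first singular vector
    if np.corrcoef(score, y)[0, 1] < 0:            # orient the component toward the pro group
        score = -score

    fpr, tpr, thresholds = roc_curve(y, score)
    j = np.argmax(tpr - fpr)                       # Youden's J threshold
    print(f"sensitivity = {tpr[j]:.2f}, specificity = {1 - fpr[j]:.2f}")
    ```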

  6. Identifying Talent in Youth Sport: A Novel Methodology Using Higher-Dimensional Analysis

    PubMed Central

    Till, Kevin; Jones, Ben L.; Cobley, Stephen; Morley, David; O'Hara, John; Chapman, Chris; Cooke, Carlton; Beggs, Clive B.

    2016-01-01

    Prediction of adult performance from early age talent identification in sport remains difficult. Talent identification research has generally been performed using univariate analysis, which ignores multivariate relationships. To address this issue, this study used a novel higher-dimensional model to orthogonalize multivariate anthropometric and fitness data from junior rugby league players, with the aim of differentiating future career attainment. Anthropometric and fitness data from 257 Under-15 rugby league players was collected. Players were grouped retrospectively according to their future career attainment (i.e., amateur, academy, professional). Players were blindly and randomly divided into an exploratory (n = 165) and validation dataset (n = 92). The exploratory dataset was used to develop and optimize a novel higher-dimensional model, which combined singular value decomposition (SVD) with receiver operating characteristic analysis. Once optimized, the model was tested using the validation dataset. SVD analysis revealed 60 m sprint and agility 505 performance were the most influential characteristics in distinguishing future professional players from amateur and academy players. The exploratory dataset model was able to distinguish between future amateur and professional players with a high degree of accuracy (sensitivity = 85.7%, specificity = 71.1%; p<0.001), although it could not distinguish between future professional and academy players. The validation dataset model was able to distinguish future professionals from the rest with reasonable accuracy (sensitivity = 83.3%, specificity = 63.8%; p = 0.003). Through the use of SVD analysis it was possible to objectively identify criteria to distinguish future career attainment with a sensitivity over 80% using anthropometric and fitness data alone. As such, this suggests that SVD analysis may be a useful analysis tool for research and practice within talent identification. PMID:27224653

  7. Sensitivity of cognitive tests in four cognitive domains in discriminating MDD patients from healthy controls: a meta-analysis.

    PubMed

    Lim, JaeHyoung; Oh, In Kyung; Han, Changsu; Huh, Yu Jeong; Jung, In-Kwa; Patkar, Ashwin A; Steffens, David C; Jang, Bo-Hyoung

    2013-09-01

    We performed a meta-analysis in order to determine which neuropsychological domains and tasks would be most sensitive for discriminating between patients with major depressive disorder (MDD) and healthy controls. Relevant articles were identified through a literature search of the PubMed and Cochrane Library databases for the period between January 1997 and May 2011. A meta-analysis was conducted using the standardized means of individual cognitive tests in each domain. The heterogeneity was assessed, and subgroup analyses according to age and medication status were performed to explore the sources of heterogeneity. A total of 22 trials involving 955 MDD patients and 7,664 healthy participants were selected for our meta-analysis. MDD patients showed significantly impaired results compared with healthy participants on the Digit Span and Continuous Performance Test in the attention domain; the Trail Making Test A (TMT-A) and the Digit Symbol Test in the processing speed domain; the Stroop Test, the Wisconsin Card Sorting Test, and Verbal Fluency in the executive function domain; and immediate verbal memory in the memory domain. The Finger Tapping Task, TMT-B, delayed verbal memory, and immediate and delayed visual memory failed to separate MDD patients from healthy controls. The results of subgroup analysis showed that performance of Verbal Fluency was significantly impaired in younger depressed patients (<60 years), and immediate visual memory was significantly reduced in depressed patients using antidepressants. Our findings have inevitable limitations arising from methodological issues inherent in the meta-analysis and we could not explain high heterogeneity between studies. Despite such limitations, current study has the strength of being the first meta-analysis which tried to specify cognitive function of depressed patients compared with healthy participants. And our findings may provide clinicians with further evidences that some cognitive tests in specific cognitive domains have sensitivity to discriminate MDD patients from healthy controls.
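
    The abstract above pools standardized means across studies. The sketch below shows one common way such pooling is done (Hedges' g with a DerSimonian-Laird random-effects model); the per-study numbers are invented placeholders, not data from the meta-analysis.

      import numpy as np

      # Hypothetical per-study data: (mean_MDD, sd_MDD, n_MDD, mean_HC, sd_HC, n_HC)
      studies = [(42.0, 9.0, 40, 47.5, 8.5, 120),
                 (38.0, 10.0, 55, 44.0, 9.5, 200),
                 (40.5, 8.0, 30, 43.0, 8.0, 150)]

      g, v = [], []
      for m1, s1, n1, m2, s2, n2 in studies:
          sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))  # pooled SD
          d = (m1 - m2) / sp                                                   # Cohen's d
          J = 1 - 3 / (4 * (n1 + n2) - 9)                                      # small-sample correction
          g.append(J * d)
          v.append((n1 + n2) / (n1 * n2) + (J * d) ** 2 / (2 * (n1 + n2)))     # variance of g

      g, v = np.array(g), np.array(v)
      w = 1 / v
      Q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)                     # heterogeneity statistic
      tau2 = max(0.0, (Q - (len(g) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
      w_re = 1 / (v + tau2)                                                    # random-effects weights
      smd = np.sum(w_re * g) / np.sum(w_re)
      se = np.sqrt(1 / np.sum(w_re))
      print(f"pooled SMD = {smd:.2f}  95% CI [{smd - 1.96*se:.2f}, {smd + 1.96*se:.2f}]")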

  8. Design and characterization of planar capacitive imaging probe based on the measurement sensitivity distribution

    NASA Astrophysics Data System (ADS)

    Yin, X.; Chen, G.; Li, W.; Huthchins, D. A.

    2013-01-01

    Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole probe sensitivity distribution based on the finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained. Influencing factors on sensitivity distribution were studied. Comparisons between CI probes with point-to-point triangular electrode pair and back-to-back triangular electrode pair were made based on the analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting the imaging performance.

  9. Impact of solvent conditions on separation and detection of basic drugs by micro liquid chromatography-mass spectrometry under overloading conditions.

    PubMed

    Schubert, Birthe; Oberacher, Herbert

    2011-06-03

    In this study the impact of solvent conditions on the performance of μLC/MS for the analysis of basic drugs was investigated. Our aim was to find experimental conditions that enable high-performance chromatographic separation, particularly at overloading conditions, paired with a minimal loss of mass spectrometric detection sensitivity. A focus was put on evaluating the usability of different kinds of acidic modifiers (acetic acid (HOAc), formic acid (FA), methanesulfonic acid (CH₃SO₃H), trifluoroacetic acid (TFA), pentafluoropropionic acid (PFPA), and heptafluorobutyric acid (HFBA)). The test mixture consisted of eleven compounds (bunitrolol, caffeine, cocaine, codeine, diazepam, doxepin, haloperidol, 3,4-methylenedioxyamphetamine, morphine, nicotine, and zolpidem). Best chromatographic performance was obtained with the perfluorinated acids. In particular, 0.010-0.050% HFBA (v/v) was found to represent a good compromise in terms of chromatographic performance and mass spectrometric detection sensitivity. Compared to HOAc, on average a 50% reduction of the peak widths was observed. The use of HFBA was particularly advantageous for polar compounds such as nicotine; only with such a hydrophobic ion-pairing reagent was chromatographic retention of nicotine observed. Best mass spectrometric performance was obtained with HOAc and FA. Loss of detection sensitivity induced by HFBA, however, was moderate and ranged from 0 to 40%, which clearly demonstrates that improved chromatographic performance is able to compensate to a large extent for the negative effect of reduced ionization efficiency on detection sensitivity. Applications of μLC/MS for the qualitative and quantitative analysis of clinical and forensic toxicological samples are presented. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Stochastic sensitivity measure for mistuned high-performance turbines

    NASA Technical Reports Server (NTRS)

    Murthy, Durbha V.; Pierre, Christophe

    1992-01-01

    A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.

  11. Space transportation architecture: Reliability sensitivities

    NASA Technical Reports Server (NTRS)

    Williams, A. M.

    1992-01-01

    A sensitivity analysis is given of the benefits and drawbacks associated with a proposed Earth-to-orbit vehicle architecture. The architecture represents a fleet of six vehicles (two existing, four proposed) that would be responsible for performing various missions as mandated by NASA and the U.S. Air Force. Each vehicle has a prescribed flight rate per year for a period of 31 years. By exposing this fleet of vehicles to a probabilistic environment where the fleet experiences failures, downtimes, setbacks, etc., the analysis involves determining the resiliency and costs associated with the fleet for specific vehicle/subsystem reliabilities. The resources required were actual observed data on the failures and downtimes associated with existing vehicles, data based on engineering judgement for proposed vehicles, and the development of a sensitivity analysis program.
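
    A minimal sketch of the kind of probabilistic fleet simulation described above is shown below; the flight rates, reliabilities, and downtime penalty are illustrative placeholders rather than values from the study, and lost flights are estimated with a deliberately crude proxy.

      import numpy as np

      # Minimal Monte Carlo sketch of a launch-fleet reliability study; all numbers are
      # illustrative placeholders, not the study's vehicles or reliabilities.
      rng = np.random.default_rng(10)
      flights_per_year = {"vehicle_A": 8, "vehicle_B": 4}
      reliability = {"vehicle_A": 0.98, "vehicle_B": 0.95}
      downtime_after_failure_yr = 0.75                    # stand-down per failure (years)

      def simulate(years=31, trials=10_000):
          lost_flights = np.zeros(trials)
          for name, rate in flights_per_year.items():
              failures = rng.binomial(rate * years, 1 - reliability[name], size=trials)
              # crude proxy: each failure grounds the vehicle for its stand-down period
              lost_flights += failures * downtime_after_failure_yr * rate
          return lost_flights

      losses = simulate()
      print(f"mean flights lost over 31 yr: {losses.mean():.1f} "
            f"(95th percentile {np.percentile(losses, 95):.1f})")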

  12. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
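
    To make the quantities concrete, the sketch below pools sensitivity and specificity on the logit scale from hypothetical 2x2 study counts. This is only a naive univariate illustration; the bivariate and HSROC models recommended in the abstract jointly model sensitivity and specificity with between-study random effects and require specialised software.

      import numpy as np

      # Hypothetical 2x2 counts per study: (TP, FN, FP, TN)
      studies = [(45, 10, 12, 80), (60, 15, 20, 150), (30, 5, 8, 60)]

      def pool_logit(events, totals):
          """Inverse-variance pooling on the logit scale (no threshold/bivariate modelling)."""
          events, totals = np.asarray(events, float), np.asarray(totals, float)
          p = (events + 0.5) / (totals + 1.0)                      # continuity correction
          logit = np.log(p / (1 - p))
          var = 1 / (events + 0.5) + 1 / (totals - events + 0.5)
          w = 1 / var
          pooled = np.sum(w * logit) / np.sum(w)
          return 1 / (1 + np.exp(-pooled))

      tp, fn, fp, tn = map(np.array, zip(*studies))
      print("summary sensitivity:", pool_logit(tp, tp + fn))
      print("summary specificity:", pool_logit(tn, tn + fp))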

  13. RuO2 pH Sensor with Super-Glue-Inspired Reference Electrode

    PubMed Central

    Wajrak, Magdalena; Alameh, Kamal

    2017-01-01

    A pH-sensitive RuO2 electrode coated in a commercial cyanoacrylate adhesive typically exhibits very low pH sensitivity, and could be paired with a RuO2 working electrode as a differential type pH sensor. However, such sensors display poor performance in real sample matrices. A pH sensor employing a RuO2 pH-sensitive working electrode and a SiO2-PVB junction-modified RuO2 reference electrode is developed as an alternative high-performance solution. This sensor exhibits a performance similar to that of a commercial glass pH sensor in some common sample matrices, particularly, an excellent pH sensitivity of 55.7 mV/pH, a hysteresis as low as 2.7 mV, and a drift below 2.2 mV/h. The developed sensor structure opens the way towards the development of a simple, cost effective, and robust pH sensor for pH analysis in various sample matrices. PMID:28878182

  14. RuO₂ pH Sensor with Super-Glue-Inspired Reference Electrode.

    PubMed

    Lonsdale, Wade; Wajrak, Magdalena; Alameh, Kamal

    2017-09-06

    A pH-sensitive RuO₂ electrode coated in a commercial cyanoacrylate adhesive typically exhibits very low pH sensitivity, and could be paired with a RuO₂ working electrode as a differential type pH sensor. However, such sensors display poor performance in real sample matrices. A pH sensor employing a RuO₂ pH-sensitive working electrode and a SiO₂-PVB junction-modified RuO₂ reference electrode is developed as an alternative high-performance solution. This sensor exhibits a performance similar to that of a commercial glass pH sensor in some common sample matrices, particularly, an excellent pH sensitivity of 55.7 mV/pH, a hysteresis as low as 2.7 mV, and a drift below 2.2 mV/h. The developed sensor structure opens the way towards the development of a simple, cost effective, and robust pH sensor for pH analysis in various sample matrices.
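
    As a small worked example of how a sensor slope such as the reported 55.7 mV/pH is used in practice, the following sketch converts a measured cell potential to pH via a single-point calibration; the potentials are assumed values for illustration, not measurements from the paper.

      # Single-point calibration of a differential pH electrode pair, assuming the
      # reported Nernstian-like slope of 55.7 mV/pH (measured potentials are illustrative).
      slope_mv_per_ph = -55.7          # potential falls as pH rises
      e_ph7 = 0.0                      # cell potential in pH 7 buffer (mV), assumed
      e_sample = -100.3                # cell potential in the sample (mV), assumed

      ph = 7.0 + (e_sample - e_ph7) / slope_mv_per_ph
      print(f"estimated sample pH = {ph:.2f}")   # about 8.8 for these illustrative numbers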

  15. Revisiting inconsistency in large pharmacogenomic studies

    PubMed Central

    Safikhani, Zhaleh; Smirnov, Petr; Freeman, Mark; El-Hachem, Nehme; She, Adrian; Rene, Quevedo; Goldenberg, Anna; Birkbak, Nicolai J.; Hatzis, Christos; Shi, Leming; Beck, Andrew H.; Aerts, Hugo J.W.L.; Quackenbush, John; Haibe-Kains, Benjamin

    2017-01-01

    In 2013, we published a comparative analysis of mutation and gene expression profiles and drug sensitivity measurements for 15 drugs characterized in the 471 cancer cell lines screened in the Genomics of Drug Sensitivity in Cancer (GDSC) and Cancer Cell Line Encyclopedia (CCLE). While we found good concordance in gene expression profiles, there was substantial inconsistency in the drug responses reported by the GDSC and CCLE projects. We received extensive feedback on the comparisons that we performed. This feedback, along with the release of new data, prompted us to revisit our initial analysis. We present a new analysis using these expanded data, where we address the most significant suggestions for improvements on our published analysis — that targeted therapies and broad cytotoxic drugs should have been treated differently in assessing consistency, that consistency of both molecular profiles and drug sensitivity measurements should be compared across cell lines, and that the software analysis tools provided should have been easier to run, particularly as the GDSC and CCLE released additional data. Our re-analysis supports our previous finding that gene expression data are significantly more consistent than drug sensitivity measurements. Using new statistics to assess data consistency allowed identification of two broad effect drugs and three targeted drugs with moderate to good consistency in drug sensitivity data between GDSC and CCLE. For three other targeted drugs, there were not enough sensitive cell lines to assess the consistency of the pharmacological profiles. We found evidence of inconsistencies in pharmacological phenotypes for the remaining eight drugs. Overall, our findings suggest that the drug sensitivity data in GDSC and CCLE continue to present challenges for robust biomarker discovery. This re-analysis provides additional support for the argument that experimental standardization and validation of pharmacogenomic response will be necessary to advance the broad use of large pharmacogenomic screens. PMID:28928933
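
    A basic consistency check of the kind discussed above can be illustrated with rank correlation between the drug-response summaries reported by two studies for the same cell lines. The sketch below uses synthetic data and only shows the idea; the published re-analysis uses additional, more refined statistics.

      import numpy as np
      from scipy.stats import spearmanr

      # Hypothetical drug-response summaries (e.g., AUC) for the same cell lines in two studies
      rng = np.random.default_rng(1)
      auc_study1 = rng.uniform(0, 1, size=200)
      auc_study2 = 0.6 * auc_study1 + 0.4 * rng.uniform(0, 1, size=200)   # partially consistent

      rho, p = spearmanr(auc_study1, auc_study2)
      print(f"Spearman rho = {rho:.2f} (p = {p:.1e})")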

  16. 18F-fluorodeoxyglucose positron emission tomography in the diagnosis of malignancy in patients with paraneoplastic neurological syndrome: a systematic review and meta-analysis.

    PubMed

    García Vicente, Ana María; Delgado-Bolton, Roberto C; Amo-Salas, Mariano; López-Fidalgo, Jesús; Caresia Aróztegui, Ana Paula; García Garzón, José Ramón; Orcajo Rincón, Javier; García Velloso, María José; de Arcocha Torres, María; Alvárez Ruíz, Soledad

    2017-08-01

    The detection of occult cancer in patients suspected of having a paraneoplastic neurological syndrome (PNS) poses a diagnostic challenge. The aim of our study was to perform a systematic review and meta-analysis to assess the diagnostic performance of FDG PET for the detection of occult malignant disease responsible for PNS. A systematic review of the literature (MEDLINE, EMBASE, Cochrane, and DARE) was undertaken to identify studies published in any language. The search strategy was structured after addressing clinical questions regarding the validity or usefulness of the test, following the PICO framework. Inclusion criteria were studies involving patients with PNS in whom FDG PET was performed to detect malignancy, and which reported sufficient primary data to allow calculation of diagnostic accuracy parameters. When possible, a meta-analysis was performed to calculate the joint sensitivity, specificity, and detection rate for malignancy (with 95% confidence intervals [CIs]), as well as a subgroup analysis based on patient characteristics (antibodies, syndrome). The comprehensive literature search revealed 700 references. Sixteen studies met the inclusion criteria and were ultimately selected. Most of the studies were retrospective (12/16). For the quality assessment, the QUADAS-2 tool was applied to assess the risk of bias. Across 16 studies (793 patients), the joint sensitivity, specificity, and detection rate for malignancy with FDG PET were 0.87 (95% CI: 0.80-0.93), 0.86 (95% CI: 0.83-0.89), and 14.9% (95% CI: 11.5-18.7), respectively. The area under the curve (AUC) of the summary ROC curve was 0.917. Homogeneity of results was observed for sensitivity but not for specificity. Some of the individual studies showed large 95% CIs as a result of small sample size. The results of our meta-analysis reveal high diagnostic performance of FDG PET in the detection of malignancy responsible for PNS, not affected by the presence of onconeural antibodies or clinical characteristics.

  17. Performance and sensitivity analysis of the generalized likelihood ratio method for failure detection. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bueno, R. A.

    1977-01-01

    Results of the generalized likelihood ratio (GLR) technique for the detection of failures in aircraft application are presented, and its relationship to the properties of the Kalman-Bucy filter is examined. Under the assumption that the system is perfectly modeled, the detectability and distinguishability of four failure types are investigated by means of analysis and simulations. Detection of failures is found satisfactory, but problems in identifying correctly the mode of a failure may arise. These issues are closely examined as well as the sensitivity of GLR to modeling errors. The advantages and disadvantages of this technique are discussed, and various modifications are suggested to reduce its limitations in performance and computational complexity.
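
    The following sketch shows the core of a GLR test for one simple failure hypothesis, namely a constant bias appearing in otherwise zero-mean Gaussian Kalman-filter innovations at an unknown onset time. It is a minimal illustration under those stated assumptions, not the thesis' full formulation, which covers several failure types and signatures.

      import numpy as np

      def glr_bias(innov, var):
          """GLR statistic for a constant bias appearing at an unknown onset time k.

          For zero-mean Gaussian innovations with variance `var`, the log-likelihood
          ratio maximised over the bias magnitude is S(k)^2 / (2 * var * n_k),
          where S(k) is the sum of innovations from k onwards and n_k their count.
          """
          innov = np.asarray(innov, dtype=float)
          N = len(innov)
          stats = []
          for k in range(N):
              s = innov[k:].sum()
              n = N - k
              stats.append(s * s / (2.0 * var * n))
          k_hat = int(np.argmax(stats))
          return k_hat, stats[k_hat]

      rng = np.random.default_rng(2)
      nu = rng.normal(0, 1, 300)
      nu[200:] += 0.8                      # simulated sensor bias (failure) at sample 200
      print(glr_bias(nu, var=1.0))         # flags onset near 200 if the statistic exceeds a threshold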

  18. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.

  19. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing the N, N(+), O, and O(+) number densities in the flow field.
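
    The sketch below illustrates the basic mechanics of a least-squares polynomial chaos expansion and how first-order Sobol'-type indices fall out of its coefficients, using a cheap two-input toy function in place of the radiation simulation; the sparse-collocation construction and the 388-parameter problem of the study are not reproduced.

      import numpy as np
      from math import factorial
      from numpy.polynomial.hermite_e import hermevander

      # Toy least-squares polynomial chaos sketch for two standard-normal inputs; the
      # "model" below is purely illustrative and stands in for an expensive simulation.
      rng = np.random.default_rng(9)
      xi = rng.normal(size=(4000, 2))
      y = 1.0 + 0.8 * xi[:, 0] + 0.3 * xi[:, 1] ** 2 + 0.05 * xi[:, 0] * xi[:, 1]

      deg = 3
      V0, V1 = hermevander(xi[:, 0], deg), hermevander(xi[:, 1], deg)   # probabilists' Hermite
      Phi = np.einsum("ni,nj->nij", V0, V1).reshape(len(xi), -1)        # tensor-product basis
      coef = np.linalg.lstsq(Phi, y, rcond=None)[0].reshape(deg + 1, deg + 1)

      norm2 = np.array([[factorial(i) * factorial(j) for j in range(deg + 1)]
                        for i in range(deg + 1)])                       # basis norms under N(0,1)
      var_terms = coef**2 * norm2
      var_terms[0, 0] = 0.0                          # drop the mean term
      total_var = var_terms.sum()
      S1 = var_terms[1:, 0].sum() / total_var        # variance due to input 1 alone
      S2 = var_terms[0, 1:].sum() / total_var        # variance due to input 2 alone
      print(f"S1 = {S1:.2f}, S2 = {S2:.2f}, interactions = {1 - S1 - S2:.2f}")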

  20. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.

    2016-12-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.
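
    For readers unfamiliar with Sobol' analysis, the following sketch implements a standard pick-freeze estimator of first-order indices on a cheap toy function (the Ishigami function) standing in for a land-surface model output; the parameter bounds and sample size are placeholders.

      import numpy as np

      def first_order_sobol(model, bounds, n=4096, seed=0):
          """First-order Sobol' indices via a Saltelli-style pick-freeze estimator."""
          rng = np.random.default_rng(seed)
          bounds = np.asarray(bounds, dtype=float)
          d = len(bounds)
          lo, hi = bounds[:, 0], bounds[:, 1]
          A = lo + (hi - lo) * rng.random((n, d))
          B = lo + (hi - lo) * rng.random((n, d))
          yA, yB = model(A), model(B)
          var = np.var(np.concatenate([yA, yB]))
          S1 = np.empty(d)
          for i in range(d):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                       # "freeze" all inputs except the i-th
              S1[i] = np.mean(yB * (model(ABi) - yA)) / var
          return S1

      # Toy stand-in for a model output (e.g., a runoff flux) with three parameters
      ishigami = lambda X: (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1])**2
                            + 0.1 * X[:, 2]**4 * np.sin(X[:, 0]))
      print(first_order_sobol(ishigami, bounds=[(-np.pi, np.pi)] * 3))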

  1. Diagnosis of human malignancies using laser-induced breakdown spectroscopy in combination with chemometric methods

    NASA Astrophysics Data System (ADS)

    Chen, Xue; Li, Xiaohui; Yu, Xin; Chen, Deying; Liu, Aichun

    2018-01-01

    Diagnosis of malignancies is a challenging clinical issue. In this work, we present quick and robust diagnosis and discrimination of lymphoma and multiple myeloma (MM) using laser-induced breakdown spectroscopy (LIBS) conducted on human serum samples, in combination with chemometric methods. The serum samples collected from lymphoma and MM cancer patients and healthy controls were deposited on filter papers and ablated with a pulsed 1064 nm Nd:YAG laser. 24 atomic lines of Ca, Na, K, H, O, and N were selected for malignancy diagnosis. Principal component analysis (PCA), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and k nearest neighbors (kNN) classification were applied to build the malignancy diagnosis and discrimination models. The performances of the models were evaluated using 10-fold cross validation. The discrimination accuracy, confusion matrix and receiver operating characteristic (ROC) curves were obtained. The values of area under the ROC curve (AUC), sensitivity and specificity at the cut-points were determined. The kNN model exhibits the best performances with overall discrimination accuracy of 96.0%. Distinct discrimination between malignancies and healthy controls has been achieved with AUC, sensitivity and specificity for healthy controls all approaching 1. For lymphoma, the best discrimination performance values are AUC = 0.990, sensitivity = 0.970 and specificity = 0.956. For MM, the corresponding values are AUC = 0.986, sensitivity = 0.892 and specificity = 0.994. The results show that the serum-LIBS technique can serve as a quick, less invasive and robust method for diagnosis and discrimination of human malignancies.
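
    A simplified, two-class version of the classification workflow described above (dimension reduction followed by kNN with 10-fold cross-validation and ROC evaluation) can be sketched with scikit-learn as below; the spectra are random placeholders and the three-class lymphoma/MM/control problem is collapsed to a binary one for brevity.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.model_selection import cross_val_predict, StratifiedKFold
      from sklearn.metrics import roc_auc_score, confusion_matrix

      # Hypothetical stand-in: rows = serum spectra reduced to 24 selected line intensities
      rng = np.random.default_rng(3)
      X = rng.normal(size=(120, 24))
      y = rng.integers(0, 2, size=120)          # 1 = malignancy, 0 = healthy control (placeholder)

      clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                          KNeighborsClassifier(n_neighbors=5))
      cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
      proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
      pred = (proba >= 0.5).astype(int)

      tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
      print("AUC:", roc_auc_score(y, proba),
            "sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))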

  2. Thiopurine S-methyltransferase testing for averting drug toxicity: a meta-analysis of diagnostic test accuracy

    PubMed Central

    Zur, RM; Roy, LM; Ito, S; Beyene, J; Carew, C; Ungar, WJ

    2016-01-01

    Thiopurine S-methyltransferase (TPMT) deficiency increases the risk of serious adverse events in persons receiving thiopurines. The objective was to synthesize reported sensitivity and specificity of TPMT phenotyping and genotyping using a latent class hierarchical summary receiver operating characteristic meta-analysis. In 27 studies, pooled sensitivity and specificity of phenotyping for deficient individuals was 75.9% (95% credible interval (CrI), 58.3–87.0%) and 98.9% (96.3–100%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 90.4% (79.1–99.4%) and 100.0% (99.9–100%), respectively. For individuals with deficient or intermediate activity, phenotype sensitivity and specificity was 91.3% (86.4–95.5%) and 92.6% (86.5–96.6%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 88.9% (81.6–97.5%) and 99.2% (98.4–99.9%), respectively. Genotyping has higher sensitivity as long as TPMT*2 and TPMT*3 are tested. Both approaches display high specificity. Latent class meta-analysis is a useful method for synthesizing diagnostic test performance data for clinical practice guidelines. PMID:27217052

  3. The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis

    PubMed Central

    Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.

    2016-01-01

    Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580

  4. Timing of prophylactic surgery in prevention of diverticulitis recurrence: a cost-effectiveness analysis.

    PubMed

    Richards, Robert J; Hammitt, James K

    2002-09-01

    Although surgery is recommended after two or more attacks of uncomplicated diverticulitis, the optimal timing for surgery in terms of cost-effectiveness is unknown. A Markov model was used to compare the costs and outcomes of performing surgery after one, two, or three uncomplicated attacks in 60-year-old hypothetical cohorts. Transition state probabilities were assigned values using published data and expert opinion. Costs were estimated from Medicare reimbursement rates. Surgery after the third attack is cost saving, yielding more years of life and quality adjusted life years at a lower cost than the other two strategies. The results were not sensitive to many of the variables tested in the model or to changes made in the discount rate (0-5%). In conclusion, performing prophylactic resection after the third attack of diverticulitis is cost saving in comparison to resection performed after the first or second attacks and remains cost-effective during sensitivity analysis.
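
    The sketch below shows the skeleton of a discounted Markov cohort model of the kind used in the study; the states, transition probabilities, costs, and utilities are illustrative placeholders, not the published inputs, and the three surgery-timing strategies would be compared by re-running the cohort under each strategy's transition matrix.

      import numpy as np

      # Minimal Markov cohort sketch with three health states; all numbers are illustrative.
      states = ["well", "recurrent_diverticulitis", "post_surgery"]
      P = np.array([[0.97, 0.03, 0.00],            # annual transition probabilities
                    [0.00, 0.00, 1.00],            # an attack is assumed to trigger surgery here
                    [0.00, 0.00, 1.00]])
      cost = np.array([100.0, 12000.0, 300.0])     # annual cost per state (USD, illustrative)
      utility = np.array([1.00, 0.70, 0.95])       # quality weights per state
      disc = 0.03                                  # annual discount rate

      x = np.array([1.0, 0.0, 0.0])                # cohort starts in the "well" state
      total_cost = total_qaly = 0.0
      for year in range(30):
          total_cost += np.dot(x, cost) / (1 + disc) ** year
          total_qaly += np.dot(x, utility) / (1 + disc) ** year
          x = x @ P                                # advance the cohort one cycle
      print(f"discounted cost = {total_cost:.0f} USD, QALYs = {total_qaly:.2f}")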

  5. Amplitude analysis of four-body decays using a massively-parallel fitting framework

    NASA Astrophysics Data System (ADS)

    Hasse, C.; Albrecht, J.; Alves, A. A., Jr.; d'Argent, P.; Evans, T. D.; Rademacker, J.; Sokoloff, M. D.

    2017-10-01

    The GooFit Framework is designed to perform maximum-likelihood fits for arbitrary functions on various parallel back ends, for example a GPU. We present an extension to GooFit which adds the functionality to perform time-dependent amplitude analyses of pseudoscalar mesons decaying into four pseudoscalar final states. Benchmarks of this functionality show a significant performance increase when utilizing a GPU compared to a CPU. Furthermore, this extension is employed to study the sensitivity to the D0-D̄0 mixing parameters x and y in a time-dependent amplitude analysis of the decay D0 → K+π-π+π-. Studying a sample of 50 000 events and setting the central values to the world average of x = (0.49 ± 0.15)% and y = (0.61 ± 0.08)%, the statistical sensitivities of x and y are determined to be σ(x) = 0.019 % and σ(y) = 0.019 %.

  6. Intracellular flow cytometry may be combined with good quality and high sensitivity RT-qPCR analysis.

    PubMed

    Sandstedt, Mikael; Jonsson, Marianne; Asp, Julia; Dellgren, Göran; Lindahl, Anders; Jeppsson, Anders; Sandstedt, Joakim

    2015-12-01

    Flow cytometry (FCM) has become a well-established method for analysis of both intracellular and cell-surface proteins, while quantitative RT-PCR (RT-qPCR) is used to determine gene expression with high sensitivity and specificity. Combining these two methods would be of great value. The effects of intracellular staining on RNA integrity and RT-qPCR sensitivity and quality have not, however, been fully examined. We, therefore, intended to assess these effects further. Cells from the human lung cancer cell line A549 were fixed, permeabilized and sorted by FCM. Sorted cells were analyzed using RT-qPCR. RNA integrity was determined by RNA quality indicator analysis. A549 cells were then mixed with cells of the mouse cardiomyocyte cell line HL-1. A549 cells were identified by the cell surface marker ABCG2, while HL-1 cells were identified by intracellular cTnT. Cells were sorted and analyzed by RT-qPCR. Finally, cell cultures from human atrial biopsies were used to evaluate the effects of fixation and permeabilization on RT-qPCR analysis of nonimmortalized cells stored prior to analysis by FCM. A large amount of RNA could be extracted even when cells had been fixed and permeabilized. Permeabilization resulted in increased RNA degradation and a moderate decrease in RT-qPCR sensitivity. Gene expression levels were also affected to a moderate extent. Sorted populations from the mixed A549 and HL-1 cell samples showed gene expression patterns that corresponded to FCM data. When samples were stored before FCM sorting, the RT-qPCR analysis could still be performed with high sensitivity and quality. In summary, our results show that intracellular FCM may be performed with only minor impairment of the RT-qPCR sensitivity and quality when analyzing sorted cells; however, these effects should be considered when comparing RT-qPCR data of not fixed samples with those of fixed and permeabilized samples. © 2015 International Society for Advancement of Cytometry.

  7. Analysis of thermal performance of penetrated multi-layer insulation

    NASA Technical Reports Server (NTRS)

    Foster, Winfred A., Jr.; Jenkins, Rhonald M.; Yoo, Chai H.; Barrett, William E.

    1988-01-01

    Results of research performed to study the sensitivity of multi-layer insulation blanket performance to penetrations through the blanket are presented. The work described in this paper presents the experimental data obtained from thermal vacuum tests of various penetration geometries similar to those present on the Hubble Space Telescope. The data obtained from these tests are presented in terms of sensitivity factors for the required electrical power, referenced to a multi-layer blanket without a penetration. The results of these experiments indicate that a significant increase in electrical power is required to overcome the radiation heat losses in the vicinity of the penetrations.

  8. Biomarkers of gluten sensitivity in patients with non-affective psychosis: a meta-analysis.

    PubMed

    Lachance, Laura R; McKenzie, Kwame

    2014-02-01

    Dohan first proposed that there may be an association between gluten sensitivity and schizophrenia in the 1950s. Since then, this association has been measured using several different serum biomarkers of gluten sensitivity. At this point, it is unclear which serum biomarkers of gluten sensitivity are elevated in patients with schizophrenia. However, evidence suggests that the immune response in this group is different from the immune response to gluten found in patients with Celiac disease. A systematic literature review was performed to identify all original articles that measured biomarkers of gluten sensitivity in patients with schizophrenia and non-affective psychoses compared to a control group. Three databases were used: Ovid MEDLINE, Psych INFO, and Embase, dating back to 1946. Forward tracking and backward tracking were undertaken on retrieved papers. A meta-analysis was performed of specific biomarkers and reported according to MOOSE guidelines. 17 relevant original articles were identified, and 12 met criteria for the meta-analysis. Five biomarkers of gluten sensitivity were found to be significantly elevated in patients with non-affective psychoses compared to controls. The pooled odds ratio and 95% confidence intervals were Anti-Gliadin IgG OR=2.31 [1.16, 4.58], Anti-Gliadin IgA OR=2.57 [1.13, 5.82], Anti-TTG2 IgA OR=5.86 [2.88, 11.95], Anti-Gliadin (unspecified isotype) OR=7.68 [2.07, 28.42], and Anti-Wheat OR=2.74 [1.06, 7.08]. Four biomarkers for gluten sensitivity, Anti-EMA IgA, Anti-TTG2 IgG, Anti-DGP IgG, and Anti-Gluten were not found to be associated with schizophrenia. Not all serum biomarkers of gluten sensitivity are elevated in patients with schizophrenia. However, the specific immune response to gluten in this population differs from that found in patients with Celiac disease. Copyright © 2013 Elsevier B.V. All rights reserved.
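
    As a minimal illustration of how pooled odds ratios such as those above are computed, the sketch below applies inverse-variance pooling of log odds ratios from hypothetical 2x2 tables (biomarker positive/negative in cases and controls); a random-effects variant would add a between-study variance term as in the standardized-mean-difference example earlier in this listing.

      import numpy as np

      # Hypothetical 2x2 tables: (cases positive, cases negative, controls positive, controls negative)
      tables = [(25, 75, 10, 90), (40, 160, 22, 178), (15, 45, 8, 52)]

      log_or, var = [], []
      for a, b, c, d in tables:
          log_or.append(np.log((a * d) / (b * c)))
          var.append(1 / a + 1 / b + 1 / c + 1 / d)        # Woolf variance of the log-OR

      w = 1 / np.array(var)
      pooled = np.sum(w * np.array(log_or)) / np.sum(w)    # fixed-effect inverse-variance pooling
      se = np.sqrt(1 / np.sum(w))
      print(f"pooled OR = {np.exp(pooled):.2f} "
            f"[{np.exp(pooled - 1.96 * se):.2f}, {np.exp(pooled + 1.96 * se):.2f}]")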

  9. Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models

    USGS Publications Warehouse

    Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.

    2014-01-01

    This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
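
    A minimal sketch of a DELSA-style local first-order index at a single parameter set is given below, using finite differences and a toy response function; repeating the calculation over many sampled parameter sets yields the distribution of sensitivity across parameter space that the method is built around. The prior variances and the toy model are assumptions for illustration only.

      import numpy as np

      def delsa_first_order(model, theta, prior_var, eps=1e-4):
          """Local first-order indices at one parameter set (finite differences).

          S_j = (dpsi/dtheta_j)^2 * var_j / sum_k (dpsi/dtheta_k)^2 * var_k,
          evaluated at `theta`; evaluating this over many sampled parameter sets
          gives the distribution of sensitivity across the parameter space.
          """
          theta = np.asarray(theta, dtype=float)
          base = model(theta)
          grad = np.empty_like(theta)
          for j in range(len(theta)):
              step = np.zeros_like(theta)
              step[j] = eps * max(1.0, abs(theta[j]))
              grad[j] = (model(theta + step) - base) / step[j]
          contrib = grad**2 * np.asarray(prior_var)
          return contrib / contrib.sum()

      # Toy stand-in for a hydrologic model response (e.g., an error metric)
      f = lambda p: p[0] ** 2 + 3.0 * p[1] + 0.1 * np.sin(p[2])
      print(delsa_first_order(f, theta=[1.0, 0.5, 2.0], prior_var=[0.04, 0.25, 1.0]))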

  10. Diagnostic accuracy of magnetic resonance imaging techniques for treatment response evaluation in patients with high-grade glioma, a systematic review and meta-analysis.

    PubMed

    van Dijken, Bart R J; van Laar, Peter Jan; Holtman, Gea A; van der Hoorn, Anouk

    2017-10-01

    Treatment response assessment in high-grade gliomas uses contrast enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but the accuracy is not well known. Therefore, we performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were done by two authors independently. Meta-analysis was performed using a bivariate random effects model when ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95%CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC-perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE-perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, the highest for spectroscopy, supporting the use in treatment response assessment in high-grade gliomas. • Treatment response assessment in high-grade gliomas with anatomical MRI is unreliable • Novel advanced MRI techniques have been studied, but diagnostic accuracy is unknown • Meta-analysis demonstrates that advanced MRI showed higher diagnostic accuracy than anatomical MRI • Highest diagnostic accuracy for spectroscopy and perfusion MRI • Supports the incorporation of advanced MRI in high-grade glioma treatment response assessment.

  11. Study of water based nanofluid flows in annular tubes using numerical simulation and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Siadaty, Moein; Kazazi, Mohsen

    2018-04-01

    Convective heat transfer, entropy generation and pressure drop of two water based nanofluids (Cu-water and Al2O3-water) in horizontal annular tubes are scrutinized by means of computational fluid dynamics, response surface methodology and sensitivity analysis. First, central composite design is used to perform a series of experiments with diameter ratio, length to diameter ratio, Reynolds number and solid volume fraction. Then, CFD is used to calculate the Nusselt number, Euler number and entropy generation. After that, RSM is applied to fit second order polynomials on the responses. Finally, sensitivity analysis is conducted to manage the above mentioned parameters inside the tube. In total, 62 different cases are examined. CFD results show that Cu-water and Al2O3-water have the highest and lowest heat transfer rates, respectively. In addition, analysis of variance indicates that an increase in solid volume fraction increases the dimensionless pressure drop for Al2O3-water. Moreover, it has a significant negative effect on the Cu-water Nusselt number and an insignificant effect on the Cu-water Euler number. Analysis of the Bejan number indicates that frictional and thermal entropy generation are the dominant irreversibilities in the Al2O3-water and Cu-water flows, respectively. Sensitivity analysis indicates that the sensitivity of the dimensionless pressure drop to tube length for Cu-water is independent of its diameter ratio at different Reynolds numbers.
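
    The response-surface step described above can be sketched as fitting a second-order polynomial to the design points and then reading local sensitivities off the fitted surface; in the code below the design ranges, the synthetic Nusselt-number response, and the evaluation point are all placeholders, not values from the study.

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression

      # Hypothetical design matrix: columns = diameter ratio, L/D, Reynolds number, volume fraction
      rng = np.random.default_rng(4)
      X = rng.uniform([0.3, 10, 500, 0.0], [0.7, 40, 2000, 0.04], size=(62, 4))
      y = 5 + 8 * X[:, 3] + 0.002 * X[:, 2] - 3 * X[:, 0] ** 2 + rng.normal(0, 0.1, 62)  # synthetic Nu

      poly = PolynomialFeatures(degree=2, include_bias=False)
      model = LinearRegression().fit(poly.fit_transform(X), y)

      # Sensitivity of the fitted surface: central-difference slope at the centre of the design
      x0 = X.mean(axis=0)
      for j, name in enumerate(["diameter ratio", "L/D", "Re", "phi"]):
          h = 0.01 * (X[:, j].max() - X[:, j].min())
          xp, xm = x0.copy(), x0.copy()
          xp[j] += h
          xm[j] -= h
          slope = (model.predict(poly.transform([xp])) - model.predict(poly.transform([xm]))) / (2 * h)
          print(f"d(Nu)/d({name}) = {slope[0]:.4g}")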

  12. Human salivary glucose analysis by high-performance ion-exchange chromatography and pulsed amperometric detection.

    PubMed

    Gough, H; Luke, G A; Beeley, J A; Geddes, D A

    1996-02-01

    The aim of this project was to develop an analytical procedure with the required level of sensitivity for the determination of glucose concentrations in small volumes of unstimulated fasting whole saliva. The technique involves high-performance ion-exchange chromatography at high pH and pulsed amperometric detection. It has a high level of reproducibility, a sensitivity as low as 0.1 μmol/l, and requires only 50 μl samples (sensitivity = 0.002 pmol). Inhibition of glucose metabolism, by procedures such as collection into 0.1% (w/v) sodium fluoride, was shown to be essential if accurate results are to be obtained. Collection onto ice followed by storage at -20 degrees C was shown to be unsuitable and resulted in glucose loss by degradation. There were inter- and intraindividual variations in the glucose concentration in unstimulated mixed saliva (range: 0.02-0.4 mmol/l). The procedure can be used for the analysis of other salivary carbohydrates and for monitoring the clearance of dietary carbohydrates from the mouth.

  13. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance.

    PubMed

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance

    NASA Astrophysics Data System (ADS)

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J.

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR.

  15. Photochemical modeling and analysis of meteorological parameters during ozone episodes in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min

    Episodes of high ozone concentrations and the associated meteorological conditions over the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (or 72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. The sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions below 30-40% in VOC and 30-50% in NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperature, sufficient sunlight, low wind, and high surface pressure are distinct parameters that tend to trigger ozone episodes in polluted urban areas such as Kaohsiung.

  16. Comparison of loop-mediated isothermal amplification assay and smear microscopy with culture for the diagnostic accuracy of tuberculosis.

    PubMed

    Gelaw, Baye; Shiferaw, Yitayal; Alemayehu, Marta; Bashaw, Abate Assefa

    2017-01-17

    Tuberculosis (TB) caused by Mycobacterium tuberculosis is one of the leading causes of death from infectious diseases worldwide. Sputum smear microscopy remains the most widely available pulmonary TB diagnostic tool, particularly in resource limited settings. A highly sensitive diagnostic requiring minimal infrastructure, cost and training is needed. Hence, we assessed the diagnostic performance of the loop-mediated isothermal amplification (LAMP) assay in detecting M. tuberculosis infection in sputum samples compared to LED fluorescent smear microscopy and culture. A cross-sectional study was conducted at the University of Gondar Hospital from June 01, 2015 to August 30, 2015. Pulmonary TB diagnosis was performed using sputum LED fluorescence smear microscopy, the TB-LAMP assay, and culture. A descriptive analysis was used to determine demographic characteristics of the study participants. Analysis of sensitivity and specificity for smear microscopy and TB-LAMP, with culture as the reference test, was performed. Cohen's kappa was calculated as a measure of agreement between the tests. A total of 78 sputum samples from presumptive pulmonary TB patients were analyzed. The overall sensitivity and specificity of LAMP were 75 and 98%, respectively. Among smear negative sputum samples, LAMP showed 33.3% sensitivity and 100% specificity. Smear microscopy showed 78.6% sensitivity and 98% specificity. LAMP and smear in series had a sensitivity of 67.8% and a specificity of 100%. LAMP and smear in parallel had a sensitivity of 85.7% and a specificity of 96%. The agreement between the LAMP and fluorescent smear microscopy tests was very good (κ = 0.83, P-value ≤0.0001). TB-LAMP showed similar specificity but slightly lower sensitivity compared with LED fluorescence microscopy. The specificity of LAMP and smear microscopy in series was high. The sensitivity of LAMP was insufficient for smear negative sputum samples.
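
    The series and parallel combinations reported above can be illustrated with the textbook formulas below, which assume the two tests err independently given disease status; the combined values observed in the study differ somewhat from these because LAMP and smear results are in fact correlated. The individual sensitivities and specificities are taken from the abstract.

      # Combining two diagnostic tests under a conditional-independence assumption
      se_lamp, sp_lamp = 0.75, 0.98
      se_smear, sp_smear = 0.786, 0.98

      # Series ("both must be positive"): sensitivity falls, specificity rises
      se_series = se_lamp * se_smear
      sp_series = 1 - (1 - sp_lamp) * (1 - sp_smear)

      # Parallel ("either positive counts"): sensitivity rises, specificity falls
      se_parallel = 1 - (1 - se_lamp) * (1 - se_smear)
      sp_parallel = sp_lamp * sp_smear

      print(f"series:   Se={se_series:.2f}, Sp={sp_series:.3f}")
      print(f"parallel: Se={se_parallel:.2f}, Sp={sp_parallel:.3f}")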

  17. Evaluation of Aspergillus PCR protocols for testing serum specimens.

    PubMed

    White, P Lewis; Mengoli, Carlo; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Finnstrom, Niklas; Klingspor, Lena; Melchers, Willem J G; McCulloch, Elaine; Barnes, Rosemary A; Donnelly, J Peter; Loeffler, Juergen

    2011-11-01

    A panel of human serum samples spiked with various amounts of Aspergillus fumigatus genomic DNA was distributed to 23 centers within the European Aspergillus PCR Initiative to determine analytical performance of PCR. Information regarding specific methodological components and PCR performance was requested. The information provided was made anonymous, and meta-regression analysis was performed to determine any procedural factors that significantly altered PCR performance. Ninety-seven percent of protocols were able to detect a threshold of 10 genomes/ml on at least one occasion, with 83% of protocols reproducibly detecting this concentration. Sensitivity and specificity were 86.1% and 93.6%, respectively. Positive associations between sensitivity and the use of larger sample volumes, an internal control PCR, and PCR targeting the internal transcribed spacer (ITS) region were shown. Negative associations between sensitivity and the use of larger elution volumes (≥100 μl) and PCR targeting the mitochondrial genes were demonstrated. Most Aspergillus PCR protocols used to test serum generate satisfactory analytical performance. Testing serum requires less standardization, and the specific recommendations shown in this article will only improve performance.

  18. Evaluation of Aspergillus PCR Protocols for Testing Serum Specimens

    PubMed Central

    White, P. Lewis; Mengoli, Carlo; Bretagne, Stéphane; Cuenca-Estrella, Manuel; Finnstrom, Niklas; Klingspor, Lena; Melchers, Willem J. G.; McCulloch, Elaine; Barnes, Rosemary A.; Donnelly, J. Peter; Loeffler, Juergen

    2011-01-01

    A panel of human serum samples spiked with various amounts of Aspergillus fumigatus genomic DNA was distributed to 23 centers within the European Aspergillus PCR Initiative to determine analytical performance of PCR. Information regarding specific methodological components and PCR performance was requested. The information provided was made anonymous, and meta-regression analysis was performed to determine any procedural factors that significantly altered PCR performance. Ninety-seven percent of protocols were able to detect a threshold of 10 genomes/ml on at least one occasion, with 83% of protocols reproducibly detecting this concentration. Sensitivity and specificity were 86.1% and 93.6%, respectively. Positive associations between sensitivity and the use of larger sample volumes, an internal control PCR, and PCR targeting the internal transcribed spacer (ITS) region were shown. Negative associations between sensitivity and the use of larger elution volumes (≥100 μl) and PCR targeting the mitochondrial genes were demonstrated. Most Aspergillus PCR protocols used to test serum generate satisfactory analytical performance. Testing serum requires less standardization, and the specific recommendations shown in this article will only improve performance. PMID:21940479

  19. Computing sensitivity and selectivity in parallel factor analysis and related multiway techniques: the need for further developments in net analyte signal theory.

    PubMed

    Olivieri, Alejandro C

    2005-08-01

    Sensitivity and selectivity are important figures of merit in multiway analysis, regularly employed for comparison of the analytical performance of methods and for experimental design and planning. They are especially interesting in the second-order advantage scenario, where the latter property allows for the analysis of samples with a complex background, permitting analyte determination even in the presence of unsuspected interferences. Since no general theory exists for estimating the multiway sensitivity, Monte Carlo numerical calculations have been developed for estimating variance inflation factors, as a convenient way of assessing both sensitivity and selectivity parameters for the popular parallel factor (PARAFAC) analysis and also for related multiway techniques. When the second-order advantage is achieved, the existing expressions derived from net analyte signal theory are only able to adequately cover cases where a single analyte is calibrated using second-order instrumental data. However, they fail for certain multianalyte cases, or when third-order data are employed, calling for an extension of net analyte theory. The results have strong implications in the planning of multiway analytical experiments.

  20. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  1. Prediction of sensitivity to warfarin based on VKORC1 and CYP2C9 polymorphisms in patients from different places in Colombia.

    PubMed

    Cifuentes, Ricardo A; Murillo-Rojas, Juan; Avella-Vargas, Esperanza

    2016-03-03

    In the search to prevent hemorrhages associated with anticoagulant therapy, a major goal is to validate predictors of sensitivity to warfarin. However, previous studies in Colombia that included polymorphisms in the VKORC1 and CYP2C9 genes as predictors reported different algorithm performances to explain dose variations, and did not evaluate the prediction of sensitivity to warfarin.  To determine the accuracy of the pharmacogenetic analysis, which includes the CYP2C9 *2 and *3 and VKORC1 1639G>A polymorphisms in predicting patients' sensitivity to warfarin at the Hospital Militar Central, a reference center for patients born in different parts of Colombia.  Demographic and clinical data were obtained from 130 patients with stable doses of warfarin for more than two months. Next, their genotypes were obtained through a melting curve analysis. After verifying the Hardy-Weinberg equilibrium of the genotypes from the polymorphisms, a statistical analysis was done, which included multivariate and predictive approaches.  A pharmacogenetic model that explained 52.8% of dose variation (p<0.001) was built, which was only 4% above the performance resulting from the same data using the International Warfarin Pharmacogenetics Consortium algorithm. The model predicting the sensitivity achieved an accuracy of 77.8% and included age (p=0.003), polymorphisms *2 and *3 (p=0.002) and polymorphism 1639G>A (p<0.001) as predictors.  These results in a mixed population support the prediction of sensitivity to warfarin based on polymorphisms in VKORC1 and CYP2C9 as a valid approach in Colombian patients.
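
    A rough sketch of the kind of genotype-plus-age sensitivity classifier described above is shown below, using entirely synthetic data and scikit-learn's logistic regression; the variable encoding, coefficients and "sensitive" label are illustrative assumptions, not the authors' model.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Hypothetical encoded predictors: age (years), number of CYP2C9 *2/*3
    # variant alleles (0-2) and number of VKORC1 -1639A alleles (0-2).
    rng = np.random.default_rng(1)
    n = 130
    age = rng.integers(30, 85, n)
    cyp = rng.integers(0, 3, n)
    vkor = rng.integers(0, 3, n)

    # Hypothetical label: "warfarin sensitive" (requires a low stable dose).
    sensitive = (0.03 * age + 0.8 * cyp + 1.1 * vkor + rng.normal(0, 1, n)) > 4.5

    X = np.column_stack([age, cyp, vkor])
    model = LogisticRegression(max_iter=1000)
    acc = cross_val_score(model, X, sensitive, cv=5, scoring="accuracy")
    print(f"cross-validated accuracy: {acc.mean():.3f}")
    ```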

  2. Geostationary Coastal and Air Pollution Events (GEO-CAPE) Sensitivity Analysis Experiment

    NASA Technical Reports Server (NTRS)

    Lee, Meemong; Bowman, Kevin

    2014-01-01

    Geostationary Coastal and Air pollution Events (GEO-CAPE) is a NASA decadal survey mission to be designed to provide surface reflectance at high spectral, spatial, and temporal resolutions from a geostationary orbit necessary for studying regional-scale air quality issues and their impact on global atmospheric composition processes. GEO-CAPE's Atmospheric Science Questions explore the influence of both gases and particles on air quality, atmospheric composition, and climate. The objective of the GEO-CAPE Observing System Simulation Experiment (OSSE) is to analyze the sensitivity of ozone to global and regional NOx emissions and improve the science impact of GEO-CAPE with respect to global air quality. The GEO-CAPE OSSE team at the Jet Propulsion Laboratory has developed a comprehensive OSSE framework that can perform adjoint-sensitivity analysis for a wide range of observation scenarios and measurement qualities. This report discusses the OSSE framework and presents the sensitivity analysis results obtained from the GEO-CAPE OSSE framework for seven observation scenarios and three instrument systems.

  3. Development of patient collation system by kinetic analysis for chest dynamic radiogram with flat panel detector

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Yuichiro; Kodera, Yoshie

    2006-03-01

    In the picture archiving and communication system (PACS) environment, it is important that all images be stored in the correct location. However, if information such as the patient's name or identification number has been entered incorrectly, it is difficult to notice the error. The present study was performed to develop a system that automatically collates patients in dynamic radiography examinations by means of kinetic analysis, and to evaluate its performance. Dynamic chest radiographs during respiration were obtained using a modified flat panel detector system. The computer algorithm developed in this study consisted of two main procedures: kinetic map image processing and collation processing. Kinetic map processing is a new algorithm for visualizing movement in dynamic radiography; it applies direction classification of optical flows and an intensity-density transformation. Collation processing consisted of analysis with an artificial neural network (ANN) and discrimination based on Mahalanobis' generalized distance; these procedures were performed to evaluate the similarity of image pairs from the same person. Finally, we investigated the performance of the system using radiographs of eight healthy volunteers, expressed as sensitivity and specificity; both were 100%. This result indicates that the system has excellent performance for patient recognition and will be useful in PACS management for dynamic chest radiography.
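
    The collation step relies on Mahalanobis' generalized distance; the sketch below applies that distance to hypothetical kinetic-map feature vectors (the features and the nearest-match rule are assumptions for illustration, not the study's implementation).

    ```python
    import numpy as np
    from scipy.spatial.distance import mahalanobis

    # Hypothetical kinetic-map feature vectors: each row summarizes the
    # respiratory motion field of one stored dynamic chest examination.
    rng = np.random.default_rng(0)
    reference_exams = rng.normal(size=(40, 5))
    new_exam = rng.normal(size=5)   # examination to collate against the archive

    # Inverse covariance of the reference population, required by the metric.
    vi = np.linalg.inv(np.cov(reference_exams, rowvar=False))

    # Smaller distances suggest the new examination matches a stored patient.
    distances = [mahalanobis(new_exam, ref, vi) for ref in reference_exams]
    best = int(np.argmin(distances))
    print(f"closest stored examination: #{best}, distance = {distances[best]:.2f}")
    ```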

  4. Bronchial and non-bronchial systemic arteries: value of multidetector CT angiography in diagnosis and angiographic embolisation feasibility analysis.

    PubMed

    Lin, Yuning; Chen, Ziqian; Yang, Xizhang; Zhong, Qun; Zhang, Hongwen; Yang, Li; Xu, Shangwen; Li, Hui

    2013-12-01

    The aim of this study is to evaluate the diagnostic performance of multidetector CT angiography (CTA) in depicting bronchial and non-bronchial systemic arteries in patients with haemoptysis and to assess whether this modality helps determine the feasibility of angiographic embolisation. Fifty-two patients with haemoptysis between January 2010 and July 2011 underwent both preoperative multidetector CTA and digital subtraction angiography (DSA) imaging. Diagnostic performance of CTA in depicting arteries causing haemoptysis was assessed on a per-patient and a per-artery basis. The feasibility of the endovascular treatment evaluated by CTA was analysed. Sensitivity, specificity, and positive and negative predictive values for those analyses were determined. Fifty patients were included in the artery-presence-number analysis. In the per-patient analysis, neither CTA (P = 0.25) nor DSA (P = 1.00) showed statistical difference in the detection of arteries causing haemoptysis. The sensitivity, specificity, and positive and negative predictive values were 94%, 100%, 100%, and 40%, respectively, for the presence of pathologic arteries evaluated by CTA, and 98%, 100%, 100%, and 67%, respectively, for DSA. On the per-artery basis, CTA correctly identified 97% (107/110). Fifty-two patients were included in the feasibility analysis. The performance of CTA in predicting the feasibility of angiographic embolisation was not statistically different from the treatment performed (P = 1.00). The sensitivity, specificity, and positive and negative predictive values were 96%, 80%, 98% and 67%, respectively, for CTA. Multidetector CTA is an accurate imaging method in depicting the presence and number of arteries causing haemoptysis. This modality is also useful for determining the feasibility of angiographic embolisation for haemoptysis. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
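
    Because sensitivity, specificity, PPV and NPV recur throughout these records, a small helper that derives all four from a 2x2 contingency table is sketched below; the example counts are chosen only so the outputs roughly echo the per-patient CTA figures quoted above and are not the study's raw data.

    ```python
    def diagnostic_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Illustrative counts only.
    print(diagnostic_metrics(tp=47, fp=0, fn=3, tn=2))
    ```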

  5. Reducing the overlay metrology sensitivity to perturbations of the measurement stack

    NASA Astrophysics Data System (ADS)

    Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen

    2017-03-01

    Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multiwafer measurements it is hard to predict the metrology accuracy and robustness against process variations which naturally occur from wafer-to-wafer and lot-to-lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation in case the recipe and target have not been selected properly. To this end, an overlay sensitivity check on perturbations of the measurement stack is presented for improvement of the overlay metrology setup flow. An extensive analysis on Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image plane detection of μDBO instead of pupil plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied for a wide range of applications, independent of layers and devices.

  6. Comprehensive analysis of transport aircraft flight performance

    NASA Astrophysics Data System (ADS)

    Filippone, Antonio

    2008-04-01

    This paper reviews the state-of-the art in comprehensive performance codes for fixed-wing aircraft. The importance of system analysis in flight performance is discussed. The paper highlights the role of aerodynamics, propulsion, flight mechanics, aeroacoustics, flight operation, numerical optimisation, stochastic methods and numerical analysis. The latter discipline is used to investigate the sensitivities of the sub-systems to uncertainties in critical state parameters or functional parameters. The paper discusses critically the data used for performance analysis, and the areas where progress is required. Comprehensive analysis codes can be used for mission fuel planning, envelope exploration, competition analysis, a wide variety of environmental studies, marketing analysis, aircraft certification and conceptual aircraft design. A comprehensive program that uses the multi-disciplinary approach for transport aircraft is presented. The model includes a geometry deck, a separate engine input deck with the main parameters, a database of engine performance from an independent simulation, and an operational deck. The comprehensive code has modules for deriving the geometry from bitmap files, an aerodynamics model for all flight conditions, a flight mechanics model for flight envelopes and mission analysis, an aircraft noise model and engine emissions. The model is validated at different levels. Validation of the aerodynamic model is done against the scale models DLR-F4 and F6. A general model analysis and flight envelope exploration are shown for the Boeing B-777-300 with GE-90 turbofan engines with intermediate passenger capacity (394 passengers in 2 classes). Validation of the flight model is done by sensitivity analysis on the wetted area (or profile drag), on the specific air range, the brake-release gross weight and the aircraft noise. A variety of results is shown, including specific air range charts, take-off weight-altitude charts, payload-range performance, atmospheric effects, economic Mach number and noise trajectories at F.A.R. landing points.

  7. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. EVALUATION OF THE COMMUNITY MULTISCALE AIR QUALITY (CMAQ) MODEL VERSION 4.5: UNCERTAINTIES AND SENSITIVITIES IMPACTING MODEL PERFORMANCE: PART II - PARTICULATE MATTER

    EPA Science Inventory

    This paper presents an analysis of the CMAQ v4.5 model performance for particulate matter and its chemical components for the simulated year 2001. This is part two of a two-part series of papers that examines the model performance of CMAQ v4.5.

  9. Analysis of Fluorotelomer Alcohols in Soils: Optimization of Extraction and Chromatography

    EPA Science Inventory

    This article describes the development of an analytical method for the determination of fluorotelomer alcohols (FTOHs) in soil. The sensitive and selective determination of the telomer alcohols was performed by extraction with methyl tert-butyl ether (MTBE) and analysis of the ext...

  10. Measuring Road Network Vulnerability with Sensitivity Analysis

    PubMed Central

    Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin

    2017-01-01

    This paper focuses on the development of a method for road network vulnerability analysis, from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. This research involves defining the traffic utility index and modeling the vulnerability of road segments, routes, OD (Origin-Destination) pairs and the road network. Meanwhile, a sensitivity analysis method is utilized to calculate the change in the traffic utility index due to capacity degradation. This method, compared to traditional traffic assignment, can improve calculation efficiency and makes the application of vulnerability analysis to large actual road networks possible. Finally, all the above models and the calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-supporting tool for evaluating the performance of a road network and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706

  11. Motif-based analysis of large nucleotide data sets using MEME-ChIP

    PubMed Central

    Ma, Wenxiu; Noble, William S; Bailey, Timothy L

    2014-01-01

    MEME-ChIP is a web-based tool for analyzing motifs in large DNA or RNA data sets. It can analyze peak regions identified by ChIP-seq, cross-linking sites identified by CLIP-seq and related assays, as well as sets of genomic regions selected using other criteria. MEME-ChIP performs de novo motif discovery, motif enrichment analysis, motif location analysis and motif clustering, providing a comprehensive picture of the DNA or RNA motifs that are enriched in the input sequences. MEME-ChIP performs two complementary types of de novo motif discovery: weight matrix–based discovery for high accuracy, and word-based discovery for high sensitivity. Motif enrichment analysis using DNA or RNA motifs from human, mouse, worm, fly and other model organisms provides even greater sensitivity. MEME-ChIP's interactive HTML output groups and aligns significant motifs to ease interpretation. This protocol takes less than 3 h, and it provides motif discovery approaches that are distinct and complementary to other online methods. PMID:24853928

  12. A comparative study of biomass integrated gasification combined cycle power systems: Performance analysis.

    PubMed

    Zang, Guiyan; Tejasvi, Sharma; Ratner, Albert; Lora, Electo Silva

    2018-05-01

    The Biomass Integrated Gasification Combined Cycle (BIGCC) power system is believed to potentially be a highly efficient way to utilize biomass to generate power. However, there is no comparative study of BIGCC systems that examines all the latest improvements for gasification agents, gas turbine combustion methods, and CO2 Capture and Storage options. This study examines the impact of recent advancements on BIGCC performance through exergy analysis using Aspen Plus. Results show that the exergy efficiency of these systems ranges from 22.3% to 37.1%. Furthermore, exergy analysis indicates that the gas turbine with external combustion has relatively high exergy efficiency, and the Selexol CO2 removal method has low exergy destruction. Moreover, the sensitivity analysis shows that the system exergy efficiency is more sensitive to the initial temperature and pressure ratio of the gas turbine, whereas it has a relatively weak dependence on the initial temperature and initial pressure of the steam turbine. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. On the sensitivity of complex, internally coupled systems

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    A method is presented for computing sensitivity derivatives with respect to independent (input) variables for complex, internally coupled systems, while avoiding the cost and inaccuracy of finite differencing performed on the entire system analysis. The method entails two alternative algorithms: the first is based on the classical implicit function theorem formulated on residuals of governing equations, and the second develops the system sensitivity equations in a new form using the partial (local) sensitivity derivatives of the output with respect to the input of each part of the system. A few application examples are presented to illustrate the discussion.
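
    The second algorithm assembles total derivatives of a coupled system from the partial (local) sensitivities of each part; a minimal numerical illustration of those global sensitivity equations, with hypothetical scalar partials, is sketched below.

    ```python
    import numpy as np

    # Partial (local) sensitivities of each subsystem, evaluated at the
    # converged coupled solution (hypothetical scalar values for illustration):
    #   Y1 = f1(X, Y2)  and  Y2 = f2(X, Y1)
    dY1_dX, dY1_dY2 = 2.0, 0.3
    dY2_dX, dY2_dY1 = -1.0, 0.5

    # Global sensitivity equations: the total derivatives dY/dX solve
    #   [[1, -dY1/dY2], [-dY2/dY1, 1]] @ [dY1/dX, dY2/dX] = [pY1/pX, pY2/pX]
    A = np.array([[1.0, -dY1_dY2],
                  [-dY2_dY1, 1.0]])
    b = np.array([dY1_dX, dY2_dX])
    print("total derivatives dY1/dX, dY2/dX:", np.linalg.solve(A, b))
    ```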

  14. Clinical Evaluation of a Loop-Mediated Amplification Kit for Diagnosis of Imported Malaria

    PubMed Central

    Polley, Spencer D.; González, Iveth J.; Mohamed, Deqa; Daly, Rosemarie; Bowers, Kathy; Watson, Julie; Mewse, Emma; Armstrong, Margaret; Gray, Christen; Perkins, Mark D.; Bell, David; Kanda, Hidetoshi; Tomita, Norihiro; Kubota, Yutaka; Mori, Yasuyoshi; Chiodini, Peter L.; Sutherland, Colin J.

    2013-01-01

    Background. Diagnosis of malaria relies on parasite detection by microscopy or antigen detection; both fail to detect low-density infections. New tests providing rapid, sensitive diagnosis with minimal need for training would enhance both malaria diagnosis and malaria control activities. We determined the diagnostic accuracy of a new loop-mediated amplification (LAMP) kit in febrile returned travelers. Methods. The kit was evaluated in sequential blood samples from returned travelers sent for pathogen testing to a specialist parasitology laboratory. Microscopy was performed, and then malaria LAMP was performed using Plasmodium genus and Plasmodium falciparum–specific tests in parallel. Nested polymerase chain reaction (PCR) was performed on all samples as the reference standard. Primary outcome measures for diagnostic accuracy were sensitivity and specificity of LAMP results, compared with those of nested PCR. Results. A total of 705 samples were tested in the primary analysis. Sensitivity and specificity were 98.4% and 98.1%, respectively, for the LAMP P. falciparum primers and 97.0% and 99.2%, respectively, for the Plasmodium genus primers. Post hoc repeat PCR analysis of all 15 tests with discrepant results resolved 4 results in favor of LAMP, suggesting that the primary analysis had underestimated diagnostic accuracy. Conclusions. Malaria LAMP had a diagnostic accuracy similar to that of nested PCR, with a greatly reduced time to result, and was superior to expert microscopy. PMID:23633403

  15. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
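
    SaSAT itself is a Matlab-based package; the sketch below reproduces the general workflow it describes, Latin hypercube sampling of parameter space followed by rank-correlation sensitivity measures, in Python with SciPy. The toy outbreak-size function stands in for an epidemic model and is an assumption of the example.

    ```python
    import numpy as np
    from scipy.stats import qmc, spearmanr

    # Toy stand-in for an epidemic model: final outbreak fraction as a
    # function of transmission rate beta, recovery rate gamma and contacts c.
    def model(beta, gamma, c):
        r0 = beta * c / gamma
        return np.where(r0 > 1.0, 1.0 - 1.0 / r0, 0.0)

    # Latin hypercube sample of the parameter space.
    sampler = qmc.LatinHypercube(d=3, seed=42)
    unit = sampler.random(n=500)
    params = qmc.scale(unit, l_bounds=[0.1, 0.05, 1.0], u_bounds=[1.0, 0.5, 20.0])
    y = model(params[:, 0], params[:, 1], params[:, 2])

    # Spearman rank correlation of each parameter with the output, a simple
    # stand-in for the correlation-based sensitivity measures described above.
    for name, col in zip(["beta", "gamma", "contacts"], params.T):
        rho, _ = spearmanr(col, y)
        print(f"{name:9s} Spearman rho = {rho:+.2f}")
    ```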

  16. The enhanced cyan fluorescent protein: a sensitive pH sensor for fluorescence lifetime imaging.

    PubMed

    Poëa-Guyon, Sandrine; Pasquier, Hélène; Mérola, Fabienne; Morel, Nicolas; Erard, Marie

    2013-05-01

    pH is an important parameter that affects many functions of live cells, from protein structure or function to several crucial steps of their metabolism. Genetically encoded pH sensors based on pH-sensitive fluorescent proteins have been developed and used to monitor the pH of intracellular compartments. The quantitative analysis of pH variations can be performed either by ratiometric or fluorescence lifetime detection. However, most available genetically encoded pH sensors are based on green and yellow fluorescent proteins and are not compatible with multicolor approaches. Taking advantage of the strong pH sensitivity of enhanced cyan fluorescent protein (ECFP), we demonstrate here its suitability as a sensitive pH sensor using fluorescence lifetime imaging. The intracellular ECFP lifetime undergoes large changes (32 %) in the pH 5 to pH 7 range, which allows accurate pH measurements to better than 0.2 pH units. By fusion of ECFP with the granular chromogranin A, we successfully measured the pH in secretory granules of PC12 cells, and we performed a kinetic analysis of intragranular pH variations in living cells exposed to ammonium chloride.

  17. Sensitivity Analysis of Hybrid Propulsion Transportation System for Human Mars Expeditions

    NASA Technical Reports Server (NTRS)

    Chai, Patrick R.; Joyce, Ryan T.; Kessler, Paul D.; Merrill, Raymond G.; Qu, Min

    2017-01-01

    The National Aeronautics and Space Administration continues to develop and refine various transportation options to successfully field a human Mars campaign. One of these transportation options is the Hybrid Transportation System which utilizes both solar electric propulsion and chemical propulsion. The Hybrid propulsion system utilizes chemical propulsion to perform high thrust maneuvers, where the delta-V is most optimal when applied to save time and to leverage the Oberth effect. It then utilizes solar electric propulsion to augment the chemical burns throughout the interplanetary trajectory. This eliminates the need for the development of two separate vehicles for crew and cargo missions. Previous studies considered single point designs of the architecture, with fixed payload mass and propulsion system performance parameters. As the architecture matures, it is inevitable that the payload mass and the performance of the propulsion system will change. It is desirable to understand how these changes will impact the in-space transportation system's mass and power requirements. This study presents an in-depth sensitivity analysis of the Hybrid crew transportation system to payload mass growth and solar electric propulsion performance. This analysis is used to identify the breakpoints of the current architecture and to inform future architecture and campaign design decisions.

  18. Tomosynthesis for the early detection of pulmonary emphysema: diagnostic performance compared with chest radiography, using multidetector computed tomography as reference.

    PubMed

    Yamada, Yoshitake; Jinzaki, Masahiro; Hashimoto, Masahiro; Shiomi, Eisuke; Abe, Takayuki; Kuribayashi, Sachio; Ogawa, Kenji

    2013-08-01

    To compare the diagnostic performance of tomosynthesis with that of chest radiography for the detection of pulmonary emphysema, using multidetector computed tomography (MDCT) as reference. Forty-eight patients with and 63 without pulmonary emphysema underwent chest MDCT, tomosynthesis and radiography on the same day. Two blinded radiologists independently evaluated the tomosynthesis images and radiographs for the presence of pulmonary emphysema. Axial and coronal MDCT images served as the reference standard and the percentage lung volume with attenuation values of -950 HU or lower (LAA-950) was evaluated to determine the extent of emphysema. Receiver-operating characteristic (ROC) analysis and generalised estimating equations model were used. ROC analysis revealed significantly better performance (P < 0.0001) of tomosynthesis than radiography for the detection of pulmonary emphysema. The average sensitivity, specificity, positive predictive value and negative predictive value of tomosynthesis were 0.875, 0.968, 0.955 and 0.910, respectively, whereas the values for radiography were 0.479, 0.913, 0.815 and 0.697, respectively. For both tomosynthesis and radiography, the sensitivity increased with increasing LAA-950. The diagnostic performance of tomosynthesis was significantly superior to that of radiography for the detection of pulmonary emphysema. In both tomosynthesis and radiography, the sensitivity was affected by the LAA-950. • Tomosynthesis showed significantly better diagnostic performance for pulmonary emphysema than radiography. • Interobserver agreement for tomosynthesis was significantly higher than that for radiography. • Sensitivity increased with increasing LAA -950 in both tomosynthesis and radiography. • Tomosynthesis imparts a similar radiation dose to two projection chest radiography. • Radiation dose and cost of tomosynthesis are lower than those of MDCT.

  19. Nuclear Data Needs for the Neutronic Design of MYRRHA Fast Spectrum Research Reactor

    NASA Astrophysics Data System (ADS)

    Stankovskiy, A.; Malambu, E.; Van den Eynde, G.; Díez, C. J.

    2014-04-01

    A global sensitivity analysis of the effective neutron multiplication factor to the change of nuclear data library has been performed. It revealed that the test version of the JEFF-3.2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than JEFF-3.1.2 does. The analysis of the contributions of individual evaluations to the keff sensitivity yielded a priority list of nuclides whose cross-section and fission neutron multiplicity uncertainties have to be reduced by setting up dedicated differential and integral experiments.

  20. Nursing students' understanding of factors influencing ethical sensitivity: A qualitative study.

    PubMed

    Borhani, Fariba; Abbaszadeh, Abbas; Mohsenpour, Mohaddeseh

    2013-07-01

    Ethical sensitivity is considered a component of the professional competency of nurses. Its effects on improving nurses' ethical performance and the therapeutic relationship between nurses and patients have been reported. However, very few studies have evaluated ethical sensitivity. Since no previous Iranian research has been conducted in this regard, the present study aimed to examine nursing students' understanding of the factors influencing ethical sensitivity. This qualitative study was performed in Kerman, Iran, during 2009. It used semi-structured individual interviews with eight MSc nursing students to assess their viewpoints, and also included two focus groups. Purposive sampling was continued until data saturation. Data were analyzed using manifest content analysis. The students' understanding of factors influencing ethical sensitivity was summarized in five main themes: individual and spiritual characteristics, education, mutual understanding, internal and external controls, and experience of an immoral act. The findings of this study create a unique framework for sensitizing nurses in their professional performance. In human resource management, these factors can be applied by reinforcing the positive aspects and reducing the negative ones; in education, they can guide the setting of educational objectives; and in research, they can inform the design of studies and related instruments based on this framework. It is noteworthy that the presented classification was shaped by the students themselves and reflected a kind of learning activity on their part.

  1. Accuracy of mucocutaneous leishmaniasis diagnosis using polymerase chain reaction: systematic literature review and meta-analysis

    PubMed Central

    Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro

    2015-01-01

    The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238
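
    The bivariate model used above requires specialised software; as a simpler, hedged illustration, the sketch below pools per-study sensitivities on the logit scale with a univariate DerSimonian-Laird random-effects model. The study counts are hypothetical.

    ```python
    import numpy as np

    def expit(x):
        return 1.0 / (1.0 + np.exp(-x))

    def pooled_sensitivity(tp, diseased):
        """DerSimonian-Laird random-effects pooling of per-study sensitivities
        on the logit scale (a univariate alternative to the bivariate model)."""
        p = (tp + 0.5) / (diseased + 1.0)                  # continuity-corrected
        y = np.log(p / (1.0 - p))                          # logit transform
        v = 1.0 / (tp + 0.5) + 1.0 / (diseased - tp + 0.5)
        w = 1.0 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
        tau2 = max(0.0, (q - (len(y) - 1)) /
                   (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (v + tau2)
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        lo, hi = y_re - 1.96 * se, y_re + 1.96 * se
        return expit(y_re), (expit(lo), expit(hi))

    # Hypothetical per-study true-positive counts and numbers of diseased patients.
    est, ci = pooled_sensitivity(np.array([30, 18, 45, 12]),
                                 np.array([40, 25, 60, 20]))
    print(f"pooled sensitivity = {est:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
    ```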

  2. Development of techniques for the analysis of isoflavones in soy foods and nutraceuticals.

    PubMed

    Dentith, Susan; Lockwood, Brian

    2008-05-01

    For over 20 years, soy isoflavones have been investigated for their ability to prevent a wide range of cancers and cardiovascular problems, and numerous other disease states. This research is underpinned by the ability of researchers to analyse isoflavones in various forms in a range of raw materials and biological fluids. This review summarizes the techniques recently used in their analysis. The speed of high performance liquid chromatography analysis has been improved, allowing analysis of more samples, and increasing the sensitivity of detection techniques allows quantification of isoflavones down to nanomoles per litre levels in biological fluids. The combination of high-performance liquid chromatography with immunoassay has allowed identification and estimation of low-level soy isoflavones. The use of soy isoflavone supplements has shown an increase in their circulating levels in plasma and urine, aiding investigation of their biological effects. The significance of the metabolite equol has spurred research into new areas, and recently the specific enantiomers have been studied. High-performance liquid chromatography, capillary electrophoresis and gas chromatography are widely used with a range of detection systems. Increasingly, immunoassay is being used because of its high sensitivity and low cost.

  3. Performance of an automated electronic acute lung injury screening system in intensive care unit patients.

    PubMed

    Koenig, Helen C; Finkel, Barbara B; Khalsa, Satjeet S; Lanken, Paul N; Prasad, Meeta; Urbani, Richard; Fuchs, Barry D

    2011-01-01

    Lung protective ventilation reduces mortality in patients with acute lung injury, but underrecognition of acute lung injury has limited its use. We recently validated an automated electronic acute lung injury surveillance system in patients with major trauma in a single intensive care unit. In this study, we assessed the system's performance as a prospective acute lung injury screening tool in a diverse population of intensive care unit patients. Patients were screened prospectively for acute lung injury over 21 wks by the automated system and by an experienced research coordinator who manually screened subjects for enrollment in Acute Respiratory Distress Syndrome Clinical Trials Network (ARDSNet) trials. Performance of the automated system was assessed by comparing its results with the manual screening process. Discordant results were adjudicated blindly by two physician reviewers. In addition, a sensitivity analysis using a range of assumptions was conducted to better estimate the system's performance. The study was conducted at the Hospital of the University of Pennsylvania, an academic medical center and ARDSNet center (1994-2006), in intubated patients in medical and surgical intensive care units; there were no interventions. Of 1270 patients screened, 84 were identified with acute lung injury (incidence of 6.6%). The automated screening system had a sensitivity of 97.6% (95% confidence interval, 96.8-98.4%) and a specificity of 97.6% (95% confidence interval, 96.8-98.4%). The manual screening algorithm had a sensitivity of 57.1% (95% confidence interval, 54.5-59.8%) and a specificity of 99.7% (95% confidence interval, 99.4-100%). Sensitivity analysis demonstrated a sensitivity range of 75.0-97.6% for the automated system under varying assumptions. Under all assumptions, the automated system demonstrated higher sensitivity than and comparable specificity to the manual screening method. An automated electronic system identified patients with acute lung injury with high sensitivity and specificity in diverse intensive care units of a large academic medical center. Further studies are needed to evaluate the effect of automated prompts that such a system can initiate on the use of lung protective ventilation in patients with acute lung injury.

  4. Evaluation of the GeneXpert MTB/RIF assay on extrapulmonary and respiratory samples other than sputum: a low burden country experience.

    PubMed

    Pandey, Sushil; Congdon, Jacob; McInnes, Bradley; Pop, Alina; Coulter, Christopher

    2017-01-01

    The aim of this study was to assess the performance of the GeneXpert MTB/RIF assay on extrapulmonary (EP) and respiratory (non-sputum) clinical samples of patients suspected of having tuberculosis (TB) from Queensland, Australia. A total of 269 EP and respiratory (non-sputum) clinical samples collected from Qld patients who were suspected of having TB were subjected to the GeneXpert MTB/RIF analysis, Ziehl-Neelsen (ZN) staining, Mycobacterium tuberculosis (MTB) culture and drug susceptibility testing. Phenotypic and genotypic data were compared. The overall performance analysis of the GeneXpert MTB/RIF assay for detection of MTB complex demonstrated sensitivity of 89%, specificity of 95%, PPV of 89% and NPV of 95% using culture as a reference standard. The GeneXpert MTB/RIF analysis of acid-fast bacilli (AFB) smear positive samples and AFB smear negative samples showed sensitivities of 100% and 77%, respectively. Looking at individual EP and respiratory (non-sputum) sample types, the sensitivity ranged from 60% to 100% while the specificity ranged from 33% to 100%, with the specificity of lymph node tissue biopsy being the lowest. The GeneXpert MTB/RIF assay detected 11% more TB cases than culture and 27% more cases than ZN microscopy. Because too few rifampicin-resistant cases presented, the performance of the GeneXpert MTB/RIF assay for rifampicin resistance could not be analysed. The GeneXpert MTB/RIF assay is potentially valuable for TB diagnosis in the majority of the EP and respiratory (other than sputum) samples in our setting. Although the GeneXpert MTB/RIF assay provides rapid diagnostic results, the overall sensitivity to rule out the disease is suboptimal for some specimen types. Performance varied according to specimen type and AFB smear status. The sensitivity and specificity for lymph node tissue were 63% and 33%, respectively. Care must be taken when using the GeneXpert MTB/RIF assay for detection of MTB in lymph node tissue samples. All samples should be cultured regardless of the GeneXpert MTB/RIF assay result. Copyright © 2016 Royal College of Pathologists of Australasia. Published by Elsevier B.V. All rights reserved.

  5. Characterization of iron-doped lithium niobate for holographic storage applications

    NASA Technical Reports Server (NTRS)

    Shah, R. R.; Kim, D. M.; Rabson, T. A.; Tittel, F. K.

    1976-01-01

    A comprehensive characterization of chemical and holographic properties of eight systematically chosen Fe:LiNbO3 crystals is performed in order to determine optimum performance of the crystals in holographic storage and display applications. The discussion covers determination of Fe(2+) and Fe(3+) ion concentrations in the Fe:LiNbO3 system from optical absorption and EPR measurements; establishment of the relation between the photorefractive sensitivity and the Fe(2+) and Fe(3+) concentrations; study of the spectral dependence, the effect of oxygen annealing, and of other impurities on the photorefractive sensitivity; analysis of the diffraction efficiency curves for different crystals and corresponding sensitivities with the dynamic theory of hologram formation; and determination of the bulk photovoltaic fields as a function of Fe(2+) concentrations. In addition to the absolute Fe(2+) concentration, the relative concentrations of Fe(2+) and Fe(3+) ions are also important in determining the photorefractive sensitivity. There exists an optimal set of crystal characteristics for which the photorefractive sensitivity is most favorable.

  6. Diagnostic Performance of Electronic Syndromic Surveillance Systems in Acute Care

    PubMed Central

    Kashiouris, M.; O’Horo, J.C.; Pickering, B.W.; Herasevich, V.

    2013-01-01

    Context Healthcare Electronic Syndromic Surveillance (ESS) is the systematic collection, analysis and interpretation of ongoing clinical data with subsequent dissemination of results, which aid clinical decision-making. Objective To evaluate, classify and analyze the diagnostic performance, strengths and limitations of existing acute care ESS systems. Data Sources All studies available to us in the Ovid MEDLINE, Ovid EMBASE, CINAHL and Scopus databases, from as early as January 1972 through the first week of September 2012. Study Selection Prospective and retrospective trials examining the diagnostic performance of inpatient ESS and providing objective diagnostic data including sensitivity, specificity, positive and negative predictive values. Data Extraction Two independent reviewers extracted diagnostic performance data on ESS systems, including clinical area, number of decision points, sensitivity and specificity. Positive and negative likelihood ratios were calculated for each healthcare ESS system. A likelihood matrix summarizing the performance of the various ESS systems was created. Results The described search strategy yielded 1639 articles. Of these, 1497 were excluded on abstract information. After full text review, abstraction and arbitration with a third reviewer, 33 studies met inclusion criteria, reporting 102,611 ESS decision points. The I2 value was high (98.8%), precluding meta-analysis. Performance was variable, with sensitivities ranging from 21% to 100% and specificities ranging from 5% to 100%. Conclusions There is significant heterogeneity in the diagnostic performance of the available ESS implementations in acute care, stemming from the wide spectrum of different clinical entities and ESS systems. Based on the results, we introduce a conceptual framework using a likelihood ratio matrix for evaluation and meaningful application of future, frontline clinical decision support systems. PMID:23874359
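
    The likelihood ratio matrix discussed above is built from two quantities that follow directly from sensitivity and specificity; a minimal helper is sketched below with illustrative values spanning the reported range.

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios from sensitivity and specificity."""
        lr_pos = sensitivity / (1.0 - specificity)
        lr_neg = (1.0 - sensitivity) / specificity
        return lr_pos, lr_neg

    print(likelihood_ratios(0.90, 0.95))   # a strong surveillance system
    print(likelihood_ratios(0.40, 0.60))   # a weak one
    ```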

  7. Cyclic Crack Growth Testing of an A.O. Smith Multilayer Pressure Vessel with Modal Acoustic Emission Monitoring and Data Assessment

    NASA Technical Reports Server (NTRS)

    Ziola, Steven M.

    2014-01-01

    Digital Wave Corp. (DWC) was retained by Jacobs ATOM at NASA Ames Research Center to perform cyclic pressure crack growth sensitivity testing on a multilayer pressure vessel instrumented with DWC's Modal Acoustic Emission (MAE) system, with captured wave analysis to be performed using DWC's WaveExplorer™ software, which has been used at Ames since 2001. The objectives were to document the ability to detect and characterize a known growing crack in such a vessel using only MAE, to establish the sensitivity of the equipment vs. crack size and/or relevance in a realistic field environment, and to obtain fracture toughness materials properties in follow-up testing to enable accurate crack growth analysis. This report contains the results of the testing.

  8. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
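
    The paper's own sensitivity analysis technique is not reproduced here; as a hedged stand-in, the sketch below applies permutation importance, a standard input-sensitivity measure, to a small neural network trained on entirely synthetic trauma-style data (the predictors and coefficients are invented for the example).

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic trauma-audit style predictors (not the TRISS data set): age,
    # systolic blood pressure, respiratory rate and injury severity score.
    rng = np.random.default_rng(7)
    n = 2000
    X = np.column_stack([
        rng.integers(16, 90, n),   # age (years)
        rng.normal(120, 25, n),    # systolic BP (mmHg)
        rng.normal(18, 6, n),      # respiratory rate (breaths/min)
        rng.integers(1, 75, n),    # injury severity score
    ])
    logit = 4.0 - 0.03 * X[:, 0] + 0.01 * X[:, 1] - 0.02 * X[:, 2] - 0.08 * X[:, 3]
    survived = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, survived, random_state=0)
    net = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                      random_state=0))
    net.fit(X_tr, y_tr)

    # Permutation importance: the drop in held-out accuracy when each input
    # is shuffled, used here as a simple input-sensitivity measure.
    imp = permutation_importance(net, X_te, y_te, n_repeats=20, random_state=0)
    for name, mean in zip(["age", "sbp", "rr", "iss"], imp.importances_mean):
        print(f"{name:4s} importance = {mean:.3f}")
    ```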

  9. Performance of FLT-PET for pulmonary lesion diagnosis compared with traditional FDG-PET: A meta-analysis.

    PubMed

    Wang, Zixing; Wang, Yuyan; Sui, Xin; Zhang, Wei; Shi, Ruihong; Zhang, Yingqiang; Dang, Yonghong; Qiao, Zhen; Zhang, Biao; Song, Wei; Jiang, Jingmei

    2015-07-01

    Widely used (18)F 2'-deoxy-2'-fluoro-d-glucose (FDG) positron emission tomography (PET) can be problematic with false positives in cancer imaging. This study aims to investigate the diagnostic accuracy of a candidate PET tracer, (18)F 2',3'-dideoxy-3'-fluoro-2-thiothymidine (FLT), in diagnosing pulmonary lesions compared with FDG. After comprehensive search and study selection, a meta-analysis was performed on data from 548 patients pooled from 17 studies for evaluating FLT accuracy, in which data from 351 patients pooled from ten double-tracer studies was used for direct comparison with FDG. Weighted sensitivity and specificity were used as main indicators of test performance. Individual data was extracted and patient subgroup analyses were performed. Overall, direct comparisons showed lower sensitivity (0.80 vs. 0.89) yet higher specificity (0.82 vs. 0.66) for FLT compared with FDG (both p<0.01). Patient subgroup analysis showed FLT was less sensitive than FDG in detecting lung cancers staged as T1 or T2, and those ≤2.0 cm in diameter (0.81 vs. 0.93, and 0.53 vs. 0.78, respectively, both p<0.05), but was comparable for cancers staged as T3 or T4, and those >2.0 cm in diameter (0.95 vs. 1.00, 0.96 vs. 0.88, both p>0.05). For benignities, FLT performed better compared with FDG in ruling out inflammation-based lesions (0.57 vs. 0.32, p<0.05), and demonstrated greater specificity regardless of lesion sizes. Although FLT cannot replace FDG in detecting small and early lung cancers, it may help to prevent patients with larger or inflammatory lesions from cancer misdiagnosis or even over-treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  10. Urinary neutrophil gelatinase-associated lipocalin for diagnosis and estimating activity in lupus nephritis: a meta-analysis.

    PubMed

    Fang, Y G; Chen, N N; Cheng, Y B; Sun, S J; Li, H X; Sun, F; Xiang, Y

    2015-12-01

    Urinary neutrophil gelatinase-associated lipocalin (uNGAL) is relatively specific in lupus nephritis (LN) patients. However, its diagnostic value has not been evaluated. The aim of this review was to determine the value of uNGAL for diagnosis and estimating activity in LN. A comprehensive search was performed on PubMed, EMBASE, Web of Knowledge, Cochrane electronic databases through December 2014. Meta-analysis of sensitivity and specificity was performed with a random-effects model. Additionally, summary receiver operating characteristic (SROC) curves and area under the curve (AUC) values were calculated. Fourteen studies were selected for this review. With respect to diagnosing LN, the pooled sensitivity and specificity were 73.6% (95% confidence interval (CI), 61.9-83.3) and 78.1% (95% CI, 69.0-85.6), respectively. The SROC-AUC value was 0.8632. Regarding estimating LN activity, the pooled sensitivity and specificity were 66.2% (95% CI, 60.4-71.7) and 62.1% (95% CI, 57.9-66.3), respectively. The SROC-AUC value was 0.7583. In predicting renal flares, the pooled sensitivity and specificity were 77.5% (95% CI, 68.1-85.1) and 65.3% (95% CI, 60.0-70.3), respectively. The SROC-AUC value was 0.7756. In conclusion, this meta-analysis indicates that uNGAL has relatively fair sensitivity and specificity in diagnosing LN, estimating LN activity and predicting renal flares, suggesting that uNGAL is a potential biomarker in diagnosing LN and monitoring LN activity. © The Author(s) 2015.

  11. The Effects of Phonetic Similarity and List Length on Children's Sound Categorization Performance.

    ERIC Educational Resources Information Center

    Snowling, Margaret J.; And Others

    1994-01-01

    Examined the phonological analysis and verbal working memory components of the sound categorization task and their relationships to reading skill differences. Children were tested on sound categorization by having them identify odd words in sequences. Sound categorization performance was sensitive to individual differences in speech perception…

  12. Sensitivity and Specificity of CT and Its signs for Diagnosis of Strangulation in Patients with Acute Small Bowel Obstruction.

    PubMed

    Jha, Ashwini Kumar; Tang, Wen Hao; Bai, Zhi Bin; Xiao, Jia Quan

    2014-01-01

    To perform a meta-analysis to review the sensitivity and specificity of computed tomography and different known computed tomography signs for the diagnosis of strangulation in patients with acute small bowel obstruction. A comprehensive PubMed search was performed for all reports that evaluated the use of CT and discussed different CT criteria for the diagnosis of acute SBO. Articles published in the English language from January 1978 to June 2008 were included. Review articles, case reports, pictorial essays and articles without original data were excluded. The bivariate random effect model was used to obtain pooled sensitivity and pooled specificity. A summary receiver operating characteristic curve was calculated using Meta-Disc. Software Openbugs 3.0.3 was used to summarize the data. A total of 12 studies fulfilled the inclusion criteria. The pooled sensitivity and specificity of CT in the diagnosis of strangulation were 0.720 (95% CI 0.674 to 0.763) and 0.866 (95% CI 0.837 to 0.892), respectively. Among different CT signs, mesenteric edema had the highest pooled sensitivity of 0.741 and lack of bowel wall enhancement had the highest pooled specificity of 0.991. This review demonstrates that CT is highly sensitive as well as specific in the preoperative diagnosis of strangulated SBO, which is in accordance with the published studies. Our analysis also shows that "presence of mesenteric fluid" is the most sensitive and "lack of bowel wall enhancement" the most specific CT sign of strangulation, and it also justifies the need for large-scale prospective studies to validate the results obtained as well as to determine a clinical protocol.

  13. Performance of the 2012 Systemic Lupus International Collaborating Clinics classification criteria versus the 1997 American College of Rheumatology classification criteria in adult and juvenile systemic lupus erythematosus. A systematic review and meta-analysis.

    PubMed

    Hartman, Esther A R; van Royen-Kerkhof, Annet; Jacobs, Johannes W G; Welsing, Paco M J; Fritsch-Stork, Ruth D E

    2018-03-01

    To evaluate the performance in classifying systemic lupus erythematosus by the 2012 Systemic Lupus International Collaborating Clinics criteria (SLICC'12), versus the revised American College of Rheumatology criteria from 1997 (ACR'97) in adult and juvenile SLE patients. A systematic literature search was conducted in PubMed and Embase for studies comparing SLICC'12 and ACR'97 with clinical diagnosis. A meta-analysis was performed to estimate the sensitivity and specificity of SLICC'12 and ACR'97. To assess classification earlier in the disease by either set, sensitivity and specificity were compared for patients with disease duration <5years. Sensitivity and specificity of individual criteria items were also assessed. In adult SLE (nine studies: 5236 patients, 1313 controls), SLICC'12 has higher sensitivity (94.6% vs. 89.6%) and similar specificity (95.5% vs. 98.1%) compared to ACR'97. For juvenile SLE (four studies: 568 patients, 339 controls), SLICC'12 demonstrates higher sensitivity (99.9% vs. 84.3%) than ACR'97, but much lower specificity (82.0% vs. 94.1%). SLICC'12 classifies juvenile SLE patients earlier in disease course. Individual items contributing to diagnostic accuracy are low complement, anti-ds DNA and acute cutaneous lupus in SLICC'12, and the immunologic and hematologic disorder in ACR'97. Based on sensitivity and specificity SLICC'12 is best for adult SLE. Following the view that higher specificity, i.e. avoidance of false positives, is preferable, ACR'97 is best for juvenile SLE even if associated with lower sensitivity. Our results on the contribution of the individual items of SLICC'12 and ACR´97 may be of value in future efforts to update classification criteria. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Analysis of the causes of discrepancies in troponin I concentrations as measured by ARCHITECT High-Sensitive Troponin I ST and STACIA CLEIA cTnI.

    PubMed

    Kondo, Takashi; Kobayashi, Daisuke; Mochizuki, Maki; Asanuma, Kouichi; Takahashi, Satoshi

    2017-01-01

    Background Recently developed reagents for the highly sensitive measurement of cardiac troponin I are useful for early diagnosis of acute coronary syndrome. However, differences in measured values between these new reagents and previously used reagents have not been well studied. In this study, we aimed to compare the values between ARCHITECT High-Sensitive Troponin I ST (newly developed reagents), ARCHITECT Troponin I ST and STACIA CLEIA cardiac troponin I (two previously developed reagent kits). Methods Gel filtration high-performance liquid chromatography was used to analyse the causes of differences in measured values. Results The measured values differed between ARCHITECT High-Sensitive Troponin I ST and STACIA CLEIA cardiac troponin I reagents (r = 0.82). Cross-reactivity tests using plasma with added skeletal-muscle troponin I resulted in higher reactivity (2.17-3.03%) for the STACIA CLEIA cardiac troponin I reagents compared with that for the ARCHITECT High-Sensitive Troponin I ST reagents (less than 0.014%). In addition, analysis of three representative samples using gel filtration high-performance liquid chromatography revealed reagent-specific differences in the reactivity against each cardiac troponin I complex; this could explain the differences in values observed for some of the samples. Conclusion The newly developed ARCHITECT High-Sensitive Troponin I ST reagents were not affected by the presence of skeletal-muscle troponin I in the blood and may be useful for routine examinations.

  15. Cost-effectiveness of digital subtraction angiography in the setting of computed tomographic angiography negative subarachnoid hemorrhage.

    PubMed

    Jethwa, Pinakin R; Punia, Vineet; Patel, Tapan D; Duffis, E Jesus; Gandhi, Chirag D; Prestigiacomo, Charles J

    2013-04-01

    Recent studies have documented the high sensitivity of computed tomography angiography (CTA) in detecting a ruptured aneurysm in the presence of acute subarachnoid hemorrhage (SAH). The practice of digital subtraction angiography (DSA) when CTA does not reveal an aneurysm has thus been called into question. We examined this dilemma from a cost-effectiveness perspective by using current decision analysis techniques. A decision tree was created with the use of TreeAge Pro Suite 2012; in 1 arm, a CTA-negative SAH was followed up with DSA; in the other arm, patients were observed without further imaging. Based on literature review, costs and utilities were assigned to each potential outcome. Base-case and sensitivity analyses were performed to determine the cost-effectiveness of each strategy. A Monte Carlo simulation was then conducted by sampling each variable over a plausible distribution to evaluate the robustness of the model. With the use of a negative predictive value of 95.7% for CTA, observation was found to be the most cost-effective strategy ($6737/Quality Adjusted Life Year [QALY] vs $8460/QALY) in the base-case analysis. One-way sensitivity analysis demonstrated that DSA became the more cost-effective option if the negative predictive value of CTA fell below 93.72%. The Monte Carlo simulation produced an incremental cost-effectiveness ratio of $83 083/QALY. At the conventional willingness-to-pay threshold of $50 000/QALY, observation was the more cost-effective strategy in 83.6% of simulations. The decision to perform a DSA in CTA-negative SAH depends strongly on the sensitivity of CTA, and therefore must be evaluated at each center treating these types of patients. Given the high sensitivity of CTA reported in the current literature, performing DSA on all patients with CTA negative SAH may not be cost-effective at every institution.
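
    A minimal sketch of the kind of probabilistic comparison described above is given below: hypothetical cost and QALY distributions for the two strategies, an incremental cost-effectiveness ratio, and the probability that observation is preferred at the stated willingness-to-pay threshold. All distributions and numbers are illustrative, not the study's inputs.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 10_000
    wtp = 50_000   # willingness-to-pay threshold ($/QALY), as in the study design

    # Hypothetical cost and effectiveness distributions for observation vs.
    # DSA after a CTA-negative SAH (values invented for illustration).
    cost_obs = rng.normal(5_000, 700, n)
    cost_dsa = rng.normal(7_500, 900, n)
    qaly_obs = rng.normal(0.980, 0.010, n)
    qaly_dsa = rng.normal(0.983, 0.010, n)

    # Incremental cost-effectiveness ratio (DSA vs. observation).
    icer = (cost_dsa - cost_obs).mean() / (qaly_dsa - qaly_obs).mean()

    # Net monetary benefit decides which strategy is preferred at the threshold.
    nmb_obs = wtp * qaly_obs - cost_obs
    nmb_dsa = wtp * qaly_dsa - cost_dsa
    p_obs = np.mean(nmb_obs > nmb_dsa)
    print(f"ICER (DSA vs observation): ${icer:,.0f}/QALY")
    print(f"P(observation preferred at ${wtp:,}/QALY) = {p_obs:.2f}")
    ```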

  16. Identifying key sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment.

    PubMed

    Sweetapple, Christine; Fu, Guangtao; Butler, David

    2013-09-01

    This study investigates sources of uncertainty in the modelling of greenhouse gas emissions from wastewater treatment, through the use of local and global sensitivity analysis tools, and contributes to an in-depth understanding of wastewater treatment modelling by revealing critical parameters and parameter interactions. One-factor-at-a-time sensitivity analysis is used to screen model parameters and identify those with significant individual effects on three performance indicators: total greenhouse gas emissions, effluent quality and operational cost. Sobol's method enables identification of parameters with significant higher order effects and of particular parameter pairs to which model outputs are sensitive. Use of a variance-based global sensitivity analysis tool to investigate parameter interactions enables identification of important parameters not revealed in one-factor-at-a-time sensitivity analysis. These interaction effects have not been considered in previous studies and thus provide a better understanding wastewater treatment plant model characterisation. It was found that uncertainty in modelled nitrous oxide emissions is the primary contributor to uncertainty in total greenhouse gas emissions, due largely to the interaction effects of three nitrogen conversion modelling parameters. The higher order effects of these parameters are also shown to be a key source of uncertainty in effluent quality. Copyright © 2013 Elsevier Ltd. All rights reserved.
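
    Sobol's method can be sketched compactly with a plain pick-and-freeze Monte Carlo estimator; the example below computes first-order indices for a toy function standing in for the treatment-plant model (an illustration of the technique only, not the study's toolchain).

    ```python
    import numpy as np

    def sobol_first_order(model, bounds, n=4096, seed=0):
        """Monte Carlo estimate of first-order Sobol indices using the
        Saltelli pick-and-freeze scheme."""
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds, dtype=float).T
        d = len(lo)
        A = lo + rng.random((n, d)) * (hi - lo)
        B = lo + rng.random((n, d)) * (hi - lo)
        yA = np.apply_along_axis(model, 1, A)
        yB = np.apply_along_axis(model, 1, B)
        var = np.var(np.concatenate([yA, yB]), ddof=1)
        s1 = np.empty(d)
        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                       # factor i from B, rest from A
            yABi = np.apply_along_axis(model, 1, ABi)
            s1[i] = np.mean(yB * (yABi - yA)) / var   # Saltelli (2010) estimator
        return s1

    # Toy stand-in: output depends strongly on x0, weakly on x1,
    # and on an x0*x2 interaction.
    def toy(x):
        return x[0] ** 2 + 0.2 * x[1] + x[0] * x[2]

    print(sobol_first_order(toy, bounds=[(0, 1)] * 3))
    ```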

  17. Xpert MTB/RIF Assay for Pulmonary Tuberculosis and Rifampicin Resistance in Children: a Meta-Analysis.

    PubMed

    Wang, X W; Pappoe, F; Huang, Y; Cheng, X W; Xu, D F; Wang, H; Xu, Y H

    2015-01-01

    The Xpert MTB/RIF assay has been recommended by WHO to replace conventional microscopy, culture, and drug resistance tests. It simultaneously detects both Mycobacterium tuberculosis infection (TB) and resistance to rifampicin (RIF) within two hours. The objective was to review the available research studies on the accuracy of the Xpert MTB/RIF assay for diagnosing pulmonary TB and RIF-resistance in children. A comprehensive search of Pubmed and Embase was performed up to October 28, 2014. We identified published articles estimating the diagnostic accuracy of the Xpert MTB/RIF assay in children with or without HIV using culture or culture plus clinical TB as standard reference. The QUADAS-2 tool was used to evaluate the quality of the studies. A summary estimation for sensitivity, specificity, diagnostic odds ratios (DOR), and the area under the summary ROC curve (AUC) was performed. Meta-analysis was used to establish the overall accuracy. Eleven diagnostic studies with 3801 patients were included in the systematic review. The overall analysis revealed a moderate sensitivity and high specificity of 65% (95% CI: 61 - 69%) and 99% (95% CI: 98 - 99%), respectively, and a pooled diagnostic odds ratio of 164.09 (95% CI: 111.89 - 240.64). The AUC value was found to be 0.94. The pooled sensitivity and specificity for paediatric rifampicin resistance were 94.0% (95% CI: 80.0 - 93.0%) and 99.0% (95% CI: 95.0 - 98.0%), respectively. Hence, the Xpert MTB/RIF assay has good diagnostic performance for paediatric pulmonary tuberculosis and for rifampicin resistance. The Xpert MTB/RIF is sensitive and specific for diagnosing paediatric pulmonary TB. It is also effective in detecting rifampicin resistance. It can, therefore, be used as an initial diagnostic tool.
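    For readers unfamiliar with how such pooled figures arise, the toy sketch below pools sensitivity, specificity and the diagnostic odds ratio from per-study 2x2 counts; the counts are invented, and the simple pooling ignores the between-study heterogeneity that a published meta-analysis would model with bivariate random-effects methods.

    ```python
    # Naive pooling of diagnostic accuracy from per-study 2x2 counts
    # (TP, FP, FN, TN). Counts are invented; real meta-analyses use
    # bivariate random-effects models rather than simple aggregation.
    import numpy as np

    #                    TP   FP   FN   TN
    studies = np.array([[ 40,   3,  25, 300],
                        [ 22,   1,  10, 150],
                        [ 65,   5,  30, 410]], dtype=float)

    tp, fp, fn, tn = studies.sum(axis=0)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    dor  = (tp * tn) / (fp * fn)          # diagnostic odds ratio

    per_study_sens = studies[:, 0] / (studies[:, 0] + studies[:, 2])
    print(f"pooled sensitivity = {sens:.2f}, pooled specificity = {spec:.2f}")
    print(f"pooled DOR = {dor:.1f}, per-study sensitivities = {per_study_sens.round(2)}")
    ```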

  18. Performance of FDG-PET/CT in solitary pulmonary nodule based on pre-test likelihood of malignancy: results from the ITALIAN retrospective multicenter trial.

    PubMed

    Evangelista, Laura; Cuocolo, Alberto; Pace, Leonardo; Mansi, Luigi; Del Vecchio, Silvana; Miletto, Paolo; Sanfilippo, Silvia; Pellegrino, Sara; Guerra, Luca; Pepe, Giovanna; Peluso, Giuseppina; Salvatore, Marco; Galicchio, Rosj; Zuffante, Michele; Annunziata, Salvatore; Farsad, Mohsen; Chiaravalloti, Agostino; Spadafora, Marco

    2018-05-07

    The aim of this study was to determine the performance of 18F-FDG-PET/CT in patients with solitary pulmonary nodule (SPN), stratifying the risk according to the likelihood of pulmonary malignancy. FDG-PET/CT scans of 502 patients, stratified for pre-test cancer risk, were retrospectively analyzed. FDG uptake in SPN was assessed by a 4-point scoring system and semiquantitative analysis using the ratio between SUVmax in SPN and SUVmean in mediastinal blood pool (BP) and between SUVmax in SPN and SUVmean in liver (L). Histopathology and/or follow-up data were used as standard of reference. SPN was malignant in 180 (36%) patients, benign in 175 (35%), and indeterminate in 147 (29%). The 355 patients with a definitive SPN nature (malignant or benign) were considered for the analysis. Considering FDG uptake ≥ 2, sensitivity, specificity, positive (PPV) and negative (NPV) predictive values, and accuracy were 85.6%, 85.7%, 86%, 85.2%, and 85.6% respectively. Sensitivity and PPV were higher (P < 0.05) in intermediate and high-risk patients, while specificity and NPV were higher (P < 0.05) in low-risk patients. On receiver operating characteristic curve analysis, the cut-offs for better discrimination between benign and malignant SPN were 1.56 (sensitivity 81% and specificity 87%) and 1.12 (sensitivity 81% and specificity 86%) for SUVmax/SUVmeanBP and SUVmax/SUVmeanL respectively. In intermediate and high-risk patients, including the SUVmax/SUVmeanBP, the specificity shifted from 85% and 50% to 100%. Visual FDG-PET/CT has an acceptable performance in patients with SPN, but accuracy improves when SUV ratios are considered, particularly in patients with intermediate and high risk of malignancy.
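    The ROC cut-offs quoted above are typically chosen by maximizing Youden's J over candidate thresholds; the sketch below illustrates that selection on synthetic SUV-ratio data, not the trial data.

    ```python
    # Choosing a cut-off for a continuous marker (e.g. an SUVmax/SUVmean ratio)
    # by maximizing Youden's J = sensitivity + specificity - 1.
    # The scores below are synthetic, not the ITALIAN trial data.
    import numpy as np

    rng = np.random.default_rng(2)
    benign    = rng.normal(1.0, 0.3, 175).clip(min=0.1)
    malignant = rng.normal(2.0, 0.6, 180).clip(min=0.1)

    scores = np.concatenate([benign, malignant])
    truth  = np.concatenate([np.zeros(175, bool), np.ones(180, bool)])

    best = (-2.0, None, None, None)
    for cut in np.unique(scores):
        pred = scores >= cut
        sens = np.mean(pred[truth])      # true positive rate
        spec = np.mean(~pred[~truth])    # true negative rate
        J = sens + spec - 1.0
        if J > best[0]:
            best = (J, cut, sens, spec)

    J, cut, sens, spec = best
    print(f"best cut-off {cut:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
    ```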

  19. NAVIGATION PERFORMANCE IN HIGH EARTH ORBITS USING NAVIGATOR GPS RECEIVER

    NASA Technical Reports Server (NTRS)

    Bamford, William; Naasz, Bo; Moreau, Michael C.

    2006-01-01

    NASA GSFC has developed a GPS receiver that can acquire and track GPS signals at signal levels significantly lower than conventional GPS receivers can. This opens up the possibility of using GPS-based navigation for missions in high-altitude orbits, such as Geostationary Operational Environmental Satellites (GOES) in a geostationary orbit, and the Magnetospheric MultiScale (MMS) Mission, in highly eccentric orbits extending to 12 Earth radii and higher. Indeed, much research has been performed to study the feasibility of using GPS navigation in high Earth orbits and the performance achievable. Recently, GSFC has conducted a series of hardware-in-the-loop tests to assess the performance of this new GPS receiver in various high Earth orbits of interest. Tracking GPS signals down to approximately 22-25 dB-Hz, including signals from the GPS transmitter side-lobes, steady-state navigation performance in a geostationary orbit is on the order of 10 meters. This paper presents the results of these tests, as well as a sensitivity analysis with respect to such factors as ionosphere masks, use of GPS side-lobe signals, and GPS receiver sensitivity.

  20. The diagnostic performance of perfusion MRI for differentiating glioma recurrence from pseudoprogression: A meta-analysis.

    PubMed

    Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo

    2017-03-01

    The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using Deek funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.

  1. Sensitivity analysis of helicopter IMC decelerating steep approach and landing performance to navigation system parameters

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Results of a study to investigate, by means of a computer simulation, the performance sensitivity of helicopter IMC DSAL operations as a function of navigation system parameters are presented. A mathematical model that generically represents a navigation system is formulated. The scenario simulated consists of a straight-in helicopter approach to landing along a 6 deg glideslope. The deceleration magnitude chosen is 0.3g. The navigation model parameters are varied and the statistics of the total system errors (TSE) computed. These statistics are used to determine the critical navigation system parameters that affect the performance of the closed-loop navigation, guidance and control system of a UH-1H helicopter.

  2. Bacteriophage-based assays for the rapid detection of rifampicin resistance in Mycobacterium tuberculosis: a meta-analysis.

    PubMed

    Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L

    2005-10-01

    To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates ≥95%, and 13 of 19 (68%) studies reported ≥95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100 and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.

  3. Performance of the dipstick screening test as a predictor of negative urine culture

    PubMed Central

    Marques, Alexandre Gimenes; Doi, André Mario; Pasternak, Jacyr; Damascena, Márcio dos Santos; França, Carolina Nunes; Martino, Marinês Dalla Valle

    2017-01-01

    Objective To investigate whether the urine dipstick screening test can be used to predict urine culture results. Methods A retrospective study conducted between January and December 2014 based on data from 8,587 patients with a medical order for urine dipstick test, urine sediment analysis and urine culture. Sensitivity, specificity, positive and negative predictive values were determined and ROC curve analysis was performed. Results The percentage of positive cultures was 17.5%. Nitrite had 28% sensitivity and 99% specificity, with positive and negative predictive values of 89% and 87%, respectively. Leukocyte esterase had 79% sensitivity and 84% specificity, with positive and negative predictive values of 51% and 95%, respectively. The combination of positive nitrite or positive leukocyte esterase tests had 85% sensitivity and 84% specificity, with positive and negative predictive values of 53% and 96%, respectively. Positive urinary sediment (more than ten leukocytes per microliter) had 92% sensitivity and 71% specificity, with positive and negative predictive values of 40% and 98%, respectively. The combination of positive nitrite test and positive urinary sediment had 82% sensitivity and 99% specificity, with positive and negative predictive values of 91% and 98%, respectively. The combination of positive nitrite or leukocyte esterase tests and positive urinary sediment had the highest sensitivity (94%) and specificity (84%), with positive and negative predictive values of 58% and 99%, respectively. Based on ROC curve analysis, the best indicator of positive urine culture was the combination of positive leukocyte esterase or nitrite tests and positive urinary sediment, followed by positive leukocyte esterase and nitrite tests, positive urinary sediment alone, positive leukocyte esterase test alone, positive nitrite test alone and finally the combination of positive nitrite test and urinary sediment (AUC: 0.845, 0.844, 0.817, 0.814, 0.635 and 0.626, respectively). Conclusion A negative urine culture can be predicted by negative dipstick test results. Therefore, this test may be a reliable predictor of negative urine culture. PMID:28444086

  4. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
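    The contrast between the two techniques is easiest to see on a much smaller problem. The sketch below uses a static two-spring system K(k)u = f instead of the transient finite element model of the thesis, and writes dK/dk analytically for simplicity (the semianalytical approach in the thesis approximates the coefficient-matrix derivatives by finite differences).

    ```python
    # Semi-analytical vs. overall-finite-difference design sensitivity on a
    # static 2-DOF spring system K(k) u = f. Toy numbers throughout.
    import numpy as np

    f = np.array([0.0, 1.0])

    def K(k):
        # springs of stiffness k (ground to node 1) and 2k (node 1 to node 2)
        return np.array([[3.0 * k, -2.0 * k],
                         [-2.0 * k, 2.0 * k]])

    def dK_dk(k):
        return np.array([[3.0, -2.0],
                         [-2.0, 2.0]])

    k0 = 10.0
    u0 = np.linalg.solve(K(k0), f)

    # Semi-analytical: differentiate K u = f  ->  K (du/dk) = -(dK/dk) u
    du_semi = np.linalg.solve(K(k0), -dK_dk(k0) @ u0)

    # Overall finite difference: re-solve the perturbed design
    h = 1e-6 * k0
    du_fd = (np.linalg.solve(K(k0 + h), f) - u0) / h

    print("semi-analytical du/dk :", du_semi)
    print("finite difference du/dk:", du_fd)
    ```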

  5. Systematic parameter estimation and sensitivity analysis using a multidimensional PEMFC model coupled with DAKOTA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Chao Yang; Luo, Gang; Jiang, Fangming

    2010-05-01

    Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.

  6. IATA for skin sensitization potential – 1 out of 2 or 2 out of 3? ...

    EPA Pesticide Factsheets

    To meet EU regulatory requirements and to avoid or minimize animal testing, there is a need for non-animal methods to assess skin sensitization potential. Given the complexity of the skin sensitization endpoint, there is an expectation that integrated testing and assessment approaches (IATA) will need to be developed which rely on assays representing key events in the pathway. Three non-animal assays have been formally validated: the direct peptide reactivity assay (DPRA), the KeratinoSensTM assay and the h-CLAT assay. At the same time, there have been many efforts to develop IATA with the “2 out of 3” approach attracting much attention whereby a chemical is classified on the basis of the majority outcome. A set of 271 chemicals with mouse, human and non-animal sensitization test data was evaluated to compare the predictive performances of the 3 individual non-animal assays, their binary combinations and the ‘2 out of 3’ approach. The analysis revealed that the most predictive approach was to use both the DPRA and h-CLAT: 1. Perform DPRA – if positive, classify as a sensitizer; 2. If negative, perform h-CLAT – a positive outcome denotes a sensitizer, a negative, a non-sensitizer. With this approach, 83% (LLNA) and 93% (human) of the non-sensitizer predictions were correct, in contrast to the ‘2 out of 3’ approach which had 69% (LLNA) and 79% (human) of non-sensitizer predictions correct. The views expressed are those of the authors and do not ne
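    The two integrated strategies compared above reduce to simple classification rules over the three assay calls; the sketch below writes them out, with invented chemicals and outcomes purely for illustration.

    ```python
    # The sequential DPRA -> h-CLAT strategy vs. the '2 out of 3' majority vote,
    # written as rules over boolean assay calls (True = positive). The chemicals
    # and outcomes below are invented.
    def sequential_dpra_hclat(dpra, hclat):
        # DPRA positive -> sensitizer; otherwise the h-CLAT call decides
        # (so h-CLAT only needs to be run when DPRA is negative).
        return dpra or hclat

    def two_out_of_three(dpra, keratinosens, hclat):
        # majority vote over the three non-animal assays
        return sum([dpra, keratinosens, hclat]) >= 2

    chemicals = {
        # name: (DPRA, KeratinoSens, h-CLAT)
        "chem_A": (True,  False, False),
        "chem_B": (False, True,  True),
        "chem_C": (False, True,  False),
    }

    for name, (d, k, h) in chemicals.items():
        print(f"{name}: sequential = {sequential_dpra_hclat(d, h)}, "
              f"2-of-3 = {two_out_of_three(d, k, h)}")
    ```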

  7. Validation and Implementation of BRCA1/2 Variant Screening in Ovarian Tumor Tissue.

    PubMed

    de Jonge, Marthe M; Ruano, Dina; van Eijk, Ronald; van der Stoep, Nienke; Nielsen, Maartje; Wijnen, Juul T; Ter Haar, Natalja T; Baalbergen, Astrid; Bos, Monique E M M; Kagie, Marjolein J; Vreeswijk, Maaike P G; Gaarenstroom, Katja N; Kroep, Judith R; Smit, Vincent T H B M; Bosse, Tjalling; van Wezel, Tom; van Asperen, Christi J

    2018-06-21

    BRCA1/2 variant analysis in tumor tissue could streamline the referral of patients with epithelial ovarian, fallopian tube, or primary peritoneal cancer to genetic counselors and select patients who benefit most from targeted treatment. We investigated the sensitivity of BRCA1/2 variant analysis in formalin-fixed, paraffin-embedded tumor tissue using a combination of next-generation sequencing and copy number variant multiplex ligation-dependent probe amplification. After optimization using a training cohort of known BRCA1/2 mutation carriers, validation was performed in a prospective cohort (Clinical implementation Of BRCA1/2 screening in ovarian tumor tissue: COBRA-cohort) in which screening of BRCA1/2 tumor DNA and leukocyte germline DNA was performed in parallel. BRCA1 promoter hypermethylation and pedigree analysis were also performed. In the training cohort 45 of 46 germline BRCA1/2 variants were detected (sensitivity 98%). In the COBRA cohort (n=62), all six germline variants were identified (sensitivity 100%), together with five somatic BRCA1/2 variants and eight cases with BRCA1 promoter hypermethylation. In four BRCA1/2 variant-negative patients, surveillance or prophylactic management options were offered based on positive family histories. We conclude that BRCA1/2 formalin-fixed, paraffin-embedded tumor tissue analysis reliably detects BRCA1/2 variants. When taking family history of BRCA1/2 variant-negative patients into account, tumor BRCA1/2 variant screening allows more efficient selection of epithelial ovarian cancer patients for genetic counselling and simultaneously selects patients who benefit most from targeted treatment. Copyright © 2018. Published by Elsevier Inc.

  8. Breast-specific gamma camera imaging with 99mTc-MIBI has better diagnostic performance than magnetic resonance imaging in breast cancer patients: A meta-analysis.

    PubMed

    Zhang, Aimi; Li, Panli; Liu, Qiufang; Song, Shaoli

    2017-01-01

    This study aimed to evaluate the diagnostic role of breast-specific gamma camera imaging (BSGI) with technetium-99m-methoxy isobutyl isonitrile (99mTc-MIBI) and magnetic resonance imaging (MRI) in patients with breast cancer through a meta-analysis. Three reviewers searched articles published in medical journals before June 2016 in MEDLINE, EMBASE and Springer Databases; the references listed in original articles were also retrieved. We used the quality assessment of diagnostic accuracy studies (QUADAS) tool to assess the quality of the included studies. Heterogeneity, pooled sensitivity and specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio (DOR) and summary receiver operating characteristic (SROC) curves were calculated by Meta-DiSc software to estimate the diagnostic performance of BSGI and MRI. Ten studies with 517 patients were included after meeting the inclusion criteria. We did a subgroup analysis of the same data type. The pooled sensitivities of BSGI and MRI were: 0.84 (95% CI, 0.79-0.88) and 0.89 (95% CI, 0.84-0.92) respectively, and the pooled specificities of BSGI and MRI were: 0.82 (95% CI, 0.74-0.88) and 0.39 (95% CI, 0.30-0.49) respectively. The areas under the SROC curve of BSGI and MRI were 0.93 and 0.72 respectively. The results of our meta-analysis indicated that compared with MRI, BSGI has similar sensitivity, higher specificity, better diagnostic performance, and can be widely used in clinical practice.

  9. Falls Risk Prediction for Older Inpatients in Acute Care Medical Wards: Is There an Interest to Combine an Early Nurse Assessment and the Artificial Neural Network Analysis?

    PubMed

    Beauchet, O; Noublanche, F; Simon, R; Sekhon, H; Chabot, J; Levinoff, E J; Kabeshova, A; Launay, C P

    2018-01-01

    Identification of the risk of falls is important among older inpatients. This study aims to examine performance criteria (i.e., sensitivity, specificity, positive predictive value, negative predictive value and accuracy) for fall prediction resulting from a nurse assessment and an artificial neural network (ANN) analysis in older inpatients hospitalized in acute care medical wards. A total of 848 older inpatients (mean age, 83.0±7.2 years; 41.8% female) admitted to acute care medical wards in Angers University hospital (France) were included in this study using an observational prospective cohort design. Within 24 hours after admission of older inpatients, nurses performed a bedside clinical assessment. Participants were separated into non-fallers and fallers (i.e., ≥1 fall during the hospitalization stay). The analysis was conducted using three feed-forward ANNs (multilayer perceptron [MLP], averaged neural network, and neuroevolution of augmenting topologies [NEAT]). Seventy-three (8.6%) participants fell at least once during their hospital stay. ANNs showed a high specificity, regardless of which ANN was used, and the highest value reported was with MLP (99.8%). In contrast, sensitivity was lower, with values ranging from 14.8% to 98.4%. MLP had the highest accuracy (99.7%). Performance criteria for fall prediction resulting from a bedside nursing assessment and an ANN analysis were associated with a high specificity but a low sensitivity, suggesting that this combined approach should be used more as a diagnostic test than a screening test when considering older inpatients in acute care medical wards.

  10. Analysis of ecologically relevant pharmaceuticals in wastewater and surface water using selective solid phase extraction and UPLC/MS/MS

    EPA Science Inventory

    A rapid and sensitive method has been developed for the analysis of 48 human prescription active pharmaceutical ingredients (APIs) and 6 metabolites of interest, utilizing selective solid-phase extraction (SPE) and ultra performance liquid chromatography in combination with tripl...

  11. Adaptation of an urban land surface model to a tropical suburban area: Offline evaluation, sensitivity analysis, and optimization of TEB/ISBA (SURFEX)

    NASA Astrophysics Data System (ADS)

    Harshan, Suraj

    The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical sub-urban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent with smaller model values during daytime and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux. The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. Optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in model formulation. Existence of a significant proportion of less sensitive parameters might be indicating an over-parametrized model. Borg MOEA showed great promise in optimizing the input parameter set. The optimized model modified using the site-specific values for thermal roughness length parametrization shows an improvement in the performance for outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
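    For readers unfamiliar with the Morris method mentioned here, the sketch below computes elementary effects (mu* for overall influence, sigma for nonlinearity/interaction) on a toy function using a simple radial one-at-a-time design; it is not connected to the TEB/ISBA model.

    ```python
    # Minimal Morris-style elementary-effects screening on a toy function.
    # mu* ranks overall influence; a large sigma flags nonlinearity or
    # interaction. Radial one-at-a-time perturbations around random bases.
    import numpy as np

    def model(x):
        return x[0] + 2.0 * x[1] ** 2 + x[1] * x[2] + 0.1 * x[3]

    d, r, delta = 4, 50, 0.1        # factors, trajectories, step size
    rng = np.random.default_rng(3)
    effects = np.zeros((r, d))

    for t in range(r):
        base = rng.uniform(0.0, 1.0 - delta, d)
        y0 = model(base)
        for i in range(d):
            xp = base.copy()
            xp[i] += delta
            effects[t, i] = (model(xp) - y0) / delta   # elementary effect

    mu_star = np.abs(effects).mean(axis=0)
    sigma   = effects.std(axis=0)
    for i in range(d):
        print(f"x{i}: mu* = {mu_star[i]:.2f}, sigma = {sigma[i]:.2f}")
    ```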

  12. Thermal effects on nonlinear vibration of a carbon nanotube-based mass sensor using finite element analysis

    NASA Astrophysics Data System (ADS)

    Kang, Dong-Keun; Kim, Chang-Wan; Yang, Hyun-Ik

    2017-01-01

    In the present study we carried out a dynamic analysis of a CNT-based mass sensor by using a finite element method (FEM)-based nonlinear analysis model of the CNT resonator to elucidate the combined effects of thermal effects and nonlinear oscillation behavior upon the overall mass detection sensitivity. Mass sensors using carbon nanotube (CNT) resonators provide very high sensing performance. Because CNT-based resonators can have high aspect ratios, they can easily exhibit nonlinear oscillation behavior due to large displacements. Also, CNT-based devices may experience high temperatures during their manufacture and operation. These geometrical nonlinearities and temperature changes affect the sensing performance of CNT-based mass sensors. However, it is very hard to find previous literature addressing the detection sensitivity of CNT-based mass sensors including considerations of both these nonlinear behaviors and thermal effects. We modeled the nonlinear equation of motion by using the von Karman nonlinear strain-displacement relation, taking into account the additional axial force associated with the thermal effect. The FEM was employed to solve the nonlinear equation of motion because it can effortlessly handle the more complex geometries and boundary conditions. A doubly clamped CNT resonator actuated by distributed electrostatic force was the configuration subjected to the numerical experiments. Thermal effects upon the fundamental resonance behavior and the shift of resonance frequency due to attached mass, i.e., the mass detection sensitivity, were examined in environments of both high and low (or room) temperature. The fundamental resonance frequency increased with decreasing temperature in the high temperature environment, and increased with increasing temperature in the low temperature environment. The magnitude of the shift in resonance frequency caused by an attached mass represents the sensing performance of a mass sensor, i.e., its mass detection sensitivity, and it can be seen that this shift is affected by the temperature change and the amount of electrostatic force. The thermal effects on the mass detection sensitivity are intensified in the linear oscillation regime and increase with increasing CNT length; this intensification can either improve or worsen the detection sensitivity.
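    The two effects discussed (thermal axial stress and added mass) can be summarized with standard beam-resonator estimates; the relations below are simplified Euler-Bernoulli expressions with a mode- and boundary-condition-dependent constant gamma, offered only as orientation and not taken from the paper's von Karman FEM model.

    ```latex
    % Simplified beam-resonator estimates (not the paper's FEM model).
    % Thermal axial force in a doubly clamped beam and its effect on the
    % resonance frequency:
    \[
      N_T \simeq -\,E A\,\alpha\,\Delta T ,
      \qquad
      f \;\simeq\; f_0 \sqrt{\,1 + \gamma\,\frac{N_T L^2}{E I}\,} ,
    \]
    % where gamma depends on the mode and boundary conditions. The mass-detection
    % signal is the resonance shift caused by an attached mass:
    \[
      \Delta f \;\approx\; -\,\frac{f}{2}\,\frac{\Delta m}{m_{\mathrm{eff}}} .
    \]
    ```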

  13. SU-C-201-04: Quantification of Perfusion Heterogeneity Based On Texture Analysis for Fully Automatic Detection of Ischemic Deficits From Myocardial Perfusion Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fang, Y; Huang, H; Su, T

    Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques on cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCImore » Result was used as the gold standard of coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting the coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve a good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and AUC of 0.82. Such performance is similar to those obtained from the semi-automatic QPS software that gives a sensitivity of 71% and specificity of 77%. Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of the myocardial ischemia.« less

  14. Temperature sensitivity analysis of polarity controlled electrostatically doped tunnel field-effect transistor

    NASA Astrophysics Data System (ADS)

    Nigam, Kaushal; Pandey, Sunil; Kondekar, P. N.; Sharma, Dheeraj

    2016-09-01

    The conventional tunnel field-effect transistors (TFETs) have shown potential to scale down in sub-22 nm regime due to its lower sub-threshold slope and robustness against short-channel effects (SCEs), however, sensitivity towards temperature variation is a major concern. Therefore, for the first time, we investigate temperature sensitivity analysis of a polarity controlled electrostatically doped tunnel field-effect transistor (ED-TFET). Different performance metrics and analog/RF figure-of-merits were considered and compared for both devices, and simulations were performed using Silvaco ATLAS device tool. We found that the variation in ON-state current in ED-TFET is almost temperature independent due to electrostatically doped mechanism, while, it increases in conventional TFET at higher temperature. Above room temperature, the variation in ION, IOFF, and SS sensitivity in ED-TFET are only 0.11%/K, 2.21%/K, and 0.63%/K, while, in conventional TFET the variations are 0.43%/K, 2.99%/K, and 0.71%/K, respectively. However, below room temperature, the variation in ED-TFET ION is 0.195%/K compared to 0.27%/K of conventional TFET. Moreover, it is analysed that the incomplete ionization effect in conventional TFET severely affects the drive current and the threshold voltage, while, ED-TFET remains unaffected. Hence, the proposed ED-TFET is less sensitive towards temperature variation and can be used for cryogenics as well as for high temperature applications.

  15. Performance of procalcitonin in diagnosing parapneumonic pleural effusions: A clinical study and meta-analysis.

    PubMed

    He, Chao; Wang, Bo; Li, Danni; Xu, Huan; Shen, Yongchun

    2017-08-01

    Parapneumonic pleural effusion (PPE) is a common complication of pneumonia. The accurate diagnosis of PPE remains a challenge. Recent studies suggest that procalcitonin (PCT) emerges as a potential biomarker for PPE. Our study aimed to determine the diagnostic value of PCT for PPE by a clinical study and summarize the overall diagnostic performance of PCT through a meta-analysis. Demographic and clinical data of the patients with PPE and controls were collected in our clinical study. The diagnostic performances of serum PCT (s-PCT) were analyzed via receiver operating characteristic (ROC) curve analysis, using area under the curve (AUC) as a measure of accuracy. Literature databases were systematically searched for the studies examining the accuracy of PCT for diagnosing PPE. Data on sensitivity, specificity, positive/negative likelihood ratio (PLR/NLR), and diagnostic odds ratio (DOR) were pooled. Summary ROC curves and AUC were used to evaluate overall test performance. In our clinical study, 47 patients with PPE and 101 controls were included. The s-PCT levels were significantly increased in the setting of PPE (5.44 ± 9.82 ng/mL) compared with malignant PE (0.15 ± 0.19 ng/mL), tuberculous PE (0.18 ± 0.16 ng/mL), and transudates (0.09 ± 0.03 ng/mL) (P < .001). Using a cutoff value of 0.195 ng/mL, the sensitivity and specificity of s-PCT in diagnosing PPE were 0.83 and 0.80, respectively, and AUC was 0.89. In addition, 11 studies were included in our meta-analysis. Summary performance estimates for s-PCT in diagnosing PPE were as follows: sensitivity, 0.78 (95% CI: 0.71-0.84); specificity, 0.74 (95% CI: 0.69-0.78); PLR, 3.46 (95% CI: 2.09-5.74); NLR, 0.27 (95% CI: 0.14-0.54); DOR, 12.37 (95% CI: 4.34-41.17); and AUC, 0.84. The corresponding estimates for p-PCT were as follows: sensitivity, 0.62 (95% CI: 0.57-0.67); specificity, 0.71 (95% CI: 0.68-0.75); PLR 2.31 (95% CI: 1.81-2.95); NLR, 0.47 (95% CI: 0.35-0.63); DOR, 5.48 (95% CI: 3.07-9.77); and AUC, 0.80. Both s-PCT and p-PCT might have modest performance in diagnosing PPE. However, more studies on a large scale should be performed to confirm our findings.

  16. High mass resolution time of flight mass spectrometer for measuring products in heterogeneous catalysis in highly sensitive microreactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, T.; Jensen, R.; Christensen, M. K.

    2012-07-15

    We demonstrate a combined microreactor and time of flight system for testing and characterization of heterogeneous catalysts with high resolution mass spectrometry and high sensitivity. Catalyst testing is performed in silicon-based microreactors which have high sensitivity and fast thermal response. Gas analysis is performed with a time of flight mass spectrometer with a modified nude Bayard-Alpert ionization gauge as gas ionization source. The mass resolution of the time of flight mass spectrometer using the ion gauge as ionization source is estimated to m/Δm > 2500. The system design is superior to conventional batch and flow reactors with accompanying product detection by quadrupole mass spectrometry or gas chromatography not only due to the high sensitivity, fast temperature response, high mass resolution, and fast acquisition time of mass spectra but it also allows wide mass range (0-5000 amu in the current configuration). As a demonstration of the system performance we present data from ammonia oxidation on a Pt thin film showing resolved spectra of OH and NH3.
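    The quoted resolving power can be related to timing resolution through the standard time-of-flight scaling (not derived in the paper): flight time grows as the square root of mass, so:

    ```latex
    \[
      t \;=\; \frac{L}{\sqrt{2\,q\,U/m}} \;\propto\; \sqrt{m}
      \quad\Longrightarrow\quad
      \frac{m}{\Delta m} \;=\; \frac{t}{2\,\Delta t},
    \]
    % hence m/\Delta m > 2500 at flight time t requires a timing resolution of
    % roughly \Delta t < t/5000.
    ```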

  17. High mass resolution time of flight mass spectrometer for measuring products in heterogeneous catalysis in highly sensitive microreactors

    NASA Astrophysics Data System (ADS)

    Andersen, T.; Jensen, R.; Christensen, M. K.; Pedersen, T.; Hansen, O.; Chorkendorff, I.

    2012-07-01

    We demonstrate a combined microreactor and time of flight system for testing and characterization of heterogeneous catalysts with high resolution mass spectrometry and high sensitivity. Catalyst testing is performed in silicon-based microreactors which have high sensitivity and fast thermal response. Gas analysis is performed with a time of flight mass spectrometer with a modified nude Bayard-Alpert ionization gauge as gas ionization source. The mass resolution of the time of flight mass spectrometer using the ion gauge as ionization source is estimated to m/Δm > 2500. The system design is superior to conventional batch and flow reactors with accompanying product detection by quadrupole mass spectrometry or gas chromatography not only due to the high sensitivity, fast temperature response, high mass resolution, and fast acquisition time of mass spectra but it also allows wide mass range (0-5000 amu in the current configuration). As a demonstration of the system performance we present data from ammonia oxidation on a Pt thin film showing resolved spectra of OH and NH3.

  18. High mass resolution time of flight mass spectrometer for measuring products in heterogeneous catalysis in highly sensitive microreactors.

    PubMed

    Andersen, T; Jensen, R; Christensen, M K; Pedersen, T; Hansen, O; Chorkendorff, I

    2012-07-01

    We demonstrate a combined microreactor and time of flight system for testing and characterization of heterogeneous catalysts with high resolution mass spectrometry and high sensitivity. Catalyst testing is performed in silicon-based microreactors which have high sensitivity and fast thermal response. Gas analysis is performed with a time of flight mass spectrometer with a modified nude Bayard-Alpert ionization gauge as gas ionization source. The mass resolution of the time of flight mass spectrometer using the ion gauge as ionization source is estimated to m/Δm > 2500. The system design is superior to conventional batch and flow reactors with accompanying product detection by quadrupole mass spectrometry or gas chromatography not only due to the high sensitivity, fast temperature response, high mass resolution, and fast acquisition time of mass spectra but it also allows wide mass range (0-5000 amu in the current configuration). As a demonstration of the system performance we present data from ammonia oxidation on a Pt thin film showing resolved spectra of OH and NH(3).

  19. Sensitivity of Combustion-Acoustic Instabilities to Boundary Conditions for Premixed Gas Turbine Combustors

    NASA Technical Reports Server (NTRS)

    Darling, Douglas; Radhakrishnan, Krishnan; Oyediran, Ayo

    1995-01-01

    Premixed combustors, which are being considered for low NOx engines, are susceptible to instabilities due to feedback between pressure perturbations and combustion. This feedback can cause damaging mechanical vibrations of the system as well as degrade the emissions characteristics and combustion efficiency. In a lean combustor instabilities can also lead to blowout. A model was developed to perform linear combustion-acoustic stability analysis using detailed chemical kinetic mechanisms. The Lewis Kinetics and Sensitivity Analysis Code, LSENS, was used to calculate the sensitivities of the heat release rate to perturbations in density and temperature. In the present work, an assumption was made that the mean flow velocity was small relative to the speed of sound. Results of this model showed the regions of growth of perturbations to be most sensitive to the reflectivity of the boundary when reflectivities were close to unity.

  20. Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles

    NASA Astrophysics Data System (ADS)

    Kaur, Ramanpreet; Sharma, Sapna

    2018-06-01

    The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram corresponding to the proportion of higher-sensitivity vehicles present on the road. To investigate the propagating behavior of density waves, nonlinear analysis is performed and, near the critical point, the kink-antikink soliton is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability improves as the fraction parameter increases. The theoretical findings are verified via direct numerical simulation.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Haitao, E-mail: liaoht@cae.ac.cn

    The direct differentiation and improved least squares shadowing methods are both developed for accurately and efficiently calculating the sensitivity coefficients of time-averaged quantities for chaotic dynamical systems. The key idea is to recast the time-averaged integration term in the form of a differential equation before applying the sensitivity analysis method. An additional constraint-based equation which forms the augmented equations of motion is proposed to calculate the time-averaged integration variable, and the sensitivity coefficients are obtained as a result of solving the augmented differential equations. The application of the least squares shadowing formulation to the augmented equations results in an explicit expression for the sensitivity coefficient which is dependent on the final state of the Lagrange multipliers. The LU factorization technique used to calculate the Lagrange multipliers improves both convergence behaviour and computational expense. Numerical experiments on a set of problems selected from the literature are presented to illustrate the developed methods. The numerical results demonstrate the correctness and effectiveness of the present approaches and some short impulsive sensitivity coefficients are observed by using the direct differentiation sensitivity analysis method.
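    In compact form (notation mine, consistent with the abstract rather than taken from the paper), the target of the sensitivity analysis and the augmentation trick read:

    ```latex
    % Sensitivity of an infinite-time average with respect to a design parameter s:
    \[
      \bar{J}(s) \;=\; \lim_{T\to\infty}\frac{1}{T}\int_0^{T} J\bigl(u(t;s),\,s\bigr)\,dt ,
      \qquad \text{target: } \frac{d\bar{J}}{ds}.
    \]
    % Recasting the running time average as an extra state variable turns this into
    % an ordinary sensitivity problem for an augmented ODE system:
    \[
      \dot{u} = f(u,s), \qquad \dot{I} = J(u,s), \qquad \bar{J} \approx \frac{I(T)}{T}.
    \]
    ```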

  2. Influences of geological parameters to probabilistic assessment of slope stability of embankment

    NASA Astrophysics Data System (ADS)

    Nguyen, Qui T.; Le, Tuan D.; Konečný, Petr

    2018-04-01

    This article considers the influence of geological parameters on the slope stability of an embankment in a probabilistic analysis using the SLOPE/W computational system. The stability of a simple slope is evaluated with and without pore-water pressure on the basis of the variation of soil properties. Normal distributions of unit weight, cohesion and internal friction angle are assumed. The Monte Carlo simulation technique is employed to analyse the critical slip surface. A sensitivity analysis is performed to observe the variation of the geological parameters and their effects on the factor of safety of the slope.
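    A stripped-down version of that probabilistic workflow is sketched below: soil properties are sampled from normal distributions and a probability of failure is estimated, except that the closed-form infinite-slope factor of safety stands in for SLOPE/W's critical-slip-surface search and every number is invented.

    ```python
    # Monte Carlo slope-stability sketch with normally distributed soil
    # properties. Uses the closed-form infinite-slope factor of safety (dry
    # slope, no pore-water pressure) instead of SLOPE/W; all values invented.
    import numpy as np

    rng = np.random.default_rng(4)
    N = 100_000

    beta  = np.radians(30.0)                      # slope angle
    z     = 5.0                                   # depth of slip plane, m
    gamma = rng.normal(19.0, 1.0, N)              # unit weight, kN/m^3
    c     = rng.normal(10.0, 2.0, N)              # cohesion, kPa
    phi   = np.radians(rng.normal(28.0, 3.0, N))  # friction angle, deg -> rad

    resisting = c + gamma * z * np.cos(beta) ** 2 * np.tan(phi)
    driving   = gamma * z * np.sin(beta) * np.cos(beta)
    FS = resisting / driving

    print(f"mean FS = {FS.mean():.2f}, P(FS < 1) = {np.mean(FS < 1):.3%}")
    ```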

  3. Optimization of Turbine Engine Cycle Analysis with Analytic Derivatives

    NASA Technical Reports Server (NTRS)

    Hearn, Tristan; Hendricks, Eric; Chin, Jeffrey; Gray, Justin; Moore, Kenneth T.

    2016-01-01

    A new engine cycle analysis tool, called Pycycle, was recently built using the OpenMDAO framework. This tool uses equilibrium chemistry based thermodynamics, and provides analytic derivatives. This allows for stable and efficient use of gradient-based optimization and sensitivity analysis methods on engine cycle models, without requiring the use of finite difference derivative approximation methods. To demonstrate this, a gradient-based design optimization was performed on a multi-point turbofan engine model. Results demonstrate very favorable performance compared to an optimization of an identical model using finite-difference approximated derivatives.

  4. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    PubMed

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo, Aziz, N.

    2017-11-01

    In this paper, sensitivity analysis and nonlinearity assessment of the steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model; it shows that four of the six input parameters are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
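    The screening step described can be pictured with a tiny two-level factorial example: each main effect is the difference between the mean response at the factor's high and low coded levels. The response function and factor names below are made up and stand in for the furnace model.

    ```python
    # Two-level factorial screening sketch: main effect of a factor = mean
    # response at its +1 level minus mean response at its -1 level. The
    # response function and factors are invented, not the furnace model.
    import itertools
    import numpy as np

    factors = ["AFR", "feed_comp", "coil_T", "steam_ratio"]

    def response(levels):
        afr, feed, coil, steam = levels            # coded levels, -1 or +1
        return 10 + 4 * afr + 3 * feed + 0.2 * coil + 0.1 * steam + 1.5 * afr * feed

    design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
    y = np.array([response(row) for row in design])

    for j, name in enumerate(factors):
        effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
        print(f"{name:12s} main effect = {effect:+.2f}")
    # Factors with small effects (coil_T, steam_ratio here) are the candidates
    # for dropping before the step tests used to assess nonlinearity.
    ```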

  6. [Performance of cognitive brief test in elderly patients with dementia in advanced stage living in an urban community of Lima, Peru].

    PubMed

    Custodio, Nilton; Alva-Diaz, Carlos; Becerra-Becerra, Yahaira; Montesinos, Rosa; Lira, David; Herrera-Pérez, Eder; Castro-Suárez, Sheila; Cuenca-Alfaro, José; Valeriano-Lorenzo, Elizabeth

    2016-01-01

    To evaluate the performance of the clock drawing test - Manos version (PDR-M) and the Mini Mental State Examination - Peruvian version (MMSE) for detecting dementia in a sample based on an urban community of Lima, Peru. This is an observational, analytical, cross-sectional secondary analysis of a database; the gold standard was the combined clinical and neuropsychological evaluation. Test performance was evaluated individually and in combination. Data were obtained from a prevalence study conducted in 2008 in Cercado de Lima. For the detection of dementia of any kind, the MMSE showed a sensitivity of 64.1%, specificity of 84.1%, PPV of 24.4%, NPV of 96.7%, PLR of 4.03 and NLR of 0.43. The PDR-M showed a sensitivity of 89.3%, specificity of 98.1%, PPV of 79.3%, NPV of 99.1%, PLR of 47.79 and NLR of 0.11. When both tests were applied and at least one of them was positive, they showed a sensitivity of 98.1%, specificity of 84.1%, PPV of 33.1%, NPV of 99.8%, PLR of 6.17 and NLR of 0.02. When Alzheimer-type dementia and non-Alzheimer dementia were analysed separately, the parameter values did not differ substantially from those obtained for dementia of any kind. The combination of the MMSE and PDR-M shows good discriminative ability to detect moderate and severe dementia in a population living in an urban community in Lima.

  7. Genetic diversity analysis of Jatropha curcas L. (Euphorbiaceae) based on methylation-sensitive amplification polymorphism.

    PubMed

    Kanchanaketu, T; Sangduen, N; Toojinda, T; Hongtrakul, V

    2012-04-13

    Genetic analysis of 56 samples of Jatropha curcas L. collected from Thailand and other countries was performed using the methylation-sensitive amplification polymorphism (MSAP) technique. Nine primer combinations were used to generate MSAP fingerprints. When the data were interpreted as amplified fragment length polymorphism (AFLP) markers, 471 markers were scored. All 56 samples were classified into three major groups: γ-irradiated, non-toxic and toxic accessions. Genetic similarity among the samples was extremely high, ranging from 0.95 to 1.00, which indicated very low genetic diversity in this species. The MSAP fingerprint was further analyzed for DNA methylation polymorphisms. The results revealed differences in the DNA methylation level among the samples. However, the samples collected from saline areas and some species hybrids showed specific DNA methylation patterns. AFLP data were used, together with methylation-sensitive AFLP (MS-AFLP) data, to construct a phylogenetic tree, resulting in higher efficiency to distinguish the samples. This combined analysis separated samples previously grouped in the AFLP analysis. This analysis also distinguished some hybrids. Principal component analysis was also performed; the results confirmed the separation in the phylogenetic tree. Some polymorphic bands, involving both nucleotide and DNA methylation polymorphism, that differed between toxic and non-toxic samples were identified, cloned and sequenced. BLAST analysis of these fragments revealed differences in DNA methylation in some known genes and nucleotide polymorphism in chloroplast DNA. We conclude that MSAP is a powerful technique for the study of genetic diversity for organisms that have a narrow genetic base.

  8. Case-Deletion Diagnostics for Maximum Likelihood Multipoint Quantitative Trait Locus Linkage Analysis

    PubMed Central

    Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.

    2009-01-01

    Objectives Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086

  9. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach.

    PubMed

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

    To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic using a fuzzy logic approach. In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5y were considered. A sensitivity analysis was performed considering such costs and the accounting amortization of the system during this 5y period. Furthermore, a fuzzy logic analysis was used to obtain an estimate of the income associated with each femtosecond laser-assisted cataract surgery (G). According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis confirmed that the patient had to pay more, between $661.8 and $667.4 per surgery, without considering the cost of the intraocular lens (IOL). Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in those centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients who are able to pay a reasonable range of cost.

  10. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  11. Optimization Issues with Complex Rotorcraft Comprehensive Analysis

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.; Tarzanin, Frank J.; Hirsh, Joel E.; Young, Darrell K.

    1998-01-01

    This paper investigates the use of the general purpose automatic differentiation (AD) tool called Automatic Differentiation of FORTRAN (ADIFOR) as a means of generating sensitivity derivatives for use in Boeing Helicopter's proprietary comprehensive rotor analysis code (VII). ADIFOR transforms an existing computer program into a new program that performs a sensitivity analysis in addition to the original analysis. In this study both the pros (exact derivatives, no step-size problems) and cons (more CPU, more memory) of ADIFOR are discussed. The size (based on the number of lines) of the VII code after ADIFOR processing increased by 70 percent and resulted in substantial computer memory requirements at execution. The ADIFOR derivatives took about 75 percent longer to compute than the finite-difference derivatives. However, the ADIFOR derivatives are exact and are not functions of step-size. The VII sensitivity derivatives generated by ADIFOR are compared with finite-difference derivatives. The ADIFOR and finite-difference derivatives are used in three optimization schemes to solve a low vibration rotor design problem.
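
    The source-transformation idea behind ADIFOR can be illustrated, in a language-independent way, with forward-mode automatic differentiation using dual numbers. The short Python sketch below is purely conceptual (it is not ADIFOR and not the VII code) and shows why AD derivatives are exact while finite-difference derivatives depend on the step size.

      # Conceptual forward-mode automatic differentiation with dual numbers.
      # Not ADIFOR; it only illustrates why AD derivatives are exact and free of
      # step-size error, unlike finite differences.
      import math

      class Dual:
          def __init__(self, val, der=0.0):
              self.val, self.der = val, der
          def __add__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val + other.val, self.der + other.der)
          __radd__ = __add__
          def __mul__(self, other):
              other = other if isinstance(other, Dual) else Dual(other)
              return Dual(self.val * other.val,
                          self.der * other.val + self.val * other.der)
          __rmul__ = __mul__

      def sin(x):  # elementary function extended to dual numbers
          return Dual(math.sin(x.val), math.cos(x.val) * x.der)

      def f(x):                      # some "analysis code": f(x) = x*sin(x) + 3x
          return x * sin(x) + 3 * x

      x0 = 1.2
      ad = f(Dual(x0, 1.0)).der                           # exact derivative via AD
      h = 1e-6
      fd = (f(Dual(x0 + h)).val - f(Dual(x0)).val) / h    # finite-difference estimate
      print(f"AD derivative = {ad:.10f}")
      print(f"FD derivative = {fd:.10f}  (step-size dependent)")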

  12. A framework for sensitivity analysis of decision trees.

    PubMed

    Kamiński, Bogumił; Jakubczyk, Michał; Szufel, Przemysław

    2018-01-01

    In the paper, we consider sequential decision problems with uncertainty, represented as decision trees. Sensitivity analysis is always a crucial element of decision making and in decision trees it often focuses on probabilities. In the stochastic model considered, the user often has only limited information about the true values of probabilities. We develop a framework for performing sensitivity analysis of optimal strategies accounting for this distributional uncertainty. We design this robust optimization approach in an intuitive and not overly technical way, to make it simple to apply in daily managerial practice. The proposed framework allows for (1) analysis of the stability of the expected-value-maximizing strategy and (2) identification of strategies which are robust with respect to pessimistic/optimistic/mode-favoring perturbations of probabilities. We verify the properties of our approach in two cases: (a) probabilities in a tree are the primitives of the model and can be modified independently; (b) probabilities in a tree reflect some underlying, structural probabilities, and are interrelated. We provide a free software tool implementing the methods described.
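
    The sketch below is a deliberately simplified illustration of the kind of robustness question addressed by the framework (it is not the authors' software): the expected value of two hypothetical strategies is computed from point-estimate probabilities, and the stability of the expected-value-maximizing strategy is then checked under random perturbations of those probabilities.

      # Illustrative sketch: expected value of two strategies at a chance node, and a
      # check of how often the nominally optimal strategy stays optimal when the
      # probabilities are perturbed. Payoffs and probabilities are made up.
      import random

      def expected_value(p_success, payoff_success, payoff_failure):
          return p_success * payoff_success + (1 - p_success) * payoff_failure

      strategies = {
          "A": dict(p_success=0.70, payoff_success=100.0, payoff_failure=-20.0),
          "B": dict(p_success=0.50, payoff_success=160.0, payoff_failure=-40.0),
      }

      nominal_best = max(strategies, key=lambda s: expected_value(**strategies[s]))

      # Robustness check: jitter each probability within +/-0.10 and count how often
      # the nominal winner remains the expected-value maximizer.
      random.seed(0)
      stable, trials = 0, 10_000
      for _ in range(trials):
          perturbed = {}
          for name, kw in strategies.items():
              p = min(1.0, max(0.0, kw["p_success"] + random.uniform(-0.10, 0.10)))
              perturbed[name] = expected_value(p, kw["payoff_success"], kw["payoff_failure"])
          stable += max(perturbed, key=perturbed.get) == nominal_best

      print(f"Nominal best strategy: {nominal_best}")
      print(f"Remains optimal in {100 * stable / trials:.1f}% of perturbations")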

  13. Fast and sensitive trace analysis of malachite green using a surface-enhanced Raman microfluidic sensor.

    PubMed

    Lee, Sangyeop; Choi, Junghyun; Chen, Lingxin; Park, Byungchoon; Kyong, Jin Burm; Seong, Gi Hun; Choo, Jaebum; Lee, Yeonjung; Shin, Kyung-Hoon; Lee, Eun Kyu; Joo, Sang-Woo; Lee, Kyeong-Hee

    2007-05-08

    A rapid and highly sensitive trace analysis technique for determining malachite green (MG) in a polydimethylsiloxane (PDMS) microfluidic sensor was investigated using surface-enhanced Raman spectroscopy (SERS). A zigzag-shaped PDMS microfluidic channel was fabricated for efficient mixing between MG analytes and aggregated silver colloids. Under optimal flow-velocity conditions, MG molecules were effectively adsorbed onto silver nanoparticles while flowing along the upper and lower zigzag-shaped PDMS channel. A quantitative analysis of MG was performed based on the measured peak height at 1615 cm(-1) in its SERS spectrum. The limit of detection of the SERS microfluidic sensor was found to be below the 1-2 ppb level; this low detection limit is comparable to that of LC-MS detection. In the present study, we introduce a new conceptual detection technology, using a SERS microfluidic sensor, for the highly sensitive trace analysis of MG in water.

  14. Preventive behaviors by the level of perceived infection sensitivity during the Korea outbreak of Middle East Respiratory Syndrome in 2015.

    PubMed

    Lee, Soon Young; Yang, Hee Jeong; Kim, Gawon; Cheong, Hae-Kwan; Choi, Bo Youl

    2016-01-01

    This study was performed to investigate the relationship between community residents' infection sensitivity and their levels of preventive behaviors during the 2015 Middle East Respiratory Syndrome (MERS) outbreak in Korea. A total of 7,281 participants from nine areas in Gyeonggi-do, including Pyeongtaek, the origin of the 2015 outbreak, agreed to participate in the survey, and the data from 6,739 participants were included in the final analysis. The data on perceived infection sensitivity were subjected to cluster analysis. The levels of stress, reliability/practice of preventive behaviors, hand washing practice and policy credibility during the outbreak period were analyzed for each cluster. Cluster analysis of infection sensitivity due to the MERS outbreak resulted in classification of participants into four groups: the non-sensitive group (14.5%), social concern group (17.4%), neutral group (29.1%), and overall sensitive group (39.0%). A logistic regression analysis found that the overall sensitive group, with high sensitivity, had higher stress levels (17.80; 95% confidence interval [CI], 13.77 to 23.00), higher reliability on preventive behaviors (5.81; 95% CI, 4.84 to 6.98), higher practice of preventive behaviors (4.53; 95% CI, 3.83 to 5.37) and higher practice of hand washing (2.71; 95% CI, 2.13 to 3.43) during the outbreak period, compared to the non-sensitive group. Infection sensitivity of community residents during the MERS outbreak correlated with gender, age, occupation, and health behaviors. When there is an outbreak in the community, there is a need to maintain a certain level of sensitivity while reducing excessive stress, as well as to promote the practice of preventive behaviors among local residents. In particular, target groups need to be notified and policies need to be established with consideration of the socio-demographic characteristics of the community.

  15. Sensitive magnetic sensors without cooling in biomedical engineering.

    PubMed

    Nowak, H; Strähmel, E; Giessler, F; Rinneberg, G; Haueisen, J

    2003-01-01

    Magnetic field sensors are used in various fields of technology. In the past few years a large variety of magnetic field sensors has been established and the performance of these sensors has improved enormously. In this review article, recent developments in the area of sensitive magnetic field sensors (resolution better than 1 nT) are presented and examined with regard to their parameters, mainly from the perspective of application fields in biomedical engineering. A comparison of all commercially available sensitive magnetic field sensors shows current and prospective ranges of application.

  16. Iowa gambling task performance in euthymic bipolar I disorder: A meta-analysis and empirical study

    PubMed Central

    Edge, Michael D.; Johnson, Sheri L.; Ng, Tommy; Carver, Charles S.

    2013-01-01

    Background The Iowa Gambling Task (IGT) has been recommended as an index of reward sensitivity, which is elevated in bipolar disorder. We conducted a meta-analysis of IGT performance in euthymic bipolar I disorder compared with control participants. Findings indicated that people with bipolar disorder make more risky choices than control participants, though the effect is small (g=0.35). It is not clear which of the many processes involved in IGT performance are involved in producing the observed group difference. Methods Fifty-five euthymic people with bipolar disorder and 39 control participants completed the IGT. The Expectancy Valence Model was used to examine differences in IGT. We also examined whether variation in IGT performance within the bipolar group was related to current mood, illness course, impulsivity, or demographics. Results Bipolar and control groups did not differ on the total number of risky choices, rate of learning, or any of the parameters of the Expectancy Valence Model. IGT performance in bipolar disorder was not related to any of the examined individual differences. Limitations It is possible that there are group differences that are too small to detect at our sample size or that are not amenable to study via the Expectancy Valence Model. Conclusions We were unable to identify group differences on the IGT or correlates of IGT performance within bipolar disorder. Though the IGT may serve as a useful model for decision-making, its structure may make it unsuitable for behavioral assessment of reward sensitivity independent of punishment sensitivity. PMID:23219060

  17. Performance of Panfungal- and Specific-PCR-Based Procedures for Etiological Diagnosis of Invasive Fungal Diseases on Tissue Biopsy Specimens with Proven Infection: a 7-Year Retrospective Analysis from a Reference Laboratory

    PubMed Central

    Bernal-Martinez, L.; Castelli, M. V.; Rodriguez-Tudela, J. L.; Cuenca-Estrella, M.

    2014-01-01

    A retrospective analysis of real-time PCR (RT-PCR) results for 151 biopsy samples obtained from 132 patients with proven invasive fungal diseases was performed. PCR-based techniques proved to be fast and sensitive and enabled definitive diagnosis in all cases studied, with detection of a total of 28 fungal species. PMID:24574295

  18. Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures

    NASA Technical Reports Server (NTRS)

    Abumeri, Galib H.; Chamis, Christos C.

    2003-01-01

    The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) prediction of the failure load and the buckling load, (2) coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstration that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.

  19. Improving the analysis of slug tests

    USGS Publications Warehouse

    McElwee, C.D.

    2002-01-01

    This paper examines several techniques that have the potential to improve the quality of slug test analysis. These techniques are applicable in the range from low hydraulic conductivities with overdamped responses to high hydraulic conductivities with nonlinear oscillatory responses. Four techniques for improving slug test analysis will be discussed: use of an extended capability nonlinear model, sensitivity analysis, correction for acceleration and velocity effects, and use of multiple slug tests. The four-parameter nonlinear slug test model used in this work is shown to allow accurate analysis of slug tests with widely differing character. The parameter β represents a correction to the water column length caused primarily by radius variations in the wellbore and is most useful in matching the oscillation frequency and amplitude. The water column velocity at slug initiation (V0) is an additional model parameter, which would ideally be zero but may not be due to the initiation mechanism. The remaining two model parameters are A (parameter for nonlinear effects) and K (hydraulic conductivity). Sensitivity analysis shows that in general β and V0 have the lowest sensitivity and K usually has the highest. However, for very high K values the sensitivity to A may surpass the sensitivity to K. Oscillatory slug tests involve higher accelerations and velocities of the water column; thus, the pressure transducer responses are affected by these factors and the model response must be corrected to allow maximum accuracy for the analysis. The performance of multiple slug tests will allow some statistical measure of the experimental accuracy and of the reliability of the resulting aquifer parameters. © 2002 Elsevier Science B.V. All rights reserved.

  20. Sensitivity analysis of infectious disease models: methods, advances and their application

    PubMed Central

    Wu, Jianyong; Dhingra, Radhika; Gambhir, Manoj; Remais, Justin V.

    2013-01-01

    Sensitivity analysis (SA) can aid in identifying influential model parameters and optimizing model structure, yet infectious disease modelling has yet to adopt advanced SA techniques that are capable of providing considerable insights over traditional methods. We investigate five global SA methods—scatter plots, the Morris and Sobol’ methods, Latin hypercube sampling-partial rank correlation coefficient and the sensitivity heat map method—and detail their relative merits and pitfalls when applied to a microparasite (cholera) and macroparasite (schistosomiasis) transmission model. The methods investigated yielded similar results with respect to identifying influential parameters, but offered specific insights that vary by method. The classical methods differed in their ability to provide information on the quantitative relationship between parameters and model output, particularly over time. The heat map approach provides information about the group sensitivity of all model state variables, and the parameter sensitivity spectrum obtained using this method reveals the sensitivity of all state variables to each parameter over the course of the simulation period, especially valuable for expressing the dynamic sensitivity of a microparasite epidemic model to its parameters. A summary comparison is presented to aid infectious disease modellers in selecting appropriate methods, with the goal of improving model performance and design. PMID:23864497
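
    One of the methods compared in the review, Latin hypercube sampling with partial rank correlation coefficients (LHS-PRCC), can be sketched on a basic SIR model as follows; the model, parameter ranges and sample size are illustrative assumptions and are unrelated to the cholera and schistosomiasis models of the paper.

      # Minimal LHS-PRCC sketch on a basic SIR model (illustrative only).
      # Output of interest: epidemic peak prevalence.
      import numpy as np
      from scipy.stats import qmc, rankdata

      def sir_peak(beta, gamma, days=200, dt=0.1, s0=0.99, i0=0.01):
          s, i, peak = s0, i0, i0
          for _ in range(int(days / dt)):          # simple forward-Euler integration
              ds = -beta * s * i
              di = beta * s * i - gamma * i
              s, i = s + dt * ds, i + dt * di
              peak = max(peak, i)
          return peak

      # Latin hypercube sample of the two uncertain parameters.
      sampler = qmc.LatinHypercube(d=2, seed=1)
      unit = sampler.random(n=200)
      beta = 0.1 + unit[:, 0] * (0.9 - 0.1)        # transmission rate in [0.1, 0.9]
      gamma = 0.05 + unit[:, 1] * (0.5 - 0.05)     # recovery rate in [0.05, 0.5]
      y = np.array([sir_peak(b, g) for b, g in zip(beta, gamma)])

      def prcc(x, y, others):
          """Partial rank correlation: correlate the rank residuals of x and y after
          regressing both on the ranks of the remaining parameters."""
          rx, ry = rankdata(x), rankdata(y)
          Z = np.column_stack([np.ones_like(rx)] + [rankdata(o) for o in others])
          res_x = rx - Z @ np.linalg.lstsq(Z, rx, rcond=None)[0]
          res_y = ry - Z @ np.linalg.lstsq(Z, ry, rcond=None)[0]
          return np.corrcoef(res_x, res_y)[0, 1]

      print("PRCC(beta, peak)  =", round(prcc(beta, y, [gamma]), 3))
      print("PRCC(gamma, peak) =", round(prcc(gamma, y, [beta]), 3))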

  1. Mars approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model based on the concept of the variance of conditional expectation (VCE). We suggest evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.

  2. Fragrances and other materials in deodorants: search for potentially sensitizing molecules using combined GC-MS and structure activity relationship (SAR) analysis.

    PubMed

    Rastogi, S C; Lepoittevin, J P; Johansen, J D; Frosch, P J; Menné, T; Bruze, M; Dreier, B; Andersen, K E; White, I R

    1998-12-01

    Deodorants are one of the most frequently used types of cosmetics and are a source of allergic contact dermatitis. Therefore, a gas chromatography-mass spectrometry analysis of 71 deodorants was performed for identification of fragrance and non-fragrance materials present in marketed deodorants. Furthermore, the sensitizing potential of these molecules was evaluated using structure-activity relationship (SAR) analysis. This was based on the presence of 1 or more chemically reactive sites in the chemical structure associated with sensitizing potential. Among the many different substances used to formulate cosmetic products (over 3500), 226 chemicals were identified in a sample of 71 deodorants. 84 molecules were found to contain at least 1 structural alert, and 70 to belong to, or be susceptible to being metabolized into, the chemical group of aldehydes, ketones and alpha,beta-unsaturated aldehydes, ketones or esters. The combination of GC-MS and SAR analysis could be helpful in the selection of substances for supplementary investigations regarding sensitizing properties. Thus, it may be a valuable tool in the management of contact allergy to deodorants and for producing new deodorants with decreased propensity to cause contact allergy.
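
    A structural-alert screen of the kind used in the SAR step can be sketched with RDKit substructure matching; the two SMARTS patterns and the example molecules below are illustrative assumptions, not the rule set or the deodorant constituents analyzed in the paper.

      # Illustrative structural-alert screen in the spirit of the SAR analysis above,
      # using RDKit SMARTS patterns. The patterns (Michael-acceptor alpha,beta-
      # unsaturated carbonyl and aldehyde) are a simplified example set only.
      from rdkit import Chem

      alerts = {
          "alpha,beta-unsaturated carbonyl": Chem.MolFromSmarts("C=CC=O"),
          "aldehyde": Chem.MolFromSmarts("[CX3H1]=O"),
      }

      fragrance_examples = {
          "cinnamal": "O=C/C=C/c1ccccc1",      # known fragrance contact allergen
          "limonene": "CC1=CCC(CC1)C(=C)C",    # no carbonyl alert in this simple set
      }

      for name, smiles in fragrance_examples.items():
          mol = Chem.MolFromSmiles(smiles)
          hits = [label for label, patt in alerts.items() if mol.HasSubstructMatch(patt)]
          print(f"{name}: {'ALERT: ' + ', '.join(hits) if hits else 'no alert found'}")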

  3. Simultaneous Solid Phase Extraction and Derivatization of Aliphatic Primary Amines Prior to Separation and UV-Absorbance Detection

    PubMed Central

    Felhofer, Jessica L.; Scida, Karen; Penick, Mark; Willis, Peter A.; Garcia, Carlos D.

    2013-01-01

    To overcome the problem of poor sensitivity of capillary electrophoresis-UV absorbance for the detection of aliphatic amines, a solid phase extraction and derivatization scheme was developed. This work demonstrates successful coupling of amines to a chromophore immobilized on a solid phase and subsequent cleavage and analysis. Although the analysis of many types of amines is relevant for myriad applications, this paper focuses on the derivatization and separation of amines with environmental relevance. This work aims to provide the foundations for future developments of an integrated sample preparation microreactor capable of performing simultaneous derivatization, preconcentration, and sample cleanup for sensitive analysis of primary amines. PMID:24054648

  4. Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor

    NASA Astrophysics Data System (ADS)

    Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar

    2018-05-01

    This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer are described as functions of temperature and gas concentration. The analysis has been done for reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, the simulated results have been compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.

  5. Relative Performance of Academic Departments Using DEA with Sensitivity Analysis

    ERIC Educational Resources Information Center

    Tyagi, Preeti; Yadav, Shiv Prasad; Singh, S. P.

    2009-01-01

    The process of liberalization and globalization of Indian economy has brought new opportunities and challenges in all areas of human endeavor including education. Educational institutions have to adopt new strategies to make best use of the opportunities and counter the challenges. One of these challenges is how to assess the performance of…

  6. DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Pernigotti, D.; Belis, C. A.

    2018-05-01

    DeltaSA is an R-package and a Java on-line tool developed at the EC-Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of studies: the assignment of a factor to a source in factor analytical models (source identification) and the model performance evaluation. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profiles and time series similarity. In this study, a sensitivity analysis of the model performance criteria is accomplished using the results of a synthetic dataset where "a priori" references are available. The consensus modulated standard deviation punc gives the best choice for the model performance evaluation when a conservative approach is adopted.

  7. Diagnostic performance of semi-quantitative and quantitative stress CMR perfusion analysis: a meta-analysis.

    PubMed

    van Dijk, R; van Assen, M; Vliegenthart, R; de Bock, G H; van der Harst, P; Oudkerk, M

    2017-11-27

    Stress cardiovascular magnetic resonance (CMR) perfusion imaging is a promising modality for the evaluation of coronary artery disease (CAD) due to high spatial resolution and absence of radiation. Semi-quantitative and quantitative analyses of CMR perfusion are based on signal-intensity curves produced during the first pass of gadolinium contrast. Multiple semi-quantitative and quantitative parameters have been introduced. Diagnostic performance of these parameters varies extensively among studies and standardized protocols are lacking. This study aims to determine the diagnostic accuracy of semi-quantitative and quantitative CMR perfusion parameters, compared to multiple reference standards. PubMed, Web of Science, and Embase were systematically searched using predefined criteria (3272 articles). A check for duplicates was performed (1967 articles). Eligibility and relevance of the articles were determined by two reviewers using pre-defined criteria. The primary data extraction was performed independently by two researchers with the use of a predefined template. Differences in extracted data were resolved by discussion between the two researchers. The quality of the included studies was assessed using the 'Quality Assessment of Diagnostic Accuracy Studies Tool' (QUADAS-2). True positives, false positives, true negatives, and false negatives were extracted or calculated from the articles. The principal summary measures used to assess diagnostic accuracy were sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Data were pooled according to analysis territory, reference standard and perfusion parameter. Twenty-two articles were eligible based on the predefined study eligibility criteria. The pooled diagnostic accuracy for segment-, territory- and patient-based analyses showed good diagnostic performance with sensitivity of 0.88, 0.82, and 0.83, specificity of 0.72, 0.83, and 0.76 and AUC of 0.90, 0.84, and 0.87, respectively. In per-territory analysis, our results show similar diagnostic accuracy comparing anatomical (AUC 0.86 (0.83-0.89)) and functional reference standards (AUC 0.88 (0.84-0.90)). Only the per-territory analysis sensitivity did not show significant heterogeneity. None of the groups showed signs of publication bias. The clinical value of semi-quantitative and quantitative CMR perfusion analysis remains uncertain due to extensive inter-study heterogeneity and large differences in CMR perfusion acquisition protocols, reference standards, and methods of assessment of myocardial perfusion parameters. For widespread implementation, standardization of CMR perfusion techniques is essential. CRD42016040176.
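
    The per-study measures extracted in such a meta-analysis follow directly from the 2x2 counts; the sketch below computes sensitivity, specificity and the diagnostic odds ratio for a few made-up studies (the full bivariate random-effects pooling used in the paper requires dedicated statistical software).

      # Per-study sensitivity, specificity and diagnostic odds ratio from extracted
      # 2x2 counts. The counts below are made-up placeholders.
      studies = [
          # (TP, FP, FN, TN) - hypothetical example studies
          (45, 10, 8, 60),
          (30, 5, 6, 40),
          (80, 20, 12, 100),
      ]

      for k, (tp, fp, fn, tn) in enumerate(studies, start=1):
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          # 0.5 continuity correction guards against zero cells in the odds ratio
          dor = ((tp + 0.5) * (tn + 0.5)) / ((fp + 0.5) * (fn + 0.5))
          print(f"study {k}: sensitivity={sens:.2f} specificity={spec:.2f} DOR={dor:.1f}")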

  8. Sensitivity analysis and uncertainty estimation in ash concentration simulations and tephra deposit daily forecasted at Mt. Etna, in Italy

    NASA Astrophysics Data System (ADS)

    Prestifilippo, Michele; Scollo, Simona; Tarantola, Stefano

    2015-04-01

    The uncertainty in volcanic ash forecasts may depend on our knowledge of the model input parameters and our capability to represent the dynamics of an incoming eruption. Forecasts help governments reduce risks associated with volcanic eruptions, and for this reason different kinds of analysis that help to understand the effect each input parameter has on model outputs are necessary. We present an iterative approach based on the sequential combination of sensitivity analysis, a parameter estimation procedure and Monte Carlo-based uncertainty analysis, applied to the Lagrangian volcanic ash dispersal model PUFF. We modify the main input parameters, such as the total mass, the total grain-size distribution, the plume thickness, the shape of the eruption column, the sedimentation models and the diffusion coefficient, perform thousands of simulations and analyze the results. The study is carried out on two different Etna scenarios: the sub-plinian eruption of 22 July 1998, which formed an eruption column rising 12 km above sea level and lasted a few minutes, and a lava fountain eruption with features similar to the 2011-2013 events, which produced eruption columns up to several kilometers above sea level and lasted several hours. Sensitivity analysis and uncertainty estimation results help us to identify the measurements that volcanologists should perform during a volcanic crisis to reduce the model uncertainty.

  9. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.
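
    The calculus-based idea behind SSA, assembling local partial derivatives of each discipline and solving a linear system (the global sensitivity equations) for the total derivatives of the coupled outputs, can be shown on a toy two-discipline example; the coefficients below are made up and are unrelated to the SSTO vehicle in the paper.

      # Toy illustration of coupled-system total derivatives via global sensitivity
      # equations; the numbers are invented, not the SSTO model.
      import numpy as np

      # Two coupled "disciplines" (linear for easy verification):
      #   y1 = a1*x + b1*y2      (e.g. "performance" depends on x and on y2)
      #   y2 = a2*x + b2*y1      (e.g. "weights" depend on x and on y1)
      a1, b1 = 2.0, 0.3
      a2, b2 = -1.0, 0.5

      # Local partial derivatives supplied by each discipline analysis:
      dA_dx = np.array([a1, a2])            # [dy1/dx | y2 fixed, dy2/dx | y1 fixed]
      dA_dy = np.array([[0.0, b1],          # dy1/dy2
                        [b2, 0.0]])         # dy2/dy1

      # Global sensitivity equations: (I - dA/dy) * dy/dx = dA/dx
      total = np.linalg.solve(np.eye(2) - dA_dy, dA_dx)
      print("system-level total derivatives dy/dx =", total)

      # Check against the closed-form solution of the coupled linear system.
      dy1_dx = (a1 + b1 * a2) / (1 - b1 * b2)
      dy2_dx = (a2 + b2 * a1) / (1 - b1 * b2)
      print("closed form                          =", [dy1_dx, dy2_dx])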

  10. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and provides for sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.

  11. Physical examination tests of the shoulder: a systematic review and meta-analysis of diagnostic test performance.

    PubMed

    Gismervik, Sigmund Ø; Drogset, Jon O; Granviken, Fredrik; Rø, Magne; Leivseth, Gunnar

    2017-01-25

    Physical examination tests of the shoulder (PETS) are clinical examination maneuvers designed to aid the assessment of shoulder complaints. Despite more than 180 PETS described in the literature, evidence of their validity and usefulness in diagnosing the shoulder is questioned. This meta-analysis aims to use the diagnostic odds ratio (DOR) to evaluate how much PETS shift overall probability and to rank the test performance of single PETS in order to aid the clinician's choice of which tests to use. This study adheres to the principles outlined in the Cochrane guidelines and the PRISMA statement. A fixed effect model was used to assess the overall diagnostic validity of PETS by pooling DOR for different PETS with similar biomechanical rationale when possible. Single PETS were assessed and ranked by DOR. Clinical performance was assessed by sensitivity, specificity, accuracy and likelihood ratio. Six thousand nine hundred abstracts and 202 full-text articles were assessed for eligibility; 20 articles were eligible and data from 11 articles could be included in the meta-analysis. All PETS for SLAP (superior labral anterior posterior) lesions pooled gave a DOR of 1.38 [1.13, 1.69]. The Supraspinatus test for any full-thickness rotator cuff tear obtained the highest DOR of 9.24 (sensitivity 0.74, specificity 0.77). The Compression-Rotation test obtained the highest DOR (6.36) among single PETS for SLAP lesions (sensitivity 0.43, specificity 0.89) and the Hawkins test obtained the highest DOR (2.86) for impingement syndrome (sensitivity 0.58, specificity 0.67). No single PETS showed superior clinical test performance. The clinical performance of single PETS is limited. However, when the different PETS for SLAP lesions were pooled, we found a statistically significant change in post-test probability, indicating overall statistical validity. We suggest that clinicians choose their PETS from among those with the highest pooled DOR and assess their validity in their own specific clinical settings by reviewing the inclusion criteria of the included primary studies. We further propose that future studies on the validity of PETS use randomized research designs rather than accuracy designs, relying less on well-established gold standard reference tests and efficient treatment options.

  12. Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM

    NASA Astrophysics Data System (ADS)

    Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz

    2018-03-01

    The article presents cost of equity estimation of capital groups from the fuel and energy sector, listed on the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial data and stock exchange data, and to carry out a sensitivity analysis of such cost, depending on the financing structure of the entity. The objective of the article formulated in this manner has determined its structure. It focuses on presentation of substantive analyses related to the core of equity and methods of estimating its cost, with special attention given to the CAPM. In the practical section, estimation of cost was performed according to the CAPM methodology, based on the example of leading fuel and energy companies, such as Tauron GE and PGE. Simultaneously, sensitivity analysis of such cost was performed depending on the structure of financing the company's operation.
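
    The CAPM relationship used in the article, r_e = r_f + beta * (r_m - r_f), together with a simple leverage sensitivity via a re-levered beta, can be sketched as follows; all numerical inputs (risk-free rate, market return, betas, tax rate) are hypothetical placeholders rather than the Tauron/PGE figures.

      # CAPM cost of equity: r_e = r_f + beta * (r_m - r_f). Inputs are hypothetical.
      def cost_of_equity(risk_free, beta, market_return):
          return risk_free + beta * (market_return - risk_free)

      r_e = cost_of_equity(risk_free=0.035, beta=1.1, market_return=0.09)
      print(f"cost of equity = {r_e:.2%}")

      # Simple sensitivity of the cost of equity to the financing structure via a
      # re-levered beta (Hamada-style adjustment; the 19% tax rate is illustrative).
      beta_unlevered = 0.8
      for debt_to_equity in (0.0, 0.5, 1.0, 1.5):
          beta_levered = beta_unlevered * (1 + (1 - 0.19) * debt_to_equity)
          print(f"D/E={debt_to_equity:.1f}: cost of equity = "
                f"{cost_of_equity(0.035, beta_levered, 0.09):.2%}")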

  13. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison analysis against the existing KSC RRA database, version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  14. Mass spectrometric detection of siRNA in plasma samples for doping control purposes.

    PubMed

    Kohler, Maxie; Thomas, Andreas; Walpurgis, Katja; Schänzer, Wilhelm; Thevis, Mario

    2010-10-01

    Small interfering ribonucleic acid (siRNA) molecules can affect the expression of any gene by inducing the degradation of its mRNA. Therefore, these molecules can be of interest for illicit performance enhancement in sports by affecting different metabolic pathways. An example of an efficient performance-enhancing gene knockdown target is the myostatin gene, which regulates muscle growth. This study was carried out to provide a tool for the mass spectrometric detection of modified and unmodified siRNA from plasma samples. The oligonucleotides are purified by centrifugal filtration and the use of an miRNA purification kit, followed by flow-injection analysis using an Exactive mass spectrometer to yield the accurate masses of the sense and antisense strands. Although chromatography and sensitive mass spectrometric analysis of oligonucleotides are still challenging, a method was developed and validated that has adequate sensitivity (limit of detection 0.25-1 nmol mL(-1)) and performance (precision 11-21%, recovery 23-67%) for typical antisense oligonucleotides currently used in clinical studies.

  15. An analysis of parameter sensitivities of preference-inspired co-evolutionary algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Rui; Mansor, Maszatul M.; Purshouse, Robin C.; Fleming, Peter J.

    2015-10-01

    Many-objective optimisation problems remain challenging for many state-of-the-art multi-objective evolutionary algorithms. Preference-inspired co-evolutionary algorithms (PICEAs) which co-evolve the usual population of candidate solutions with a family of decision-maker preferences during the search have been demonstrated to be effective on such problems. However, it is unknown whether PICEAs are robust with respect to the parameter settings. This study aims to address this question. First, a global sensitivity analysis method - the Sobol' variance decomposition method - is employed to determine the relative importance of the parameters controlling the performance of PICEAs. Experimental results show that the performance of PICEAs is controlled for the most part by the number of function evaluations. Next, we investigate the effect of key parameters identified from the Sobol' test and the genetic operators employed in PICEAs. Experimental results show improved performance of the PICEAs as more preferences are co-evolved. Additionally, some suggestions for genetic operator settings are provided for non-expert users.

  16. Modeling, design, packing and experimental analysis of liquid-phase shear-horizontal surface acoustic wave sensors

    NASA Astrophysics Data System (ADS)

    Pollard, Thomas B

    Recent advances in microbiology, computational capabilities, and microelectromechanical-system fabrication techniques permit modeling, design, and fabrication of low-cost, miniature, sensitive and selective liquid-phase sensors and lab-on-a-chip systems. Such devices are expected to replace expensive, time-consuming, and bulky laboratory-based testing equipment. Potential applications for devices include: fluid characterization for material science and industry; chemical analysis in medicine and pharmacology; study of biological processes; food analysis; chemical kinetics analysis; and environmental monitoring. When combined with liquid-phase packaging, sensors based on surface-acoustic-wave (SAW) technology are considered strong candidates. For this reason such devices are focused on in this work; emphasis placed on device modeling and packaging for liquid-phase operation. Regarding modeling, topics considered include mode excitation efficiency of transducers; mode sensitivity based on guiding structure materials/geometries; and use of new piezoelectric materials. On packaging, topics considered include package interfacing with SAW devices, and minimization of packaging effects on device performance. In this work novel numerical models are theoretically developed and implemented to study propagation and transduction characteristics of sensor designs using wave/constitutive equations, Green's functions, and boundary/finite element methods. Using developed simulation tools that consider finite-thickness of all device electrodes, transduction efficiency for SAW transducers with neighboring uniform or periodic guiding electrodes is reported for the first time. Results indicate finite electrode thickness strongly affects efficiency. Using dense electrodes, efficiency is shown to approach 92% and 100% for uniform and periodic electrode guiding, respectively; yielding improved sensor detection limits. A numerical sensitivity analysis is presented targeting viscosity using uniform-electrode and shear-horizontal mode configurations on potassium-niobate, langasite, and quartz substrates. Optimum configurations are determined yielding maximum sensitivity. Results show mode propagation-loss and sensitivity to viscosity are correlated by a factor independent of substrate material. The analysis is useful for designing devices meeting sensitivity and signal level requirements. A novel, rapid and precise microfluidic chamber alignment/bonding method was developed for SAW platforms. The package is shown to have little effect on device performance and permits simple macrofluidic interfacing. Lastly, prototypes were designed, fabricated, and tested for viscosity and biosensor applications; results show ability to detect as low as 1% glycerol in water and surface-bound DNA crosslinking.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Burton, Evan

    This study evaluates the costs and benefits associated with the use of a plug-in hybrid electric bus and determines the cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep analysis was performed over a number of different battery sizes, charging powers, and charging stations. The net present value was calculated for each vehicle design and provided the basis for the design evaluation. In all cases, given present day economic assumptions, the conventional bus achieved the lowest net present value while the optimal plug-in hybrid electric bus scenario reached lower lifetime costs than the hybrid electric bus. The study also performed parameter sensitivity analysis under low market potential assumptions and high market potential assumptions. The net present value of the plug-in hybrid electric bus is close to that of the conventional bus.
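
    The net-present-value comparison underlying the evaluation can be sketched as below; the purchase prices, annual operating costs, service life and discount rate are invented for illustration and are not the study's inputs.

      # Net present value comparison of the kind used in the bus study; all cash
      # flows and the discount rate below are made-up illustrations.
      def npv(rate, cash_flows):
          """cash_flows[0] is the year-0 outlay (negative); later entries are annual."""
          return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

      designs = {
          "conventional": [-450_000] + [-80_000] * 12,  # purchase + 12 years of fuel/maintenance
          "hybrid":       [-600_000] + [-60_000] * 12,
          "plug-in":      [-700_000] + [-50_000] * 12,
      }
      for name, flows in designs.items():
          print(f"{name:>12}: NPV = {npv(0.03, flows):,.0f}")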

  18. Characterizing a porous road pavement using surface impedance measurement: a guided numerical inversion procedure.

    PubMed

    Benoit, Gaëlle; Heinkélé, Christophe; Gourdon, Emmanuel

    2013-12-01

    This paper deals with a numerical procedure to identify the acoustical parameters of road pavement from surface impedance measurements. This procedure comprises three steps. First, a suitable equivalent fluid model for the acoustical properties of porous media is chosen, the variation ranges for the model parameters are set, and a sensitivity analysis for this model is performed. Second, this model is used in the parameter inversion process, which is performed with simulated annealing in a selected frequency range. Third, the sensitivity analysis and inversion process are repeated to estimate each parameter in turn. This approach is tested on data obtained for porous bituminous concrete, using the Zwikker and Kosten equivalent fluid model. This work provides a good foundation for the development of non-destructive in situ methods for the acoustical characterization of road pavements.
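
    The second step of the procedure, inverting model parameters by simulated annealing against measured impedance data, can be sketched with SciPy's dual_annealing on a toy forward model; the forward model, parameter names and bounds below are placeholders and do not represent the Zwikker and Kosten model used in the paper.

      # Toy version of the inversion step: fit two parameters of a simple forward
      # model to noisy "measured" data by simulated annealing (placeholder model).
      import numpy as np
      from scipy.optimize import dual_annealing

      freqs = np.linspace(200, 2000, 40)                    # Hz

      def forward(params, f):
          resistivity, porosity = params                    # toy parameters only
          return resistivity / (porosity * f) + 0.002 * f / porosity

      true_params = (800.0, 0.25)
      rng = np.random.default_rng(3)
      measured = forward(true_params, freqs) * (1 + 0.02 * rng.standard_normal(freqs.size))

      def misfit(params):
          return np.sum((forward(params, freqs) - measured) ** 2)

      bounds = [(100.0, 5000.0), (0.05, 0.6)]               # search ranges per parameter
      result = dual_annealing(misfit, bounds, seed=0)
      print("recovered parameters:", np.round(result.x, 3), " true:", true_params)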

  19. Use of a Smartphone as a Colorimetric Analyzer in Paper-based Devices for Sensitive and Selective Determination of Mercury in Water Samples.

    PubMed

    Jarujamrus, Purim; Meelapsom, Rattapol; Pencharee, Somkid; Obma, Apinya; Amatatongchai, Maliwan; Ditcharoen, Nadh; Chairam, Sanoe; Tamuang, Suparb

    2018-01-01

    A smartphone application, called CAnal, was developed as a colorimetric analyzer in paper-based devices for sensitive and selective determination of mercury(II) in water samples. Measurement was performed on the double layer of a microfluidic paper-based analytical device (μPAD), fabricated by an alkyl ketene dimer (AKD) inkjet printing technique with a special design in which unmodified silver nanoparticles (AgNPs) were doped onto the detection zones, by monitoring the gray intensity in the blue channel of the AgNPs, which disintegrate when exposed to mercury(II) on the μPAD. Under the optimized conditions, the developed approach showed high sensitivity, a low limit of detection (0.003 mg L-1; 3 × SD of the blank/slope of the calibration curve), small sample volume uptake (2 × 2 μL), and short analysis time. The linear range of this technique was 0.01 to 10 mg L-1 (r2 = 0.993). Furthermore, practical analysis of various water samples was demonstrated to have acceptable performance, in agreement with data from cold vapor atomic absorption spectrophotometry (CV-AAS), a conventional method. The proposed technique allows for rapid, simple (instant report of the final mercury(II) concentration via smartphone display), sensitive, selective, and on-site analysis with high sample throughput (48 samples h-1, n = 3) of trace mercury(II) in water, and is suitable for end users who are unskilled in mercury(II) analysis.
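
    The reported detection limit follows the usual 3 × SD(blank)/slope convention; the sketch below applies it to a synthetic calibration data set (the concentrations, intensities and blank readings are invented, not the μPAD measurements).

      # Limit of detection as 3*SD(blank)/slope of the calibration curve, with
      # synthetic intensity data standing in for the real μPAD measurements.
      import numpy as np

      conc = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 5.0, 10.0])            # mg L^-1 Hg(II)
      signal = np.array([2.1, 9.8, 19.5, 101.0, 198.0, 1005.0, 1990.0])  # fake intensities
      blank_replicates = np.array([0.9, 1.3, 1.1, 0.8, 1.2, 1.0])

      slope, intercept = np.polyfit(conc, signal, 1)
      lod = 3 * blank_replicates.std(ddof=1) / slope
      print(f"slope = {slope:.1f}, LOD = {lod:.4f} mg L^-1")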

  20. Standardized Index of Shape (DCE-MRI) and Standardized Uptake Value (PET/CT): Two quantitative approaches to discriminate chemo-radiotherapy locally advanced rectal cancer responders under a functional profile

    PubMed Central

    Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo

    2017-01-01

    Purpose To investigate dynamic contrast enhanced-MRI (DCE-MRI) in the preoperative chemo-radiotherapy (CRT) assessment of locally advanced rectal cancer (LARC) compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). Methods 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed by measuring the SIS: a linear combination of the percentage change (Δ) of the maximum signal difference (MSD) and wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using the maximum SUV (SUVmax). Tumor regression grade (TRG) was estimated after surgery. Non-parametric tests and receiver operating characteristic analysis were evaluated. Results 55 patients (TRG1-2) were classified as responders while 20 subjects were classified as non-responders. ΔSIS reached sensitivity of 93%, specificity of 80% and accuracy of 89% (cut-off 6%) to differentiate responders from non-responders, and sensitivity of 93%, specificity of 69% and accuracy of 79% (cut-off 30%) to identify pathological complete response (pCR). Therapy assessment via ΔSUVmax reached sensitivity of 67%, specificity of 75% and accuracy of 70% (cut-off 60%) to differentiate responders from non-responders, and sensitivity of 80%, specificity of 31% and accuracy of 51% (cut-off 44%) to identify pCR. Conclusions CRT response assessment by DCE-MRI analysis shows a higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing better discrimination of significant response and pCR. PMID:28042958

  1. Standardized Index of Shape (DCE-MRI) and Standardized Uptake Value (PET/CT): Two quantitative approaches to discriminate chemo-radiotherapy locally advanced rectal cancer responders under a functional profile.

    PubMed

    Petrillo, Antonella; Fusco, Roberta; Petrillo, Mario; Granata, Vincenza; Delrio, Paolo; Bianco, Francesco; Pecori, Biagio; Botti, Gerardo; Tatangelo, Fabiana; Caracò, Corradina; Aloj, Luigi; Avallone, Antonio; Lastoria, Secondo

    2017-01-31

    To investigate dynamic contrast enhanced-MRI (DCE-MRI) in the preoperative chemo-radiotherapy (CRT) assessment of locally advanced rectal cancer (LARC) compared to 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT). 75 consecutive patients with LARC were enrolled in a prospective study. DCE-MRI analysis was performed by measuring the SIS: a linear combination of the percentage change (Δ) of the maximum signal difference (MSD) and wash-out slope (WOS). 18F-FDG PET/CT analysis was performed using the maximum SUV (SUVmax). Tumor regression grade (TRG) was estimated after surgery. Non-parametric tests and receiver operating characteristic analysis were evaluated. 55 patients (TRG1-2) were classified as responders while 20 subjects were classified as non-responders. ΔSIS reached sensitivity of 93%, specificity of 80% and accuracy of 89% (cut-off 6%) to differentiate responders from non-responders, and sensitivity of 93%, specificity of 69% and accuracy of 79% (cut-off 30%) to identify pathological complete response (pCR). Therapy assessment via ΔSUVmax reached sensitivity of 67%, specificity of 75% and accuracy of 70% (cut-off 60%) to differentiate responders from non-responders, and sensitivity of 80%, specificity of 31% and accuracy of 51% (cut-off 44%) to identify pCR. CRT response assessment by DCE-MRI analysis shows a higher predictive ability than 18F-FDG PET/CT in LARC patients, allowing better discrimination of significant response and pCR.

  2. FDG-PET/CT for treatment response assessment in head and neck squamous cell carcinoma: a systematic review and meta-analysis of diagnostic performance.

    PubMed

    Helsen, Nils; Van den Wyngaert, Tim; Carp, Laurens; Stroobants, Sigrid

    2018-06-01

    18F-fluorodeoxyglucose positron emission tomography combined with computed tomography (FDG-PET/CT) is increasingly used to evaluate treatment response in head and neck squamous cell carcinoma (HNSCC). This analysis assessed the diagnostic value of FDG-PET/CT in detecting nodal disease within 6 months after treatment, considering patient and disease characteristics. A systematic review was performed using the MEDLINE and Web of Knowledge databases. The results were pooled using a bivariate random effects model of the sensitivity and specificity. Out of 22 identified studies, a meta-analysis of 20 studies (1293 patients) was performed. The pooled estimates of sensitivity, specificity and diagnostic odds ratio (with 95% CI) were 85% (76-91%), 93% (89-96%) and 76 (35-165), respectively. With the prevalence set at 10%, the positive and negative predictive values were 58% and 98%. There was significant heterogeneity between the trials (p < 0.001). HPV-positive tumors were associated with lower sensitivity (75% vs 89%; p = 0.01) and specificity (87% vs 95%; p < 0.005). FDG-PET/CT within 6 months after (chemo)radiotherapy in HNSCC patients is a reliable method for ruling out residual/recurrent nodal disease and obviates the need for therapeutic intervention. However, FDG-PET/CT may be less reliable in HPV-positive tumors and the optimal surveillance strategy remains to be determined.
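
    The reported predictive values at 10% prevalence follow from Bayes' rule applied to the pooled sensitivity and specificity; the short check below reproduces them approximately (small differences come from rounding of the pooled estimates).

      # Predictive values from pooled sensitivity/specificity at an assumed 10%
      # prevalence of residual/recurrent nodal disease, as quoted in the abstract.
      sens, spec, prev = 0.85, 0.93, 0.10

      ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
      npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
      print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # roughly 57-58% and 98%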

  3. Sensitivity Analysis of Flutter Response of a Wing Incorporating Finite-Span Corrections

    NASA Technical Reports Server (NTRS)

    Issac, Jason Cherian; Kapania, Rakesh K.; Barthelemy, Jean-Francois M.

    1994-01-01

    Flutter analysis of a wing is performed in compressible flow using a state-space representation of the unsteady aerodynamic behavior. Three different expressions are used to incorporate corrections due to the finite-span effects of the wing in estimating the lift-curve slope. The structural formulation is based on a Rayleigh-Ritz technique with Chebyshev polynomials used for the wing deflections. The aeroelastic equations are solved as an eigenvalue problem to determine the flutter speed of the wing. The flutter speeds are found to be higher in these cases, when compared to those obtained without accounting for the finite-span effects. The derivatives of the flutter speed with respect to the shape parameters, namely aspect ratio, area, taper ratio and sweep angle, are calculated analytically. The shape sensitivity derivatives give a linear approximation to the flutter speed curves over a range of values of the shape parameter which is perturbed. Flutter and sensitivity calculations are performed on a wing using a lifting-surface unsteady aerodynamic theory, using modules from a system of programs called FAST.

  4. Comparative Diagnostic Performance of Ultrasonography and 99mTc-Sestamibi Scintigraphy for Parathyroid Adenoma in Primary Hyperparathyroidism; Systematic Review and Meta-Analysis

    PubMed

    Nafisi Moghadam, Reza; Amlelshahbaz, Amir Pasha; Namiranian, Nasim; Sobhan-Ardekani, Mohammad; Emami-Meybodi, Mahmood; Dehghan, Ali; Rahmanian, Masoud; Razavi-Ratki, Seid Kazem

    2017-12-28

    Objective: Ultrasonography (US) and parathyroid scintigraphy (PS) with 99mTc-MIBI are common methods for preoperative localization of parathyroid adenomas, but discrepancies exist with regard to their diagnostic accuracy. The aim of the study was to compare PS and US for localization of parathyroid adenoma with a systematic review and meta-analysis of the literature. Methods: PubMed, Scopus (EMBASE), Web of Science and the reference lists of all included studies were searched up to 1st January 2016. The search strategy was structured according to PICO characteristics. Heterogeneity between the studies was accounted for by P < 0.1. Point estimates were the pooled estimates of sensitivity, specificity and positive predictive value of SPECT and ultrasonography, with 99% confidence intervals (CIs), obtained by pooling available data. Data analysis was performed using Meta-DiSc software (version 1.4). Results: Among 188 studies and after deletion of duplicated studies (75), a total of 113 titles and abstracts were studied. From these, 12 studies were selected. The meta-analysis determined a pooled sensitivity for scintigraphy of 83% [99% confidence interval (CI) 96.358-97.412] and for ultrasonography of 80% [99% confidence interval (CI) 76-83]. Similar results for specificity were also obtained for both approaches. Conclusion: According to this meta-analysis, there were no significant differences between the two methods in terms of sensitivity and specificity, and there were overlaps in the 99% confidence intervals; the features of the two methods are similar.

  5. Diagnostic value of stool DNA testing for multiple markers of colorectal cancer and advanced adenoma: a meta-analysis.

    PubMed

    Yang, Hua; Xia, Bing-Qing; Jiang, Bo; Wang, Guozhen; Yang, Yi-Peng; Chen, Hao; Li, Bing-Sheng; Xu, An-Gao; Huang, Yun-Bo; Wang, Xin-Ying

    2013-08-01

    The diagnostic value of stool DNA (sDNA) testing for colorectal neoplasms remains controversial. To compensate for the lack of large-scale unbiased population studies, a meta-analysis was performed to evaluate the diagnostic value of sDNA testing for multiple markers of colorectal cancer (CRC) and advanced adenoma. The PubMed, Science Direct, Biosis Review, Cochrane Library and Embase databases were systematically searched in January 2012 without time restriction. Meta-analysis was performed using a random-effects model using sensitivity, specificity, diagnostic OR (DOR), summary ROC curves, area under the curve (AUC), and 95% CIs as effect measures. Heterogeneity was measured using the χ(2) test and Q statistic; subgroup analysis was also conducted. A total of 20 studies comprising 5876 individuals were eligible. There was no heterogeneity for CRC, but adenoma and advanced adenoma harboured considerable heterogeneity influenced by risk classification and various detection markers. Stratification analysis according to risk classification showed that multiple markers had a high DOR for the high-risk subgroups of both CRC (sensitivity 0.759 [95% CI 0.711 to 0.804]; specificity 0.883 [95% CI 0.846 to 0.913]; AUC 0.906) and advanced adenoma (sensitivity 0.683 [95% CI 0.584 to 0.771]; specificity 0.918 [95% CI 0.866 to 0.954]; AUC 0.946) but not for the average-risk subgroups of either. In the methylation subgroup, sDNA testing had a significantly higher DOR for CRC (sensitivity 0.753 [95% CI 0.685 to 0.812]; specificity 0.913 [95% CI 0.860 to 0.950]; AUC 0.918) and advanced adenoma (sensitivity 0.623 [95% CI 0.527 to 0.712]; specificity 0.926 [95% CI 0.882 to 0.958]; AUC 0.910) compared with the mutation subgroup. There was no significant heterogeneity among studies for subgroup analysis. sDNA testing for multiple markers had strong diagnostic significance for CRC and advanced adenoma in high-risk subjects. Methylation markers had more diagnostic value than mutation markers.

  6. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
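
    A scaled-down version of the sensitivity coefficients LSENS computes can be written for a single first-order reaction A -> B: the concentration is integrated together with its sensitivity to the rate coefficient and checked against the analytic result. The sketch uses SciPy rather than LSODE and only illustrates the idea, not the decoupled direct method as implemented in LSENS.

      # Kinetic sensitivity coefficient for A -> B with rate constant k: integrate the
      # concentration together with s = d[A]/dk and compare with the analytic result
      # -t*A0*exp(-k*t). Illustration only, not LSENS.
      import numpy as np
      from scipy.integrate import solve_ivp

      k, A0 = 0.5, 1.0

      def rhs(t, y):
          a, s = y                   # concentration and its sensitivity to k
          return [-k * a,            # d[A]/dt = -k*A
                  -a - k * s]        # ds/dt   = d/dk(-k*A) = -A - k*s

      sol = solve_ivp(rhs, (0.0, 10.0), [A0, 0.0], t_eval=np.linspace(0, 10, 6))
      for t, a, s in zip(sol.t, *sol.y):
          print(f"t={t:4.1f}  [A]={a:.4f}  d[A]/dk={s:.4f}  "
                f"analytic={-t * A0 * np.exp(-k * t):.4f}")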

  7. Development and evaluation of a microdevice for amino acid biomarker detection and analysis on Mars

    PubMed Central

    Skelley, Alison M.; Scherer, James R.; Aubrey, Andrew D.; Grover, William H.; Ivester, Robin H. C.; Ehrenfreund, Pascale; Grunthaner, Frank J.; Bada, Jeffrey L.; Mathies, Richard A.

    2005-01-01

    The Mars Organic Analyzer (MOA), a microfabricated capillary electrophoresis (CE) instrument for sensitive amino acid biomarker analysis, has been developed and evaluated. The microdevice consists of a four-wafer sandwich combining glass CE separation channels, microfabricated pneumatic membrane valves and pumps, and a nanoliter fluidic network. The portable MOA instrument integrates high voltage CE power supplies, pneumatic controls, and fluorescence detection optics necessary for field operation. The amino acid concentration sensitivities range from micromolar to 0.1 nM, corresponding to part-per-trillion sensitivity. The MOA was first used in the lab to analyze soil extracts from the Atacama Desert, Chile, detecting amino acids ranging from 10–600 parts per billion. Field tests of the MOA in the Panoche Valley, CA, successfully detected amino acids at 70 parts per trillion to 100 parts per billion in jarosite, a sulfate-rich mineral associated with liquid water that was recently detected on Mars. These results demonstrate the feasibility of using the MOA to perform sensitive in situ amino acid biomarker analysis on soil samples representative of a Mars-like environment. PMID:15657130

  8. Development and evaluation of a microdevice for amino acid biomarker detection and analysis on Mars.

    PubMed

    Skelley, Alison M; Scherer, James R; Aubrey, Andrew D; Grover, William H; Ivester, Robin H C; Ehrenfreund, Pascale; Grunthaner, Frank J; Bada, Jeffrey L; Mathies, Richard A

    2005-01-25

    The Mars Organic Analyzer (MOA), a microfabricated capillary electrophoresis (CE) instrument for sensitive amino acid biomarker analysis, has been developed and evaluated. The microdevice consists of a four-wafer sandwich combining glass CE separation channels, microfabricated pneumatic membrane valves and pumps, and a nanoliter fluidic network. The portable MOA instrument integrates high voltage CE power supplies, pneumatic controls, and fluorescence detection optics necessary for field operation. The amino acid concentration sensitivities range from micromolar to 0.1 nM, corresponding to part-per-trillion sensitivity. The MOA was first used in the lab to analyze soil extracts from the Atacama Desert, Chile, detecting amino acids ranging from 10-600 parts per billion. Field tests of the MOA in the Panoche Valley, CA, successfully detected amino acids at 70 parts per trillion to 100 parts per billion in jarosite, a sulfate-rich mineral associated with liquid water that was recently detected on Mars. These results demonstrate the feasibility of using the MOA to perform sensitive in situ amino acid biomarker analysis on soil samples representative of a Mars-like environment.

  9. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal polynomial kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristic advantage of the polynomial kernel and the local characteristic advantage of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
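
    The paper's surrogate combines a polynomial kernel with a Gaussian RBF kernel and then extracts Sobol indices from the fitted coefficients. A loose sketch of the same idea, using a callable mixed kernel in scikit-learn and a plain Monte Carlo pick-freeze estimator on the surrogate instead of the paper's analytical post-processing, might look as follows; the Ishigami-style test function, kernel weights and sample sizes are illustrative assumptions.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

      def mixed_kernel(X, Y, w=0.5, degree=3, gamma=1.0):
          # Convex combination of a (global) polynomial kernel and a (local) RBF kernel
          return w * polynomial_kernel(X, Y, degree=degree) + (1 - w) * rbf_kernel(X, Y, gamma=gamma)

      def model(x):                                   # Ishigami-style test function
          return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1])**2 + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

      rng = np.random.default_rng(0)
      X = rng.uniform(-np.pi, np.pi, size=(300, 3))
      svr = SVR(kernel=mixed_kernel, C=100.0, epsilon=0.01).fit(X, model(X))

      # First-order Sobol indices of the surrogate via the Monte Carlo pick-freeze estimator
      n = 20000
      A = rng.uniform(-np.pi, np.pi, size=(n, 3))
      B = rng.uniform(-np.pi, np.pi, size=(n, 3))
      yA, yB = svr.predict(A), svr.predict(B)
      var = yA.var()
      for i in range(3):
          ABi = B.copy()
          ABi[:, i] = A[:, i]                         # freeze variable i at the A-sample values
          S_i = np.mean(yA * (svr.predict(ABi) - yB)) / var
          print(f"S{i + 1} ~ {S_i:.2f}")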

  10. Nested polymerase chain reaction on blood clots for gene encoding 56 kDa antigen and serology for the diagnosis of scrub typhus.

    PubMed

    Prakash, J A J; Kavitha, M L; Mathai, E

    2011-01-01

    Scrub typhus is a zoonotic illness endemic in the Asia-Pacific region. Early diagnosis and appropriate management contribute significantly to preventing adverse outcomes including mortality. Serology is widely used for diagnosing scrub typhus. Recent reports suggest that polymerase chain reaction (PCR) could be a rapid and reliable alternative. This study assessed the utility of these tests for scrub typhus diagnosis. Nested PCR to detect the 56 kDa antigen gene of O. tsutsugamushi was performed on blood clots from 87 individuals with clinically suspected scrub typhus. The Weil-Felix test and scrub typhus IgM ELISA were performed on serum samples from the same patients. As a gold standard reference test was not available, latent class analysis (LCA) was used to assess the performance of the three tests. The LCA showed the sensitivity of the Weil-Felix test, IgM ELISA and PCR to be 59%, 100% and 58%, respectively. The specificity of ELISA was only 73%, whereas those of the Weil-Felix test and PCR were 94% and 100%, respectively. Nested PCR using blood clots, while specific, lacked sensitivity compared with IgM ELISA. In resource-poor settings, the Weil-Felix test remains valuable despite its moderate sensitivity.
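
    Latent class analysis treats the true disease state as an unobserved class and estimates each test's error rates by maximum likelihood. A minimal two-class EM for three conditionally independent binary tests is sketched below; the patient data are invented and far too small for a real analysis, and this is not the software used in the study.

      import numpy as np

      def lca_em(results, n_iter=500, seed=1):
          # results: (n_patients, n_tests) array of 0/1 outcomes; two latent classes
          n, t = results.shape
          rng = np.random.default_rng(seed)
          prev = 0.5                                  # disease prevalence
          se = rng.uniform(0.6, 0.9, t)               # P(test positive | diseased)
          fp = rng.uniform(0.05, 0.3, t)              # P(test positive | not diseased)
          for _ in range(n_iter):
              # E-step: posterior probability of disease for each patient
              like_d = prev * np.prod(se**results * (1 - se)**(1 - results), axis=1)
              like_n = (1 - prev) * np.prod(fp**results * (1 - fp)**(1 - results), axis=1)
              post = like_d / (like_d + like_n)
              # M-step: update prevalence, sensitivities and false-positive rates
              prev = post.mean()
              se = (post[:, None] * results).sum(axis=0) / post.sum()
              fp = ((1 - post)[:, None] * results).sum(axis=0) / (1 - post).sum()
          return prev, se, 1 - fp                     # specificity = 1 - false-positive rate

      # Invented 0/1 results for 8 patients on three tests (columns: Weil-Felix, ELISA, PCR)
      data = np.array([[1, 1, 1], [0, 1, 1], [0, 1, 0], [0, 0, 0],
                       [1, 1, 0], [0, 0, 0], [0, 1, 1], [0, 0, 0]])
      prevalence, sens, spec = lca_em(data)
      print("prevalence:", round(float(prevalence), 2), "sensitivity:", sens.round(2), "specificity:", spec.round(2))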

  11. Phosphorus component in AnnAGNPS

    USGS Publications Warehouse

    Yuan, Y.; Bingner, R.L.; Theurer, F.D.; Rebich, R.A.; Moore, P.A.

    2005-01-01

    The USDA Annualized Agricultural Non-Point Source Pollution model (AnnAGNPS) has been developed to aid in evaluation of watershed response to agricultural management practices. Previous studies have demonstrated the capability of the model to simulate runoff and sediment, but not phosphorus (P). The main purpose of this article is to evaluate the performance of AnnAGNPS on P simulation using comparisons with measurements from the Deep Hollow watershed of the Mississippi Delta Management Systems Evaluation Area (MDMSEA) project. A sensitivity analysis was performed to identify input parameters whose impact is the greatest on P yields. Sensitivity analysis results indicate that the most sensitive variables of those selected are initial soil P contents, P application rate, and plant P uptake. AnnAGNPS simulations of dissolved P yield do not agree well with observed dissolved P yield (Nash-Sutcliffe coefficient of efficiency of 0.34, R2 of 0.51, and slope of 0.24); however, AnnAGNPS simulations of total P yield agree well with observed total P yield (Nash-Sutcliffe coefficient of efficiency of 0.85, R2 of 0.88, and slope of 0.83). The difference in dissolved P yield may be attributed to limitations in model simulation of P processes. Uncertainties in input parameter selections also affect the model's performance.
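
    The goodness-of-fit statistics quoted above (Nash-Sutcliffe coefficient of efficiency, R2 and regression slope) reduce to a few lines of arithmetic; a minimal implementation on made-up observed and simulated phosphorus yields:

      import numpy as np

      def nse(obs, sim):
          # Nash-Sutcliffe efficiency: 1 - SSE / variance of observations about their mean
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

      def r2_and_slope(obs, sim):
          slope, _ = np.polyfit(obs, sim, 1)          # slope of simulated vs. observed
          r2 = np.corrcoef(obs, sim)[0, 1]**2
          return r2, slope

      obs = [0.8, 1.2, 2.5, 0.4, 1.9]                 # hypothetical observed total P yields
      sim = [0.7, 1.0, 2.2, 0.5, 1.6]                 # hypothetical simulated total P yields
      r2, slope = r2_and_slope(obs, sim)
      print(f"NSE = {nse(obs, sim):.2f}, R2 = {r2:.2f}, slope = {slope:.2f}")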

  12. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
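
    One of the derivative techniques mentioned above, numerical differentiation with complex variables, is the complex-step method, which avoids the subtractive cancellation of finite differences. A generic illustration on an arbitrary smooth function (not the dissertation's cost function):

      import numpy as np

      def f(x):
          # Stand-in for a smooth scalar cost function of one design variable
          return np.exp(x) / np.sqrt(np.sin(x)**3 + np.cos(x)**3)

      x0, h = 1.5, 1e-20
      complex_step = np.imag(f(x0 + 1j * h)) / h            # immune to subtractive cancellation
      central_diff = (f(x0 + 1e-8) - f(x0 - 1e-8)) / 2e-8   # finite difference for comparison
      print(complex_step, central_diff)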

  13. Effects of reward and punishment on task performance, mood and autonomic nervous function, and the interaction with personality.

    PubMed

    Sakuragi, Sokichi; Sugiyama, Yoshiki

    2009-06-01

    The effects of reward and punishment are different, and there are individual differences in sensitivity to reward and punishment. The purpose of this study was to investigate the effects of reward and punishment on task performance, mood, and autonomic nervous function, along with the interaction with personality. Twenty-one healthy female subjects volunteered for the experiment. Task performance was evaluated by the time required and total errors while performing a Wisconsin Card Sorting Test. We assessed their personalities using the Minnesota Multiphasic Personality Inventory (MMPI) questionnaire, and mood states by the Profile of Mood States. Autonomic nervous function was estimated by a spectral analysis of heart rate variability, baroreflex sensitivity, and blood pressure. Repeated measures analysis of variance (ANOVA) revealed a significant interaction of condition x time course on mood and autonomic nervous activity, which would indicate a less stressed state under the rewarding condition, but revealed no significant interaction of condition x time course on task performance. The interactions with personality were further analyzed by repeated measures ANOVA applying the clinical scales of the MMPI as independent variables, and significant interactions of condition x time course x Pt (psychasthenia) on task performance, mood, and blood pressure were revealed. That is, the high Pt group, whose members tend to be sensitive and prone to worry, showed gradual improvement of task performance under the punishing situation, with a slight increase in systolic blood pressure, but showed no improvement under the rewarding situation, in which the sense of fatigue was attenuated. In contrast, the low Pt group, whose members tend to be adaptive and self-confident, showed gradual improvement under the rewarding situation. Therefore, we should carefully choose the strategy of reward or punishment, considering the interaction with personality as well as the context in which it is given.

  14. High-throughput microfluidic single-cell digital polymerase chain reaction.

    PubMed

    White, A K; Heyries, K A; Doolin, C; Vaninsberghe, M; Hansen, C L

    2013-08-06

    Here we present an integrated microfluidic device for the high-throughput digital polymerase chain reaction (dPCR) analysis of single cells. This device allows for the parallel processing of single cells and executes all steps of analysis, including cell capture, washing, lysis, reverse transcription, and dPCR analysis. The cDNA from each single cell is distributed into a dedicated dPCR array consisting of 1020 chambers, each having a volume of 25 pL, using surface-tension-based sample partitioning. The high density of this dPCR format (118,900 chambers/cm²) allows the analysis of 200 single cells per run, for a total of 204,000 PCR reactions using a device footprint of 10 cm². Experiments using RNA dilutions show this device achieves shot-noise-limited performance in quantifying single molecules, with a dynamic range of 10⁴. We performed over 1200 single-cell measurements, demonstrating the use of this platform in the absolute quantification of both high- and low-abundance mRNA transcripts, as well as micro-RNAs that are not easily measured using alternative hybridization methods. We further apply the specificity and sensitivity of single-cell dPCR to performing measurements of RNA editing events in single cells. High-throughput dPCR provides a new tool in the arsenal of single-cell analysis methods, with a unique combination of speed, precision, sensitivity, and specificity. We anticipate this approach will enable new studies where high-performance single-cell measurements are essential, including the analysis of transcriptional noise, allelic imbalance, and RNA processing.
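
    Digital PCR converts the fraction of positive chambers into an absolute copy number through a Poisson correction. A quick worked example of that arithmetic, using the chamber count and volume quoted above but an invented number of positive chambers:

      import math

      chambers = 1020              # chambers per single-cell dPCR array (from the abstract)
      volume_pl = 25.0             # chamber volume in picolitres (from the abstract)
      positives = 153              # hypothetical count of positive chambers

      p = positives / chambers
      lam = -math.log(1.0 - p)     # mean molecules per chamber under Poisson loading
      copies = lam * chambers      # estimated molecules loaded into the array
      conc = copies / (chambers * volume_pl * 1e-12)   # molecules per litre of partitioned volume
      print(f"lambda = {lam:.3f}, about {copies:.0f} molecules, {conc:.2e} molecules/L")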

  15. The Scaffold Attachment Factor SAFB1: A New Player in G2/M Checkpoint Control

    DTIC Science & Technology

    2007-04-01

    RNA was obtained from locally advanced breast tumors in 24 patients before they underwent four cycles of neoadjuvant docetaxel treatment. Gene... expression analysis was performed and correlated to the treatment response to determine genes that are differentially expressed in docetaxel-sensitive... decreased sensitivity to drugs, depending on the chemotherapeutic agent used 2) Association of SAFB1 loss with resistance to docetaxel treatment, both

  16. A new method to make 2-D wear measurements less sensitive to projection differences of cemented THAs.

    PubMed

    The, Bertram; Flivik, Gunnar; Diercks, Ron L; Verdonschot, Nico

    2008-03-01

    Wear curves from individual patients often show unexplained irregular wear curves or impossible values (negative wear). We postulated errors of two-dimensional wear measurements are mainly the result of radiographic projection differences. We tested a new method that makes two-dimensional wear measurements less sensitive for radiograph projection differences of cemented THAs. The measurement errors that occur when radiographically projecting a three-dimensional THA were modeled. Based on the model, we developed a method to reduce the errors, thus approximating three-dimensional linear wear values, which are less sensitive for projection differences. An error analysis was performed by virtually simulating 144 wear measurements under varying conditions with and without application of the correction: the mean absolute error was reduced from 1.8 mm (range, 0-4.51 mm) to 0.11 mm (range, 0-0.27 mm). For clinical validation, radiostereometric analysis was performed on 47 patients to determine the true wear at 1, 2, and 5 years. Subsequently, wear was measured on conventional radiographs with and without the correction: the overall occurrence of errors greater than 0.2 mm was reduced from 35% to 15%. Wear measurements are less sensitive to differences in two-dimensional projection of the THA when using the correction method.

  17. Toward earlier detection of choroidal neovascularization secondary to age-related macular degeneration: multicenter evaluation of a preferential hyperacuity perimeter designed as a home device.

    PubMed

    Loewenstein, Anat; Ferencz, Joseph R; Lang, Yaron; Yeshurun, Itamar; Pollack, Ayala; Siegal, Ruth; Lifshitz, Tova; Karp, Joseph; Roth, Daniel; Bronner, Guri; Brown, Justin; Mansour, Sam; Friedman, Scott; Michels, Mark; Johnston, Richards; Rapp, Moshe; Havilio, Moshe; Rafaeli, Omer; Manor, Yair

    2010-01-01

    The primary purpose of this study was to evaluate the ability of a home device preferential hyperacuity perimeter to discriminate between patients with choroidal neovascularization (CNV) and intermediate age-related macular degeneration (AMD), and the secondary purpose was to investigate the dependence of sensitivity on lesion characteristics. All participants were tested with the home device in an unsupervised mode. The first part of this work was retrospective using tests performed by patients with intermediate AMD and newly diagnosed CNV. In the second part, the classifier was prospectively challenged with tests performed by patients with intermediate AMD and newly diagnosed CNV. The dependence of sensitivity on lesion characteristics was estimated with tests performed by patients with CNV of both parts. In 66 eyes with CNV and 65 eyes with intermediate AMD, both sensitivity and specificity were 0.85. In the retrospective part (34 CNV and 43 intermediate AMD), sensitivity and specificity were 0.85 +/- 0.12 (95% confidence interval) and 0.84 +/- 0.11 (95% confidence interval), respectively. In the prospective part (32 CNV and 22 intermediate AMD), sensitivity and specificity were 0.84 +/- 0.13 (95% confidence interval) and 0.86 +/- 0.14 (95% confidence interval), respectively. Chi-square analysis showed no dependence of sensitivity on type (P = 0.44), location (P = 0.243), or size (P = 0.73) of the CNV lesions. A home device preferential hyperacuity perimeter has good sensitivity and specificity in discriminating between patients with newly diagnosed CNV and intermediate AMD. Sensitivity is not dependent on lesion characteristics.

  18. Performance of Ultrasound in the Diagnosis of Gout in a Multicenter Study: Comparison With Monosodium Urate Monohydrate Crystal Analysis as the Gold Standard.

    PubMed

    Ogdie, Alexis; Taylor, William J; Neogi, Tuhina; Fransen, Jaap; Jansen, Tim L; Schumacher, H Ralph; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Eliseev, Maxim; McCarthy, Geraldine; Stamp, Lisa K; Perez-Ruiz, Fernando; Sivera, Francisca; Ea, Hang-Korng; Gerritsen, Martijn; Cagnotto, Giovanni; Cavagna, Lorenzo; Lin, Chingtsai; Chou, Yin-Yi; Tausche, Anne-Kathrin; Lima Gomes Ochtrop, Manuella; Janssen, Matthijs; Chen, Jiunn-Horng; Slot, Ole; Lazovskis, Juris; White, Douglas; Cimmino, Marco A; Uhlig, Till; Dalbeth, Nicola

    2017-02-01

    To examine the performance of ultrasound (US) for the diagnosis of gout using the presence of monosodium urate monohydrate (MSU) crystals as the gold standard. We analyzed data from the Study for Updated Gout Classification Criteria (SUGAR), a large, multicenter observational cross-sectional study of consecutive subjects with at least 1 swollen joint who conceivably may have gout. All subjects underwent arthrocentesis; cases were subjects with confirmed MSU crystals. Rheumatologists or radiologists who were blinded with regard to the results of the MSU crystal analysis performed US on 1 or more clinically affected joints. US findings of interest were double contour sign, tophus, and snowstorm appearance. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Multivariable logistic regression models were used to examine factors associated with positive US results among subjects with gout. US was performed in 824 subjects (416 cases and 408 controls). The sensitivity, specificity, PPV, and NPV for the presence of any 1 of the features were 76.9%, 84.3%, 83.3%, and 78.2%, respectively. Sensitivity was higher among subjects with a disease duration of ≥2 years and among subjects with subcutaneous nodules on examination (suspected tophus). Associations with a positive US finding included suspected clinical tophus (odds ratio [OR] 4.77 [95% confidence interval (95% CI) 2.23-10.21]), any abnormality on plain radiography (OR 4.68 [95% CI 2.68-8.17]), and serum urate level (OR 1.31 [95% CI 1.06-1.62]). US features of MSU crystal deposition had high specificity and high PPV but more limited sensitivity for early gout. The specificity remained high in subjects with early disease and without clinical signs of tophi. © 2016, American College of Rheumatology.

  19. Performance of Ultrasound in the Diagnosis of Gout in a Multi-Center Study: Comparison with Monosodium Urate Crystal Analysis as the Gold Standard

    PubMed Central

    Ogdie, Alexis; Taylor, William J; Neogi, Tuhina; Fransen, Jaap; Jansen, Tim L; Schumacher, H. Ralph; Louthrenoo, Worawit; Vazquez-Mellado, Janitzia; Eliseev, Maxim; McCarthy, Geraldine; Stamp, Lisa K.; Perez-Ruiz, Fernando; Sivera, Francisca; Ea, Hang-Korng; Gerritsen, Martijn; Cagnotto, Giovanni; Cavagna, Lorenzo; Lin, Chingtsai; Chou, Yin-Yi; Tausche, Anne-Kathrin; Ochtrop, Manuella Lima Gomes; Janssen, Matthijs; Chen, Jiunn-Horng; Slot, Ole; Lazovskis, Juris; White, Douglas; Cimmino, Marco A.; Uhlig, Till; Dalbeth, Nicola

    2017-01-01

    Objectives To examine the performance of ultrasound for the diagnosis of gout using presence of monosodium urate (MSU) crystals as the gold standard. Methods We analyzed data from the Study for Updated Gout Classification Criteria (SUGAR), a large, multi-center observational cross-sectional study of consecutive subjects with at least one swollen joint who conceivably may have gout. All subjects underwent arthrocentesis; cases were subjects with MSU crystal confirmation. Rheumatologists or radiologists, blinded to the results of the MSU crystal analysis, performed ultrasound on one or more clinically affected joints. Ultrasound findings of interest were: double contour sign (DCS), tophus, and ‘snowstorm’ appearance. Sensitivity, specificity, positive and negative predictive values (PPV and NPV) were calculated. Multivariable logistic regression models were used to examine factors associated with positive ultrasound results among subjects with gout. Results Ultrasound was performed in 824 subjects (416 cases and 408 controls). The sensitivity, specificity, PPV and NPV for the presence of any one of the features were 76.9%, 84.3%, 83.3% and 78.1% respectively. Sensitivity was higher among subjects with disease ≥2 years duration and among subjects with subcutaneous nodules on exam (suspected tophus). Associations with a positive ultrasound finding included suspected clinical tophus (odds ratio 4.77; 95% CI 2.23–10.21), any abnormal plain film radiograph (4.68; 2.68–8.17) and serum urate (1.31; 1.06–1.62). Conclusions Ultrasound features of MSU crystal deposition had high specificity and high positive predictive value but more limited sensitivity for early gout. The specificity remained high in subjects with early disease and without clinical signs of tophi. PMID:27748084

  20. Performance of two strategies for urgent ANCA and anti-GBM analysis in vasculitis.

    PubMed

    de Joode, Anoek A E; Roozendaal, Caroline; van der Leij, Marcel J; Bungener, Laura B; Sanders, Jan Stephan F; Stegeman, Coen A

    2014-02-01

    In anti-neutrophil cytoplasmic antibodies (ANCA) associated small vessel vasculitis (AAV), rapid testing for ANCA and anti-glomerular basement membrane (GBM) antibodies may be beneficial for therapeutic purpose. We analysed the diagnostic performance of two rapid ANCA and anti-GBM test methods in 260 patients with suspected AAV. Between January 2004 and November 2010, we analysed 260 samples by qualitative Dotblot (Biomedical Diagnostics); retrospective analysis followed with directly coated highly sensitive automated Phadia ELiA and ELiA anti-GBM. Results were related to the final clinical diagnosis and compared with routine capture ELISA. Seventy-four patients had a final diagnosis of AAV (n=62) or anti-GBM disease (n=12). Both Dotblot and ELiA detected all 12 cases of anti-GBM disease; 2 false positive results were found. Dotblot detected ANCA in 56 of 62 AAV patients (sensitivity 90%, NPV 97%), and showed 5 false positives (specificity 97%, PPV 90%). The Phadia ELiA anti-PR3(s) or anti-MPO(s) was positive in 57 of 62 AAV patients (sensitivity 92%, NPV 97%), and had 5 false positives (specificity 97%, PPV 88%). Routine capture ELISA was equally accurate (sensitivity 94%, specificity 97%, PPV 88%, NPV 98%). The Dotblot and Phadia ELiA on anti-GBM, anti-PR3(s) and anti-MPO(s) performed excellently; results were almost identical to routine ELISA. When suspicion of AAV or anti-GBM disease is high and diagnosis is urgently needed, both tests are very powerful for rapid serological diagnosis. Further studies have to confirm the test performances in samples routinely presented for ANCA testing and in follow-up of positive patients. Copyright © 2013 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  1. The performance of blood pressure-to-height ratio as a screening measure for identifying children and adolescents with hypertension: a meta-analysis.

    PubMed

    Ma, Chunming; Liu, Yue; Lu, Qiang; Lu, Na; Liu, Xiaoli; Tian, Yiming; Wang, Rui; Yin, Fuzai

    2016-02-01

    The blood pressure-to-height ratio (BPHR) has been shown to be an accurate index for screening hypertension in children and adolescents. The aim of the present study was to perform a meta-analysis assessing the performance of the BPHR in identifying hypertension. Electronic and manual searches were performed to identify studies of the BPHR. After methodological quality assessment and data extraction, pooled estimates of the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio, area under the receiver operating characteristic curve and summary receiver operating characteristics were assessed systematically. The extent of heterogeneity was also assessed. Six studies were identified for analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio and diagnostic odds ratio values of the BPHR, for assessment of hypertension, were 96% [95% confidence interval (CI)=0.95-0.97], 90% (95% CI=0.90-0.91), 10.68 (95% CI=8.03-14.21), 0.04 (95% CI=0.03-0.07) and 247.82 (95% CI=114.50-536.34), respectively. The area under the receiver operating characteristic curve was 0.9472. The BPHR had high diagnostic accuracy for identifying hypertension in children and adolescents.

  2. Evaluating language environment analysis system performance for Chinese: a pilot study in Shanghai.

    PubMed

    Gilkerson, Jill; Zhang, Yiwen; Xu, Dongxin; Richards, Jeffrey A; Xu, Xiaojuan; Jiang, Fan; Harnsberger, James; Topping, Keith

    2015-04-01

    The purpose of this study was to evaluate performance of the Language Environment Analysis (LENA) automated language-analysis system for the Chinese Shanghai dialect and Mandarin (SDM) languages. Volunteer parents of 22 children aged 3-23 months were recruited in Shanghai. Families provided daylong in-home audio recordings using LENA. A native speaker listened to 15 min of randomly selected audio samples per family to label speaker regions and provide Chinese character and SDM word counts for adult speakers. LENA segment labeling and counts were compared with rater-based values. LENA demonstrated good sensitivity in identifying adult and child; this sensitivity was comparable to that of American English validation samples. Precision was strong for adults but less so for children. LENA adult word count correlated strongly with both Chinese characters and SDM word counts. LENA conversational turn counts correlated similarly with rater-based counts after the exclusion of three unusual samples. Performance related to some degree to child age. LENA adult word count and conversational turn provided reasonably accurate estimates for SDM over the age range tested. Theoretical and practical considerations regarding LENA performance in non-English languages are discussed. Despite the pilot nature and other limitations of the study, results are promising for broader cross-linguistic applications.

  3. Semi-micro high-performance liquid chromatographic analysis of tiropramide in human plasma using column-switching.

    PubMed

    Baek, Soo Kyoung; Lee, Seung Seok; Park, Eun Jeon; Sohn, Dong Hwan; Lee, Hye Suk

    2003-02-05

    A rapid and sensitive column-switching semi-micro high-performance liquid chromatography method was developed for the direct analysis of tiropramide in human plasma. The plasma sample (100 microl) was directly injected onto Capcell Pak MF Ph-1 precolumn where deproteinization and analyte fractionation occurred. Tiropramide was then eluted into an enrichment column (Capcell Pak UG C(18)) using acetonitrile-potassium phosphate (pH 7.0, 50 mM) (12:88, v/v) and was analyzed on a semi-micro C(18) analytical column using acetonitrile-potassium phosphate (pH 7.0, 10 mM) (50:50, v/v). The method showed excellent sensitivity (limit of quantification 5 ng/ml), and good precision (C.V.

  4. Analysis of imazaquin in soybeans by solid-phase extraction and high-performance liquid chromatography.

    PubMed

    Guo, C; Hu, J-Y; Chen, X-Y; Li, J-Z

    2008-02-01

    An analytical method for the determination of imazaquin residues in soybeans was developed. The developed liquid/liquid partition and strong anion exchange solid-phase extraction procedures provide effective cleanup, removing the greatest number of sample matrix interferences. By optimizing the pH of the water/acetonitrile mobile phase with phosphoric acid, using a C-18 reverse-phase chromatographic column, and employing ultraviolet detection, excellent peak resolution was achieved. The combined cleanup and chromatographic method steps reported herein were sensitive and reliable for determining imazaquin residues in soybean samples. The method is characterized by recovery >88.4%, precision <6.7% CV, and sensitivity of 0.005 ppm, in agreement with directives for method validation in residue analysis. Imazaquin residues in soybeans were further confirmed by high-performance liquid chromatography-mass spectrometry (LC-MS). The proposed method was successfully applied to the analysis of imazaquin residues in soybean samples grown in an experimental field after treatment with an imazaquin formulation.

  5. Optimal design of an electro-hydraulic valve for heavy-duty vehicle clutch actuator with certain constraints

    NASA Astrophysics Data System (ADS)

    Meng, Fei; Shi, Peng; Karimi, Hamid Reza; Zhang, Hui

    2016-02-01

    The main objective of this paper is to investigate the sensitivity analysis and optimal design of a proportional solenoid valve (PSV) operated pressure reducing valve (PRV) for heavy-duty automatic transmission clutch actuators. The nonlinear electro-hydraulic valve model is developed based on fluid dynamics. In order to implement the sensitivity analysis and optimization for the PRV, the PSV model is validated by comparing the results with data obtained from a real test-bench. The sensitivity of the PSV pressure response with regard to the structural parameters is investigated by using Sobol's method. Finally, simulations and experimental investigations are performed on the optimized prototype and the results reveal that the dynamical characteristics of the valve have been improved in comparison with the original valve.

  6. Cutoff values for bacteria and leukocytes for urine sediment analyzer FUS200 in culture-positive urinary-tract infections.

    PubMed

    Kocer, Derya; Sarıguzel, Fatma M; Karakukcu, Cıgdem

    2014-08-01

    The microscopic analysis of urine is essential for the diagnosis of patients with urinary-tract infections. Quantitative urine culture is the 'gold standard' method for definitive diagnosis of urinary-tract infections, but it is labor-intensive, time-consuming, and does not provide same-day results. The aim of this study was to evaluate the analytical and diagnostic performance of the FUS200 (Changchun Dirui Industry, China), a new urine sediment analyzer, in comparison to urine culture as the reference method. We evaluated 1000 urine samples submitted for culture and urine analysis with a preliminary diagnosis of urinary-tract infection. Cut-off values for the FUS200 were determined by comparing the results with urine cultures. The cut-off values were derived by the receiver operating characteristic (ROC) curve technique, and sensitivity and specificity were calculated for bacteria and white blood cells (WBCs). Among the 1000 urine specimens submitted for culture, 637 cultures (63.7%) were negative and 363 (36.3%) were positive. The best cut-off values obtained from ROC analysis were 16/μL for bacteriuria (sensitivity: 82.3%, specificity: 58%) and 34/μL for WBCs (sensitivity: 72.3%, specificity: 65.2%). The areas under the curve (AUC) for the bacteria and WBC counts were 0.71 (95% CI: 0.67-0.74) and 0.72 (95% CI: 0.69-0.76), respectively. The most important requirement of a rapid diagnostic screening test is sensitivity, and in this respect the sensitivity of bacteria recognition and quantification by the FUS200 analyzer was found to be unsatisfactory. After further technical improvements in particle recognition and laboratory personnel training, the FUS200 might show better results.
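
    Cut-off selection of the kind described above is usually done by maximizing Youden's J on the ROC curve. A minimal sketch with scikit-learn, on simulated bacteria counts rather than the study's data:

      import numpy as np
      from sklearn.metrics import roc_curve, roc_auc_score

      rng = np.random.default_rng(0)
      # Simulated bacteria counts per microlitre for culture-negative and culture-positive samples
      neg = rng.lognormal(mean=2.0, sigma=1.0, size=637)
      pos = rng.lognormal(mean=3.5, sigma=1.0, size=363)
      y_true = np.r_[np.zeros_like(neg), np.ones_like(pos)]
      scores = np.r_[neg, pos]

      fpr, tpr, thresholds = roc_curve(y_true, scores)
      j = tpr - fpr                                # Youden's J statistic
      best = int(np.argmax(j))
      print(f"AUC = {roc_auc_score(y_true, scores):.2f}, "
            f"cut-off ~ {thresholds[best]:.0f}/uL "
            f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")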

  7. Performance Model and Sensitivity Analysis for a Solar Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Rehman, Naveed Ur; Siddiqui, Mubashir Ali

    2017-03-01

    In this paper, a regression model for evaluating the performance of solar concentrated thermoelectric generators (SCTEGs) is established and the significance of contributing parameters is discussed in detail. The model is based on several natural, design and operational parameters of the system, including the thermoelectric generator (TEG) module and its intrinsic material properties, the connected electrical load, concentrator attributes, heat transfer coefficients, solar flux, and ambient temperature. The model is developed by fitting a response curve, using the least-squares method, to the results. The sample points for the model were obtained by simulating a thermodynamic model, also developed in this paper, over a range of values of input variables. These samples were generated employing the Latin hypercube sampling (LHS) technique using a realistic distribution of parameters. The coefficient of determination was found to be 99.2%. The proposed model is validated by comparing the predicted results with those in the published literature. In addition, based on the elasticity for parameters in the model, sensitivity analysis was performed and the effects of parameters on the performance of SCTEGs are discussed in detail. This research will contribute to the design and performance evaluation of any SCTEG system for a variety of applications.
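
    The workflow described above (Latin hypercube sampling of the inputs, a least-squares response surface, and elasticity-based sensitivity ranking) can be illustrated compactly; the stand-in response function and parameter ranges below are invented and are not the SCTEG model.

      import numpy as np
      from scipy.stats import qmc

      def response(x):
          # Stand-in for the thermodynamic simulation: efficiency vs. three inputs
          flux, h_loss, t_amb = x.T
          return 0.08 * flux / (1.0 + 0.02 * h_loss) - 1e-4 * t_amb

      # Latin hypercube sample of the input space (solar flux, loss coefficient, ambient T)
      sampler = qmc.LatinHypercube(d=3, seed=0)
      lower, upper = [600.0, 5.0, 280.0], [1000.0, 25.0, 320.0]
      X = qmc.scale(sampler.random(n=200), lower, upper)
      y = response(X)

      # Least-squares linear response surface y ~ b0 + b.x
      A = np.column_stack([np.ones(len(X)), X])
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)

      # Elasticity of the response to each input, evaluated at the sample means
      elasticity = coef[1:] * X.mean(axis=0) / y.mean()
      print("elasticities:", elasticity.round(2))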

  8. Trace analysis in the food and beverage industry by capillary gas chromatography: system performance and maintenance.

    PubMed

    Hayes, M A

    1988-04-01

    Gas chromatography (GC) is the most widely used analytical technique in the food and beverage industry. This paper addresses the problems of sample preparation and system maintenance to ensure the most sensitive, durable, and efficient results for trace analysis by GC in this industry.

  9. Efficiency in the Community College Sector: Stochastic Frontier Analysis

    ERIC Educational Resources Information Center

    Agasisti, Tommaso; Belfield, Clive

    2017-01-01

    This paper estimates technical efficiency scores across the community college sector in the United States. Using stochastic frontier analysis and data from the Integrated Postsecondary Education Data System for 2003-2010, we estimate efficiency scores for 950 community colleges and perform a series of sensitivity tests to check for robustness. We…

  10. Increasing productivity for the analysis of trace contaminants in food by gas chromatography-mass spectrometry using automated liner exchange, backflushing and heart-cutting.

    PubMed

    David, Frank; Tienpont, Bart; Devos, Christophe; Lerch, Oliver; Sandra, Pat

    2013-10-25

    Laboratories focusing on residue analysis in food are continuously seeking to increase sample throughput by minimizing sample preparation. Generic sample extraction methods such as QuEChERS lack selectivity and consequently extracts are not free from non-volatile material that contaminates the analytical system. Co-extracted matrix constituents interfere with target analytes, even if highly sensitive and selective GC-MS/MS is used. A number of GC approaches are described that can be used to increase laboratory productivity. These techniques include automated inlet liner exchange and column backflushing for preservation of the performance of the analytical system and heart-cutting two-dimensional GC for increasing sensitivity and selectivity. The application of these tools is illustrated by the analysis of pesticides in vegetables and fruits, PCBs in milk powder and coplanar PCBs in fish. It is demonstrated that considerable increase in productivity can be achieved by decreasing instrument down-time, while analytical performance is equal or better compared to conventional trace contaminant analysis. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Uncertainty and sensitivity analysis for two-phase flow in the vicinity of the repository in the 1996 performance assessment for the Waste Isolation Pilot Plant: Disturbed conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HELTON,JON CRAIG; BEAN,J.E.; ECONOMY,K.

    2000-05-22

    Uncertainty and sensitivity analysis results obtained in the 1996 performance assessment (PA) for the Waste Isolation Pilot Plant (WIPP) are presented for two-phase flow in the vicinity of the repository under disturbed conditions resulting from drilling intrusions. Techniques based on Latin hypercube sampling, examination of scatterplots, stepwise regression analysis, partial correlation analysis and rank transformations are used to investigate brine inflow, gas generation, repository pressure, brine saturation, and brine and gas outflow. Of the variables under study, repository pressure and brine flow from the repository to the Culebra Dolomite are potentially the most important in PA for the WIPP. Subsequent to a drilling intrusion, repository pressure was dominated by borehole permeability and was generally below the level (i.e., 8 MPa) that could potentially produce spallings and direct brine releases. Brine flow from the repository to the Culebra Dolomite tended to be small or nonexistent, with its occurrence and size also dominated by borehole permeability.
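
    Of the sampling-based techniques listed in this record, partial correlation on rank-transformed data is among the most widely used. The sketch below computes partial rank correlation coefficients (PRCCs) between sampled inputs and a toy output; the inputs, output and model are invented and are not the WIPP analysis.

      import numpy as np
      from scipy.stats import rankdata

      def prcc(X, y):
          # Partial rank correlation of each input column with y, adjusting for the other inputs
          R = np.column_stack([rankdata(c) for c in X.T])
          ry = rankdata(y)
          coeffs = []
          for i in range(R.shape[1]):
              others = np.column_stack([np.ones(len(ry)), np.delete(R, i, axis=1)])
              res_x = R[:, i] - others @ np.linalg.lstsq(others, R[:, i], rcond=None)[0]
              res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
              coeffs.append(np.corrcoef(res_x, res_y)[0, 1])
          return np.array(coeffs)

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(300, 3))       # e.g. borehole permeability, porosity, gas-generation rate
      y = 5 * X[:, 0]**2 + 0.5 * X[:, 1] + rng.normal(0, 0.1, 300)   # toy repository response
      print("PRCC:", prcc(X, y).round(2))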

  12. Sensitivity of simulated maize crop yields to regional climate in the Southwestern United States

    NASA Astrophysics Data System (ADS)

    Kim, S.; Myoung, B.; Stack, D.; Kim, J.; Hatzopoulos, N.; Kafatos, M.

    2013-12-01

    The sensitivity of maize yield to the regional climate in the Southwestern United States (SW US) has been investigated by using a crop-yield simulation model (APSIM) in conjunction with meteorological forcings (daily minimum and maximum temperature, precipitation, and radiation) from the North American Regional Reanalysis (NARR) dataset. The primary focus of this study is to examine the effects of interannual variations of atmospheric components on crop productivity in the SW US over the 21-year period (1991 to 2011). First, the characteristics and performance of APSIM were examined by comparing simulated maize yields with observed yields from the United States Department of Agriculture (USDA) and the leaf-area index (LAI) from MODIS satellite data. Comparisons of the simulated maize yield with the available observations show that the crop model can reasonably reproduce observed maize yields. Sensitivity tests were performed to assess the relative contribution of each climate driver to regional crop yield. These experiments show that potential crop production responds nonlinearly to climate drivers and that yield sensitivity varies among geographical locations depending on their mean climates. Lastly, a detailed analysis of both the spatial and temporal variations of each climate driver was performed for the regions where maize is actually grown in three states (CA, AZ, and NV) in the SW US.

  13. Stabilization of flow past a rounded cylinder

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Zhang, Wei

    2016-11-01

    We perform global linear stability analysis on low-Re flow past a rounded cylinder. The cylinder corners are rounded with a radius R, normalized as R+ = R/D where D is the cylinder diameter, and its effect on the flow stability characteristics is investigated. We compute the critical Reynolds number (Recr) for the onset of the first instability, and quantify the perturbation growth rate for the super-critical flows. It is found that the flow can be stabilized by partially rounding the cylinder. Compared with the square and circular cylinders, the partially rounded cylinder has a higher Recr, attaining a maximum at around R+ = 0.30, and the perturbation growth rate of the super-critical flows is reduced for Re <= 100. We perform sensitivity analysis to explore the source of the stabilization. The growth rate sensitivity to base flow modification has two different spatial structures: the growth rate is sensitive to the wake backflow in a large region for square-like cylinders (R+ -> 0.00), while only the near-wake backflow is crucial for circular-like cylinders (R+ -> 0.50). The stability analysis results are also verified against those of the direct simulations and very good agreement is achieved. Supported by the KAUST Office of Competitive Research Funds under Award No. URF/1/1394-01. The supercomputer Shaheen at KAUST was utilized for the simulations.

  14. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290

  15. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  16. Testing the effectiveness of simplified search strategies for updating systematic reviews.

    PubMed

    Rice, Maureen; Ali, Muhammad Usman; Fitzpatrick-Lewis, Donna; Kenny, Meghan; Raina, Parminder; Sherifali, Diana

    2017-08-01

    The objective of the study was to test the overall effectiveness of a simplified search strategy (SSS) for updating systematic reviews. We identified nine systematic reviews undertaken by our research group for which both comprehensive and SSS updates were performed. Three relevant performance measures were estimated, that is, sensitivity, precision, and number needed to read (NNR). The update reference searches for all nine included systematic reviews identified a total of 55,099 citations that were screened resulting in final inclusion of 163 randomized controlled trials. As compared with reference search, the SSS resulted in 8,239 hits and had a median sensitivity of 83.3%, while precision and NNR were 4.5 times better. During analysis, we found that the SSS performed better for clinically focused topics, with a median sensitivity of 100% and precision and NNR 6 times better than for the reference searches. For broader topics, the sensitivity of the SSS was 80% while precision and NNR were 5.4 times better compared with reference search. SSS performed well for clinically focused topics and, with a median sensitivity of 100%, could be a viable alternative to a conventional comprehensive search strategy for updating this type of systematic reviews particularly considering the budget constraints and the volume of new literature being published. For broader topics, 80% sensitivity is likely to be considered too low for a systematic review update in most cases, although it might be acceptable if updating a scoping or rapid review. Copyright © 2017 Elsevier Inc. All rights reserved.
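
    The three performance measures above reduce to simple ratios, for example (with invented counts for one hypothetical review update):

      # Hypothetical counts for one review update: the simplified search retrieves
      # 'hits' records, capturing 'relevant_found' of the 'relevant_total' eligible trials.
      hits, relevant_found, relevant_total = 900, 20, 24

      sensitivity = relevant_found / relevant_total   # share of eligible trials retrieved
      precision = relevant_found / hits               # share of retrieved records that are eligible
      nnr = 1 / precision                             # records read per eligible trial found
      print(f"sensitivity {sensitivity:.1%}, precision {precision:.2%}, NNR {nnr:.0f}")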

  17. Probabilistic Analysis of Solid Oxide Fuel Cell Based Hybrid Gas Turbine System

    NASA Technical Reports Server (NTRS)

    Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.

    2003-01-01

    The emergence of fuel cell systems and hybrid fuel cell systems requires the evolution of analysis strategies for evaluating thermodynamic performance. A gas turbine thermodynamic cycle integrated with a fuel cell was computationally simulated and probabilistically evaluated in view of the several uncertainties in the thermodynamic performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the uncertainties in the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost effective. The analysis leads to the selection of criteria for gas turbine performance.
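
    A generic Monte Carlo version of such a probabilistic evaluation propagates the uncertain parameters through the cycle model, builds the cumulative distribution of efficiency, and ranks inputs by sensitivity factors; the sketch below uses an invented efficiency surrogate and invented distributions in place of the actual cycle simulation.

      import numpy as np
      from scipy.stats import spearmanr

      def thermal_efficiency(tit, comp_eff, fc_util):
          # Invented surrogate for the hybrid-cycle model: efficiency vs. turbine inlet
          # temperature, compressor efficiency and fuel-cell fuel utilization
          return 0.30 + 1e-4 * (tit - 1200.0) + 0.25 * (comp_eff - 0.85) + 0.20 * (fc_util - 0.80)

      rng = np.random.default_rng(0)
      n = 50_000
      tit = rng.normal(1250.0, 25.0, n)        # turbine inlet temperature, K
      comp_eff = rng.normal(0.86, 0.02, n)
      fc_util = rng.normal(0.82, 0.03, n)
      eta = thermal_efficiency(tit, comp_eff, fc_util)

      # One point on the cumulative distribution function of overall efficiency
      print("P(efficiency < 0.32) =", round(float(np.mean(eta < 0.32)), 3))

      # Sensitivity factors as rank correlations between each input and the output
      for name, x in [("TIT", tit), ("compressor eff.", comp_eff), ("fuel utilization", fc_util)]:
          print(name, round(float(spearmanr(x, eta)[0]), 2))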

  18. Application of the JENDL-4.0 nuclear data set for uncertainty analysis of the prototype FBR Monju

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with uncertainty analysis of the Monju reactor using JENDL-4.0 and the ERANOS code [1]. In 2010 the Japan Atomic Energy Agency (JAEA) released the JENDL-4.0 nuclear data set. This new evaluation contains improved values of cross-sections and emphasizes accurate covariance matrices. Also in 2010, JAEA restarted the sodium-cooled fast reactor prototype Monju after about 15 years of shutdown. The long shutdown time resulted in a build-up of ²⁴¹Am by natural decay from the initially loaded Pu. As well as improved covariance matrices, JENDL-4.0 is announced to contain improved data for minor actinides [2]. The choice of the Monju reactor as an application of the new evaluation thus seems even more relevant. The uncertainty analysis requires the determination of sensitivity coefficients. The well-established ERANOS code was chosen because of its integrated modules that allow users to perform sensitivity and uncertainty analysis. A JENDL-4.0 cross-sections library is not available for ERANOS. Therefore a cross-sections library had to be made from the original ENDF files for the ECCO cell code (part of ERANOS). For confirmation of the newly made library, calculations of a benchmark core were performed. These calculations used the MZA and MZB benchmarks and showed consistent results with other libraries. Calculations for the Monju reactor were performed using hexagonal 3D geometry and PN transport theory. However, the ERANOS sensitivity modules cannot use the resulting fluxes, as these modules require finite-difference-based fluxes, obtained from RZ SN-transport or 3D diffusion calculations. The corresponding geometrical models have been made and the results verified with Monju restart experimental data [4]. Uncertainty analysis was performed using the RZ model. The JENDL-4.0 uncertainty analysis showed a significant reduction of the uncertainty related to the fission cross-section of Pu, along with an increase of the uncertainty related to the capture cross-section of ²³⁸U, compared with the previous JENDL-3.3 version. Covariance data recently added in JENDL-4.0 for ²⁴¹Am appear to have a non-negligible contribution. (authors)

  19. Polymorphisms of three genes (ACE, AGT and CYP11B2) in the renin-angiotensin-aldosterone system are not associated with blood pressure salt sensitivity: A systematic meta-analysis.

    PubMed

    Sun, Jiahong; Zhao, Min; Miao, Song; Xi, Bo

    2016-01-01

    Many studies have suggested that polymorphisms of three key genes (ACE, AGT and CYP11B2) in the renin-angiotensin-aldosterone system (RAAS) play important roles in the development of blood pressure (BP) salt sensitivity, but they have revealed inconsistent results. Thus, we performed a meta-analysis to clarify the association. PubMed and Embase databases were searched for eligible published articles. Fixed- or random-effect models were used to pool odds ratios and 95% confidence intervals, based on whether there was significant heterogeneity between studies. In total, seven studies [237 salt-sensitive (SS) cases and 251 salt-resistant (SR) controls] for the ACE gene I/D polymorphism, three studies (130 SS cases and 221 SR controls) for the AGT gene M235T polymorphism and three studies (113 SS cases and 218 SR controls) for the CYP11B2 gene C344T polymorphism were included in this meta-analysis. The results showed no significant association between these three RAAS polymorphisms and BP salt sensitivity under three genetic models (all p > 0.05). The meta-analysis suggested that the three polymorphisms (ACE gene I/D, AGT gene M235T, CYP11B2 gene C344T) in the RAAS have no significant effect on BP salt sensitivity.

  20. Sensitivity of the stanford sleepiness scale to the effects of cumulative partial sleep deprivation and recovery oversleeping.

    PubMed

    Herscovitch, J; Broughton, R

    1981-01-01

    The sensitivity of the Stanford Sleepiness Scale (SSS) to short-term cumulative partial sleep deprivation (PSD) and subsequent recovery oversleeping was examined. A repeated-measures design included 7 paid healthy undergraduate volunteers, who were normal sleepers (mean sleep time 7.6 hr), and consisted of the following schedule: (a) pre-baseline; (b) sleep reduction by 40% of one night's sleep (mean, 4.6 hr) for 5 nights; (c) recovery oversleeping for night 1 (mean, 10.6 hr) and night 2 (mean, 9.1 hr); (d) post-baseline. Daytime performance testing utilized a 1 hr auditory vigilance task and four short-duration (10 min) tests, two of which have been shown to be sensitive to total and partial sleep loss effects. Subjects completed SSS forms every min while awake and 1-9 scales of mood and energy upon awakening. Subjective measures were analyzed across conditions for mean all-day and task-related SSS values and mood and energy ratings. A correlational analysis investigated individual correspondences between ratings and performance. Results indicate that the SSS is sensitive to deficits in alertness following PSD. However, it generally does not predict individual performance efficiency and therefore cannot act as a substitute for performance measures in studies involving chronic sleep loss.

  1. Development of a Mars Airplane Entry, Descent, and Flight Trajectory

    NASA Technical Reports Server (NTRS)

    Murray, James E.; Tartabini, Paul V.

    2001-01-01

    An entry, descent, and flight (EDF) trajectory profile for a Mars airplane mission is defined as consisting of the following elements: ballistic entry of an aeroshell; supersonic deployment of a decelerator parachute; subsonic release of a heat shield; release, unfolding, and orientation of an airplane to flight attitude; and execution of a pull up maneuver to achieve trimmed, horizontal flight. Using the Program to Optimize Simulated Trajectories (POST) a trajectory optimization problem was formulated. Model data representative of a specific Mars airplane configuration, current models of the Mars surface topography and atmosphere, and current estimates of the interplanetary trajectory, were incorporated into the analysis. The goal is to develop an EDF trajectory to maximize the surface-relative altitude of the airplane at the end of a pull up maneuver, while subject to the mission design constraints. The trajectory performance was evaluated for three potential mission sites and was found to be site-sensitive. The trajectory performance, examined for sensitivity to a number of design and constraint variables, was found to be most sensitive to airplane mass, aerodynamic performance characteristics, and the pull up Mach constraint. Based on the results of this sensitivity study, an airplane-drag optimized trajectory was developed that showed a significant performance improvement.

  2. Establishing the Capability of a 1D SVAT Modelling Scheme in Predicting Key Biophysical Vegetation Characterisation Parameters

    NASA Astrophysics Data System (ADS)

    Ireland, Gareth; Petropoulos, George P.; Carlson, Toby N.; Purdy, Sarah

    2015-04-01

    Sensitivity analysis (SA) consists of an integral and important validatory check of a computer simulation model before it is used to perform any kind of analysis. In the present work, we present the results from a SA performed on the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model utilising a cutting edge and robust Global Sensitivity Analysis (GSA) approach, based on the use of the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. The sensitivity of the following model outputs was evaluated: the ambient CO2 concentration and the rate of CO2 uptake by the plant, the ambient O3 concentration, the flux of O3 from the air to the plant/soil boundary, and the flux of O3 taken up by the plant alone. The most sensitive model inputs for the majority of model outputs were related to the structural properties of vegetation, namely, the Leaf Area Index, Fractional Vegetation Cover, Cuticle Resistance and Vegetation Height. External CO2 in the leaf and the O3 concentration in the air input parameters also exhibited significant influence on model outputs. This work presents a very important step towards an all-inclusive evaluation of SimSphere. Indeed, results from this study contribute decisively towards establishing its capability as a useful teaching and research tool in modelling Earth's land surface interactions. This is of considerable importance in the light of the rapidly expanding use of this model worldwide, which also includes research conducted by various Space Agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale. This research was supported by the European Commission Marie Curie Re-Integration Grant "TRANSFORM-EO". SimSphere is currently maintained and freely distributed by the Department of Geography and Earth Sciences at Aberystwyth University (http://www.aber.ac.uk/simsphere). Keywords: CO2 flux, ambient CO2, O3 flux, SimSphere, Gaussian process emulators, BACCO GEM-SA, TRANSFORM-EO.

  3. Structural reliability methods: Code development status

    NASA Astrophysics Data System (ADS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-05-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  4. Structural reliability methods: Code development status

    NASA Technical Reports Server (NTRS)

    Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.

    1991-01-01

    The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.

  5. Validity and usefulness of the Line Drill test for adolescent basketball players: a Bayesian multilevel analysis.

    PubMed

    Carvalho, Humberto M; Gonçalves, Carlos E; Grosgeorge, Bernard; Paes, Roberto R

    2017-01-01

    The study examined the validity of the Line Drill test (LD) in male adolescent basketball players (10-15 years). Sensitivity of the LD to changes in performance across a training and competition season (4 months) was also considered. Age, maturation, body size and LD were measured (n = 57). Sensitivity of the LD was examined pre- and post-competitive season in a sub-sample (n = 44). The time at each of the four shuttle sprints of the LD (i.e. four stages) was modelled with Bayesian multilevel models. We observed a very large correlation of performance at stage 4 (full LD protocol) with stage 3, but lower correlations with the early LD stages. Players' performance by somatic maturity differed substantially only when considering full LD protocol performance. Substantial improvements in all stages of the protocol were observed across the 4-month competitive season. The LD protocol can therefore be shortened by omitting the last full-court shuttle sprint while remaining sensitive to training exposure and becoming independent of maturity status and body size.

  6. The diagnostic value of narrow-band imaging for early and invasive lung cancer: a meta-analysis.

    PubMed

    Zhu, Juanjuan; Li, Wei; Zhou, Jihong; Chen, Yuqing; Zhao, Chenling; Zhang, Ting; Peng, Wenjia; Wang, Xiaojing

    2017-07-01

    This study aimed to compare the ability of narrow-band imaging to detect early and invasive lung cancer with that of conventional pathological analysis and white-light bronchoscopy. We searched the PubMed, EMBASE, Sinomed, and China National Knowledge Infrastructure databases for relevant studies. Meta-DiSc software was used to perform data analysis, meta-regression analysis, sensitivity analysis, and heterogeneity testing, and STATA software was used to determine whether publication bias was present, as well as to calculate the relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer. A random-effects model was used to assess the diagnostic efficacy of the above modalities in cases in which a high degree of between-study heterogeneity was noted with respect to their diagnostic efficacies. The database search identified six studies including 578 patients. The pooled sensitivity and specificity of narrow-band imaging were 86% (95% confidence interval: 83-88%) and 81% (95% confidence interval: 77-84%), respectively, and the pooled sensitivity and specificity of white-light bronchoscopy were 70% (95% confidence interval: 66-74%) and 66% (95% confidence interval: 62-70%), respectively. The pooled relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy were 1.33 (95% confidence interval: 1.07-1.67) and 1.09 (95% confidence interval: 0.84-1.42), respectively. Sensitivity analysis showed that narrow-band imaging exhibited good diagnostic efficacy for detecting early and invasive lung cancer and that the results of the study were stable. Narrow-band imaging was superior to white-light bronchoscopy for detecting early and invasive lung cancer; however, the specificities of the two modalities did not differ significantly.
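
    For readers unfamiliar with how pooled sensitivities of this kind are computed, the sketch below shows a DerSimonian-Laird random-effects pooling of per-study sensitivities on the logit scale. The counts are made-up placeholders, not the six studies analysed here, and real meta-analyses such as this one pool specificity and other measures in the same way.

      # Minimal sketch: random-effects (DerSimonian-Laird) pooling of sensitivity.
      import numpy as np

      tp = np.array([40, 55, 30, 62, 48, 35])   # true positives per study (hypothetical)
      fn = np.array([6, 9, 4, 12, 7, 5])        # false negatives per study (hypothetical)

      sens = tp / (tp + fn)
      logit = np.log(sens / (1 - sens))
      var = 1 / tp + 1 / fn                     # approximate variance of the logit

      w = 1 / var
      theta_fixed = np.sum(w * logit) / np.sum(w)
      Q = np.sum(w * (logit - theta_fixed) ** 2)
      C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (len(tp) - 1)) / C)  # between-study variance

      w_re = 1 / (var + tau2)
      theta = np.sum(w_re * logit) / np.sum(w_re)
      se = np.sqrt(1 / np.sum(w_re))
      inv = lambda z: 1 / (1 + np.exp(-z))
      print(f"pooled sensitivity {inv(theta):.2f} "
            f"(95% CI {inv(theta - 1.96 * se):.2f}-{inv(theta + 1.96 * se):.2f})")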

  7. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY, which considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package, the Flexible Modeling Framework (FME), and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the model cost function is most sensitive to the plant production-related parameters (e.g., PPDF1 and PRDX). Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of the target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on the variables and seasons considered. This study also demonstrates that using cutting-edge R functions such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.
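
    The calibration step described above can be sketched generically. The example below uses SciPy's differential_evolution as a stand-in for the SCE algorithm to minimise a sum-of-squares cost between a toy flux model and synthetic observations; the model form, parameter names and bounds are illustrative assumptions, not EDCM or the FME package.

      # Minimal sketch: global parameter optimisation against flux observations.
      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(1)
      t = np.arange(365)

      def toy_carbon_flux(params, t):
          ppdf1, prdx = params                  # hypothetical production parameters
          return prdx * np.exp(-((t - 180) / (60 * ppdf1)) ** 2)

      obs = toy_carbon_flux([1.2, 8.0], t) + rng.normal(0, 0.3, t.size)

      def cost(params):
          # Sum-of-squares model cost function, as in typical calibration setups.
          return np.sum((toy_carbon_flux(params, t) - obs) ** 2)

      result = differential_evolution(cost, bounds=[(0.5, 2.0), (1.0, 15.0)], seed=1)
      print("optimised parameters:", result.x)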

  8. [MODIS Investigation

    NASA Technical Reports Server (NTRS)

    Abbott, Mark R.

    1996-01-01

    Our first activity is based on delivery of code to Bob Evans (University of Miami) for integration and eventual delivery to the MODIS Science Data Support Team. As we noted in our previous semi-annual report, coding required the development and analysis of an end-to-end model of fluorescence line height (FLH) errors and sensitivity. This model is described in a paper in press in Remote Sensing of Environment. Since the code was delivered to Miami, we have continued to use this error analysis to evaluate proposed changes in MODIS sensor specifications and performance. Simply evaluating such changes on a band-by-band basis may obscure the true impacts of changes in sensor performance that are manifested in the complete algorithm. This is especially true for FLH, which is sensitive to band placement and width. The error model will be used by Howard Gordon (Miami) to evaluate the effects of absorbing aerosols on FLH algorithm performance. Presently, FLH relies only on simple corrections for atmospheric effects (viewing geometry, Rayleigh scattering) without correcting for aerosols. Our analysis suggests that aerosols should have a small impact relative to changes in the quantum yield of fluorescence in phytoplankton. However, absorbing aerosols are a newly considered process, and their effect will be evaluated by Gordon.
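
    The sensitivity of FLH to band placement and width comes from its baseline-subtraction form. The sketch below shows one common FLH formulation and a first-order propagation of per-band radiance noise into FLH noise; the band centres and numeric values are nominal illustrations, not the delivered MODIS error model.

      # Minimal sketch: fluorescence line height and first-order noise propagation.
      import numpy as np

      lam13, lam14, lam15 = 667.0, 678.0, 748.0      # nm, nominal band centres

      def flh(L13, L14, L15):
          # Linear baseline between bands 13 and 15, evaluated at band 14.
          baseline = L13 + (L15 - L13) * (lam14 - lam13) / (lam15 - lam13)
          return L14 - baseline

      def flh_noise(s13, s14, s15):
          a = (lam14 - lam13) / (lam15 - lam13)      # weight on band 15
          return np.sqrt(((1 - a) * s13) ** 2 + s14 ** 2 + (a * s15) ** 2)

      print(flh(0.40, 0.46, 0.30), flh_noise(0.01, 0.01, 0.01))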

  9. Sensitivity analysis of a sediment dynamics model applied in a Mediterranean river basin: global change and management implications.

    PubMed

    Sánchez-Canales, M; López-Benito, A; Acuña, V; Ziv, G; Hamel, P; Chaplin-Kramer, R; Elorza, F J

    2015-01-01

    Climate change and land-use change are major factors influencing sediment dynamics. Models can be used to better understand sediment production and retention by the landscape, although their interpretation is limited by large uncertainties, including model parameter uncertainties. The uncertainties related to parameter selection may be significant and need to be quantified to improve model interpretation for watershed management. In this study, we performed a sensitivity analysis of the InVEST (Integrated Valuation of Environmental Services and Tradeoffs) sediment retention model in order to determine which model parameters had the greatest influence on model outputs, and therefore require special attention during calibration. The estimation of the sediment loads in this model is based on the Universal Soil Loss Equation (USLE). The sensitivity analysis was performed in the Llobregat basin (NE Iberian Peninsula) for exported and retained sediment, which support two different ecosystem service benefits (avoided reservoir sedimentation and improved water quality). Our analysis identified the model parameters related to the natural environment as the most influential for sediment export and retention. Accordingly, small changes in variables such as the magnitude and frequency of extreme rainfall events could cause major changes in sediment dynamics, demonstrating the sensitivity of these dynamics to climate change in Mediterranean basins. Parameters directly related to human activities and decisions (such as the cover management factor, C) were also influential, especially for exported sediment. The importance of these human-related parameters in the sediment export process suggests that mitigation measures have the potential to at least partially ameliorate climate-change-driven changes in sediment export. Copyright © 2014 Elsevier B.V. All rights reserved.
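
    Since the sediment loads rest on the Universal Soil Loss Equation, a crude one-at-a-time perturbation of its factors illustrates the kind of parameter screening performed here. The values below are placeholders, not Llobregat inputs, and the InVEST model adds routing and retention on top of this core equation.

      # Minimal sketch: USLE soil loss with a one-at-a-time +10% perturbation.
      def usle(R, K, LS, C, P):
          # Annual soil loss as the product of rainfall erosivity, soil erodibility,
          # slope length-steepness, cover management and support practice factors.
          return R * K * LS * C * P

      base = dict(R=1200.0, K=0.03, LS=1.8, C=0.15, P=1.0)
      A0 = usle(**base)

      for name in base:
          perturbed = dict(base, **{name: base[name] * 1.10})
          rel_change = (usle(**perturbed) - A0) / A0
          print(f"{name}: +10% input -> {rel_change:+.1%} change in soil loss")

    Because the equation is purely multiplicative, a uniform relative perturbation of any single factor changes soil loss by the same relative amount; the differing influences reported in the paper therefore reflect the plausible ranges and spatial patterns of each factor rather than the functional form alone.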

  10. A comprehensive approach to identify dominant controls of the behavior of a land surface-hydrology model across various hydroclimatic conditions

    NASA Astrophysics Data System (ADS)

    Haghnegahdar, Amin; Elshamy, Mohamed; Yassin, Fuad; Razavi, Saman; Wheater, Howard; Pietroniro, Al

    2017-04-01

    Complex physically-based environmental models are being increasingly used as the primary tool for watershed planning and management due to advances in computation power and data acquisition. Model sensitivity analysis plays a crucial role in understanding the behavior of these complex models and improving their performance. Due to the non-linearity of, and interactions within, these complex models, global sensitivity analysis (GSA) techniques should be adopted to provide a comprehensive understanding of model behavior and identify its dominant controls. In this study we adopt a multi-basin, multi-criteria GSA approach to systematically assess the behavior of the Modélisation Environmentale-Surface et Hydrologie (MESH) modelling system across various hydroclimatic conditions in Canada, including areas in the Great Lakes Basin, Mackenzie River Basin, and South Saskatchewan River Basin. MESH is a semi-distributed, physically-based coupled land surface-hydrology modelling system developed by Environment and Climate Change Canada (ECCC) for various water resources management purposes in Canada. We use a novel method, called Variogram Analysis of Response Surfaces (VARS), to perform sensitivity analysis. VARS is a variogram-based GSA technique that can efficiently provide a spectrum of sensitivity information across a range of scales within the parameter space. We use multiple metrics to identify dominant controls of model response (e.g. streamflow) to model parameters under various conditions such as high flows, low flows, and flow volume. We also investigate the influence of initial conditions on model behavior as part of this study. Our preliminary results suggest that this type of GSA can significantly help with estimating model parameters, decreasing the computational burden of calibration, and reducing prediction uncertainty.
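
    The variogram idea behind VARS can be conveyed with a small sketch: the directional variogram of the model response along one parameter axis summarises how strongly, and at what scale, that parameter drives the response. The example below is only a simplified illustration of that idea, not the published VARS algorithm (which uses star-based sampling and integrated variogram metrics), and the toy response surface is an assumption.

      # Minimal sketch: directional variograms of a toy response surface.
      import numpy as np

      rng = np.random.default_rng(2)

      def toy_response(x):
          # Stand-in for a streamflow error metric over three parameters.
          return np.sin(6 * x[:, 0]) + x[:, 1] ** 2 + 0.2 * x[:, 2]

      def directional_variogram(i, h, n=5000, dim=3):
          x = rng.uniform(0, 1 - h, size=(n, dim))
          x_step = x.copy()
          x_step[:, i] += h                 # step of size h along parameter i
          d = toy_response(x_step) - toy_response(x)
          return 0.5 * np.mean(d ** 2)

      for i in range(3):
          gammas = [round(directional_variogram(i, h), 3) for h in (0.05, 0.1, 0.3)]
          print(f"parameter {i}: gamma(h) at h = 0.05, 0.1, 0.3 -> {gammas}")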

  11. Sensitivity analysis of Monju using ERANOS with JENDL-4.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamagno, P.; Van Rooijen, W. F. G.; Takeda, T.

    2012-07-01

    This paper deals with sensitivity analysis using JENDL-4.0 nuclear data applied to the Monju reactor. In 2010 the Japan Atomic Energy Agency - JAEA - released a new set of nuclear data: JENDL-4.0. This new evaluation is expected to contain improved data on actinides and covariance matrices. Covariance matrices are a key point in quantification of uncertainties due to basic nuclear data. For sensitivity analysis, the well-established ERANOS [1] code was chosen because of its integrated modules that allow users to perform a sensitivity analysis of complex reactor geometries. A JENDL-4.0 cross-section library is not available for ERANOS. Therefore a cross-section library had to be made from the original nuclear data set, available as ENDF formatted files. This is achieved by using the following codes: NJOY, CALENDF, MERGE and GECCO in order to create a library for the ECCO cell code (part of ERANOS). In order to make sure of the accuracy of the new ECCO library, two benchmark experiments have been analyzed: the MZA and MZB cores of the MOZART program measured at the ZEBRA facility in the UK. These were chosen due to their similarity to the Monju core. Using the JENDL-4.0 ECCO library we have analyzed the criticality of Monju during the restart in 2010. We have obtained good agreement with the measured criticality. Perturbation calculations have been performed between JENDL-3.3 and JENDL-4.0 based models. The isotopes 239Pu, 238U, 241Am and 241Pu account for a major part of the observed differences. (authors)

  12. Potential Impact on Spatial Access to Surgery Under a Low Volume Pledge: A Population-Level Analysis of Patients Undergoing Pancreatectomy

    PubMed Central

    Fong, Zhi Ven; Loehrer, Andrew P; Castillo, Carlos Fernández-del; Bababekov, Yanik J; Jin, Ginger; Ferrone, Cristina R; Warshaw, Andrew L; Traeger, Lara N; Hutter, Matthew M; Lillemoe, Keith D; Chang, David C

    2018-01-01

    Background A minimum-volume policy restricting hospitals not meeting the threshold from performing complex surgery may increase travel burden and decrease spatial access to surgery. We aim to identify vulnerable populations that would be sensitive to an added travel burden. Methods We performed a retrospective analysis of the California Office of Statewide Health Planning and Development database for patients undergoing pancreatectomy from 2005 to 2014. Number of hospitals bypassed was used as a metric for travel. Patients bypassing fewer hospitals were deemed to be more sensitive to an added travel burden. Results There were 13,374 patients who underwent a pancreatectomy, of which 2,368 (17.7%) were non-bypassers. On unadjusted analysis, patients >80 year old travelled less than their younger counterparts, bypassing a mean of 10.9 ± 9.5 hospitals compared to 14.2 ± 21.3 hospitals bypassed by the 35–49 year old age group (p<0.001). Racial minorities travelled less when compared to Non-Hispanic Whites (p<0.001). Patients identifying their payer status as self-pay (8.9 ± 15.6 hospitals bypassed) and Medicaid (10.1 ± 17.2 hospitals bypassed) also travelled less when compared to patients with private insurance (13.8 ± 20.4 hospitals bypassed, p<0.001). On multivariate analysis, advanced age, racial minority and patients with self-pay or Medicaid payer status were independently associated with increased sensitivity to an added travel burden. Conclusion In patients undergoing pancreatectomy, the elderly, racial minorities and patients with self-pay or Medicaid payer status were associated with an increased sensitivity to an added travel burden. This vulnerable cohort may be disproportionately affected by a minimum-volume policy. PMID:28504112

  13. A New Framework for Effective and Efficient Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin

    2015-04-01

    Earth and Environmental Systems (EES) models are essential components of research, development, and decision-making in science and engineering disciplines. With continuous advances in understanding and computing power, such models are becoming more complex with increasingly more factors to be specified (model parameters, forcings, boundary conditions, etc.). To facilitate better understanding of the role and importance of different factors in producing the model responses, the procedure known as 'Sensitivity Analysis' (SA) can be very helpful. Despite the availability of a large body of literature on the development and application of various SA approaches, two issues continue to pose major challenges: (1) Ambiguous Definition of Sensitivity - Different SA methods are based in different philosophies and theoretical definitions of sensitivity, and can result in different, even conflicting, assessments of the underlying sensitivities for a given problem, (2) Computational Cost - The cost of carrying out SA can be large, even excessive, for high-dimensional problems and/or computationally intensive models. In this presentation, we propose a new approach to sensitivity analysis that addresses the dual aspects of 'effectiveness' and 'efficiency'. By effective, we mean achieving an assessment that is both meaningful and clearly reflective of the objective of the analysis (the first challenge above), while by efficiency we mean achieving statistically robust results with minimal computational cost (the second challenge above). Based on this approach, we develop a 'global' sensitivity analysis framework that efficiently generates a newly-defined set of sensitivity indices that characterize a range of important properties of metric 'response surfaces' encountered when performing SA on EES models. Further, we show how this framework embraces, and is consistent with, a spectrum of different concepts regarding 'sensitivity', and that commonly-used SA approaches (e.g., Sobol, Morris, etc.) are actually limiting cases of our approach under specific conditions. Multiple case studies are used to demonstrate the value of the new framework. The results show that the new framework provides a fundamental understanding of the underlying sensitivities for any given problem, while requiring orders of magnitude fewer model runs.

  14. System cost performance analysis (study 2.3). Volume 1: Executive summary. [unmanned automated payload programs and program planning]

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.

  15. Mixed-bed ion exchange chromatography employing a salt-free pH gradient for improved sensitivity and compatibility in MudPIT.

    PubMed

    Mommen, Geert P M; Meiring, Hugo D; Heck, Albert J R; de Jong, Ad P J M

    2013-07-16

    In proteomics, comprehensive analysis of peptide mixtures necessitates multiple dimensions of separation prior to mass spectrometry analysis to reduce sample complexity and increase the dynamic range of analysis. The main goal of this work was to improve the performance of (online) multidimensional protein identification technology (MudPIT) in terms of sensitivity, compatibility and recovery. The method employs weak anion and strong cation mixed-bed ion exchange chromatography (ACE) in the first separation dimension and reversed phase chromatography (RP) in the second separation dimension (Motoyama et al., Anal. Chem. 2007, 79, 3623-34). We demonstrated that the chromatographic behavior of peptides in ACE chromatography depends on both the WAX/SCX mixing ratio and the ionic strength of the mobile phase system. This property allowed us to replace the conventional salt gradient with a (discontinuous) salt-free pH gradient. First-dimension separation of peptides was accomplished with mixtures of aqueous formic acid and dimethylsulfoxide of increasing concentration. The overall performance of this mobile phase system was found to be comparable to ammonium acetate buffers in application to ACE chromatography, but it clearly outperformed strong cation exchange for use in first-dimension peptide separation. The dramatically improved compatibility between (salt-free) ion exchange chromatography and reversed phase chromatography-mass spectrometry allowed us to downscale the dimensions of the RP analytical column to 25 μm i.d. for an additional 2- to 3-fold improvement in performance compared to current technology. The achieved levels of sensitivity, orthogonality, and compatibility demonstrate the potential of salt-free ACE MudPIT for the ultrasensitive, multidimensional analysis of very modest amounts of sample material.

  16. Sensitive determination of thiols in wine samples by a stable isotope-coded derivatization reagent d0/d4-acridone-10-ethyl-N-maleimide coupled with high-performance liquid chromatography-electrospray ionization-tandem mass spectrometry analysis.

    PubMed

    Lv, Zhengxian; You, Jinmao; Lu, Shuaimin; Sun, Weidi; Ji, Zhongyin; Sun, Zhiwei; Song, Cuihua; Chen, Guang; Li, Guoliang; Hu, Na; Zhou, Wu; Suo, Yourui

    2017-03-31

    As key aroma compounds, varietal thiols are crucial odorants responsible for the flavor of wines. Quantitative analysis of thiols can provide crucial information on the aroma profiles of different wine styles. In this study, a rapid and sensitive method for the simultaneous determination of six thiols in wine using d0/d4-acridone-10-ethyl-N-maleimide (d0/d4-AENM) as a stable isotope-coded derivatization reagent (SICD) by high performance liquid chromatography-electrospray ionization-tandem mass spectrometry (HPLC-ESI-MS/MS) has been developed. Quantification of thiols was performed using d4-AENM labeled thiols as the internal standards (IS), followed by stable isotope dilution HPLC-ESI-MS/MS analysis. The AENM derivatization combined with multiple reaction monitoring (MRM) not only allowed trace analysis of thiols owing to the extremely high sensitivity, but also efficiently corrected for matrix effects during HPLC-MS/MS and for fluctuations in MS/MS signal intensity due to the instrument. The obtained internal standard calibration curves for the six thiols were linear over the range of 25-10,000 pmol/L (R2 ≥ 0.9961). Detection limits (LODs) for most of the analytes were below 6.3 pmol/L. The proposed method was successfully applied to the simultaneous determination of six kinds of thiols in wine samples with precisions ≤3.5% and recoveries ≥78.1%. In conclusion, the developed method is expected to be a promising tool for the detection of trace thiols in wine and in other complex matrices. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. The combination of pH monitoring in the most distal esophagus and symptom association analysis markedly improves the clinical value of esophageal pH tests.

    PubMed

    Hall, Mats Guerrero Garcia; Wenner, Jörgen; Öberg, Stefan

    2016-01-01

    The poor sensitivity of esophageal pH monitoring substantially limits the clinical value of the test. The aim of this study was to compare the diagnostic accuracy of esophageal pH monitoring and symptom association analysis performed at the conventional level with that obtained in the most distal esophagus. Eighty-two patients with typical reflux symptoms and 49 asymptomatic subjects underwent dual 48-h pH monitoring with the electrodes positioned immediately above, and 6 cm above, the squamo-columnar junction (SCJ). The degree of esophageal acid exposure and the temporal relationship between reflux events and symptoms were evaluated. The sensitivity of pH recording and the diagnostic yield of the Symptom Association Probability (SAP) were significantly higher for pH monitoring performed at the distal compared with the conventional level (82% versus 65%, p<0.001 and 74% versus 62%, p<0.001, respectively). The greatest improvement was observed in patients with non-erosive disease. In this group, the sensitivity increased from 46% at the standard level to 66% immediately above the SCJ, and with the combination of a positive SAP as a marker for a positive pH test, the diagnostic yield further increased to 94%. The diagnostic accuracy of esophageal pH monitoring in the most distal esophagus is superior to that performed at the conventional level, and it is further improved by the combination with symptom association analysis. pH monitoring with the pH electrode positioned immediately above the SCJ should be introduced in clinical practice and always combined with symptom association analysis.

  18. Defining the optimal therapy sequence in synchronous resectable liver metastases from colorectal cancer: a decision analysis approach.

    PubMed

    Van Dessel, E; Fierens, K; Pattyn, P; Van Nieuwenhove, Y; Berrevoet, F; Troisi, R; Ceelen, W

    2009-01-01

    Approximately 5%-20% of colorectal cancer (CRC) patients present with synchronous potentially resectable liver metastatic disease. Preclinical and clinical studies suggest a benefit of the 'liver first' approach, i.e. resection of the liver metastasis followed by resection of the primary tumour. A formal decision analysis may support a rational choice between several therapy options. Survival and morbidity data were retrieved from relevant clinical studies identified by a Web of Science search. Data were entered into decision analysis software (TreeAge Pro 2009, Williamstown, MA, USA). Transition probabilities including the risk of death from complications or disease progression associated with individual therapy options were entered into the model. Sensitivity analysis was performed to evaluate the model's validity under a variety of assumptions. The result of the decision analysis confirms the superiority of the 'liver first' approach. Sensitivity analysis demonstrated that this assumption is valid on condition that the mortality associated with the hepatectomy first is < 4.5%, and that the mortality of colectomy performed after hepatectomy is < 3.2%. The results of this decision analysis suggest that, in patients with synchronous resectable colorectal liver metastases, the 'liver first' approach is to be preferred. Randomized trials will be needed to confirm the results of this simulation based outcome.
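
    The threshold-style conclusions of such an analysis can be reproduced in miniature with a two-strategy expected-value tree and a one-way sensitivity sweep, as sketched below. All probabilities and survival payoffs are illustrative placeholders, not the study's inputs or its 4.5%/3.2% thresholds, and the actual analysis was built in TreeAge rather than code; only the mortality of a hepatectomy performed first is swept.

      # Minimal sketch: one-way sensitivity analysis on a two-branch decision tree.
      import numpy as np

      def strategy_value(p_death_op1, p_progression, p_death_op2,
                         survival_completed=4.0, survival_not=1.5):
          # Expected survival (years): survive operation 1, avoid progression
          # in the interval, then survive operation 2.
          p_complete = (1 - p_death_op1) * (1 - p_progression) * (1 - p_death_op2)
          return p_complete * survival_completed + (1 - p_complete) * survival_not

      for m_hep_first in np.arange(0.0, 0.16, 0.01):
          liver_first = strategy_value(m_hep_first, 0.05, 0.02)  # hepatectomy first
          primary_first = strategy_value(0.02, 0.15, 0.02)       # higher interval progression risk
          preferred = "liver first" if liver_first > primary_first else "primary first"
          print(f"hepatectomy-first mortality {m_hep_first:.2f}: prefer {preferred}")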

  19. Characterization of Signal Quality Monitoring Techniques for Multipath Detection in GNSS Applications.

    PubMed

    Pirsiavash, Ali; Broumandan, Ali; Lachapelle, Gérard

    2017-07-05

    The performance of Signal Quality Monitoring (SQM) techniques under different multipath scenarios is analyzed. First, SQM variation profiles are investigated as critical requirements in evaluating the theoretical performance of SQM metrics. The sensitivity and effectiveness of SQM approaches for multipath detection and mitigation are then defined and analyzed by comparing SQM profiles and multipath error envelopes for different discriminators. The analytical discussion includes two discriminator strategies, namely the narrow correlator and high resolution correlator techniques, for the BPSK(1) and BOC(1,1) signaling schemes. Data analysis is also carried out for static and kinematic scenarios to validate the SQM profiles and examine SQM performance in actual multipath environments. Results show that although SQM is sensitive to medium- and long-delay multipath, its effectiveness in mitigating these ranges of multipath errors varies based on tracking strategy and signaling scheme. For short-delay multipath scenarios, the multipath effect on pseudorange measurements remains mostly undetected due to the low sensitivity of SQM metrics.
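
    As background on what an SQM metric measures, the sketch below computes a simple early/late/prompt "ratio" metric on an idealised BPSK(1) triangular correlation peak, with and without a single multipath ray. The correlator spacing, multipath amplitude and delay are illustrative assumptions, and for simplicity the tracking point is held at the true code phase rather than re-converged by a discriminator.

      # Minimal sketch: a correlator-based SQM ratio metric under multipath.
      import numpy as np

      def corr(tau, chip=1.0):
          # Ideal triangular autocorrelation of a BPSK spreading code.
          return np.maximum(0.0, 1.0 - np.abs(tau) / chip)

      def composite(tau, mp_amp=0.0, mp_delay=0.0):
          return corr(tau) + mp_amp * corr(tau - mp_delay)

      def ratio_metric(spacing=0.1, mp_amp=0.0, mp_delay=0.0):
          early = composite(-spacing / 2, mp_amp, mp_delay)
          late = composite(+spacing / 2, mp_amp, mp_delay)
          prompt = composite(0.0, mp_amp, mp_delay)
          return (early + late) / prompt     # peak distortion shifts this ratio

      print("clean    :", round(ratio_metric(), 4))
      print("multipath:", round(ratio_metric(mp_amp=0.5, mp_delay=0.25), 4))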

  20. A conflict analysis of 4D descent strategies in a metered, multiple-arrival route environment

    NASA Technical Reports Server (NTRS)

    Izumi, K. H.; Harris, C. S.

    1990-01-01

    A conflict analysis was performed on multiple arrival traffic at a typical metered airport. The Flow Management Evaluation Model (FMEM) was used to simulate arrival operations using Denver Stapleton's arrival route structure. Sensitivities of conflict performance to three different 4-D descent strategies (clean-idle Mach/constant airspeed (CAS), constant descent angle Mach/CAS, and energy optimal) were examined for three traffic mixes represented by those found at Denver Stapleton, John F. Kennedy and typical en route metering (ERM) airports. The Monte Carlo technique was used to generate simulation entry point times. Analysis results indicate that the clean-idle descent strategy offers the best compromise in overall performance. Performance measures primarily include susceptibility to conflict and conflict severity. Fuel usage performance is extrapolated from previous descent strategy studies.

  1. Comparison of DWI and 18F-FDG PET/CT for assessing preoperative N-staging in gastric cancer: evidence from a meta-analysis.

    PubMed

    Luo, Mingxu; Song, Hongmei; Liu, Gang; Lin, Yikai; Luo, Lintao; Zhou, Xin; Chen, Bo

    2017-10-13

    The diagnostic values of diffusion weighted imaging (DWI) and 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) for N-staging of gastric cancer (GC) were identified and compared. After a systematic search to identify relevant articles, meta-analysis was used to summarize the sensitivities, specificities, and areas under curves (AUCs) for DWI and PET/CT. To better understand the diagnostic utility of DWI and PET/CT for N-staging, the performance of multi-detector computed tomography (MDCT) was used as a reference. Fifteen studies were analyzed. The pooled sensitivity, specificity, and AUC with 95% confidence intervals of DWI were 0.79 (0.73-0.85), 0.69 (0.61-0.77), and 0.81 (0.77-0.84), respectively. For PET/CT, the corresponding values were 0.52 (0.39-0.64), 0.88 (0.61-0.97), and 0.66 (0.62-0.70), respectively. Comparison of the two techniques revealed that DWI had higher sensitivity and AUC, but no difference in specificity. DWI exhibited higher sensitivity but lower specificity than MDCT, and 18F-FDG PET/CT had lower sensitivity and equivalent specificity. Overall, DWI performed better than 18F-FDG PET/CT for preoperative N-staging in GC. When the efficacy of MDCT was taken as a reference, DWI represented a complementary imaging technique, while 18F-FDG PET/CT had limited utility for preoperative N-staging.

  2. The Role of Cleaning Products in Epidemic Allergic Contact Dermatitis to Methylchloroisothiazolinone/Methylisothiazolinone.

    PubMed

    Marrero-Alemán, Gabriel; Saavedra Santana, Pedro; Liuti, Federica; Hernández, Noelia; López-Jiménez, Esmeralda; Borrego, Leopoldo

    Sensitivity to methylchloroisothiazolinone (MCI)/methylisothiazolinone (MI) has increased rapidly over recent years. This increase is mainly related to the extensive use of high concentrations of MI in cosmetic products, although a growing number of cases of occupational allergic contact dermatitis are caused by MCI/MI. The aim of this study was to examine the association between the increase in MCI/MI sensitization and the work performed by the patients in our area. A retrospective study was undertaken of the records of a total of 1179 patients who had undergone contact skin patch tests for MCI/MI from January 2005 to December 2015. A multivariate logistic regression analysis was performed to identify the factors independently associated with sensitivity to MCI/MI. A constant increase in MCI/MI sensitization was observed over the observation period. The only work associated with a significant increase in the prevalence of MCI/MI sensitization was cleaning, with 38.5% of the cleaning professionals with MCI/MI sensitization consulting for cosmetics-related dermatitis. Occupational sensitization to MCI/MI in cleaning professionals is worryingly increasing. This, in turn, could possibly account for many cases of cosmetics-associated contact dermatitis. Our findings suggest that a review of the regulations with regard to isothiazolinone concentrations in industrial and household detergents is necessary.

  3. Propensity Score Analysis Comparing Videothoracoscopic Lobectomy With Thoracotomy: A French Nationwide Study.

    PubMed

    Pagès, Pierre-Benoit; Delpy, Jean-Philippe; Orsini, Bastien; Gossot, Dominique; Baste, Jean-Marc; Thomas, Pascal; Dahan, Marcel; Bernard, Alain

    2016-04-01

    Video-assisted thoracoscopic surgery (VATS) lobectomy has recently become the recommended approach for stage I non-small cell lung cancer. However, these guidelines are not based on any large randomized controlled trial. Our study used propensity scores and a sensitivity analysis to compare VATS lobectomy with open thoracotomy. From 2005 to 2012, 24,811 patients (95.1%) were operated on by open thoracotomy and 1,278 (4.9%) by VATS. The end points were 30-day postoperative death, postoperative complications, hospital stay, overall survival, and disease-free survival. Two propensity score analyses were performed, matching and inverse probability of treatment weighting, together with one sensitivity analysis to unmask potential hidden bias. A subgroup analysis was performed to compare "high-risk" with "low-risk" patients. Results are reported as odds ratios or hazard ratios and their 95% confidence intervals. Postoperative death was not significantly reduced by VATS in either analysis. Concerning postoperative complications, VATS significantly decreased the occurrence of atelectasis and pneumopathy with both analysis methods, but there were no differences in the occurrence of other postoperative complications. VATS did not provide a benefit for high-risk patients. The VATS approach decreased the hospital length of stay by 2.4 days (95% confidence interval, -1.7 to -3 days) to 4.68 days (95% confidence interval, -8.5 to 0.9 days), depending on the analysis method. Overall survival and disease-free survival were not influenced by the surgical approach. The sensitivity analysis showed potential biases. The results must be interpreted carefully because of the differences observed according to the propensity score method used. A multicenter randomized controlled trial is necessary to limit the biases. Copyright © 2016 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  4. Performance bounds on parallel self-initiating discrete-event simulations

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1990-01-01

    The use of massively parallel architectures to execute discrete-event simulations of what are termed self-initiating models is considered. A logical process in a self-initiating model schedules its own state re-evaluation times, independently of any other logical process, and sends its new state to other logical processes following the re-evaluation. The interest is in the effects of that communication on synchronization. The performance of various synchronization protocols is considered by deriving upper and lower bounds on optimal performance, upper bounds on Time Warp's performance, and lower bounds on the performance of a new conservative protocol. The analysis of Time Warp includes the overhead costs of state-saving and rollback. The analysis points out sufficient conditions for the conservative protocol to outperform Time Warp. The analysis also quantifies the sensitivity of performance to message fan-out, lookahead ability, and the probability distributions underlying the simulation.

  5. EUS for the staging of gastric cancer: a meta-analysis.

    PubMed

    Mocellin, Simone; Marchet, Alberto; Nitti, Donato

    2011-06-01

    The role of EUS in the locoregional staging of gastric carcinoma is undefined. We aimed to comprehensively review and quantitatively summarize the available evidence on the staging performance of EUS. We systematically searched the MEDLINE, Cochrane, CANCERLIT, and EMBASE databases for relevant studies published until July 2010. Formal meta-analysis of diagnostic accuracy parameters was performed by using a bivariate random-effects model. Fifty-four studies enrolling 5601 patients with gastric cancer undergoing disease staging with EUS were eligible for the meta-analysis. EUS staging accuracy across eligible studies was measured by computing overall sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), and diagnostic odds ratio (DOR). EUS can differentiate T1-2 from T3-4 gastric cancer with high accuracy, with overall sensitivity, specificity, PLR, NLR, and DOR of 0.86 (95% CI, 0.81-0.90), 0.91 (95% CI, 0.89-0.93), 9.8 (95% CI, 7.5-12.8), 0.15 (95% CI, 0.11-0.21), and 65 (95% CI, 41-105), respectively. In contrast, the diagnostic performance of EUS for lymph node status is less reliable, with overall sensitivity, specificity, PLR, NLR, and DOR of 0.69 (95% CI, 0.63-0.74), 0.84 (95% CI, 0.81-0.88), 4.4 (95% CI, 3.6-5.4), 0.37 (95% CI, 0.32-0.44), and 12 (95% CI, 9-16), respectively. Results regarding single T categories (including T1 substages) and Bayesian nomograms to calculate posttest probabilities for any target condition prevalence are also provided. Statistical heterogeneity was generally high; unfortunately, subgroup analysis did not identify a consistent source of the heterogeneity. Our results support the use of EUS for the locoregional staging of gastric cancer, which can affect the therapeutic management of these patients. However, clinicians must be aware of the performance limits of this staging tool. Copyright © 2011 American Society for Gastrointestinal Endoscopy. Published by Mosby, Inc. All rights reserved.

  6. Photobiomodulation in the Prevention of Tooth Sensitivity Caused by In-Office Dental Bleaching. A Randomized Placebo Preliminary Study.

    PubMed

    Calheiros, Andrea Paiva Corsetti; Moreira, Maria Stella; Gonçalves, Flávia; Aranha, Ana Cecília Correa; Cunha, Sandra Ribeiro; Steiner-Oliveira, Carolina; Eduardo, Carlos de Paula; Ramalho, Karen Müller

    2017-08-01

    To analyze the effect of photobiomodulation in the prevention of tooth sensitivity after in-office dental bleaching. Tooth sensitivity is a common clinical consequence of dental bleaching, and therapies for the prevention of sensitivity have been investigated in the literature. This study was developed as a randomized, placebo-controlled, blind clinical trial. Fifty patients were selected and randomly divided into five groups (n = 10 per group): (1) control, (2) placebo, (3) laser before bleaching, (4) laser after bleaching, and (5) laser before and after bleaching. Irradiation was performed perpendicularly, in contact, on each tooth for 10 sec per point at two points. The first point was positioned in the middle of the tooth crown and the second in the periapical region. Photobiomodulation was applied using the following parameters: 780 nm, 40 mW, 10 J/cm2, 0.4 J per point. Pain was analyzed before, immediately after, and for seven subsequent days after bleaching. Patients were instructed to report pain using the scale: 0 = no tooth sensitivity, 1 = gentle sensitivity, 2 = moderate sensitivity, 3 = severe sensitivity. There were no statistical differences between groups at any time (p > 0.05). More studies, with other parameters and different methods of tooth sensitivity analysis, should be performed to complement these results. Within the limitations of the present study, the photobiomodulation laser parameters tested were not effective in preventing tooth sensitivity after in-office bleaching.

  7. Analysis of transient fission gas behaviour in oxide fuel using BISON and TRANSURANUS

    NASA Astrophysics Data System (ADS)

    Barani, T.; Bruschi, E.; Pizzocri, D.; Pastore, G.; Van Uffelen, P.; Williamson, R. L.; Luzzi, L.

    2017-04-01

    The modelling of fission gas behaviour is a crucial aspect of nuclear fuel performance analysis in view of the related effects on the thermo-mechanical performance of the fuel rod, which can be particularly significant during transients. In particular, experimental observations indicate that substantial fission gas release (FGR) can occur on a small time scale during transients (burst release). To accurately reproduce the rapid kinetics of the burst release process in fuel performance calculations, a model that accounts for non-diffusional mechanisms such as fuel micro-cracking is needed. In this work, we present and assess a model for transient fission gas behaviour in oxide fuel, which is applied as an extension of conventional diffusion-based models to introduce the burst release effect. The concept and governing equations of the model are presented, and the sensitivity of results to the newly introduced parameters is evaluated through an analytic sensitivity analysis. The model is assessed for application to integral fuel rod analysis by implementation in two structurally different fuel performance codes: BISON (multi-dimensional finite element code) and TRANSURANUS (1.5D code). Model assessment is based on the analysis of 19 light water reactor fuel rod irradiation experiments from the OECD/NEA IFPE (International Fuel Performance Experiments) database, all of which are simulated with both codes. The results point out an improvement in both the quantitative predictions of integral fuel rod FGR and the qualitative representation of the FGR kinetics with the transient model relative to the canonical, purely diffusion-based models of the codes. The overall quantitative improvement of the integral FGR predictions in the two codes is comparable. Moreover, calculated radial profiles of xenon concentration after irradiation are investigated and compared to experimental data, illustrating the underlying representation of the physical mechanisms of burst release.

  8. Profitability analysis of a femtosecond laser system for cataract surgery using a fuzzy logic approach

    PubMed Central

    Trigueros, José Antonio; Piñero, David P; Ismail, Mahmoud M

    2016-01-01

    AIM To define the financial and management conditions required to introduce a femtosecond laser system for cataract surgery in a clinic, using a fuzzy logic approach. METHODS In the simulation performed in the current study, the costs associated with the acquisition and use of a commercially available femtosecond laser platform for cataract surgery (VICTUS, TECHNOLAS Perfect Vision GmbH, Bausch & Lomb, Munich, Germany) during a period of 5y were considered. A sensitivity analysis was performed considering such costs and the countable amortization of the system during this 5y period. Furthermore, a fuzzy logic analysis was used to obtain an estimation of the income associated with each femtosecond laser-assisted cataract surgery (G). RESULTS According to the sensitivity analysis, the femtosecond laser system under evaluation can be profitable if 1400 cataract surgeries are performed per year and if each surgery can be invoiced at more than $500. In contrast, the fuzzy logic analysis indicated that the patient would have to pay more per surgery, between $661.8 and $667.4, without considering the cost of the intraocular lens (IOL). CONCLUSION Profitability of femtosecond laser systems for cataract surgery can be achieved after a detailed financial analysis, especially in those centers with large volumes of patients. The cost of the surgery for patients should be adapted to the real flow of patients with the ability to pay a reasonable range of cost. PMID:27500115

  9. Three-dimensional texture analysis of contrast enhanced CT images for treatment response assessment in Hodgkin lymphoma: Comparison with F-18-FDG PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knogler, Thomas; El-Rabadi, Karem; Weber, Michael

    2014-12-15

    Purpose: To determine the diagnostic performance of three-dimensional (3D) texture analysis (TA) of contrast-enhanced computed tomography (CE-CT) images for treatment response assessment in patients with Hodgkin lymphoma (HL), compared with F-18-fludeoxyglucose (FDG) positron emission tomography/CT. Methods: 3D TA of 48 lymph nodes in 29 patients was performed on venous-phase CE-CT images before and after chemotherapy. All lymph nodes showed pathologically elevated FDG uptake at baseline. A stepwise logistic regression with forward selection was performed to identify classic CT parameters and texture features (TF) that enable the separation of complete response (CR) and persistent disease. Results: The TF fraction of image in runs, calculated for the 45° direction, was able to correctly identify CR with an accuracy of 75%, a sensitivity of 79.3%, and a specificity of 68.4%. Classical CT features achieved an accuracy of 75%, a sensitivity of 86.2%, and a specificity of 57.9%, whereas the combination of TF and CT imaging achieved an accuracy of 83.3%, a sensitivity of 86.2%, and a specificity of 78.9%. Conclusions: 3D TA of CE-CT images is potentially useful to identify nodal residual disease in HL, with a performance comparable to that of classical CT parameters. Best results are achieved when TA and classical CT features are combined.

  10. [Comparative evaluation of the sensitivity of Acinetobacter to colistin, using the prediffusion and minimum inhibitory concentration methods: detection of heteroresistant isolates].

    PubMed

    Herrera, Melina E; Mobilia, Liliana N; Posse, Graciela R

    2011-01-01

    The objective of this study was to perform a comparative evaluation of the prediffusion and minimum inhibitory concentration (MIC) methods for the detection of sensitivity to colistin, and to detect Acinetobacter baumannii-calcoaceticus complex (ABC) isolates heteroresistant to colistin. We studied 75 isolates of ABC recovered from clinically significant samples obtained from various centers. Sensitivity to colistin was determined by prediffusion as well as by MIC. All the isolates were sensitive to colistin, with MIC = 2 µg/ml. The results were analyzed by dispersion graph and linear regression analysis, revealing that the prediffusion method did not correlate with the MIC values for isolates sensitive to colistin (r² = 0.2017). Heteroresistance to colistin was assessed by plating efficiency for all the isolates with the same initial MICs of 2, 1, and 0.5 µg/ml, which revealed 14 isolates with, in some cases, a greater than 8-fold increase in the MIC. When the sensitivity of these resistant colonies was determined by prediffusion, the resulting dispersion graph and linear regression analysis yielded an r² = 0.604, which revealed a correlation between the methodologies used.

  11. Barcoding T Cell Calcium Response Diversity with Methods for Automated and Accurate Analysis of Cell Signals (MAAACS)

    PubMed Central

    Sergé, Arnauld; Bernard, Anne-Marie; Phélipot, Marie-Claire; Bertaux, Nicolas; Fallet, Mathieu; Grenot, Pierre; Marguet, Didier; He, Hai-Tao; Hamon, Yannick

    2013-01-01

    We introduce a series of experimental procedures enabling sensitive calcium monitoring in T cell populations by confocal video-microscopy. Tracking and post-acquisition analysis was performed using Methods for Automated and Accurate Analysis of Cell Signals (MAAACS), a fully customized program that associates a high throughput tracking algorithm, an intuitive reconnection routine and a statistical platform to provide, at a glance, the calcium barcode of a population of individual T-cells. Combined with a sensitive calcium probe, this method allowed us to unravel the heterogeneity in shape and intensity of the calcium response in T cell populations and especially in naive T cells, which display intracellular calcium oscillations upon stimulation by antigen presenting cells. PMID:24086124

  12. Development of a Multilevel Optimization Approach to the Design of Modern Engineering Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Barthelemy, J. F. M.

    1983-01-01

    A general algorithm is proposed which carries out the design process iteratively, starting at the top of the hierarchy and proceeding downward. Each subproblem is optimized separately for fixed controls from higher-level subproblems. An optimum sensitivity analysis is then performed which determines the sensitivity of the subproblem design to changes in higher-level subproblem controls. The resulting sensitivity derivatives are used to construct constraints which force the controlling subproblems into choosing their own designs so as to improve the lower-level subproblem designs while satisfying their own constraints. The applicability of the proposed algorithm is demonstrated by devising a four-level hierarchy to perform the simultaneous aerodynamic and structural design of a high-performance sailplane wing for maximum cross-country speed. Finally, the concepts discussed are applied to the two-level minimum-weight structural design of the sailplane wing. The numerical experiments show that discontinuities in the sensitivity derivatives may delay convergence, but that the algorithm is robust enough to overcome these discontinuities and produce low-weight feasible designs, regardless of whether the optimization is started from the feasible space or the infeasible one.

  13. Performance of the new automated Abbott RealTime MTB assay for rapid detection of Mycobacterium tuberculosis complex in respiratory specimens.

    PubMed

    Chen, J H K; She, K K K; Kwong, T-C; Wong, O-Y; Siu, G K H; Leung, C-C; Chang, K-C; Tam, C-M; Ho, P-L; Cheng, V C C; Yuen, K-Y; Yam, W-C

    2015-09-01

    The automated high-throughput Abbott RealTime MTB real-time PCR assay has recently been launched for Mycobacterium tuberculosis complex (MTBC) clinical diagnosis. This study evaluated its performance. We first compared its diagnostic performance with the Roche Cobas TaqMan MTB assay on 214 clinical respiratory specimens. Prospective analysis of a total of 520 specimens was then performed to further evaluate the Abbott assay. The Abbott assay showed a lower limit of detection at 22.5 AFB/ml, which was more sensitive than the Cobas assay (167.5 AFB/ml). The two assays demonstrated a significant difference in diagnostic performance (McNemar's test; P = 0.0034), with the Abbott assay presenting a significantly higher area under the curve (AUC) than the Cobas assay (1.000 vs 0.880; P = 0.0002). The Abbott assay demonstrated extremely low PCR inhibition on clinical respiratory specimens. The automated Abbott assay required only a very short manual handling time (0.5 h), which could help to improve laboratory management. In the prospective analysis, the overall estimates for the sensitivity and specificity of the Abbott assay were both 100% among smear-positive specimens, whereas for smear-negative specimens they were 96.7% and 96.1%, respectively. No cross-reactivity with non-tuberculosis mycobacterial species was observed. The superior sensitivity of the Abbott assay for detecting MTBC in smear-negative specimens could further minimize the risk of false-negative MTBC detection. The new Abbott RealTime MTB assay has good diagnostic performance and can be a useful tool for rapid MTBC detection in clinical laboratories.

  14. Statistical performance of image cytometry for DNA, lipids, cytokeratin, & CD45 in a model system for circulating tumor cell detection.

    PubMed

    Futia, Gregory L; Schlaepfer, Isabel R; Qamar, Lubna; Behbakht, Kian; Gibson, Emily A

    2017-07-01

    Detection of circulating tumor cells (CTCs) in a blood sample is limited by the sensitivity and specificity of the biomarker panel used to identify CTCs over other blood cells. In this work, we present Bayesian theory that shows how test sensitivity and specificity set the rarity of cell that a test can detect. We calculated the sensitivity and specificity of our image cytometry biomarker panel by testing on pure disease-positive (D+) populations (MCF7 cells) and pure disease-negative (D-) populations (leukocytes). In this system, we performed multi-channel confocal fluorescence microscopy to image biomarkers of DNA, lipids, CD45, and cytokeratin. Using custom software, we segmented our confocal images into regions of interest consisting of individual cells and computed the image metrics of total signal, second spatial moment, spatial frequency second moment, and the product of the spatial-spatial frequency moments. We present our analysis of these 16 features. The best performing of the 16 features produced an average separation of three standard deviations between D+ and D- and an average detectable rarity of ∼1 in 200. We performed multivariable regression and feature selection to combine multiple features for increased performance and showed an average separation of seven standard deviations between the D+ and D- populations, making the average detectable rarity ∼1 in 480. Histograms and receiver operating characteristic (ROC) curves for these features and regressions are presented. We conclude that simple regression analysis holds promise to further improve the separation of rare cells in cytometry applications. © 2017 International Society for Advancement of Cytometry.
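
    The Bayesian point above, that sensitivity and specificity bound the detectable rarity, follows directly from the positive predictive value. The short sketch below makes that explicit; the panel performance numbers are illustrative, not the measured values from this study.

      # Minimal sketch: positive predictive value versus target-cell rarity.
      def ppv(sensitivity, specificity, prevalence):
          tp = sensitivity * prevalence
          fp = (1 - specificity) * (1 - prevalence)
          return tp / (tp + fp)

      for rarity in (1e-2, 1e-3, 1e-4, 1e-6):    # e.g. 1 target cell per 10^6 others
          print(f"prevalence {rarity:.0e}: PPV = {ppv(0.99, 0.999, rarity):.3f}")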

  15. A Small Range Six-Axis Accelerometer Designed with High Sensitivity DCB Elastic Element

    PubMed Central

    Sun, Zhibo; Liu, Jinhao; Yu, Chunzhan; Zheng, Yili

    2016-01-01

    This paper describes a small-range six-axis accelerometer (the measurement range of the sensor is ±g) with a high-sensitivity DCB (Double Cantilever Beam) elastic element. The sensor is developed based on a parallel mechanism because of its reliability. The accuracy of such sensors is affected by their sensitivity characteristics. To improve the sensitivity, a DCB structure is applied as the elastic element. Through dynamic analysis, the dynamic model of the accelerometer is established using the Lagrange equation, and the mass matrix and stiffness matrix are obtained by a partial derivative calculation and a conservative congruence transformation, respectively. By simplifying the structure of the accelerometer, a model of the free vibration is achieved, and the parameters of the sensor are designed based on the model. Through stiffness analysis of the DCB structure, the deflection curve of the beam is calculated. Compared with the result obtained using a finite element analysis simulation in ANSYS Workbench, the coincidence rate of the maximum deflection is 89.0% along the x-axis, 88.3% along the y-axis and 87.5% along the z-axis. Through strain analysis of the DCB elastic element, the sensitivity of the beam is obtained. According to the experimental results, the accuracy of the theoretical analysis is 90.4% along the x-axis, 74.9% along the y-axis and 78.9% along the z-axis. The measurement errors of the linear accelerations ax, ay and az in the experiments are 2.6%, 0.6% and 1.31%, respectively. The experiments prove that an accelerometer with a DCB elastic element exhibits excellent sensitivity and precision characteristics. PMID:27657089

  16. Fluorescence spectroscopy for neoplasms control

    NASA Astrophysics Data System (ADS)

    Bratchenko, I. A.; Kristoforova, Yu. A.; Myakinin, O. O.; Artemyev, D. N.; Kozlov, S. V.; Moryatov, A. A.; Zakharov, V. P.

    2016-04-01

    An investigation of malignant skin tumor diagnosis was performed using two setups for native tissue fluorescence measurement in the visible and near-infrared regions. A combined fluorescence analysis of malignant melanomas and basal cell carcinomas of the skin was performed. Autofluorescence spectra of normal skin and oncological pathologies stimulated by 457 nm and 785 nm lasers were registered for 74 skin tissue samples. Spectra of 10 melanomas and 27 basal cell carcinomas were registered ex vivo. Skin tumor analysis was based on the intensity and curvature of the autofluorescence spectra, used to assess porphyrins, lipo-pigments, flavins and melanin. Separation of melanomas and basal cell carcinomas was performed on the basis of discriminant analysis. The overall accuracy of separating basal cell carcinomas and malignant melanomas in the current study reached 86.5%, with 70% sensitivity and 92.6% specificity.

  17. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

    The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison analysis of the 2006 version against the existing 1983 CCAFS RRA database. Assessments of the Space Shuttle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  18. Simultaneous Determination of Eight Hypotensive Drugs of Various Chemical Groups in Pharmaceutical Preparations by HPLC-DAD.

    PubMed

    Stolarczyk, Mariusz; Hubicka, Urszula; Żuromska-Witek, Barbara; Krzek, Jan

    2015-01-01

    A new sensitive, simple, rapid, and precise HPLC method with diode array detection has been developed for the separation and simultaneous determination of hydrochlorothiazide, furosemide, torasemide, losartan, quinapril, valsartan, spironolactone, and canrenone in combined pharmaceutical dosage forms. The chromatographic analysis of the tested drugs was performed on an ACE C18, 100 Å, 250×4.6 mm, 5 μm particle size column with a 0.05 M phosphate buffer (pH=3.00)-acetonitrile-methanol (30+20+50 v/v/v) mobile phase at a flow rate of 1.0 mL/min. The column was thermostatted at 25°C. UV detection was performed at 230 nm. The analysis time was 10 min. The developed method meets the acceptance criteria for specificity, linearity, sensitivity, accuracy, and precision. The proposed method was successfully applied to the determination of the studied drugs in the selected combined dosage forms.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Lijuan; Gonder, Jeff; Burton, Evan

    This study evaluates the costs and benefits associated with the use of a stationary-wireless-power-transfer-enabled plug-in hybrid electric bus and determines its cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep was performed over many different battery sizes, charging power levels, and numbers/locations of bus stop charging stations. The net present cost was calculated for each vehicle design and provided the basis for design evaluation. In all cases, given the assumed economic conditions, the conventional bus achieved the lowest net present cost, while the optimal plug-in hybrid electric bus scenario beat out the hybrid electric comparison scenario. The study also performed parameter sensitivity analysis under favorable and unfavorable market penetration assumptions. The analysis identifies fuel saving opportunities with plug-in hybrid electric bus scenarios at cumulative net present costs not too dissimilar from those of conventional buses.

  20. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model at all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The main issues in performing such analyses efficiently on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
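
    To illustrate the idea of ordering grid points so that consecutive model solves stay close to each other, the sketch below generates a simple "snake" (boustrophedon) traversal of a multi-dimensional grid. It is a generic Hamiltonian path on the grid graph, not the paper's mixed-radix Gray code or quasi-spiral heuristic.

    ```python
    # Minimal sketch: a snake traversal of a multi-dimensional grid. Consecutive
    # points differ in exactly one coordinate by one step, so each model solve can
    # start from a nearby converged solution. Generic illustration only.
    def snake_order(shape):
        """Return grid indices ordered so neighbours differ in one coordinate by 1."""
        if len(shape) == 1:
            return [(i,) for i in range(shape[0])]
        sub = snake_order(shape[1:])
        path = []
        for i in range(shape[0]):
            block = sub if i % 2 == 0 else sub[::-1]   # reverse every other slab
            path.extend((i,) + p for p in block)
        return path

    path = snake_order((3, 4, 2))
    # Verify the one-step property between consecutive grid points.
    assert all(sum(abs(a - b) for a, b in zip(p, q)) == 1
               for p, q in zip(path, path[1:]))
    print(len(path), "grid points visited with one-step moves only")
    ```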

  1. Simple and sensitive analysis of long-chain free fatty acids in milk by fluorogenic derivatization and high-performance liquid chromatography.

    PubMed

    Lu, Chi-Yu; Wu, Hsin-Lung; Chen, Su-Hwei; Kou, Hwang-Shang; Wu, Shou-Mei

    2002-01-02

    A highly sensitive high-performance liquid chromatography (HPLC) method is described for the simultaneous determination of some important saturated and unsaturated fatty acids in milk, including lauric (dodecanoic), myristic (tetradecanoic), palmitic (hexadecanoic), stearic (octadecanoic), palmitoleic (hexadecenoic), oleic (octadecenoic), and linoleic (octadecadienoic) acids. The fatty acids were fluorogenically derivatized with 2-(2-naphthoxy)ethyl 2-(piperidino)ethanesulfonate (NOEPES) as their naphthoxyethyl derivatives. The resulting derivatives were separated by isocratic HPLC and monitored with a fluorometric detector (λex = 235 nm, λem = 350 nm). The fatty acids in milk were extracted with toluene, and the extract containing the fatty acids was directly derivatized with NOEPES without solvent replacement. Determination of long-chain free fatty acids in milk is feasible by a standard addition method. A small amount of milk product, 10 μL, is sufficient for the analysis.

  2. A retrospective analysis to identify the factors affecting infection in patients undergoing chemotherapy.

    PubMed

    Park, Ji Hyun; Kim, Hyeon-Young; Lee, Hanna; Yun, Eun Kyoung

    2015-12-01

    This study compares the performance of logistic regression and decision tree analysis for assessing the risk factors for infection in cancer patients undergoing chemotherapy. The subjects were 732 cancer patients who were receiving chemotherapy at K university hospital in Seoul, Korea. The data were collected between March 2011 and February 2013 and were processed for descriptive analysis, logistic regression and decision tree analysis using the IBM SPSS Statistics 19 and Modeler 15.1 programs. The most common risk factors for infection in cancer patients receiving chemotherapy were identified as alkylating agents, vinca alkaloids and underlying diabetes mellitus. The logistic regression model achieved a sensitivity of 66.7% and a specificity of 88.9%, while the decision tree analysis achieved a sensitivity of 55.0% and a specificity of 89.0%. In terms of overall classification accuracy, the logistic regression reached 88.0% and the decision tree analysis 87.2%. The logistic regression analysis thus showed higher sensitivity and classification accuracy. Therefore, logistic regression analysis is concluded to be the more effective and useful method for establishing an infection prediction model for patients undergoing chemotherapy. Copyright © 2015 Elsevier Ltd. All rights reserved.
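
    The comparison of the two classifiers on sensitivity, specificity and accuracy can be sketched as follows with scikit-learn on synthetic data; the dataset, features and model settings are placeholders, not the clinical data or SPSS/Modeler configuration used in the study.

    ```python
    # Sketch (synthetic data, not the clinical dataset): comparing logistic
    # regression and a decision tree on sensitivity, specificity and accuracy.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    X, y = make_classification(n_samples=732, n_features=10, weights=[0.7, 0.3],
                               random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("decision tree", DecisionTreeClassifier(max_depth=4))]:
        y_hat = model.fit(X_tr, y_tr).predict(X_te)
        tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
        sens, spec = tp / (tp + fn), tn / (tn + fp)
        acc = (tp + tn) / (tp + tn + fp + fn)
        print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
    ```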

  3. Application of Anaerobic Digestion Model No. 1 for simulating anaerobic mesophilic sludge digestion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendes, Carlos, E-mail: carllosmendez@gmail.com; Esquerre, Karla, E-mail: karlaesquerre@ufba.br; Matos Queiroz, Luciano, E-mail: lmqueiroz@ufba.br

    2015-01-15

    Highlights: • The behavior of an anaerobic reactor was evaluated through modeling. • Parametric sensitivity analysis was used to select the most sensitive parameters of the ADM1. • The results indicate that the ADM1 was able to predict the experimental results. • An organic loading rate above 35 kg/m³ day affects the performance of the process. - Abstract: Improving anaerobic digestion of sewage sludge by monitoring common indicators such as volatile fatty acids (VFAs), gas composition and pH is a suitable solution for better sludge management. Modeling is an important tool to assess and to predict process performance. The present study focuses on the application of the Anaerobic Digestion Model No. 1 (ADM1) to simulate the dynamic behavior of a reactor fed with sewage sludge under mesophilic conditions. Parametric sensitivity analysis is used to select the most sensitive ADM1 parameters for estimation using a numerical procedure, while other parameters are applied without any modification to the original values presented in the ADM1 report. The results indicate that the ADM1 model, after parameter estimation, was able to predict the experimental results of effluent acetate, propionate, composites and biogas flows and pH with reasonable accuracy. The simulation of the effect of organic shock loading clearly showed that an organic shock loading rate above 35 kg/m³ day affects the performance of the reactor. The results demonstrate that simulations can be helpful to support decisions on predicting the anaerobic digestion process of sewage sludge.

  4. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    PubMed

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem for analysis. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation enabled the concentration of individual peaks to give a volume similar to that of the NMR flow cell, thereby maximizing the sensitivity to the theoretical upper limit. The UHPLC concentration of compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture led to an up to three-fold increase in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture in actual laboratory-scale development of a synthetic process. Using UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR successfully overcomes the difficulties associated with analyses of minor components in a complex mixture by LC-NMR, which are problematic even when an ultra-high field magnet and cryogenic probe are used. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Effect of Metformin on Plasma Fibrinogen Concentrations: A Systematic Review and Meta-Analysis of Randomized Placebo-Controlled Trials.

    PubMed

    Simental-Mendia, Luis E; Pirro, Matteo; Atkin, Stephen L; Banach, Maciej; Mikhailidis, Dimitri P; Sahebkar, Amirhossein

    2018-01-01

    Fibrinogen is a key mediator of thrombosis and it has been implicated in the pathogenesis of atherosclerosis. Because metformin has shown a potential protective effect on different atherothrombotic risk factors, we assessed in this meta-analysis its effect on plasma fibrinogen concentrations. A systematic review and meta-analysis was carried out to identify randomized placebo-controlled trials evaluating the effect of metformin administration on fibrinogen levels. The search included PubMed-Medline, Scopus, ISI Web of Knowledge and Google Scholar databases (by June 2, 2017), and the quality of the studies was assessed according to Cochrane criteria. Quantitative data synthesis was conducted using a random-effects model and sensitivity analysis by the leave-one-out method. Meta-regression analysis was performed to assess the modifiers of treatment response. Meta-analysis of data from 9 randomized placebo-controlled clinical trials with 2302 patients comprising 10 treatment arms did not suggest a significant change in plasma fibrinogen concentrations following metformin therapy (WMD: -0.25 g/L, 95% CI: -0.53, 0.04, p = 0.092). The effect size was robust in the leave-one-out sensitivity analysis and remained non-significant after omission of each single study from the meta-analysis. No significant effect of metformin on plasma fibrinogen concentrations was demonstrated in the current meta-analysis. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
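
    The random-effects pooling and leave-one-out sensitivity analysis described above follow a standard recipe; a minimal DerSimonian-Laird sketch is shown below. The effect sizes and variances are made-up placeholders, not the fibrinogen data from the included trials.

    ```python
    # Sketch of a random-effects (DerSimonian-Laird) pooled mean difference with a
    # leave-one-out sensitivity analysis. Inputs are illustrative placeholders.
    import numpy as np

    def random_effects(y, v):
        w = 1.0 / v
        q = np.sum(w * (y - np.sum(w * y) / np.sum(w)) ** 2)
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance
        w_re = 1.0 / (v + tau2)
        est = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1.0 / np.sum(w_re))
        return est, est - 1.96 * se, est + 1.96 * se

    y = np.array([-0.4, -0.1, 0.2, -0.6, -0.3, 0.1, -0.2, -0.5, 0.0])   # g/L
    v = np.array([0.04, 0.02, 0.05, 0.06, 0.03, 0.04, 0.02, 0.05, 0.03])

    print("pooled WMD (95% CI):", random_effects(y, v))
    for i in range(len(y)):                            # leave-one-out sensitivity
        est, lo, hi = random_effects(np.delete(y, i), np.delete(v, i))
        print(f"omit study {i + 1}: {est:+.2f} ({lo:+.2f}, {hi:+.2f})")
    ```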

  6. Acoustic resonance of outer-rotor brushless dc motor for air-conditioner fan

    NASA Astrophysics Data System (ADS)

    Lee, Hong-Joo; Chung, Shi-Uk; Hwang, Sang-Moon

    2008-04-01

    The generation of acoustic noise in an electric motor is an interacting combination of mechanical and electromagnetic sources. In this paper, a brushless dc motor for an air-conditioner fan is analyzed by the finite element method to identify the noise source, the analysis results are verified by experiments, and a sensitivity analysis is performed using design of experiments.

  7. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
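
    The distinction drawn above between overall finite difference and semi-analytical sensitivities can be illustrated on a static linear system (a stand-in for the transient modal problem of the thesis): the former re-solves the perturbed model, while the latter differentiates the governing equation and only finite-differences the coefficient matrices. The spring model and parameter below are assumptions for illustration.

    ```python
    # Sketch contrasting the two sensitivity routes on K(p) u = f:
    #   overall finite difference: re-solve the full model at the perturbed design;
    #   semi-analytical: du/dp = -K^{-1} (dK/dp) u, with dK/dp finite-differenced.
    import numpy as np

    def stiffness(p):
        """Two-spring series model; p scales the second spring (illustrative only)."""
        k1, k2 = 100.0, 100.0 * p
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]])

    f = np.array([0.0, 10.0])
    p, dp = 1.0, 1e-6
    u = np.linalg.solve(stiffness(p), f)

    # Overall finite difference: repeat the full analysis at the perturbed design.
    dudp_fd = (np.linalg.solve(stiffness(p + dp), f) - u) / dp

    # Semi-analytical: differentiate the equations, finite-difference only dK/dp.
    dKdp = (stiffness(p + dp) - stiffness(p)) / dp
    dudp_sa = np.linalg.solve(stiffness(p), -dKdp @ u)

    print("finite difference :", dudp_fd)
    print("semi-analytical   :", dudp_sa)
    ```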

  8. A novel bi-level meta-analysis approach: applied to biological pathway analysis.

    PubMed

    Nguyen, Tin; Tagett, Rebecca; Donato, Michele; Mitrea, Cristina; Draghici, Sorin

    2016-02-01

    The accumulation of high-throughput data in public repositories creates a pressing need for integrative analysis of multiple datasets from independent experiments. However, study heterogeneity, study bias, outliers and the lack of power of available methods present a real challenge in integrating genomic data. One practical drawback of many P-value-based meta-analysis methods, including Fisher's, Stouffer's, minP and maxP, is that they are sensitive to outliers. Another drawback is that, because they perform just one statistical test for each individual experiment, they may not fully exploit the potentially large number of samples within each study. We propose a novel bi-level meta-analysis approach that employs the additive method and the Central Limit Theorem within each individual experiment and also across multiple experiments. We prove that the bi-level framework is robust against bias, less sensitive to outliers than other methods, and more sensitive to small changes in signal. For comparative analysis, we demonstrate that the intra-experiment analysis has more power than the equivalent statistical test performed on a single large experiment. For pathway analysis, we compare the proposed framework versus classical meta-analysis approaches (Fisher's, Stouffer's and the additive method) as well as against a dedicated pathway meta-analysis package (MetaPath), using 1252 samples from 21 datasets related to three human diseases, acute myeloid leukemia (9 datasets), type II diabetes (5 datasets) and Alzheimer's disease (7 datasets). Our framework outperforms its competitors to correctly identify pathways relevant to the phenotypes. The framework is sufficiently general to be applied to any type of statistical meta-analysis. The R scripts are available on demand from the authors. Contact: sorin@wayne.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
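
    For reference, the classical P-value combination methods named above have compact closed forms; the sketch below evaluates Fisher's, Stouffer's and the additive (Edgington/Irwin-Hall) combinations on made-up p-values. The bi-level framework itself is not reproduced here.

    ```python
    # Sketch of classical P-value combination methods on illustrative p-values.
    import numpy as np
    from math import comb, factorial, floor
    from scipy import stats

    p = np.array([0.04, 0.20, 0.01, 0.65, 0.03])
    k = len(p)

    # Fisher: -2 * sum(log p) follows chi-square with 2k degrees of freedom.
    fisher = stats.chi2.sf(-2.0 * np.sum(np.log(p)), df=2 * k)

    # Stouffer: sum of z-scores divided by sqrt(k) follows a standard normal.
    stouffer = stats.norm.sf(np.sum(stats.norm.isf(p)) / np.sqrt(k))

    # Additive: P(sum of k uniforms <= s), i.e. the Irwin-Hall CDF at s = sum(p).
    s = float(np.sum(p))
    additive = sum((-1) ** j * comb(k, j) * (s - j) ** k
                   for j in range(floor(s) + 1)) / factorial(k)

    print(f"Fisher {fisher:.4f}, Stouffer {stouffer:.4f}, additive {additive:.4f}")
    ```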

  9. Retracted: Association of ACE I/D gene polymorphism with T2DN susceptibility and the risk of T2DM developing into T2DN in a Caucasian population.

    PubMed

    Liu, Guohui; Zhou, Tian-Biao; Jiang, Zongpei; Zheng, Dongwen

    2015-03-01

    The association of the angiotensin-converting enzyme (ACE) insertion/deletion (I/D) gene polymorphism with type-2 diabetic nephropathy (T2DN) susceptibility and the risk of type-2 diabetes mellitus (T2DM) developing into T2DN in Caucasian populations is still controversial. A meta-analysis was performed to evaluate the association of the ACE I/D gene polymorphism with T2DN susceptibility and the risk of T2DM developing into T2DN in Caucasian populations. A predefined literature search and selection of eligible relevant studies were performed to collect data from electronic databases. Sixteen articles were identified for the analysis of the association of the ACE I/D gene polymorphism with T2DN susceptibility and the risk of T2DM developing into T2DN in Caucasian populations. The ACE I/D gene polymorphism was not associated with T2DN susceptibility or the risk of patients with T2DM developing T2DN in Caucasian populations. A sensitivity analysis according to the sample size of cases (<100 vs. ≥100) was also performed, and the results were similar to those of the overall analysis. The ACE I/D gene polymorphism was not associated with T2DN susceptibility or the risk of patients with T2DM developing T2DN in Caucasian populations. However, more studies should be performed in the future. © The Author(s) 2014.

  10. Integrated Modeling Activities for the James Webb Space Telescope: Structural-Thermal-Optical Analysis

    NASA Technical Reports Server (NTRS)

    Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.; Parrish, Keith A.; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.

    2004-01-01

    The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. This is a continuation of a series of papers on modeling activities for JWST. The structural-thermal-optical, often referred to as STOP, analysis process is used to predict the effect of thermal distortion on optical performance. The benchmark STOP analysis for JWST assesses the effect of an observatory slew on wavefront error. Temperatures predicted using geometric and thermal math models are mapped to a structural finite element model in order to predict thermally induced deformations. Motions and deformations at optical surfaces are then input to optical models, and optical performance is predicted using either an optical ray trace or a linear optical analysis tool. In addition to baseline performance predictions, a process for performing sensitivity studies to assess modeling uncertainties is described.

  11. Quantile regression in the presence of monotone missingness with sensitivity analysis

    PubMed Central

    Liu, Minzhao; Daniels, Michael J.; Perri, Michael G.

    2016-01-01

    In this paper, we develop methods for longitudinal quantile regression when there is monotone missingness. In particular, we propose pattern mixture models with a constraint that provides a straightforward interpretation of the marginal quantile regression parameters. Our approach allows sensitivity analysis which is an essential component in inference for incomplete data. To facilitate computation of the likelihood, we propose a novel way to obtain analytic forms for the required integrals. We conduct simulations to examine the robustness of our approach to modeling assumptions and compare its performance to competing approaches. The model is applied to data from a recent clinical trial on weight management. PMID:26041008

  12. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  13. Sensitivity to Mental Effort and Test-Retest Reliability of Heart Rate Variability Measures in Healthy Seniors

    PubMed Central

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.

    2011-01-01

    Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Results Time domain (especially the mean R-R interval, RRI), frequency domain and, among the non-linear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to the different mental effort task loads and had the largest effect sizes. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665
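
    The Poincaré descriptors highlighted above (SD1 for short-term and SD2 for long-term variability) follow directly from the R-R interval series; a minimal sketch on synthetic data is shown below. The RR values are generated here only for illustration.

    ```python
    # Sketch: mean R-R interval and Poincare descriptors SD1/SD2 from an RR series.
    import numpy as np

    rng = np.random.default_rng(0)
    rr = 800 + 0.1 * np.cumsum(rng.normal(0, 5, 300)) + rng.normal(0, 20, 300)  # ms

    def poincare(rr):
        d = np.diff(rr)
        sd1 = np.sqrt(np.var(d, ddof=1) / 2.0)                          # short-term
        sd2 = np.sqrt(2.0 * np.var(rr, ddof=1) - np.var(d, ddof=1) / 2.0)  # long-term
        return sd1, sd2

    sd1, sd2 = poincare(rr)
    print(f"mean RR = {rr.mean():.0f} ms, SD1 = {sd1:.1f} ms, SD2 = {sd2:.1f} ms")
    ```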

  14. Diagnostic performance of a computer-assisted diagnosis system for bone scintigraphy of newly developed skeletal metastasis in prostate cancer patients: search for low-sensitivity subgroups.

    PubMed

    Koizumi, Mitsuru; Motegi, Kazuki; Koyama, Masamichi; Terauchi, Takashi; Yuasa, Takeshi; Yonese, Junji

    2017-08-01

    The computer-assisted diagnostic system for bone scintigraphy (BS) BONENAVI is used to evaluate skeletal metastasis. We investigated its diagnostic performance in prostate cancer patients with and without skeletal metastasis and searched for problematic subgroups. An artificial neural network (ANN) value was calculated in 226 prostate cancer patients (124 with skeletal metastasis and 101 without) using BS. Receiver operating characteristic curve analysis was performed and the sensitivity and specificity were determined (cutoff ANN = 0.5). The patient's situation at the time of diagnosis of skeletal metastasis, computed tomography (CT) type, extent of disease (EOD), and BS uptake grade were analyzed. False-negative and false-positive results were recorded. BONENAVI showed a sensitivity of 82% (102/124) and a specificity of 83% (84/101) for metastasis detection. There were no significant differences among CT types, although low EOD and faint BS uptake were associated with low ANN values and low sensitivity. Patients showed lower sensitivity during the follow-up period than during the staging work-up. False-negative lesions were often located in the pelvis or adjacent to it. They comprised not only solitary, faint BS lesions but also lesions overlapping urinary excretion. BONENAVI with BS has good sensitivity and specificity for detecting osseous metastases of prostate cancer. Low EOD and faint BS uptake are associated with low sensitivity, but the CT type is not. Prostate cancer patients likely to have false-negative results during the follow-up period had a solitary lesion in the pelvis with faint BS uptake or lesions overlapping urinary excretion.

  15. Hamiltonian Markov Chain Monte Carlo Methods for the CUORE Neutrinoless Double Beta Decay Sensitivity

    NASA Astrophysics Data System (ADS)

    Graham, Eleanor; Cuore Collaboration

    2017-09-01

    The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions of CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe and for an evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
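
    For readers unfamiliar with the mechanics that Stan automates, the sketch below is a bare-bones Hamiltonian Monte Carlo sampler (leapfrog integration plus a Metropolis accept/reject step) on a toy two-dimensional Gaussian target. It is only an illustration of the algorithm class, not the CUORE sensitivity model.

    ```python
    # Minimal Hamiltonian Monte Carlo sketch on a toy 2-D Gaussian target.
    import numpy as np

    rng = np.random.default_rng(1)
    inv_cov = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))

    def U(q):                      # potential energy = negative log density
        return 0.5 * q @ inv_cov @ q

    def grad_U(q):
        return inv_cov @ q

    def hmc_step(q, step=0.15, n_leapfrog=20):
        p = rng.normal(size=q.shape)                  # resample momentum
        q_new, p_new = q.copy(), p.copy()
        p_new -= 0.5 * step * grad_U(q_new)           # leapfrog integration
        for _ in range(n_leapfrog - 1):
            q_new += step * p_new
            p_new -= step * grad_U(q_new)
        q_new += step * p_new
        p_new -= 0.5 * step * grad_U(q_new)
        h_old = U(q) + 0.5 * p @ p                    # Metropolis accept/reject
        h_new = U(q_new) + 0.5 * p_new @ p_new
        return q_new if np.log(rng.uniform()) < h_old - h_new else q

    q, samples = np.zeros(2), []
    for _ in range(5000):
        q = hmc_step(q)
        samples.append(q)
    print("sample covariance:\n", np.cov(np.array(samples).T))
    ```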

  16. Quadrant photodetector sensitivity.

    PubMed

    Manojlović, Lazo M

    2011-07-10

    A quantitative theoretical analysis of the quadrant photodetector (QPD) sensitivity in position measurement is presented. A Gaussian light spot irradiance distribution on the QPD surface was assumed, which covers most real-life applications of this sensor. As a result of the mathematical treatment of the problem, we obtained, in closed form, the sensitivity as a function of the ratio of the light spot 1/e radius to the QPD radius. The obtained result is valid for the full range of this ratio. To check the influence of the finite light spot radius on the interaxis cross talk and linearity, we also performed a mathematical analysis to quantitatively measure these types of errors. An optimal range of the ratio of the light spot radius to the QPD radius has been found that simultaneously achieves low interaxis cross talk and high linearity of the sensor. © 2011 Optical Society of America
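
    The dependence of position sensitivity on the spot-to-detector radius ratio can be reproduced numerically, without the closed-form result, by integrating a Gaussian spot over the detector halves. The sketch below does this on a grid; the specific radii are assumed for illustration and the expression is not the paper's analytical formula.

    ```python
    # Numerical sketch: QPD position signal S(x0) = (P_right - P_left) / P_total for
    # a Gaussian spot of 1/e radius a on a detector of radius R, and its slope at 0.
    import numpy as np

    R = 1.0
    grid = np.linspace(-R, R, 801)
    X, Y = np.meshgrid(grid, grid)
    on_detector = X ** 2 + Y ** 2 <= R ** 2

    def signal(x0, a):
        irr = np.exp(-((X - x0) ** 2 + Y ** 2) / a ** 2) * on_detector
        right = irr[:, grid > 0].sum()
        left = irr[:, grid < 0].sum()
        return (right - left) / irr.sum()

    dx = 1e-3
    for a in (0.2, 0.5, 1.0, 2.0):
        sens = (signal(dx, a) - signal(-dx, a)) / (2 * dx)   # dS/dx0 at x0 = 0
        print(f"spot radius a/R = {a:.1f}: sensitivity ~ {sens:.2f} per unit R")
    ```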

  17. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844

  18. Diagnostic performance of contrast-enhanced spectral mammography: Systematic review and meta-analysis.

    PubMed

    Tagliafico, Alberto Stefano; Bignotti, Bianca; Rossi, Federica; Signori, Alessio; Sormani, Maria Pia; Valdora, Francesca; Calabrese, Massimo; Houssami, Nehmat

    2016-08-01

    To estimate the sensitivity and specificity of CESM for breast cancer diagnosis. Systematic review and meta-analysis of the accuracy of CESM in detecting breast cancer in highly selected women. We estimated summary receiver operating characteristic curves, sensitivity and specificity, with study quality assessed using QUADAS-2. Six hundred and four studies were retrieved; 8 of these, reporting on 920 patients with 994 lesions, were eligible for inclusion. Estimated sensitivity from all studies was 0.98 (95% CI: 0.96-1.00). Specificity, estimated from the six studies reporting raw data, was 0.58 (95% CI: 0.38-0.77). The majority of studies were scored as at high risk of bias due to the very selected populations. CESM has a high sensitivity but very low specificity. The source studies were based on highly selected case series and prone to selection bias. High-quality studies are required to assess the accuracy of CESM in unselected cases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Bayesian Estimation of the True Prevalence and of the Diagnostic Test Sensitivity and Specificity of Enteropathogenic Yersinia in Finnish Pig Serum Samples.

    PubMed

    Vilar, M J; Ranta, J; Virtanen, S; Korkeala, H

    2015-01-01

    Bayesian analysis was used to estimate the pig-level and herd-level true prevalence of enteropathogenic Yersinia in serum samples collected from Finnish pig farms. The sensitivity and specificity of the diagnostic test were also estimated for the commercially available ELISA used for antibody detection against enteropathogenic Yersinia. The Bayesian analysis was performed in two steps: the first step estimated the prior true prevalence of enteropathogenic Yersinia with data obtained from a systematic review of the literature. In the second step, data on the apparent prevalence (cross-sectional study data), the prior true prevalence (first step), and the estimated sensitivity and specificity of the diagnostic methods were used for building the Bayesian model. The true prevalence of Yersinia in slaughter-age pigs was 67.5% (95% PI 63.2-70.9). The true prevalence of Yersinia in sows was 74.0% (95% PI 57.3-82.4). The estimated sensitivity and specificity of the ELISA were 79.5% and 96.9%, respectively.
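
    The core of a true-prevalence estimate with an imperfect test can be sketched with a simple grid-approximation posterior: the apparent positive rate is pi·Se + (1 - pi)·(1 - Sp), and a binomial likelihood over a grid of candidate pi values gives the posterior. The counts and the flat prior below are placeholders; the published model is a two-step hierarchical analysis and is not reproduced here.

    ```python
    # Sketch: grid-approximation Bayesian estimate of true prevalence from apparent
    # test-positive counts, given assumed ELISA sensitivity and specificity.
    import numpy as np
    from scipy import stats

    n_tested, n_positive = 300, 195            # assumed apparent counts
    se, sp = 0.795, 0.969                      # ELISA sensitivity / specificity

    pi = np.linspace(0.0, 1.0, 2001)           # grid of candidate true prevalences
    p_apparent = pi * se + (1.0 - pi) * (1.0 - sp)
    posterior = stats.binom.pmf(n_positive, n_tested, p_apparent)   # flat prior
    posterior /= posterior.sum()

    cdf = np.cumsum(posterior)
    median = pi[np.searchsorted(cdf, 0.5)]
    lo, hi = pi[np.searchsorted(cdf, 0.025)], pi[np.searchsorted(cdf, 0.975)]
    print(f"true prevalence ~ {median:.2f} (95% PI {lo:.2f}-{hi:.2f})")
    ```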

  20. Structural, optical and photovoltaic properties of co-doped CdTe QDs for quantum dots sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Ayyaswamy, Arivarasan; Ganapathy, Sasikala; Alsalme, Ali; Alghamdi, Abdulaziz; Ramasamy, Jayavel

    2015-12-01

    Zinc- and sulfur-alloyed CdTe quantum dot (QD)-sensitized TiO2 photoelectrodes have been fabricated for quantum dot-sensitized solar cells. Alloyed CdTe QDs were prepared in the aqueous phase using mercaptosuccinic acid (MSA) as a capping agent. The influence of co-doping on the structural properties of the CdTe QDs was studied by XRD analysis. The enhanced optical absorption of the alloyed CdTe QDs was studied using UV-vis absorption and fluorescence emission spectra. The capping of MSA molecules over the CdTe QDs was confirmed by FTIR and XPS analyses. Thermogravimetric analysis confirms that the prepared QDs are thermally stable up to 600 °C. The photovoltaic performance of the alloyed CdTe QD-sensitized TiO2 photoelectrodes was studied using J-V characteristics under illumination at 1 Sun intensity. The results show the highest photoconversion efficiency, η = 1.21%, for the 5% Zn- and S-alloyed CdTe QDs.

  1. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and the observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
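
    The inexpensive information-criterion ranking described above boils down to a few lines once each alternative model's residual sum of squares and parameter count are known; the sketch below uses placeholder values, not the Maggia Valley calibration results.

    ```python
    # Sketch: ranking alternative model fits with AICc and BIC computed from the
    # sum of squared weighted residuals (SSE). Values are illustrative only.
    import numpy as np

    n_obs = 120
    models = {                      # name: (sum of squared residuals, n_parameters)
        "1 K zone":  (310.0, 3),
        "3 K zones": (240.0, 5),
        "5 K zones": (232.0, 7),
    }

    for name, (sse, k) in models.items():
        aic = n_obs * np.log(sse / n_obs) + 2 * k
        aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)     # small-sample correction
        bic = n_obs * np.log(sse / n_obs) + k * np.log(n_obs)
        print(f"{name}: AICc = {aicc:6.1f}, BIC = {bic:6.1f}")
    ```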

  2. Measuring information processing in a client with extreme agitation following traumatic brain injury using the Perceive, Recall, Plan and Perform System of Task Analysis.

    PubMed

    Nott, Melissa T; Chapparo, Christine

    2008-09-01

    Agitation following traumatic brain injury is characterised by a heightened state of activity with disorganised information processing that interferes with learning and achieving functional goals. This study aimed to identify information processing problems during task performance of a severely agitated adult using the Perceive, Recall, Plan and Perform (PRPP) System of Task Analysis. Second, this study aimed to examine the sensitivity of the PRPP System to changes in task performance over a short period of rehabilitation, and third, to evaluate the guidance provided by the PRPP in directing intervention. A case study research design was employed. The PRPP System of Task Analysis was used to assess changes in task embedded information processing capacity during occupational therapy intervention with a severely agitated adult in a rehabilitation context. Performance is assessed on three selected tasks over a one-month period. Information processing difficulties during task performance can be clearly identified when observing a severely agitated adult following a traumatic brain injury. Processing skills involving attention, sensory processing and planning were most affected at this stage of rehabilitation. These processing difficulties are linked to established descriptions of agitated behaviour. Fluctuations in performance across three tasks of differing processing complexity were evident, leading to hypothesised relationships between task complexity, environment and novelty with information processing errors. Changes in specific information processing capacity over time were evident based on repeated measures using the PRPP System of Task Analysis. This lends preliminary support for its utility as an outcome measure, and raises hypotheses about the type of therapy required to enhance information processing in people with severe agitation. The PRPP System is sensitive to information processing changes in severely agitated adults when used to reassess performance over short intervals and can provide direct guidance to occupational therapy intervention to improve task embedded information processing by categorising errors under four stages of an information processing model: Perceive, Recall, Plan and Perform.

  3. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long-distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission peak time matching was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
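
    The Morris screening idea can be conveyed with a hand-rolled sketch on a cheap toy function (the dispersion model itself is far too expensive to embed here): inputs are perturbed one at a time along random trajectories, and the resulting elementary effects are summarised by mu* (overall influence) and sigma (interactions/non-linearity). The function and settings below are assumptions for illustration.

    ```python
    # Hand-rolled sketch of Morris-style elementary-effects screening.
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):                               # toy stand-in for the dispersion model
        return x[0] ** 2 + 2.0 * x[1] + 0.1 * x[2] + x[0] * x[1]

    n_inputs, n_traj, delta = 3, 50, 0.25
    effects = [[] for _ in range(n_inputs)]

    for _ in range(n_traj):
        x = rng.uniform(0.0, 1.0 - delta, n_inputs)   # random trajectory start
        y = model(x)
        for i in rng.permutation(n_inputs):           # one-at-a-time moves
            x[i] += delta
            y_new = model(x)
            effects[i].append((y_new - y) / delta)
            y = y_new

    for i, ee in enumerate(effects):
        ee = np.array(ee)
        print(f"input {i}: mu* = {np.mean(np.abs(ee)):.2f}, sigma = {ee.std(ddof=1):.2f}")
    ```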

  4. Rapid and sensitive analysis of phthalate metabolites, bisphenol A, and endogenous steroid hormones in human urine by mixed-mode solid-phase extraction, dansylation, and ultra-performance liquid chromatography coupled with triple quadrupole mass spectrometry.

    PubMed

    Wang, He-xing; Wang, Bin; Zhou, Ying; Jiang, Qing-wu

    2013-05-01

    Steroid hormone levels in human urine are convenient and sensitive indicators for the impact of phthalates and/or bisphenol A (BPA) exposure on the human steroid hormone endocrine system. In this study, a rapid and sensitive method for determination of 14 phthalate metabolites, BPA, and ten endogenous steroid hormones in urine was developed and validated on the basis of ultra-performance liquid chromatography coupled with electrospray ionization triple quadrupole mass spectrometry. The optimized mixed-mode solid phase-extraction separated the weakly acidic or neutral BPA and steroid hormones from acidic phthalate metabolites in urine: the former were determined in positive ion mode with a methanol/water mobile phase containing 10 mM ammonium formate; the latter were determined in negative ion mode with a acetonitrile/water mobile phase containing 0.1 % acetic acid, which significantly alleviated matrix effects for the analysis of BPA and steroid hormones. Dansylation of estrogens and BPA realized simultaneous and sensitive analysis of the endogenous steroid hormones and BPA in a single chromatographic run. The limits of detection were less than 0.84 ng/mL for phthalate metabolites and less than 0.22 ng/mL for endogenous steroid hormones and BPA. This proposed method had satisfactory precision and accuracy, and was successfully applied to the analyses of human urine samples. This method could be valuable when investigating the associations among endocrine-disrupting chemicals, endogenous steroid hormones, and relevant adverse outcomes in epidemiological studies.

  5. Microfluidic single-cell whole-transcriptome sequencing.

    PubMed

    Streets, Aaron M; Zhang, Xiannian; Cao, Chen; Pang, Yuhong; Wu, Xinglong; Xiong, Liang; Yang, Lu; Fu, Yusi; Zhao, Liang; Tang, Fuchou; Huang, Yanyi

    2014-05-13

    Single-cell whole-transcriptome analysis is a powerful tool for quantifying gene expression heterogeneity in populations of cells. Many techniques have, thus, been recently developed to perform transcriptome sequencing (RNA-Seq) on individual cells. To probe subtle biological variation between samples with limiting amounts of RNA, more precise and sensitive methods are still required. We adapted a previously developed strategy for single-cell RNA-Seq that has shown promise for superior sensitivity and implemented the chemistry in a microfluidic platform for single-cell whole-transcriptome analysis. In this approach, single cells are captured and lysed in a microfluidic device, where mRNAs with poly(A) tails are reverse-transcribed into cDNA. Double-stranded cDNA is then collected and sequenced using a next generation sequencing platform. We prepared 94 libraries consisting of single mouse embryonic cells and technical replicates of extracted RNA and thoroughly characterized the performance of this technology. Microfluidic implementation increased mRNA detection sensitivity as well as improved measurement precision compared with tube-based protocols. With 0.2 M reads per cell, we were able to reconstruct a majority of the bulk transcriptome with 10 single cells. We also quantified variation between and within different types of mouse embryonic cells and found that enhanced measurement precision, detection sensitivity, and experimental throughput aided the distinction between biological variability and technical noise. With this work, we validated the advantages of an early approach to single-cell RNA-Seq and showed that the benefits of combining microfluidic technology with high-throughput sequencing will be valuable for large-scale efforts in single-cell transcriptome analysis.

  6. Supersonic molecular beam-hyperthermal surface ionisation coupled with time-of-flight mass spectrometry applied to trace level detection of polynuclear aromatic hydrocarbons in drinking water for reduced sample preparation and analysis time.

    PubMed

    Davis, S C; Makarov, A A; Hughes, J D

    1999-01-01

    Analysis of sub-ppb levels of polynuclear aromatic hydrocarbons (PAHs) in drinking water by high performance liquid chromatography (HPLC) fluorescence detection typically requires large water samples and lengthy extraction procedures. The detection itself, although selective, does not give compound identity confirmation. Benchtop gas chromatography/mass spectrometry (GC/MS) systems operating in the more sensitive selected ion monitoring (SIM) acquisition mode discard spectral information and, when operating in scanning mode, are less sensitive and scan too slowly. The selectivity of hyperthermal surface ionisation (HSI), the high column flow rate capacity of the supersonic molecular beam (SMB) GC/MS interface, and the high acquisition rate of time-of-flight (TOF) mass analysis, are combined here to facilitate a rapid, specific and sensitive technique for the analysis of trace levels of PAHs in water. This work reports the advantages gained by using the GC/HSI-TOF system over the HPLC fluorescence method, and discusses in some detail the nature of the instrumentation used.

  7. FLOCK cluster analysis of mast cell event clustering by high-sensitivity flow cytometry predicts systemic mastocytosis.

    PubMed

    Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty

    2015-11-01

    In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.

  8. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
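
    The surrogate-based workflow (sample an expensive model at a few design points, fit a cheap approximation, then query the approximation for fast optimization and sensitivity sweeps) can be sketched generically as below. The one-dimensional function and polynomial surrogate are assumptions for illustration, not the injector study's models.

    ```python
    # Generic surrogate-based sketch: sample, fit a cheap surrogate, query it.
    import numpy as np

    def expensive_model(x):                 # stand-in for a high-fidelity analysis
        return np.sin(3.0 * x) + 0.5 * x ** 2

    rng = np.random.default_rng(4)
    x_train = np.sort(rng.uniform(-2.0, 2.0, 12))       # design of experiments
    y_train = expensive_model(x_train)

    surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))   # cheap surrogate

    x_dense = np.linspace(-2.0, 2.0, 1001)              # inexpensive exhaustive sweep
    x_opt = x_dense[np.argmin(surrogate(x_dense))]
    print(f"surrogate minimum near x = {x_opt:.2f}, "
          f"true value there = {expensive_model(x_opt):.3f}")
    ```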

  9. The effect of uncertainties in distance-based ranking methods for multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Jaini, Nor I.; Utyuzhnikov, Sergei V.

    2017-08-01

    Data in multi-criteria decision making are often imprecise and changeable. Therefore, it is important to carry out a sensitivity analysis for the multi-criteria decision making problem. The paper aims to present a sensitivity analysis for some ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainties are considered for the sensitivity analysis test. The first uncertainty is related to the input data, while the second concerns the decision maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance and the trade-off ranking methods. TOPSIS and the relative distance method measure a distance from an alternative to the ideal and anti-ideal solutions. In turn, the trade-off ranking calculates the distance of an alternative to the extreme solutions and the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainties.
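
    For concreteness, a standard TOPSIS ranking (one of the distance-based techniques named above) is sketched below on a small made-up decision matrix with benefit criteria only; the matrix and weights are placeholders, and perturbing either of them is exactly the kind of uncertainty the study examines.

    ```python
    # Sketch of TOPSIS: rank alternatives by relative closeness to the ideal and
    # anti-ideal solutions. Decision matrix and weights are illustrative only.
    import numpy as np

    X = np.array([[250., 16., 12.],           # alternatives x criteria (all benefit)
                  [200., 16., 8.],
                  [300., 32., 16.],
                  [275., 32., 8.]])
    w = np.array([0.3, 0.4, 0.3])             # decision-maker weights (assumed)

    R = X / np.sqrt((X ** 2).sum(axis=0))     # vector-normalise each criterion
    V = R * w                                 # weighted normalised matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)

    d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))    # distance to ideal
    d_minus = np.sqrt(((V - anti) ** 2).sum(axis=1))    # distance to anti-ideal
    closeness = d_minus / (d_plus + d_minus)

    print("closeness:", np.round(closeness, 3))
    print("ranking (best first):", np.argsort(-closeness))
    ```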

  10. Parameter sensitivity analysis of a 1-D cold region lake model for land-surface schemes

    NASA Astrophysics Data System (ADS)

    Guerrero, José-Luis; Pernica, Patricia; Wheater, Howard; Mackay, Murray; Spence, Chris

    2017-12-01

    Lakes might be sentinels of climate change, but the uncertainty in their main feedback to the atmosphere - heat-exchange fluxes - is often not considered within climate models. Additionally, these fluxes are seldom measured, hindering critical evaluation of model output. Analysis of the Canadian Small Lake Model (CSLM), a one-dimensional integral lake model, was performed to assess its ability to reproduce diurnal and seasonal variations in heat fluxes and the sensitivity of simulated fluxes to changes in model parameters, i.e., turbulent transport parameters and the light extinction coefficient (Kd). A C++ open-source software package, Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), was used to perform sensitivity analysis (SA) and identify the parameters that dominate model behavior. The generalized likelihood uncertainty estimation (GLUE) was applied to quantify the fluxes' uncertainty, comparing daily-averaged eddy-covariance observations to the output of CSLM. Seven qualitative and two quantitative SA methods were tested, and the posterior likelihoods of the modeled parameters, obtained from the GLUE analysis, were used to determine the dominant parameters and the uncertainty in the modeled fluxes. Despite the ubiquity of the equifinality issue - different parameter-value combinations yielding equivalent results - the answer to the question was unequivocal: Kd, a measure of how much light penetrates the lake, dominates sensible and latent heat fluxes, and the uncertainty in their estimates is strongly related to the accuracy with which Kd is determined. This is important since accurate and continuous measurements of Kd could reduce modeling uncertainty.
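
    The GLUE procedure used above can be illustrated with a toy one-parameter model standing in for the lake model and its extinction coefficient Kd: sample the parameter, keep "behavioural" runs whose likelihood measure exceeds a threshold, and weight their predictions to obtain uncertainty bounds. The model, observations and threshold below are assumptions, not the CSLM/PSUADE workflow.

    ```python
    # Minimal GLUE sketch: behavioural runs selected by Nash-Sutcliffe efficiency.
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 10.0, 50)

    def model(kd):                                  # toy stand-in for the lake model
        return 10.0 * np.exp(-kd * t)

    obs = model(0.35) + rng.normal(0.0, 0.4, t.size)    # synthetic "observations"

    kd_samples = rng.uniform(0.05, 1.0, 5000)
    sims = np.array([model(kd) for kd in kd_samples])
    nse = 1.0 - ((sims - obs) ** 2).sum(axis=1) / ((obs - obs.mean()) ** 2).sum()

    behavioural = nse > 0.7                             # GLUE acceptance threshold
    weights = nse[behavioural] - 0.7
    weights /= weights.sum()
    kd_kept = kd_samples[behavioural]

    print(f"{behavioural.sum()} behavioural runs,",
          f"weighted Kd = {np.sum(weights * kd_kept):.3f},",
          f"5-95% range {np.percentile(kd_kept, 5):.3f}-{np.percentile(kd_kept, 95):.3f}")
    ```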

  11. Parameters Estimation For A Patellofemoral Joint Of A Human Knee Using A Vector Method

    NASA Astrophysics Data System (ADS)

    Ciszkiewicz, A.; Knapczyk, J.

    2015-08-01

    Position and displacement analysis of a spherical model of a human knee joint using the vector method was presented. Sensitivity analysis and parameter estimation were performed using the evolutionary algorithm method. Computer simulations for the mechanism with estimated parameters proved the effectiveness of the prepared software. The method itself can be useful when solving problems concerning the displacement and loads analysis in the knee joint.

  12. Gamma Ray Observatory (GRO) OBC attitude error analysis

    NASA Technical Reports Server (NTRS)

    Harman, R. R.

    1990-01-01

    This analysis involves an in-depth look into the onboard computer (OBC) attitude determination algorithm. A review of the TRW error analysis and of the ground simulations necessary to understand the onboard attitude determination process is performed. In addition, a plan is generated for the in-flight calibration and validation of OBC-computed attitudes. Pre-mission expected accuracies are summarized, and the sensitivity of the onboard algorithms to sensor anomalies and filter tuning parameters is addressed.

  13. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107

  14. Uncertainty Reduction using Bayesian Inference and Sensitivity Analysis: A Sequential Approach to the NASA Langley Uncertainty Quantification Challenge

    NASA Technical Reports Server (NTRS)

    Sankararaman, Shankar

    2016-01-01

    This paper presents a computational framework for uncertainty characterization and propagation, and sensitivity analysis under the presence of aleatory and epistemic uncertainty, and develops a rigorous methodology for efficient refinement of epistemic uncertainty by identifying important epistemic variables that significantly affect the overall performance of an engineering system. The proposed methodology is illustrated using the NASA Langley Uncertainty Quantification Challenge (NASA-LUQC) problem that deals with uncertainty analysis of a generic transport model (GTM). First, Bayesian inference is used to infer subsystem-level epistemic quantities using the subsystem-level model and corresponding data. Second, tools of variance-based global sensitivity analysis are used to identify four important epistemic variables (this limitation specified in the NASA-LUQC is reflective of practical engineering situations where not all epistemic variables can be refined due to time/budget constraints) that significantly affect system-level performance. The most significant contribution of this paper is the development of the sequential refinement methodology, where epistemic variables for refinement are not identified all-at-once. Instead, only one variable is first identified, and then, Bayesian inference and global sensitivity calculations are repeated to identify the next important variable. This procedure is continued until all 4 variables are identified and the refinement in the system-level performance is computed. The advantages of the proposed sequential refinement methodology over the all-at-once uncertainty refinement approach are explained, and then applied to the NASA Langley Uncertainty Quantification Challenge problem.
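
    The variance-based ranking step can be illustrated with a brute-force estimate of first-order Sobol indices, Var(E[Y|X_i])/Var(Y), obtained by binning Monte Carlo samples of each input. The toy response below stands in for the GTM and is an assumption for illustration only.

    ```python
    # Sketch: first-order Sobol indices by binning, S_i = Var(E[Y|X_i]) / Var(Y).
    import numpy as np

    rng = np.random.default_rng(5)
    n, n_bins = 200_000, 50
    X = rng.uniform(-1.0, 1.0, size=(n, 4))

    def model(X):                              # toy response with unequal importance
        return 4.0 * X[:, 0] + 2.0 * X[:, 1] ** 2 + 0.5 * X[:, 2] + 0.1 * X[:, 3]

    Y = model(X)
    var_y = Y.var()
    edges = np.linspace(-1.0, 1.0, n_bins + 1)[1:-1]
    for i in range(4):
        bins = np.digitize(X[:, i], edges)
        cond_means = np.array([Y[bins == b].mean() for b in range(n_bins)])
        print(f"X{i + 1}: first-order Sobol index ~ {cond_means.var() / var_y:.2f}")
    ```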

  15. Probabilistic risk assessment for a loss of coolant accident in McMaster Nuclear Reactor and application of reliability physics model for modeling human reliability

    NASA Astrophysics Data System (ADS)

    Ha, Taesung

    A probabilistic risk assessment (PRA) was conducted for a loss of coolant accident (LOCA) in the McMaster Nuclear Reactor (MNR). A level 1 PRA was completed including event sequence modeling, system modeling, and quantification. To support the quantification of the accident sequences identified, data analysis using the Bayesian method and human reliability analysis (HRA) using the accident sequence evaluation procedure (ASEP) approach were performed. Since human performance in research reactors differs significantly from that in power reactors, a time-oriented HRA model (reliability physics model) was applied for the human error probability (HEP) estimation of the core relocation. This model is based on two competing random variables: phenomenological time and performance time. The response surface and direct Monte Carlo simulation with Latin hypercube sampling were applied for estimating the phenomenological time, whereas the performance time was obtained from interviews with operators. An appropriate probability distribution for the phenomenological time was assigned using statistical goodness-of-fit tests. The HEP for the core relocation was estimated from these two competing quantities: phenomenological time and operators' performance time. The sensitivity of each probability distribution in the human reliability estimation was investigated. In order to quantify the uncertainty in the predicted HEPs, a Bayesian approach was selected owing to its capability of incorporating uncertainties in the model itself and in its parameters. The HEP from the current time-oriented model was compared with that from the ASEP approach. Both results were used to evaluate the sensitivity of alternative human reliability modeling for the manual core relocation in the LOCA risk model. This exercise demonstrated the applicability of a reliability physics model supplemented with a Bayesian approach for modeling human reliability and its potential usefulness for quantifying model uncertainty through sensitivity analysis in the PRA model.
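
    A minimal sketch of the competing-random-variables idea behind the reliability physics model: the HEP is estimated as the probability that the operators' performance time exceeds the phenomenological time window, here with hypothetical lognormal distributions and a simple one-dimensional Latin hypercube sampler rather than the study's fitted distributions and response surface.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 100_000

    def lhs_uniform(n, rng):
        """1-D Latin hypercube sample of U(0,1): one point per equal-probability stratum."""
        return rng.permutation((np.arange(n) + rng.uniform(size=n)) / n)

    # Hypothetical distributions (placeholders, not the study's fitted values), times in minutes
    t_phenom = stats.lognorm.ppf(lhs_uniform(n, rng), s=0.4, scale=60.0)   # time available
    t_perform = stats.lognorm.ppf(lhs_uniform(n, rng), s=0.6, scale=30.0)  # time needed by crew

    hep = np.mean(t_perform > t_phenom)   # failure when the action takes longer than the window
    print(f"estimated human error probability: {hep:.4f}")
    ```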

  16. High Sensitivity and High Detection Specificity of Gold-Nanoparticle-Grafted Nanostructured Silicon Mass Spectrometry for Glucose Analysis.

    PubMed

    Tsao, Chia-Wen; Yang, Zhi-Jie

    2015-10-14

    Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.

  17. A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia

    NASA Astrophysics Data System (ADS)

    Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM

    2017-07-01

    Conventionally, electricity in hospital buildings is supplied by the utility grid, which relies on a fuel mix that includes coal and gas. With advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to exploit the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes economic analysis based on the Levelized Cost of Energy (LCOE) and the Total Net Present Cost (TNPC) with respect to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which provides optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are discussed so that the impact of the grid-connected PV system can be evaluated. In addition, the benefit of the Net Metering (NeM) mechanism is discussed.
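
    A minimal sketch of the two economic metrics discussed, total net present cost (TNPC) and levelized cost of energy (LCOE), assuming a flat annual cost stream; the capital cost, O&M cost, interest rate, lifetime, and energy served below are placeholders rather than HOMER outputs for the hospital case.

    ```python
    def capital_recovery_factor(rate, years):
        """CRF(i, N) = i(1+i)^N / ((1+i)^N - 1): converts a present cost into an equal annuity."""
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    def total_net_present_cost(capital, annual_cost, rate, years):
        """Upfront capital plus the discounted value of a constant annual cost stream."""
        return capital + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

    # Hypothetical inputs for a grid-connected PV system
    capital, annual_cost = 250_000.0, 6_000.0   # USD, USD/year
    rate, years = 0.06, 25                      # annual interest rate, project lifetime
    energy_served = 180_000.0                   # kWh delivered per year

    tnpc = total_net_present_cost(capital, annual_cost, rate, years)
    lcoe = capital_recovery_factor(rate, years) * tnpc / energy_served
    print(f"TNPC = {tnpc:,.0f} USD, LCOE = {lcoe:.3f} USD/kWh")
    ```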

  18. Surrogate models for efficient stability analysis of brake systems

    NASA Astrophysics Data System (ADS)

    Nechak, Lyes; Gillot, Frédéric; Besset, Sébastien; Sinou, Jean-Jacques

    2015-07-01

    This study assesses the capacity of global sensitivity analysis, combined with the kriging formalism, to support robust stability analysis of brake systems, which is too costly when performed with classical complex eigenvalue analysis (CEA) based on finite element models (FEMs). By considering a simplified brake system, global sensitivity analysis is first shown to be very helpful for understanding the effects of design parameters on the brake system's stability. This is enabled by the so-called Sobol indices, which rank design parameters with respect to their influence on the stability. Consequently, only the uncertainty of the influential parameters is taken into account in the following step, namely surrogate modelling based on kriging. The latter is then demonstrated to be an interesting alternative to FEMs, since it allows, at lower cost, an accurate estimation of the system's proportions of instability associated with the influential parameters.
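
    A minimal sketch of the surrogate step, assuming a toy stability function in place of the costly CEA/FEM computation: a kriging (Gaussian process) model is fitted to a small design of experiments over two influential parameters, and the proportion of unstable designs is then estimated cheaply by Monte Carlo on the surrogate.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    rng = np.random.default_rng(1)

    def expensive_stability_metric(x):
        """Toy stand-in for the real part of the least-stable eigenvalue from a CEA/FEM run."""
        return 0.8 * x[:, 0] - 1.2 * x[:, 1] + 0.5 * np.sin(3.0 * x[:, 0] * x[:, 1]) - 0.1

    # Small design of experiments over two influential parameters (e.g., friction, stiffness)
    X_train = rng.uniform(0.0, 1.0, size=(40, 2))
    y_train = expensive_stability_metric(X_train)

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                                  normalize_y=True)
    gp.fit(X_train, y_train)

    # Cheap Monte Carlo on the surrogate: share of parameter combinations predicted unstable
    X_mc = rng.uniform(0.0, 1.0, size=(50_000, 2))
    print(f"estimated proportion of unstable designs: {np.mean(gp.predict(X_mc) > 0.0):.3f}")
    ```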

  19. Diagnostic performance of random urine samples using albumin concentration vs ratio of albumin to creatinine for microalbuminuria screening in patients with diabetes mellitus: a systematic review and meta-analysis.

    PubMed

    Wu, Hon-Yen; Peng, Yu-Sen; Chiang, Chih-Kang; Huang, Jenq-Wen; Hung, Kuan-Yu; Wu, Kwan-Dun; Tu, Yu-Kang; Chien, Kuo-Liong

    2014-07-01

    A random urine sample measuring the albumin concentration (UAC) without simultaneously measuring the urinary creatinine is less expensive than measuring the ratio of albumin to creatinine (ACR), but comparisons of their diagnostic performance for microalbuminuria screening among patients with diabetes mellitus (DM) have not been undertaken in previous meta-analyses. To compare the diagnostic performance of the UAC vs the ACR in random urine samples for microalbuminuria screening among patients with DM. Electronic literature searches of PubMed, MEDLINE, and Scopus for English-language publications from the earliest available date of indexing through July 31, 2012. Clinical studies assessing the UAC or the ACR of random urine samples in detecting the presence of microalbuminuria among patients with DM using a urinary albumin excretion rate of 30 to 300 mg/d in 24-hour timed urine collections as the criterion standard. Bivariate random-effects models for analysis and pooling of the diagnostic performance measures across studies, as well as comparisons between different screening tests. The primary end point was the diagnostic performance measures of the UAC or the ACR in random urine samples, as well as comparisons between them. We identified 14 studies, with a total of 2078 patients; 9 studies reported on the UAC, and 12 studies reported on the ACR. Meta-analysis showed pooled sensitivities of 0.85 and 0.87 for the UAC and the ACR, respectively, and pooled specificities of 0.88 and 0.88, respectively. No differences in sensitivity (P = .70), specificity (P = .63), or diagnostic odds ratios (P = .59) between the UAC and the ACR were found. The time point of urine collection did not affect the diagnostic performance of either test. The UAC and the ACR yielded high sensitivity and specificity for the detection of microalbuminuria. Because the diagnostic performance of the UAC is comparable to that of the ACR, our findings indicate that the UAC of random urine samples may become the screening tool of choice for the population with DM, considering the rising incidence of DM and the constrained health care resources in many countries.

  20. European Multicenter Study on Analytical Performance of DxN Veris System HCV Assay.

    PubMed

    Braun, Patrick; Delgado, Rafael; Drago, Monica; Fanti, Diana; Fleury, Hervé; Gismondo, Maria Rita; Hofmann, Jörg; Izopet, Jacques; Kühn, Sebastian; Lombardi, Alessandra; Marcos, Maria Angeles; Sauné, Karine; O'Shea, Siobhan; Pérez-Rivilla, Alfredo; Ramble, John; Trimoulet, Pascale; Vila, Jordi; Whittaker, Duncan; Artus, Alain; Rhodes, Daniel W

    2017-04-01

    The analytical performance of the Veris HCV Assay for use on the new and fully automated Beckman Coulter DxN Veris Molecular Diagnostics System (DxN Veris System) was evaluated at 10 European virology laboratories. Precision, analytical sensitivity, specificity, performance with negative samples, linearity, and performance with hepatitis C virus (HCV) genotypes were evaluated. Precision for all sites showed a standard deviation (SD) of 0.22 log10 IU/ml or lower for each level tested. Analytical sensitivity determined by probit analysis was between 6.2 and 9.0 IU/ml. Specificity on 94 unique patient samples was 100%, and performance with 1,089 negative samples demonstrated 100% not-detected results. Linearity using patient samples was shown from 1.34 to 6.94 log10 IU/ml. The assay demonstrated linearity upon dilution with all HCV genotypes. The Veris HCV Assay demonstrated an analytical performance comparable to that of currently marketed HCV assays when tested across multiple European sites. Copyright © 2017 American Society for Microbiology.
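
    A minimal sketch of how an analytical sensitivity such as the 6.2-9.0 IU/ml range above is typically derived: a probit model is fitted to hit rates observed at a serial dilution panel and inverted at 95% detection probability. The panel below is hypothetical, not the study's data.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical dilution panel: concentration (IU/ml), replicates tested, replicates detected
    conc     = np.array([1.0, 2.5, 5.0, 10.0, 20.0, 40.0])
    n_tested = np.array([24, 24, 24, 24, 24, 24])
    n_hit    = np.array([4, 10, 17, 22, 24, 24])
    x = np.log10(conc)

    def neg_log_likelihood(params):
        a, b = params
        p = np.clip(norm.cdf(a + b * x), 1e-9, 1 - 1e-9)   # probit detection probability
        return -np.sum(n_hit * np.log(p) + (n_tested - n_hit) * np.log(1 - p))

    a, b = minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead").x
    lod95 = 10 ** ((norm.ppf(0.95) - a) / b)               # concentration detected 95% of the time
    print(f"probit 95% limit of detection ~ {lod95:.1f} IU/ml")
    ```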

  1. Renal Mass Biopsy to Guide Treatment Decisions for Small Incidental Renal Tumors: A Cost-effectiveness Analysis

    PubMed Central

    Gervais, Debra A.; Hartman, Rebecca I.; Harisinghani, Mukesh G.; Feldman, Adam S.; Mueller, Peter R.; Gazelle, G. Scott

    2010-01-01

    Purpose: To evaluate the effectiveness, cost, and cost-effectiveness of using renal mass biopsy to guide treatment decisions for small incidentally detected renal tumors. Materials and Methods: A decision-analytic Markov model was developed to estimate life expectancy and lifetime costs for patients with small (≤4-cm) renal tumors. Two strategies were compared: renal mass biopsy to triage patients to surgery or imaging surveillance and empiric nephron-sparing surgery. The model incorporated biopsy performance, the probability of track seeding with malignant cells, the prevalence and growth of benign and malignant tumors, treatment effectiveness and costs, and patient outcomes. An incremental cost-effectiveness analysis was performed to identify strategy preference under a willingness-to-pay threshold of $75 000 per quality-adjusted life-year (QALY). Effects of changes in key parameters on strategy preference were evaluated in sensitivity analysis. Results: Under base-case assumptions, the biopsy strategy yielded a minimally greater quality-adjusted life expectancy (4 days) than did empiric surgery at a lower lifetime cost ($3466), dominating surgery from a cost-effectiveness perspective. Over the majority of parameter ranges tested in one-way sensitivity analysis, the biopsy strategy dominated surgery or was cost-effective relative to surgery based on a $75 000-per-QALY willingness-to-pay threshold. In two-way sensitivity analysis, surgery yielded greater life expectancy when the prevalence of malignancy and propensity for biopsy-negative cancers to metastasize were both higher than expected or when the sensitivity and specificity of biopsy were both lower than expected. Conclusion: The use of biopsy to guide treatment decisions for small incidentally detected renal tumors is cost-effective and can prevent unnecessary surgery in many cases. © RSNA, 2010 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10092013/-/DC1 PMID:20720070
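
    A minimal sketch of the incremental comparison behind such a cost-effectiveness analysis, with placeholder costs and QALYs chosen only to reproduce the dominance pattern reported above (biopsy-first slightly more effective and less costly than empiric surgery); the $75,000-per-QALY willingness-to-pay threshold is taken from the abstract.

    ```python
    def compare_strategies(cost_a, qaly_a, cost_b, qaly_b, wtp=75_000.0):
        """Prefer strategy A (biopsy-first) or B (empiric surgery) under a WTP threshold."""
        d_cost, d_qaly = cost_a - cost_b, qaly_a - qaly_b
        if d_cost <= 0 and d_qaly >= 0:
            return "A dominates B (no more costly, at least as effective)"
        if d_cost >= 0 and d_qaly <= 0:
            return "B dominates A"
        icer = d_cost / d_qaly   # incremental cost-effectiveness ratio, $/QALY
        return f"ICER = {icer:,.0f} $/QALY -> prefer {'A' if icer <= wtp else 'B'}"

    # Placeholder values: A costs $3,466 less and yields slightly more QALYs, as in the base case
    print(compare_strategies(cost_a=28_000, qaly_a=10.011, cost_b=31_466, qaly_b=10.000))
    ```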

  2. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight into the data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure on overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.
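
    A minimal sketch of the final step described, attributing predictive variance to individual parameters, using a first-order (delta-method) approximation Var_i ~ (df/dtheta_i)^2 * Var(theta_i); the toy output function, trait means, and variances are illustrative and this is not PEcAn's actual decomposition.

    ```python
    import numpy as np

    def fractional_variance_contribution(model, theta_mean, theta_var, eps=1e-4):
        """First-order decomposition: each parameter's share of predictive variance."""
        theta_mean = np.asarray(theta_mean, dtype=float)
        grads = np.empty_like(theta_mean)
        for i in range(theta_mean.size):
            step = np.zeros_like(theta_mean)
            step[i] = eps * max(1.0, abs(theta_mean[i]))
            grads[i] = (model(theta_mean + step) - model(theta_mean - step)) / (2 * step[i])
        contrib = grads ** 2 * np.asarray(theta_var, dtype=float)
        return contrib / contrib.sum()

    # Toy stand-in for an ecosystem output (e.g., NPP) as a function of three trait parameters
    npp = lambda t: 2.0 * t[0] * t[1] / (1.0 + t[2])
    fractions = fractional_variance_contribution(npp, theta_mean=[1.5, 0.8, 0.3],
                                                 theta_var=[0.10, 0.40, 0.05])
    print("fractional contribution to predictive variance:", np.round(fractions, 3))
    ```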

  3. A spectral power analysis of driving behavior changes during the transition from nondistraction to distraction.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-11-17

    This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. The fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (pre-start of visual-manual task) and distracted (post-start of visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from both domains showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis of yaw rate in both low and high frequency bandwidths showed consistent results: higher variation values were observed during distracted driving than during nondistracted driving. This study suggests that driver state detection needs to consider the behavior changes during the pre-start periods, instead of only focusing on periods with physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal controls. In addition, frequency domain analyses proved to be a more robust and consistent method for assessing driving performance than time domain analyses.
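
    A minimal sketch contrasting the two measures used above, relative spectral power in the 0-0.5 Hz band (frequency domain) versus the standard deviation over consecutive 10 s windows (time domain), applied to a synthetic yaw-rate signal; the 10 Hz sampling rate and the signal itself are assumptions.

    ```python
    import numpy as np

    fs = 10.0                                    # assumed sampling rate, Hz
    t = np.arange(0, 60, 1 / fs)                 # one minute of driving
    rng = np.random.default_rng(3)
    yaw_rate = 0.2 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(t.size)

    def relative_band_power(x, fs, f_lo=0.0, f_hi=0.5):
        """Share of total periodogram power falling in [f_lo, f_hi]."""
        x = x - x.mean()
        psd = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(x.size, d=1 / fs)
        band = (freqs >= f_lo) & (freqs <= f_hi)
        return psd[band].sum() / psd.sum()

    def windowed_std(x, fs, window_s=10.0):
        """Standard deviation computed in consecutive, non-overlapping 10 s windows."""
        n = int(window_s * fs)
        return np.array([x[i:i + n].std(ddof=1) for i in range(0, x.size - n + 1, n)])

    print(f"relative power 0-0.5 Hz: {relative_band_power(yaw_rate, fs):.3f}")
    print("10 s window SDs:", np.round(windowed_std(yaw_rate, fs), 3))
    ```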

  4. Ultra-performance liquid chromatography tandem mass-spectrometry (uplc-ms/ms) for the rapid, simultaneous analysis of thiamin, riboflavin, flavin adenine dinucleotide, nicotinamide and pyridoxal in human milk

    USDA-ARS?s Scientific Manuscript database

    A novel, rapid and sensitive Ultra Performance Liquid-Chromatography tandem Mass-Spectrometry (UPLC-MS/MS) method for the simultaneous determination of several B-vitamins in human milk was developed. Resolution by retention time or multiple reaction monitoring (MRM) for thiamin, riboflavin, flavin a...

  5. Simulating soil moisture change in a semiarid rangeland watershed with a process-based water-balance model

    Treesearch

    Howard Evan Canfield; Vicente L. Lopes

    2000-01-01

    A process-based, simulation model for evaporation, soil water and streamflow (BROOK903) was used to estimate soil moisture change on a semiarid rangeland watershed in southeastern Arizona. A sensitivity analysis was performed to select parameters affecting ET and soil moisture for calibration. Automatic parameter calibration was performed using a procedure based on a...

  6. Space station electrical power system availability study

    NASA Technical Reports Server (NTRS)

    Turnquist, Scott R.; Twombly, Mark A.

    1988-01-01

    ARINC Research Corporation performed a preliminary reliability and maintainability (RAM) analysis of the NASA space station Electric Power System (EPS). The analysis was performed using the ARINC Research-developed UNIRAM RAM assessment methodology and software program. The analysis was performed in two phases: EPS modeling and EPS RAM assessment. The EPS was modeled in four parts: the insolar power generation system, the eclipse power generation system, the power management and distribution system (both ring and radial power distribution control unit (PDCU) architectures), and the power distribution to the inner keel PDCUs. The EPS RAM assessment was conducted in five steps: the use of UNIRAM to perform baseline EPS model analyses and to determine the orbital replacement unit (ORU) criticalities; the determination of EPS sensitivity to on-orbit sparing of ORUs and the provision of an indication of which ORUs may need to be spared on-orbit; the determination of EPS sensitivity to changes in ORU reliability; the determination of the expected annual number of ORU failures; and the integration of the power generation system model results with the distribution system model results to assess the full EPS. Conclusions were drawn and recommendations were made.
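
    A minimal sketch of the steady-state availability arithmetic that underlies this kind of RAM assessment, with hypothetical ORU failure and repair parameters and a toy series/parallel block structure; it is not the UNIRAM methodology itself.

    ```python
    def availability(mtbf_h, mttr_h):
        """A = MTBF / (MTBF + MTTR) for a single orbital replacement unit (ORU)."""
        return mtbf_h / (mtbf_h + mttr_h)

    def series(avails):
        """All blocks required: availabilities multiply."""
        out = 1.0
        for a in avails:
            out *= a
        return out

    def parallel(avails):
        """Any one block sufficient (redundancy): 1 minus the product of unavailabilities."""
        out = 1.0
        for a in avails:
            out *= 1.0 - a
        return 1.0 - out

    # Hypothetical ORUs: (MTBF hours, MTTR hours)
    pdcu = availability(50_000, 200)
    battery_string = availability(30_000, 400)
    solar_wing = availability(80_000, 600)

    # Toy architecture: two redundant battery strings in series with a PDCU and a solar wing
    eps = series([parallel([battery_string, battery_string]), pdcu, solar_wing])
    failures_per_year = 8760 / 50_000 + 2 * 8760 / 30_000 + 8760 / 80_000
    print(f"EPS availability ~ {eps:.4f}; expected ORU failures per year ~ {failures_per_year:.2f}")
    ```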

  7. Detection of AGXT gene mutations by denaturing high-performance liquid chromatography for diagnosis of hyperoxaluria type 1.

    PubMed

    Pirulli, D; Giordano, M; Lessi, M; Spanò, A; Puzzer, D; Zezlina, S; Boniotto, M; Crovella, S; Florian, F; Marangella, M; Momigliano-Richiardi, P; Savoldi, S; Amoroso, A

    2001-06-01

    Primary hyperoxaluria type 1 is an autosomal recessive disorder of glyoxylate metabolism, caused by a deficiency of alanine:glyoxylate aminotransferase, which is encoded by a single-copy gene (AGXT). The aim of this research was to standardize denaturing high-performance liquid chromatography, a new, sensitive, relatively inexpensive, and automated technique, for the detection of AGXT mutations. Denaturing high-performance liquid chromatography was used to analyze, in a blinded fashion, the AGXT gene in 20 unrelated Italian patients with primary hyperoxaluria type 1 previously studied by other standard methods (single-strand conformation polymorphism analysis and direct sequencing) and in 50 controls. Denaturing high-performance liquid chromatography allowed us to identify 13 mutations and the polymorphism at position 154 in exon I of the AGXT gene. Hence the method is more sensitive and less time consuming than single-strand conformation polymorphism analysis for the detection of AGXT mutations, thus representing a useful and reliable tool for detecting the mutations responsible for primary hyperoxaluria type 1. The new technology could also be helpful in the search for healthy carriers of AGXT mutations amongst family members and their partners, and for screening of AGXT polymorphisms in patients with nephrolithiasis and in healthy populations.

  8. The Efficacy of Guanxinning Injection in Treating Angina Pectoris: Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha

    2013-01-01

    Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, with longer follow-up periods, larger sample sizes, and multicentre/multicountry designs to enable more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167

  9. Diagnostic performance of body mass index to identify excess body fat in children with cerebral palsy.

    PubMed

    Duran, Ibrahim; Schulze, Josefa; Martakis, Kyriakos; Stark, Christina; Schoenau, Eckhard

    2018-03-07

    To assess the diagnostic performance of body mass index (BMI) cut-off values according to recommendations of the World Health Organization (WHO), the World Obesity Federation (WOF), and the German Society for Adiposity (DAG) to identify excess body fat in children with cerebral palsy (CP). The present study was a monocentric retrospective analysis of prospectively collected data among children and adolescents with CP participating in a rehabilitation programme. Excess body fat was defined as a body fat percentage above the 85th centile assessed by dual-energy X-ray absorptiometry. In total, 329 children (181 males, 148 females) with CP were eligible for analysis. The mean age was 12 years 4 months (standard deviation 2y 9mo). The BMI cut-off values for 'overweight' according to the WHO, WOF, and DAG showed the following sensitivities and specificities for the prediction of excess body fat in our population: WHO: sensitivity 0.768 (95% confidence interval [CI] 0.636-0.870), specificity 0.894 (95% CI 0.851-0.928); WOF: sensitivity 0.696 (95% CI 0.559-0.812), specificity 0.934 (95% CI 0.898-0.960); DAG: sensitivity 0.411 (95% CI 0.281-0.550), specificity 0.993 (95% CI 0.974-0.999). Body mass index showed high specificity, but low sensitivity in children with CP. Thus, 'normal-weight obese' children with CP are overlooked when excess body fat is assessed using BMI alone. Excess body fat in children with cerebral palsy (CP) is less common than previously reported. Body mass index (BMI) had high specificity but low sensitivity in detecting excess body fat in children with CP. BMI evaluation criteria of the German Society for Adiposity could be improved in children with CP. © 2018 Mac Keith Press.
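
    A minimal sketch of how a cut-off's sensitivity and specificity, with confidence intervals, are computed from a 2x2 table; the counts below are hypothetical, and Wilson score intervals are used for illustration, which may differ slightly from the intervals quoted above.

    ```python
    import math

    def wilson_ci(successes, n, z=1.96):
        """Wilson score 95% confidence interval for a binomial proportion."""
        p = successes / n
        denom = 1 + z ** 2 / n
        centre = (p + z ** 2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
        return centre - half, centre + half

    # Hypothetical counts for a BMI cut-off against DXA-defined excess body fat
    tp, fp, fn, tn = 43, 29, 13, 244
    sens, spec = tp / (tp + fn), tn / (tn + fp)
    sens_ci, spec_ci = wilson_ci(tp, tp + fn), wilson_ci(tn, tn + fp)
    print(f"sensitivity {sens:.3f} (95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f})")
    print(f"specificity {spec:.3f} (95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f})")
    ```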

  10. Time to angiographic reperfusion in acute ischemic stroke: decision analysis.

    PubMed

    Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H

    2014-12-01

    Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment to intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.
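
    A minimal sketch of the one-way sensitivity analysis idea: hold the intravenous-only utility fixed, let the endovascular utility decline with time to angiographic reperfusion, and locate the crossover. The linear decline and its slope are assumptions calibrated only so that the crossover lands near the 347-minute threshold reported above; they are not the trial model.

    ```python
    import numpy as np

    qaly_iv_only = 5.42        # expected utility of IV r-tPA alone (from the abstract)
    qaly_endo_base = 6.38      # endovascular utility at an assumed base-case reperfusion time

    def qaly_endovascular(time_min, t_base=240.0, slope_per_min=0.009):
        """Assumed linear decline of endovascular benefit with time to reperfusion."""
        return qaly_endo_base - slope_per_min * (time_min - t_base)

    times = np.arange(180, 420)
    crossover = times[np.argmax(qaly_endovascular(times) < qaly_iv_only)]
    print(f"endovascular preferred when time to reperfusion is below ~{crossover} minutes")
    ```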

  11. Drought prediction using co-active neuro-fuzzy inference system, validation, and uncertainty analysis (case study: Birjand, Iran)

    NASA Astrophysics Data System (ADS)

    Memarian, Hadi; Pourreza Bilondi, Mohsen; Rezaei, Majid

    2016-08-01

    This work aims to assess the capability of the co-active neuro-fuzzy inference system (CANFIS) for drought forecasting in Birjand, Iran, through the combination of global climatic signals with rainfall and lagged values of the Standardized Precipitation Index (SPI). Using stepwise regression and correlation analyses, the signals NINO 1+2, NINO 3, the Multivariate ENSO Index, the Tropical Southern Atlantic index, the Atlantic Multi-decadal Oscillation index, and NINO 3.4 were identified as the signals most relevant to drought in Birjand. Based on the results of the stepwise regression analysis, and given processor limitations, eight models were extracted for further processing by CANFIS. The metrics P-factor and D-factor were utilized for uncertainty analysis, based on the sequential uncertainty fitting algorithm. Sensitivity analysis showed that for all models, the NINO indices and the rainfall variable had the largest impact on network performance. In model 4 (the model with the lowest error during training and testing), NINO 1+2(t-5), with an average sensitivity of 0.7, showed the highest impact on network performance. The variables rainfall, NINO 1+2(t), and NINO 3(t-6) were the next most influential, with average sensitivities of 0.59, 0.28, and 0.28, respectively. The findings based on network performance metrics indicated that the global indices with a time lag showed a better correlation with the El Niño Southern Oscillation (ENSO). Uncertainty analysis of model 4 demonstrated that 68% of the observed data were bracketed by the 95PPU, and the D-factor value (0.79) was also within a reasonable range. Therefore, the fourth model, with a combination of the input variables NINO 1+2 (with 5 months of lag and without any lag), monthly rainfall, and NINO 3 (with 6 months of lag), and a correlation coefficient of 0.903 between observed and simulated SPI, was selected as the most accurate model for drought forecasting using CANFIS in the climatic region of Birjand.
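
    A minimal sketch of the two uncertainty metrics cited, the P-factor (fraction of observations falling inside the 95% prediction uncertainty band, 95PPU) and the D-factor (average band width normalized by the standard deviation of the observations), computed on a synthetic ensemble rather than the CANFIS output.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_time, n_ens = 120, 500
    signal = np.sin(np.linspace(0, 6 * np.pi, n_time))                  # stand-in for SPI dynamics
    ensemble = signal + rng.normal(0.0, 0.4, size=(n_ens, n_time))      # synthetic simulations
    observed = signal + rng.normal(0.0, 0.3, size=n_time)               # synthetic observations

    lower = np.percentile(ensemble, 2.5, axis=0)                        # 95PPU band
    upper = np.percentile(ensemble, 97.5, axis=0)

    p_factor = np.mean((observed >= lower) & (observed <= upper))       # share of obs in the band
    d_factor = np.mean(upper - lower) / observed.std(ddof=1)            # normalized band width
    print(f"P-factor = {p_factor:.2f}, D-factor = {d_factor:.2f}")
    ```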

  12. Analysis of performance losses of direct ethanol fuel cells with the aid of a reference electrode

    NASA Astrophysics Data System (ADS)

    Li, Guangchun; Pickup, Peter G.

    The performances of direct ethanol fuel cells with different anode catalysts, different ethanol concentrations, and at different operating temperatures have been studied. The performance losses of the cell have been separated into individual electrode performance losses with the aid of a reference electrode, ethanol crossover has been quantified, and CO2 and acetic acid production have been measured by titration. It has been shown that the cell performance strongly depends on the anode catalyst, ethanol concentration, and operating temperature. It was found that the cathode and anode exhibit different dependences on ethanol concentration and operating temperature. The performance of the cathode is very sensitive to the rate of ethanol crossover. Product analysis provides insights into the mechanisms of electro-oxidation of ethanol.

  13. A novel high sensitivity HPLC assay for topiramate, using 4-chloro-7-nitrobenzofurazan as pre-column fluorescence derivatizing agent.

    PubMed

    Bahrami, Gholamreza; Mohammadi, Bahareh

    2007-05-01

    A new, sensitive and simple high-performance liquid chromatographic method for the analysis of topiramate, an antiepileptic agent, using 4-chloro-7-nitrobenzofurazan as a pre-column derivatization agent is described. Following liquid-liquid extraction of topiramate and an internal standard (amlodipine) from human serum, derivatization of the drugs was performed with the labeling agent in the presence of dichloromethane, methanol, acetonitrile and borate buffer (0.05 M; pH 10.6). A mixture of sodium phosphate buffer (0.05 M; pH 2.4):methanol (35:65 v/v) was used as the mobile phase, and chromatographic separation was achieved using a Shimpack CLC-C18 (150 x 4.6 mm) column. A limit of quantification of 0.01 microg/mL was obtained, and the procedure was validated over the concentration range of 0.01 to 12.8 microg/mL. No interferences were found from commonly co-administered antiepileptic drugs including phenytoin, phenobarbital, carbamazepine, lamotrigine, zonisamide, primidone, gabapentin, vigabatrin, and ethosuximide. Method performance was assessed in terms of specificity, sensitivity, linearity, precision, accuracy and stability, and the method was shown to be accurate, with intra-day and inter-day accuracy from -3.4 to 10%, and precise, with intra-day and inter-day precision from 1.1 to 18%.
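
    A minimal sketch of how the quoted intra-day accuracy and precision figures are typically obtained from replicate quality-control measurements, as percent bias and percent coefficient of variation; the replicate values and QC level below are hypothetical.

    ```python
    import statistics

    def accuracy_and_precision(measured, nominal):
        """Return (% bias relative to nominal, % coefficient of variation) for one QC level."""
        mean = statistics.mean(measured)
        bias_pct = 100.0 * (mean - nominal) / nominal
        cv_pct = 100.0 * statistics.stdev(measured) / mean
        return bias_pct, cv_pct

    # Hypothetical intra-day replicates (microg/mL) at a 0.8 microg/mL QC level
    replicates = [0.78, 0.83, 0.81, 0.76, 0.85, 0.80]
    bias, cv = accuracy_and_precision(replicates, nominal=0.8)
    print(f"accuracy (bias): {bias:+.1f}%, precision (CV): {cv:.1f}%")
    ```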

  14. Radiation imaging apparatus

    DOEpatents

    Anger, Hal O.; Martin, Donn C.; Lampton, Michael L.

    1983-01-01

    A radiation imaging system using a charge multiplier and a position sensitive anode in the form of periodically arranged sets of interconnected anode regions for detecting the position of the centroid of a charge cloud arriving thereat from the charge multiplier. Various forms of improved position sensitive anodes having single plane electrode connections are disclosed. Various analog and digital signal processing systems are disclosed, including systems which use the fast response of microchannel plates, anodes and preamps to perform scintillation pulse height analysis digitally.

  15. A comparative study of the sensitivity of diffusion-related parameters obtained from diffusion tensor imaging, diffusional kurtosis imaging, q-space analysis and bi-exponential modelling in the early disease course (24 h) of hyperacute (6 h) ischemic stroke patients.

    PubMed

    Duchêne, Gaëtan; Peeters, Frank; Peeters, André; Duprez, Thierry

    2017-08-01

    To compare the sensitivity and early temporal changes of diffusion parameters obtained from diffusion tensor imaging (DTI), diffusional kurtosis imaging (DKI), q-space analysis (QSA) and bi-exponential modelling in hyperacute stroke patients. A single investigational acquisition allowing the four diffusion analyses was performed on seven hyperacute stroke patients with a 3T system. The percentage changes between ipsi- and contralateral regions were compared at admission and 24 h later. Two of the seven patients were imaged every 6 h during this period. Kurtoses from both DKI and QSA were the most sensitive of the tested diffusion parameters in the few hours following ischemia. An early increase-maximum-decrease pattern of evolution was highlighted during the 24-h period for all parameters proportional to diffusion coefficients. A similar pattern was observed for both kurtoses in only one of the two patients. Our comparison was performed using identical diffusion encoding timings and on patients at the same stage of their condition. Although preliminary, our findings confirm those of previous studies that showed enhanced sensitivity of kurtosis. A fine time mapping of diffusion metrics in hyperacute stroke patients was presented, which advocates for further investigations in larger animal or human cohorts.

  16. Electrophysiological Correlates of Individual Differences in Perception of Audiovisual Temporal Asynchrony

    PubMed Central

    Kaganovich, Natalya; Schumaker, Jennifer

    2016-01-01

    Sensitivity to the temporal relationship between auditory and visual stimuli is key to efficient audiovisual integration. However, even adults vary greatly in their ability to detect audiovisual temporal asynchrony. What underlies this variability is currently unknown. We recorded event-related potentials (ERPs) while participants performed a simultaneity judgment task on a range of audiovisual (AV) and visual-auditory (VA) stimulus onset asynchronies (SOAs) and compared ERP responses in good and poor performers to the 200 ms SOA, which showed the largest individual variability in the number of synchronous perceptions. Analysis of ERPs to the VA200 stimulus yielded no significant results. However, those individuals who were more sensitive to the AV200 SOA had significantly more positive voltage between 210 and 270 ms following the sound onset. In a follow-up analysis, we showed that the mean voltage within this window predicted approximately 36% of variability in sensitivity to AV temporal asynchrony in a larger group of participants. The relationship between the ERP measure in the 210-270 ms window and accuracy on the simultaneity judgment task also held for two other AV SOAs with significant individual variability - 100 and 300 ms. Because the identified window was time-locked to the onset of sound in the AV stimulus, we conclude that sensitivity to AV temporal asynchrony is shaped to a large extent by the efficiency in the neural encoding of sound onsets. PMID:27094850

  17. Prospective evaluation of free-breathing diffusion-weighted imaging for the detection of inflammatory bowel disease with MR enterography in childhood population

    PubMed Central

    Dubron, Céline; Avni, Freddy; Boutry, Nathalie; Turck, Dominique; Duhamel, Alain

    2016-01-01

    Objective: To evaluate prospectively the performance of diffusion-weighted imaging (DWI) for the detection of active lesions on MR enterography (MRE) in children with inflammatory bowel disease (IBD). Methods: MRE of 48 children (mean age 13 years) with suspected or known IBD were blindly analysed by 2 independent readers for the presence of active lesions. Two sets of imaging including DWI and gadolinium-enhanced imaging (GEI) were reviewed. A reader consensus was obtained. The gold standard was histopathological findings. In patient-level analysis and segment-level analysis, sensitivity and specificity were calculated for DWI and GEI and compared using McNemar's test or logistic random-effects models. Results: At least 1 active lesion was confirmed in 42 (87.5%) children. Sensitivity and specificity for the detection of at least one lesion were 88.1% (95% CI, 74.3–96.1) and 83.3% (95% CI, 35.9–99.6), respectively, for DWI and 66.7% (95% CI, 50.4–80.4) and 83.3% (95% CI, 35.9–99.6), respectively, for GEI. In segment-level analysis, sensitivity and specificity for the detection of specific segment lesions were 62.5% (95% CI, 48.1–75) and 97.1% (95% CI, 93.5–98.7), respectively, for DWI and 45.7% (95% CI, 30.8–61.3) and 98.2% (95% CI, 95.3–99.4), respectively, for GEI. The sensitivity of DWI was significantly better than that of GEI per patient (p = 0.004) and per segment (p = 0.028). Conclusion: DWI demonstrates better performance than GEI for the detection of active lesions in children with IBD. Advances in knowledge: Examination with no intravenous injection–DWI can replace T1 weighted images when paediatric patients are screened with MRE for IBD. Examination performed in free breathing is better tolerated by children. PMID:26838954

  18. Prospective evaluation of free-breathing diffusion-weighted imaging for the detection of inflammatory bowel disease with MR enterography in childhood population.

    PubMed

    Dubron, Céline; Avni, Freddy; Boutry, Nathalie; Turck, Dominique; Duhamel, Alain; Amzallag-Bellenger, Elisa

    2016-01-01

    To evaluate prospectively the performance of diffusion-weighted imaging (DWI) for the detection of active lesions on MR enterography (MRE) in children with inflammatory bowel disease (IBD). MRE of 48 children (mean age 13 years) with suspected or known IBD were blindly analysed by 2 independent readers for the presence of active lesions. Two sets of imaging including DWI and gadolinium-enhanced imaging (GEI) were reviewed. A reader consensus was obtained. The gold standard was histopathological findings. In patient-level analysis and segment-level analysis, sensitivity and specificity were calculated for DWI and GEI and compared using McNemar's test or logistic random-effects models. At least 1 active lesion was confirmed in 42 (87.5%) children. Sensitivity and specificity for the detection of at least one lesion were 88.1% (95% CI, 74.3-96.1) and 83.3% (95% CI, 35.9-99.6), respectively, for DWI and 66.7% (95% CI, 50.4-80.4) and 83.3% (95% CI, 35.9-99.6), respectively, for GEI. In segment-level analysis, sensitivity and specificity for the detection of specific segment lesions were 62.5% (95% CI, 48.1-75) and 97.1% (95% CI, 93.5-98.7), respectively, for DWI and 45.7% (95% CI, 30.8-61.3) and 98.2% (95% CI, 95.3-99.4), respectively, for GEI. The sensitivity of DWI was significantly better than that of GEI per patient (p = 0.004) and per segment (p = 0.028). DWI demonstrates better performance than GEI for the detection of active lesions in children with IBD. Examination with no intravenous injection-DWI can replace T1 weighted images when paediatric patients are screened with MRE for IBD. Examination performed in free breathing is better tolerated by children.

  19. Modeling Nitrogen Dynamics in a Waste Stabilization Pond System Using Flexible Modeling Environment with MCMC.

    PubMed

    Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R

    2017-07-12

    This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter sensitive to the model's performance for the ON-N and NH₃-N simulations, whereas the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to the ON-N and NO₃-N simulations, as measured using global sensitivity analysis.
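
    A minimal sketch of the goodness-of-fit metrics quoted above, the Nash-Sutcliffe efficiency and the correlation coefficient, evaluated on placeholder observed and simulated nitrogen series rather than the study's data.

    ```python
    import numpy as np

    def nash_sutcliffe(obs, sim):
        """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    # Placeholder NH3-N concentrations (mg/L): weekly observations vs. model output
    obs = np.array([12.1, 10.4, 9.8, 8.9, 7.5, 7.9, 6.4, 5.8])
    sim = np.array([11.5, 10.9, 9.1, 8.2, 7.9, 7.1, 6.8, 6.1])

    print(f"NSE = {nash_sutcliffe(obs, sim):.2f}, r = {np.corrcoef(obs, sim)[0, 1]:.2f}")
    ```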

  20. Rock-dwelling lizards exhibit less sensitivity of sprint speed to increases in substrate rugosity.

    PubMed

    Collins, Clint E; Self, Jessica D; Anderson, Roger A; McBrayer, Lance D

    2013-06-01

    Effectively moving across variable substrates is important to all terrestrial animals. The effects of substrates on lizard performance have ecological ramifications including the partitioning of habitat according to sprinting ability on different surfaces. This phenomenon is known as sprint sensitivity, or the decrease in sprint speed due to change in substrate. However, sprint sensitivity has been characterized only in arboreal Anolis lizards. Our study measured sensitivity to substrate rugosity among six lizard species that occupy rocky, sandy, and/or arboreal habitats. Lizards that use rocky habitats are less sensitive to changes in substrate rugosity, followed by arboreal lizards, and then by lizards that use sandy habitats. We infer from comparative phylogenetic analysis that forelimb, chest, and tail dimensions are important external morphological features related to sensitivity to changes in substrate rugosity. Copyright © 2013 Elsevier GmbH. All rights reserved.
