Sample records for sensitivity analysis, including:

  1. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundation in structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.
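
    The adjoint-variable idea above can be illustrated with a toy problem. The sketch below is not the IFAD implementation; the two-spring system and its stiffness-per-thickness coefficients are invented for illustration. It computes the compliance sensitivity dc/dt_i = -u^T (dK/dt_i) u from the primal solution alone, mirroring the point that only postprocessing data are needed, and checks it against a finite difference.

    ```python
    import numpy as np

    C = (100.0, 150.0)                       # hypothetical stiffness-per-thickness coefficients

    def assemble_K(t):
        """Stiffness of two springs in series, fixed at the left end; k_i = C_i * t_i."""
        k1, k2 = C[0] * t[0], C[1] * t[1]
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]])

    t = np.array([1.0, 2.0])                 # design variables (thicknesses)
    f = np.array([0.0, 10.0])                # load vector
    u = np.linalg.solve(assemble_K(t), f)    # primal finite element solve
    compliance = f @ u

    # Adjoint (here self-adjoint) sensitivity, using only postprocessing data u:
    # dc/dt_i = -u^T (dK/dt_i) u
    dK = [np.array([[C[0], 0.0], [0.0, 0.0]]),
          np.array([[C[1], -C[1]], [-C[1], C[1]]])]
    grad_adjoint = np.array([-u @ dKi @ u for dKi in dK])

    # Finite-difference check (the perturbation-size uncertainty the abstract mentions)
    eps = 1e-6
    grad_fd = np.zeros(2)
    for i in range(2):
        tp = t.copy(); tp[i] += eps
        grad_fd[i] = (f @ np.linalg.solve(assemble_K(tp), f) - compliance) / eps

    print("adjoint gradient          :", grad_adjoint)
    print("finite-difference gradient:", grad_fd)
    ```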

  2. Sensitivity analysis of the add-on price estimate for the edge-defined film-fed growth process

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.; Kachare, A. H.

    1981-01-01

    The analysis is in terms of cost parameters and production parameters. The cost parameters include equipment, space, direct labor, materials, and utilities. The production parameters include growth rate, process yield, and duty cycle. A computer program was developed specifically to do the sensitivity analysis.

  3. Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.; Wang, Qiqi

    2018-02-01

    Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
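
    To see the breakdown the abstract refers to, the sketch below (a generic illustration, not the MSS or LSS algorithm) estimates the sensitivity of the long-time average of z to the parameter rho in the Lorenz system by naive finite differences; the estimates do not settle down as the perturbation shrinks, because the sampling error of the chaotic time average dominates the O(eps) signal.

    ```python
    import numpy as np

    def lorenz_step(x, rho, sigma=10.0, beta=8.0/3.0, dt=0.01):
        """One RK4 step of the Lorenz system."""
        def f(s):
            return np.array([sigma * (s[1] - s[0]),
                             s[0] * (rho - s[2]) - s[1],
                             s[0] * s[1] - beta * s[2]])
        k1 = f(x); k2 = f(x + 0.5 * dt * k1); k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def mean_z(rho, n_steps=100_000, n_spinup=10_000):
        """Finite-time (hence noisy) long-time average of z(t) for a given rho."""
        x = np.array([1.0, 1.0, 1.0])
        total = 0.0
        for i in range(n_steps):
            x = lorenz_step(x, rho)
            if i >= n_spinup:
                total += x[2]
        return total / (n_steps - n_spinup)

    J0 = mean_z(28.0)
    for eps in (1.0, 0.1, 0.01):
        print(f"eps = {eps:5.2f}: finite-difference d<z>/drho = {(mean_z(28.0 + eps) - J0) / eps:.3f}")
    # The estimates fail to converge as eps decreases: the sampling error of the
    # chaotic average swamps the signal, which is the failure shadowing methods address.
    ```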

  4. SEP thrust subsystem performance sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Atkins, K. L.; Sauer, C. G., Jr.; Kerrisk, D. J.

    1973-01-01

    This is a two-part report on solar electric propulsion (SEP) performance sensitivity analysis. The first part describes the preliminary analysis of the SEP thrust system performance for an Encke rendezvous mission. A detailed description of thrust subsystem hardware tolerances on mission performance is included together with nominal spacecraft parameters based on these tolerances. The second part describes the method of analysis and graphical techniques used in generating the data for Part 1. Included is a description of both the trajectory program used and the additional software developed for this analysis. Part 2 also includes a comprehensive description of the use of the graphical techniques employed in this performance analysis.

  5. Benchmark Data Set for Wheat Growth Models: Field Experiments and AgMIP Multi-Model Simulations.

    NASA Technical Reports Server (NTRS)

    Asseng, S.; Ewert, F.; Martre, P.; Rosenzweig, C.; Jones, J. W.; Hatfield, J. L.; Ruane, A. C.; Boote, K. J.; Thorburn, P.J.; Rotter, R. P.

    2015-01-01

    The data set includes a current representative management treatment from detailed, quality-tested sentinel field experiments with wheat from four contrasting environments including Australia, The Netherlands, India and Argentina. Measurements include local daily climate data (solar radiation, maximum and minimum temperature, precipitation, surface wind, dew point temperature, relative humidity, and vapor pressure), soil characteristics, frequent growth, nitrogen in crop and soil, crop and soil water and yield components. Simulations include results from 27 wheat models and a sensitivity analysis with 26 models and 30 years (1981-2010) for each location, for elevated atmospheric CO2 and temperature changes, a heat stress sensitivity analysis at anthesis, and a sensitivity analysis with soil and crop management variations and a Global Climate Model end-century scenario.

  6. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems]

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.
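
    The generalized sensitivity analysis sketched in this abstract (Monte-Carlo sampling combined with a problem-defining behaviour criterion) can be mocked up as below. The toy model, parameter ranges, and acceptance band are invented, and the Kolmogorov-Smirnov statistic is used as one common way to rank how strongly each parameter separates "behaviour" from "non-behaviour" runs.

    ```python
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Toy model with three uncertain parameters; only a and b really matter here.
    def model(a, b, c):
        return a * np.exp(-b) + 0.01 * c

    n = 5000
    params = {"a": rng.uniform(0, 2, n),
              "b": rng.uniform(0, 2, n),
              "c": rng.uniform(0, 2, n)}
    y = model(params["a"], params["b"], params["c"])

    # Problem-defining "behaviour": the simulated output falls inside an acceptable band.
    behaviour = (y > 0.4) & (y < 0.8)

    # Rank parameters by how strongly the behaviour split separates their distributions.
    for name, values in params.items():
        stat, p = ks_2samp(values[behaviour], values[~behaviour])
        print(f"{name}: KS statistic = {stat:.3f} (p = {p:.2e})")
    # Large KS statistics flag parameters whose values control acceptable behaviour.
    ```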

  7. Automatic differentiation for design sensitivity analysis of structural systems using multiple processors

    NASA Technical Reports Server (NTRS)

    Nguyen, Duc T.; Storaasli, Olaf O.; Qin, Jiangning; Qamar, Ramzi

    1994-01-01

    An automatic differentiation tool (ADIFOR) is incorporated into a finite element based structural analysis program for shape and non-shape design sensitivity analysis of structural systems. The entire analysis and sensitivity procedures are parallelized and vectorized for high performance computation. Small scale examples to verify the accuracy of the proposed program and a medium scale example to demonstrate the parallel vector performance on multiple CRAY C90 processors are included.
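
    ADIFOR itself is a source-to-source Fortran transformation, but the core idea of forward-mode automatic differentiation can be shown in a few lines of operator overloading. The sketch below is a generic illustration of that idea, not the tool used in the paper.

    ```python
    import math

    class Dual:
        """Minimal forward-mode AD value: carries f and df/dx together."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
        __rmul__ = __mul__

    def dual_sin(x):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)

    # d/dx [ x * sin(x) + 3x ] at x = 1.2, propagated alongside the value
    x = Dual(1.2, 1.0)                 # seed the input derivative with 1
    y = x * dual_sin(x) + 3 * x
    print("value     :", y.val)        # 1.2*sin(1.2) + 3.6
    print("derivative:", y.der)        # sin(1.2) + 1.2*cos(1.2) + 3
    ```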

  8. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications. [computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1992-01-01

    Fundamental equations of aerodynamic sensitivity analysis and approximate analysis for the two dimensional thin layer Navier-Stokes equations are reviewed, and special boundary condition considerations necessary to apply these equations to isolated lifting airfoils on 'C' and 'O' meshes are discussed in detail. An efficient strategy which is based on the finite element method and an elastic membrane representation of the computational domain is successfully tested, which circumvents the costly 'brute force' method of obtaining grid sensitivity derivatives, and is also useful in mesh regeneration. The issue of turbulence modeling is addressed in a preliminary study. Aerodynamic shape sensitivity derivatives are efficiently calculated, and their accuracy is validated on two viscous test problems, including: (1) internal flow through a double throat nozzle, and (2) external flow over a NACA 4-digit airfoil. An automated aerodynamic design optimization strategy is outlined which includes the use of a design optimization program, an aerodynamic flow analysis code, an aerodynamic sensitivity and approximate analysis code, and a mesh regeneration and grid sensitivity analysis code. Application of the optimization methodology to the two test problems in each case resulted in a new design having a significantly improved performance in the aerodynamic response of interest.

  9. A discourse on sensitivity analysis for discretely-modeled structures

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.; Haftka, Raphael T.

    1991-01-01

    A descriptive review is presented of the most recent methods for performing sensitivity analysis of the structural behavior of discretely-modeled systems. The methods are generally but not exclusively aimed at finite element modeled structures. Topics included are: selections of finite difference step sizes; special consideration for finite difference sensitivity of iteratively-solved response problems; first and second derivatives of static structural response; sensitivity of stresses; nonlinear static response sensitivity; eigenvalue and eigenvector sensitivities for both distinct and repeated eigenvalues; and sensitivity of transient response for both linear and nonlinear structural response.
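
    One of the topics listed, eigenvalue sensitivity for distinct eigenvalues, reduces to a one-line formula once the eigenvectors are mass-normalized: dlambda/dp = phi^T (dK/dp - lambda dM/dp) phi. The sketch below uses small made-up matrices (not from the paper) and verifies the formula against a finite difference.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Small symmetric K(p) and M(p) depending on a single parameter p (illustrative only)
    def K(p):  return np.array([[2.0 + p, -1.0], [-1.0, 2.0]])
    def M(p):  return np.array([[1.0, 0.0], [0.0, 1.0 + 0.5 * p]])

    p = 0.3
    lam, phi = eigh(K(p), M(p))                   # eigenvectors satisfy phi.T @ M @ phi = I
    dK = np.array([[1.0, 0.0], [0.0, 0.0]])       # dK/dp
    dM = np.array([[0.0, 0.0], [0.0, 0.5]])       # dM/dp

    # Analytic sensitivity for each distinct eigenvalue
    dlam = np.array([phi[:, i] @ (dK - lam[i] * dM) @ phi[:, i] for i in range(2)])

    # Finite-difference check
    eps = 1e-6
    lam_p, _ = eigh(K(p + eps), M(p + eps))
    print("analytic       :", dlam)
    print("finite diff    :", (lam_p - lam) / eps)
    ```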

  10. Pressure Sensitive Paints

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Bencic, T.; Sullivan, J. P.

    1999-01-01

    This article reviews new advances and applications of pressure sensitive paints in aerodynamic testing. Emphasis is placed on important technical aspects of pressure sensitive paint including instrumentation, data processing, and uncertainty analysis.

  11. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods for multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. The polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of key process parameters including temperature, tension, pressure and velocity is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are recognized. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.
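
    As a rough illustration of single-parameter relative sensitivity from a fitted response surface (the quadratic coefficients and nominal point below are invented, not the study's fitted model), one can evaluate the dimensionless sensitivity (dy/dx_i)(x_i/y) at a nominal point and sweep one parameter to trace a sensitivity curve:

    ```python
    import numpy as np

    # Hypothetical fitted quadratic response surface for interlaminar shear strength:
    # y = b0 + sum_i b1[i]*x[i] + sum_i b2[i]*x[i]**2   (coefficients are made up)
    b0 = 20.0
    b1 = np.array([0.12, 0.02, 0.004, -5.0])      # temperature, tension, pressure, velocity
    b2 = np.array([-4e-4, -2e-5, -1e-6, 4.0])

    def strength(x):
        return b0 + b1 @ x + b2 @ (x ** 2)

    def rel_sensitivity(x):
        """Dimensionless single-parameter sensitivities (dy/dx_i) * (x_i / y)."""
        grad = b1 + 2.0 * b2 * x
        return grad * x / strength(x)

    nominal = np.array([125.0, 330.0, 1150.0, 0.3])    # °C, N, N, m/s (nominal point)
    names = ["temperature", "tension", "pressure", "velocity"]
    for name, s in zip(names, rel_sensitivity(nominal)):
        print(f"{name:12s}: relative sensitivity = {s:+.3f}")

    # A single-parameter sensitivity "curve": sweep temperature, hold the rest at nominal
    for T in np.linspace(100.0, 150.0, 6):
        x = nominal.copy(); x[0] = T
        print(f"T = {T:5.1f} °C -> relative sensitivity = {rel_sensitivity(x)[0]:+.3f}")
    ```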

  12. The Diagnostic Performance of Stool DNA Testing for Colorectal Cancer: A Systematic Review and Meta-Analysis.

    PubMed

    Zhai, Rong-Lin; Xu, Fei; Zhang, Pei; Zhang, Wan-Li; Wang, Hui; Wang, Ji-Liang; Cai, Kai-Lin; Long, Yue-Ping; Lu, Xiao-Ming; Tao, Kai-Xiong; Wang, Guo-Bin

    2016-02-01

    This meta-analysis was designed to evaluate the diagnostic performance of stool DNA testing for colorectal cancer (CRC) and compare the performance between single-gene and multiple-gene tests. MEDLINE, Cochrane, and EMBASE databases were searched using the keywords colorectal cancers, stool/fecal, sensitivity, specificity, DNA, and screening. Sensitivity analysis, quality assessments, and performance bias were performed for the included studies. Fifty-three studies were included in the analysis with a total sample size of 7524 patients. The studies were heterogeneous with regard to the genes being analyzed for fecal genetic biomarkers of CRC, as well as the laboratory methods being used for each assay. The sensitivity of the different assays ranged from 2% to 100% and the specificity ranged from 81% to 100%. The meta-analysis found that the pooled sensitivities for single- and multigene assays were 48.0% and 77.8%, respectively, while the pooled specificities were 97.0% and 92.7%. Receiver operating characteristic curves and diagnostic odds ratios showed no significant difference between both tests with regard to sensitivity or specificity. This meta-analysis revealed that using assays that evaluated multiple genes compared with single-gene assays did not increase the sensitivity or specificity of stool DNA testing in detecting CRC.

  13. A new u-statistic with superior design sensitivity in matched observational studies.

    PubMed

    Rosenbaum, Paul R

    2011-09-01

    In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.

  14. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). At sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  15. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  16. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.

  17. [Meta-analysis of diagnostic capability of frequency-doubling technology (FDT) for primary glaucoma].

    PubMed

    Liu, Ting; He, Xiang-ge

    2006-05-01

    To evaluate the overall diagnostic capabilities of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS, Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia and a manual search in related textbooks, journals, congress articles and their references were performed to identify relevant English and Chinese language articles. Criteria for adaptability were established according to validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. Quality of the included articles was assessed and relevant materials were extracted for study. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested, which was used to select an appropriate effect model to calculate pooled weighted sensitivity and specificity. A Summary Receiver Operating Characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT with 95% confidence intervals (95% CI) after correction for standard error were 0.86 (0.80 - 0.90) and 0.87 (0.81 - 0.91), respectively. The AUC of the SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. Based on this meta-analysis, the included articles are of good quality and FDT can be a highly efficient diagnostic test for primary glaucoma. However, a high quality prospective study is still required for further analysis.

  18. Simulation-based sensitivity analysis for non-ignorably missing data.

    PubMed

    Yin, Peng; Shi, Jian Q

    2017-01-01

    Sensitivity analysis is popular in dealing with missing data problems particularly for non-ignorable missingness, where full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about missing data, i.e. missing data mechanism. We call models with the problem of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice we need to define some simple and interpretable statistical quantities to assess the sensitivity models and make evidence based analysis. We propose a novel approach in this paper on attempting to investigate the possibility of each missing data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using the K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system towards each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper including meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.

  19. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE PAGES

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil; ...

    2017-01-24

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.
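
    A minimal sketch of the discrete adjoint idea, stripped of the power-system model and the switching treatment, is given below for a scalar ODE discretized with explicit Euler; the test problem is invented for illustration. The property claimed in the abstract, that one backward sweep yields a gradient consistent with the discrete forward map to machine precision, is checked against the exact derivative of that map.

    ```python
    import numpy as np

    # Scalar test problem dx/dt = -p * x, objective J = x(T), discretized with explicit Euler.
    def forward(p, x0=1.0, T=2.0, n=400):
        h = T / n
        x = np.empty(n + 1); x[0] = x0
        for k in range(n):
            x[k + 1] = x[k] + h * (-p * x[k])
        return x, h

    p = 0.7
    x, h = forward(p)
    n = len(x) - 1

    # Discrete adjoint sweep: lam_k = dJ/dx_k, propagated backwards through the Euler map.
    lam = 1.0                      # dJ/dx_N
    dJdp = 0.0
    for k in range(n - 1, -1, -1):
        dJdp += lam * (-h * x[k])  # accumulate lam_{k+1} * d x_{k+1} / dp
        lam = lam * (1.0 - p * h)  # lam_k = lam_{k+1} * d x_{k+1} / d x_k

    # Exact derivative of the *discrete* map: J = x0 * (1 - p*h)^n
    dJdp_exact = x[0] * n * (1.0 - p * h) ** (n - 1) * (-h)
    print("adjoint dJ/dp:", dJdp)
    print("exact   dJ/dp:", dJdp_exact)
    ```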

  20. Discrete Adjoint Sensitivity Analysis of Hybrid Dynamical Systems With Switching

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong; Abhyankar, Shrirang; Constantinescu, Emil

    Sensitivity analysis is an important tool for describing power system dynamic behavior in response to parameter variations. It is a central component in preventive and corrective control applications. The existing approaches for sensitivity calculations, namely, finite-difference and forward sensitivity analysis, require a computational effort that increases linearly with the number of sensitivity parameters. In this paper, we investigate, implement, and test a discrete adjoint sensitivity approach whose computational effort is effectively independent of the number of sensitivity parameters. The proposed approach is highly efficient for calculating sensitivities of larger systems and is consistent, within machine precision, with the function whose sensitivity we are seeking. This is an essential feature for use in optimization applications. Moreover, our approach includes a consistent treatment of systems with switching, such as dc exciters, by deriving and implementing the adjoint jump conditions that arise from state-dependent and time-dependent switchings. The accuracy and the computational efficiency of the proposed approach are demonstrated in comparison with the forward sensitivity analysis approach. In conclusion, this paper focuses primarily on the power system dynamics, but the approach is general and can be applied to hybrid dynamical systems in a broader range of fields.

  1. Application of a sensitivity analysis technique to high-order digital flight control systems

    NASA Technical Reports Server (NTRS)

    Paduano, James D.; Downing, David R.

    1987-01-01

    A sensitivity analysis technique for multiloop flight control systems is studied. This technique uses the scaled singular values of the return difference matrix as a measure of the relative stability of a control system. It then uses the gradients of these singular values with respect to system and controller parameters to judge sensitivity. The sensitivity analysis technique is first reviewed; then it is extended to include digital systems, through the derivation of singular-value gradient equations. Gradients with respect to parameters which do not appear explicitly as control-system matrix elements are also derived, so that high-order systems can be studied. A complete review of the integrated technique is given by way of a simple example: the inverted pendulum problem. The technique is then demonstrated on the X-29 control laws. Results show that linear models of real systems can be analyzed with this sensitivity technique if it is applied with care. A computer program called SVA was written to accomplish the singular-value sensitivity analysis techniques. Thus computational methods and considerations form an integral part of many of the discussions. A user's guide to the program is included. SVA is a fully public-domain program running on the NASA/Dryden Elxsi computer.
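
    The gradient information used by the technique rests on the classical result that, for a distinct singular value with left and right singular vectors u and v, d(sigma)/dp = u^T (dA/dp) v. The sketch below uses a small made-up parameter-dependent matrix (not the X-29 return-difference matrix) and checks this formula against a finite difference.

    ```python
    import numpy as np

    # A small matrix A(p) standing in for a return-difference matrix that depends
    # on a controller parameter p (purely illustrative).
    def A(p):
        return np.array([[1.0 + p, 0.5,     0.0],
                         [0.2,     2.0,     p],
                         [0.0,     0.3 * p, 1.5]])

    def dA_dp(p):
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, 0.0, 1.0],
                         [0.0, 0.3, 0.0]])

    p = 0.4
    U, s, Vt = np.linalg.svd(A(p))

    # Gradient of each (distinct) singular value: d(sigma_i)/dp = u_i^T (dA/dp) v_i
    grad = np.array([U[:, i] @ dA_dp(p) @ Vt[i, :] for i in range(3)])

    # Finite-difference check
    eps = 1e-6
    s_eps = np.linalg.svd(A(p + eps), compute_uv=False)
    print("analytic gradients :", grad)
    print("finite differences :", (s_eps - s) / eps)
    ```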

  2. Global Sensitivity and Data-Worth Analyses in iTOUGH2: User's Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wainwright, Haruko Murakami; Finsterle, Stefan

    2016-07-15

    This manual explains the use of local sensitivity analysis, the global Morris OAT and Sobol’ methods, and a related data-worth analysis as implemented in iTOUGH2. In addition to input specification and output formats, it includes some examples to show how to interpret results.
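
    As a rough illustration of the Morris one-at-a-time (OAT) screening mentioned above (a generic sketch, not iTOUGH2 or its input format), the code below builds simplified OAT trajectories for a toy three-parameter model and reports the usual mu* and sigma measures per factor:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy model with three inputs on [0, 1]; x3 is nearly inert.
    def model(x):
        return 4.0 * x[0] + x[0] * x[1] + 0.05 * x[2]

    k, r, delta = 3, 30, 0.25          # inputs, trajectories, OAT step size
    effects = [[] for _ in range(k)]

    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # random base point
        y0 = model(x)
        for i in rng.permutation(k):                # perturb one factor at a time
            x_new = x.copy(); x_new[i] += delta
            y1 = model(x_new)
            effects[i].append((y1 - y0) / delta)    # elementary effect of factor i
            x, y0 = x_new, y1

    for i in range(k):
        ee = np.array(effects[i])
        print(f"x{i+1}: mu* = {np.abs(ee).mean():.3f}, sigma = {ee.std():.3f}")
    # Large mu* marks influential factors; large sigma flags nonlinearity or interactions.
    ```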

  3. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    NASA Technical Reports Server (NTRS)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  4. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.

  5. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
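
    A variance-based first-order (main-effect) index of the kind discussed above can be estimated with a standard pick-and-freeze (Saltelli-style) scheme. The sketch below is a generic illustration on a toy additive function whose analytic indices are 0.2, 0.8, and 0.0; it is not the DSA method of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Additive test function of three independent U(0,1) inputs
    def f(x):
        return x[:, 0] + 2.0 * x[:, 1] + 0.0 * x[:, 2]

    n, k = 100_000, 3
    A = rng.uniform(size=(n, k))
    B = rng.uniform(size=(n, k))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))

    for i in range(k):
        ABi = A.copy()
        ABi[:, i] = B[:, i]              # A with column i taken from B
        # Saltelli-style estimator of the first-order variance share of input i
        Si = np.mean(fB * (f(ABi) - fA)) / var
        print(f"S_{i+1} = {Si:.3f}")
    ```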

  6. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  7. Glaucoma progression detection: agreement, sensitivity, and specificity of expert visual field evaluation, event analysis, and trend analysis.

    PubMed

    Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier

    2013-01-01

    To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation to detect glaucoma progression. This was a prospective study that included 37 eyes with a follow-up of 36 months. All had glaucomatous disks and fields and performed reliable visual fields every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, glaucoma/guided progression analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. Kappa agreement coefficient between methods and sensitivity and specificity for each method using expert opinion as gold standard were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years but only 3 cases showed progression with all 3 methods. Kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. The GPA event analysis and expert subjective assessment showed high agreement between them and moderate agreement with GPA trend analysis. In a period of 3 years, both methods of GPA analysis offered high specificity, event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
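
    For reference, the agreement and accuracy measures reported above come from simple 2 x 2 bookkeeping. The sketch below uses made-up progression calls, with the expert assessment standing in as the gold standard, and computes sensitivity, specificity, and Cohen's kappa:

    ```python
    import numpy as np

    # Progression calls (1 = progressing, 0 = stable) for the same series of eyes (made-up data)
    expert = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0])
    method = np.array([1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0])

    tp = np.sum((method == 1) & (expert == 1))
    tn = np.sum((method == 0) & (expert == 0))
    fp = np.sum((method == 1) & (expert == 0))
    fn = np.sum((method == 0) & (expert == 1))

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)

    # Cohen's kappa: observed agreement corrected for agreement expected by chance
    n = len(expert)
    p_obs = (tp + tn) / n
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_obs - p_exp) / (1.0 - p_exp)

    print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}, kappa = {kappa:.2f}")
    ```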

  8. Ethical sensitivity in professional practice: concept analysis.

    PubMed

    Weaver, Kathryn; Morse, Janice; Mitcham, Carl

    2008-06-01

    This paper is a report of a concept analysis of ethical sensitivity. Ethical sensitivity enables nurses and other professionals to respond morally to the suffering and vulnerability of those receiving professional care and services. Because of its significance to nursing and other professional practices, ethical sensitivity deserves more focused analysis. A criteria-based method oriented toward pragmatic utility guided the analysis of 200 papers and books from the fields of nursing, medicine, psychology, dentistry, clinical ethics, theology, education, law, accounting or business, journalism, philosophy, political and social sciences and women's studies. This literature spanned 1970 to 2006 and was sorted by discipline and concept dimensions and examined for concept structure and use across various contexts. The analysis was completed in September 2007. Ethical sensitivity in professional practice develops in contexts of uncertainty, client suffering and vulnerability, and through relationships characterized by receptivity, responsiveness and courage on the part of professionals. Essential attributes of ethical sensitivity are identified as moral perception, affectivity and dividing loyalties. Outcomes include integrity preserving decision-making, comfort and well-being, learning and professional transcendence. Our findings promote ethical sensitivity as a type of practical wisdom that pursues client comfort and professional satisfaction with care delivery. The analysis and resulting model offers an inclusive view of ethical sensitivity that addresses some of the limitations with prior conceptualizations.

  9. CO2 Push-Pull Dual (Conjugate) Faults Injection Simulations

    DOE Data Explorer

    Oldenburg, Curtis (ORCID:0000000201326016); Lee, Kyung Jae; Doughty, Christine; Jung, Yoojin; Borgia, Andrea; Pan, Lehua; Zhang, Rui; Daley, Thomas M.; Altundas, Bilgin; Chugunov, Nikita

    2017-07-20

    This submission contains datasets and a final manuscript associated with a project simulating carbon dioxide push-pull into a conjugate fault system modeled after Dixie Valley: sensitivity analysis of significant parameters and uncertainty prediction by data-worth analysis. Datasets include: (1) Forward simulation runs of standard cases (push & pull phases), (2) Local sensitivity analyses (push & pull phases), and (3) Data-worth analysis (push & pull phases).

  10. Cardiothoracic ratio for prediction of left ventricular dilation: a systematic review and pooled analysis.

    PubMed

    Loomba, Rohit S; Shah, Parinda H; Nijhawan, Karan; Aggarwal, Saurabh; Arora, Rohit

    2015-03-01

    Increased cardiothoracic ratio noted on chest radiographs often prompts concern and further evaluation with additional imaging. This study pools available data assessing the utility of cardiothoracic ratio in predicting left ventricular dilation. A systematic review of the literature was conducted to identify studies comparing cardiothoracic ratio by chest x-ray to left ventricular dilation by echocardiography. Electronic databases were used to identify studies which were then assessed for quality and bias, with those with adequate quality and minimal bias ultimately being included in the pooled analysis. The pooled data were used to determine the sensitivity, specificity, positive predictive value and negative predictive value of cardiomegaly in predicting left ventricular dilation. A total of six studies consisting of 466 patients were included in this analysis. Cardiothoracic ratio had 83.3% sensitivity, 45.4% specificity, 43.5% positive predictive value and 82.7% negative predictive value. When a secondary analysis was conducted with a pediatric study excluded, a total of five studies consisting of 371 patients were included. Cardiothoracic ratio had 86.2% sensitivity, 25.2% specificity, 42.5% positive predictive value and 74.0% negative predictive value. Cardiothoracic ratio as determined by chest radiograph is sensitive but not specific for identifying left ventricular dilation. Cardiothoracic ratio also has a strong negative predictive value for identifying left ventricular dilation.

  11. Local sensitivity analysis for inverse problems solved by singular value decomposition

    USGS Publications Warehouse

    Hill, M.C.; Nolan, B.T.

    2010-01-01

    Local sensitivity analysis provides computationally frugal ways to evaluate models commonly used for resource management, risk assessment, and so on. This includes diagnosing inverse model convergence problems caused by parameter insensitivity and(or) parameter interdependence (correlation), understanding what aspects of the model and data contribute to measures of uncertainty, and identifying new data likely to reduce model uncertainty. Here, we consider sensitivity statistics relevant to models in which the process model parameters are transformed using singular value decomposition (SVD) to create SVD parameters for model calibration. The statistics considered include the PEST identifiability statistic, and combined use of the process-model parameter statistics composite scaled sensitivities and parameter correlation coefficients (CSS and PCC). The statistics are complementary in that the identifiability statistic integrates the effects of parameter sensitivity and interdependence, while CSS and PCC provide individual measures of sensitivity and interdependence. PCC quantifies correlations between pairs or larger sets of parameters; when a set of parameters is intercorrelated, the absolute value of PCC is close to 1.00 for all pairs in the set. The number of singular vectors to include in the calculation of the identifiability statistic is somewhat subjective and influences the statistic. To demonstrate the statistics, we use the USDA's Root Zone Water Quality Model to simulate nitrogen fate and transport in the unsaturated zone of the Merced River Basin, CA. There are 16 log-transformed process-model parameters, including water content at field capacity (WFC) and bulk density (BD) for each of five soil layers. Calibration data consisted of 1,670 observations comprising soil moisture, soil water tension, aqueous nitrate and bromide concentrations, soil nitrate concentration, and organic matter content. All 16 of the SVD parameters could be estimated by regression based on the range of singular values. Identifiability statistic results varied based on the number of SVD parameters included. Identifiability statistics calculated for four SVD parameters indicate the same three most important process-model parameters as CSS/PCC (WFC1, WFC2, and BD2), but the order differed. Additionally, the identifiability statistic showed that BD1 was almost as dominant as WFC1. The CSS/PCC analysis showed that this results from its high correlation with WFC1 (-0.94), and not its individual sensitivity. Such distinctions, combined with analysis of how high correlations and(or) sensitivities result from the constructed model, can produce important insights into, for example, the use of sensitivity analysis to design monitoring networks. In conclusion, the statistics considered identified similar important parameters. They differ because (1) use of CSS/PCC can be more awkward because sensitivity and interdependence are considered separately and (2) identifiability requires consideration of how many SVD parameters to include. A continuing challenge is to understand how these computationally efficient methods compare with computationally demanding global methods like Markov-Chain Monte Carlo given common nonlinear processes and the often even more nonlinear models.
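
    The composite scaled sensitivities (CSS) and parameter correlation coefficients (PCC) discussed above can be computed directly from a weighted Jacobian. The sketch below uses one common (UCODE/PEST-style) definition on a small invented Jacobian, not the Root Zone Water Quality Model outputs, and deliberately makes two parameters nearly collinear so that their PCC approaches 1:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical Jacobian: sensitivities of 50 observations to 3 parameters,
    # constructed so that parameters 1 and 2 are strongly correlated.
    n_obs = 50
    base = rng.normal(size=n_obs)
    X = np.column_stack([base,
                         0.95 * base + 0.05 * rng.normal(size=n_obs),
                         rng.normal(size=n_obs)])
    p = np.array([1.0, 2.0, 0.5])          # parameter values
    w = np.full(n_obs, 4.0)                # observation weights (1 / variance)

    # Composite scaled sensitivity (one common definition)
    dss = X * p * np.sqrt(w)[:, None]      # dimensionless scaled sensitivities
    css = np.sqrt((dss ** 2).mean(axis=0))
    print("CSS:", np.round(css, 3))

    # Parameter correlation coefficients from the weighted normal equations
    cov = np.linalg.inv(X.T @ (w[:, None] * X))
    d = np.sqrt(np.diag(cov))
    pcc = cov / np.outer(d, d)
    print("PCC(1,2) =", round(pcc[0, 1], 3))   # magnitude near 1 flags interdependence
    ```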

  12. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and the large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via Analysis of Variance, which greatly reduced the number of parameters from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that a high value of the efficiency criteria did not indicate excellent performance across hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well, but the lowest and maximum annual daily runoffs were underestimated and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well while large floods are slightly underestimated. The work in this study supports further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.

  13. Hyperspectral data analysis procedures with reduced sensitivity to noise

    NASA Technical Reports Server (NTRS)

    Landgrebe, David A.

    1993-01-01

    Multispectral sensor systems have steadily improved over the years in their ability to deliver increased spectral detail. With the advent of hyperspectral sensors, including imaging spectrometers, this technology is in the process of taking a large leap forward, offering the possibility of delivering much more detailed information. However, this direction of development has drawn even more attention to the matter of noise and other deleterious effects in the data, because reducing the fundamental limitations of spectral detail on information collection raises the limitations presented by noise to even greater importance. Much current effort in remote sensing research is thus being devoted to adjusting the data to mitigate the effects of noise and other deleterious effects. A parallel approach to the problem is to look for analysis approaches and procedures which have reduced sensitivity to such effects. We discuss some of the fundamental principles which define analysis algorithm characteristics providing such reduced sensitivity. One such analysis procedure, including an example analysis of a data set, is described to illustrate this effect.

  14. Sensitivity-Uncertainty Techniques for Nuclear Criticality Safety

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-08-07

    The sensitivity and uncertainty analysis course will introduce students to keff sensitivity data, cross-section uncertainty data, how keff sensitivity data and keff uncertainty data are generated, and how they can be used. Discussion will include how sensitivity/uncertainty data can be used to select applicable critical experiments, to quantify a defensible margin to cover validation gaps and weaknesses, and in development of upper subcritical limits.

  15. On the sensitivity analysis of porous material models

    NASA Astrophysics Data System (ADS)

    Ouisse, Morvan; Ichchou, Mohamed; Chedly, Slaheddine; Collet, Manuel

    2012-11-01

    Porous materials are used in many vibroacoustic applications. Different available models describe their behavior according to the materials' intrinsic characteristics. For instance, in the case of a porous material with a rigid frame, and according to the Champoux-Allard model, five parameters are employed. In this paper, an investigation of this model's sensitivity to its parameters as a function of frequency is conducted. Sobol and FAST algorithms are used for the sensitivity analysis. A strong, frequency-dependent parametric hierarchy is shown. Sensitivity investigations confirm that resistivity is the most influential parameter when the acoustic absorption and surface impedance of porous materials with a rigid frame are considered. The analysis is first performed on a wide category of porous materials, and then restricted to a polyurethane foam analysis in order to illustrate the impact of reducing the design space. In a second part, a sensitivity analysis is performed using the Biot-Allard model with nine parameters, including the mechanical effects of the frame, and conclusions are drawn through numerical simulations.

  16. Diagnostic value of 18F-FDG-PET/CT for the evaluation of solitary pulmonary nodules: a systematic review and meta-analysis.

    PubMed

    Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian

    2017-01-01

    To carry out a meta-analysis on the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. In the meta-analysis, we performed searches of several electronic databases for relevant studies, including Google Scholar, PubMed, Cochrane Library, and several Chinese databases. The quality of all included studies was assessed by Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2). Two observers independently extracted data of eligible articles. For the meta-analysis, the total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled. A summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate the potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, including a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I² = 81.1%) and specificity (I² = 89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were present in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work needs to be carried out to improve its reliability.

  17. Anxiety Sensitivity and the Anxiety Disorders: A Meta-Analytic Review and Synthesis

    ERIC Educational Resources Information Center

    Olatunji, Bunmi O.; Wolitzky-Taylor, Kate B.

    2009-01-01

    There has been significant interest in the role of anxiety sensitivity (AS) in the anxiety disorders. In this meta-analysis, we empirically evaluate differences in AS between anxiety disorders, mood disorders, and nonclinical controls. A total of 38 published studies (N = 20,146) were included in the analysis. The results yielded a large effect…

  18. An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1991-01-01

    The three-dimensional quasi-analytical sensitivity analysis and the ancillary driver programs needed to carry out the studies and perform comparisons are developed. The code is essentially contained in one unified package which includes the following: (1) a three dimensional transonic wing analysis program (ZEBRA); (2) a quasi-analytical portion which determines the matrix elements in the quasi-analytical equations; (3) a method for computing the sensitivity coefficients from the resulting quasi-analytical equations; (4) a package to determine, for comparison purposes, sensitivity coefficients via the finite difference approach; and (5) a graphics package.

  19. Results of an integrated structure/control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1989-01-01

    A design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations is discussed. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter was to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.

  20. On the Exploitation of Sensitivity Derivatives for Improving Sampling Methods

    NASA Technical Reports Server (NTRS)

    Cao, Yanzhao; Hussaini, M. Yousuff; Zang, Thomas A.

    2003-01-01

    Many application codes, such as finite-element structural analyses and computational fluid dynamics codes, are capable of producing many sensitivity derivatives at a small fraction of the cost of the underlying analysis. This paper describes a simple variance reduction method that exploits such inexpensive sensitivity derivatives to increase the accuracy of sampling methods. Three examples, including a finite-element structural analysis of an aircraft wing, are provided that illustrate an order of magnitude improvement in accuracy for both Monte Carlo and stratified sampling schemes.
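
    One simple way to exploit cheap sensitivity derivatives for variance reduction is to use the first-order Taylor expansion of the response as a control variate with a known mean. The sketch below is a generic Monte Carlo illustration under that assumption; it is not the sampling schemes or the wing model of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Quantity of interest and its (cheap) sensitivity derivatives at the mean input
    def f(x):
        return np.sin(x[..., 0]) + 0.5 * x[..., 1] ** 2

    def grad_f(x):
        return np.stack([np.cos(x[..., 0]), x[..., 1]], axis=-1)

    mu = np.array([0.3, 1.0])
    sigma = 0.2
    n = 20_000
    X = mu + sigma * rng.standard_normal((n, 2))

    # Linearized control variate: same slope as f near mu, with exactly known mean f(mu)
    fX = f(X)
    gX = f(mu) + (X - mu) @ grad_f(mu)
    plain = fX.mean()
    cv = (fX - gX).mean() + f(mu)          # E[g(X)] = f(mu) since E[X - mu] = 0

    print(f"plain MC estimate : {plain:.5f}  (estimator std ~ {fX.std()/np.sqrt(n):.2e})")
    print(f"with derivative CV: {cv:.5f}  (estimator std ~ {(fX - gX).std()/np.sqrt(n):.2e})")
    ```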

  1. Prediction of sensitivity to warfarin based on VKORC1 and CYP2C9 polymorphisms in patients from different places in Colombia.

    PubMed

    Cifuentes, Ricardo A; Murillo-Rojas, Juan; Avella-Vargas, Esperanza

    2016-03-03

    In the search to prevent hemorrhages associated with anticoagulant therapy, a major goal is to validate predictors of sensitivity to warfarin. However, previous studies in Colombia that included polymorphisms in the VKORC1 and CYP2C9 genes as predictors reported different algorithm performances to explain dose variations, and did not evaluate the prediction of sensitivity to warfarin.  To determine the accuracy of the pharmacogenetic analysis, which includes the CYP2C9 *2 and *3 and VKORC1 1639G>A polymorphisms in predicting patients' sensitivity to warfarin at the Hospital Militar Central, a reference center for patients born in different parts of Colombia.  Demographic and clinical data were obtained from 130 patients with stable doses of warfarin for more than two months. Next, their genotypes were obtained through a melting curve analysis. After verifying the Hardy-Weinberg equilibrium of the genotypes from the polymorphisms, a statistical analysis was done, which included multivariate and predictive approaches.  A pharmacogenetic model that explained 52.8% of dose variation (p<0.001) was built, which was only 4% above the performance resulting from the same data using the International Warfarin Pharmacogenetics Consortium algorithm. The model predicting the sensitivity achieved an accuracy of 77.8% and included age (p=0.003), polymorphisms *2 and *3 (p=0.002) and polymorphism 1639G>A (p<0.001) as predictors.  These results in a mixed population support the prediction of sensitivity to warfarin based on polymorphisms in VKORC1 and CYP2C9 as a valid approach in Colombian patients.

  2. Preventive behaviors by the level of perceived infection sensitivity during the Korea outbreak of Middle East Respiratory Syndrome in 2015.

    PubMed

    Lee, Soon Young; Yang, Hee Jeong; Kim, Gawon; Cheong, Hae-Kwan; Choi, Bo Youl

    2016-01-01

    This study was performed to investigate the relationship between community residents' infection sensitivity and their levels of preventive behaviors during the 2015 Middle East Respiratory Syndrome (MERS) outbreak in Korea. A total of 7,281 participants from nine areas in Gyeonggi-do, including Pyeongtaek, the origin of the outbreak in 2015, agreed to participate in the survey, and the data from 6,739 participants were included in the final analysis. The data on the perceived infection sensitivity were subjected to cluster analysis. The levels of stress, reliability/practice of preventive behaviors, hand washing practice and policy credibility during the outbreak period were analyzed for each cluster. Cluster analysis of infection sensitivity due to the MERS outbreak resulted in classification of participants into four groups: the non-sensitive group (14.5%), social concern group (17.4%), neutral group (29.1%), and overall sensitive group (39.0%). A logistic regression analysis found that the overall sensitive group with high sensitivity had higher stress levels (17.80; 95% confidence interval [CI], 13.77 to 23.00), higher reliability on preventive behaviors (5.81; 95% CI, 4.84 to 6.98), higher practice of preventive behaviors (4.53; 95% CI, 3.83 to 5.37) and higher practice of hand washing (2.71; 95% CI, 2.13 to 3.43) during the outbreak period, compared to the non-sensitive group. Infection sensitivity of community residents during the MERS outbreak correlated with gender, age, occupation, and health behaviors. When there is an outbreak in the community, there is a need to maintain a certain level of sensitivity while reducing excessive stress, as well as promote the practice of preventive behaviors among local residents. In particular, target groups need to be notified and policies need to be established with a consideration of the socio-demographic characteristics of the community.

  3. Sensitivity Analysis for some Water Pollution Problem

    NASA Astrophysics Data System (ADS)

    Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff

    2014-05-01

    Sensitivity analysis employs a response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered a generalized model because it contains all the available information. This presentation proposes a general method to carry out sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • identification of unknown parameters, and • identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.

  4. MOVES2010a regional level sensitivity analysis

    DOT National Transportation Integrated Search

    2012-12-10

    This document discusses the sensitivity of emission rates to various input parameters using the US Environmental Protection Agency's (EPA's) MOVES2010a model at the regional level. Pollutants included in the study are carbon monoxide (CO),...

  5. Cyst fluid analysis in the differential diagnosis of pancreatic cystic lesions: a pooled analysis.

    PubMed

    van der Waaij, Laurens A; van Dullemen, Hendrik M; Porte, Robert J

    2005-09-01

    Pancreatic cystic tumors commonly include serous cystadenoma (SCA), mucinous cystadenoma (MCA), and mucinous cystadenocarcinoma (MCAC). A differential diagnosis with pseudocysts (PC) can be difficult. Radiologic criteria are not reliable. The objective of the study is to investigate the value of cyst fluid analysis in the differential diagnosis of benign (SCA, PC) vs. premalignant or malignant (MCA, MCAC) lesions. A search in PubMed was performed with the search terms cyst, pancrea, and fluid. Articles about cyst fluid analysis of pancreatic lesions that contained individual data for at least 7 patients were included in the study. Data from all individual patients were combined and plotted in scattergrams. Cutoff levels were determined. Twelve studies were included, comprising data from 450 patients. Cysts with an amylase concentration <250 U/L were SCA, MCA, or MCAC (sensitivity 44%, specificity 98%) and, thus, virtually excluded PC. A carcinoembryonic antigen (CEA) <5 ng/mL suggested SCA or PC (sensitivity 50%, specificity 95%). A CEA >800 ng/mL strongly suggested MCA or MCAC (sensitivity 48%, specificity 98%). A carbohydrate-associated antigen (CA) 19-9 <37 U/mL strongly suggested PC or SCA (sensitivity 19%, specificity 98%). Cytologic examination revealed malignant cells in 48% of MCAC (n = 111). Most pancreatic cystic tumors should be resected without the need for cyst fluid analysis. However, in asymptomatic patients, in patients with an increased surgical risk, and in patients in whom there is diagnostic uncertainty about the presence of a PC, cyst fluid analysis helps to determine the optimal therapeutic strategy.
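
    To make the cutoff logic concrete, here is a small sketch that computes the sensitivity and specificity of a marker threshold; the CEA values below are simulated, and only the 800 ng/mL cutoff echoes the abstract.

```python
import numpy as np

def sens_spec(values, is_diseased, cutoff, positive_if_above=True):
    """Sensitivity and specificity of a marker cutoff against a reference diagnosis."""
    pred = values > cutoff if positive_if_above else values < cutoff
    tp = np.sum(pred & is_diseased)
    fn = np.sum(~pred & is_diseased)
    tn = np.sum(~pred & ~is_diseased)
    fp = np.sum(pred & ~is_diseased)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
# Synthetic CEA (ng/mL): mucinous lesions tend to run high, benign lesions low
cea_mucinous = rng.lognormal(mean=6.5, sigma=1.0, size=120)
cea_benign = rng.lognormal(mean=2.0, sigma=1.0, size=120)
values = np.concatenate([cea_mucinous, cea_benign])
is_mucinous = np.concatenate([np.ones(120, bool), np.zeros(120, bool)])

sens, spec = sens_spec(values, is_mucinous, cutoff=800.0)
print(f"CEA > 800 ng/mL: sensitivity {sens:.0%}, specificity {spec:.0%}")
```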

  6. Benefit-Cost Analysis of Integrated Paratransit Systems : Volume 6. Technical Appendices.

    DOT National Transportation Integrated Search

    1979-09-01

    This last volume includes five technical appendices that document the methodologies used in the benefit-cost analysis. They are the following: Scenario analysis methodology; Impact estimation; Example of impact estimation; Sensitivity analysis; Agg...

  7. Two-step sensitivity testing of parametrized and regionalized life cycle assessments: methodology and case study.

    PubMed

    Mutel, Christopher L; de Baan, Laura; Hellweg, Stefanie

    2013-06-04

    Comprehensive sensitivity analysis is a significant tool to interpret and improve life cycle assessment (LCA) models, but is rarely performed. Sensitivity analysis will increase in importance as inventory databases become regionalized, increasing the number of system parameters, and parametrized, adding complexity through variables and nonlinear formulas. We propose and implement a new two-step approach to sensitivity analysis. First, we identify parameters with high global sensitivities for further examination and analysis with a screening step, the method of elementary effects. Second, the more computationally intensive contribution to variance test is used to quantify the relative importance of these parameters. The two-step sensitivity test is illustrated on a regionalized, nonlinear case study of the biodiversity impacts from land use of cocoa production, including a worldwide cocoa products trade model. Our simplified trade model can be used for transformable commodities where one is assessing market shares that vary over time. In the case study, the highly uncertain characterization factors for the Ivory Coast and Ghana contributed more than 50% of variance for almost all countries and years examined. The two-step sensitivity test allows for the interpretation, understanding, and improvement of large, complex, and nonlinear LCA systems.
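
    As a rough illustration of the screening step (the method of elementary effects), here is a dependency-free sketch on a toy nonlinear model; it uses a simplified one-at-a-time design rather than the full Morris trajectory scheme, and none of the cocoa case-study data are involved.

```python
import numpy as np

def model(x):
    # Toy nonlinear LCA-style response: two influential factors, one near-inert
    return x[0] ** 2 + 3.0 * x[1] + 0.01 * np.sin(x[2])

def elementary_effects(model, n_params, n_trajectories=50, delta=0.1, seed=0):
    """Mean absolute elementary effect (mu*) per parameter on the unit hypercube."""
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_trajectories, n_params))
    for t in range(n_trajectories):
        x = rng.uniform(0, 1 - delta, n_params)
        base = model(x)
        for i in range(n_params):
            x_pert = x.copy()
            x_pert[i] += delta
            effects[t, i] = (model(x_pert) - base) / delta
    return np.mean(np.abs(effects), axis=0)

mu_star = elementary_effects(model, n_params=3)
print("mu* per parameter:", mu_star.round(3))  # screening: small mu* -> candidate for fixing
```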

  8. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process.

    PubMed

    Deng, Bo; Shi, Yaoyao; Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-31

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different objective values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing.

  9. Multi-Response Parameter Interval Sensitivity and Optimization for the Composite Tape Winding Process

    PubMed Central

    Yu, Tao; Kang, Chao; Zhao, Pan

    2018-01-01

    The composite tape winding process, which utilizes a tape winding machine and prepreg tapes, provides a promising way to improve the quality of composite products. Nevertheless, the process parameters of composite tape winding have crucial effects on the tensile strength and void content, which are closely related to the performances of the winding products. In this article, two different objective values of winding products, including mechanical performance (tensile strength) and a physical property (void content), were respectively calculated. Thereafter, the paper presents an integrated methodology by combining multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis to obtain the optimal intervals of the composite tape winding process. First, the global multi-parameter sensitivity analysis method was applied to investigate the sensitivity of each parameter in the tape winding process. Then, the local single-parameter sensitivity analysis method was employed to calculate the sensitivity of a single parameter within the corresponding range. Finally, the stability and instability ranges of each parameter were distinguished. Meanwhile, the authors optimized the process parameter ranges and provided comprehensive optimized intervals of the winding parameters. The verification test validated that the optimized intervals of the process parameters were reliable and stable for winding product manufacturing. PMID:29385048

  10. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest that the calibrated model has predictive ability typical of hydrologic models.
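
    A hedged sketch of one of the statistics named above, the composite scaled sensitivity (CSS), computed from a Jacobian of simulated values with respect to parameters; the model, weights, and numbers are placeholders, not the TOPKAPI calibration.

```python
import numpy as np

def composite_scaled_sensitivity(jacobian, params, weights):
    """CSS_j = sqrt(mean_i [ (dy_i/db_j) * b_j * sqrt(w_i) ]^2), the usual error-based scaling."""
    dss = jacobian * params[np.newaxis, :] * np.sqrt(weights)[:, np.newaxis]
    return np.sqrt(np.mean(dss ** 2, axis=0))

# Placeholder example: 5 observations, 3 parameters
jacobian = np.array([[0.2, 1.5, 0.01],
                     [0.3, 1.2, 0.02],
                     [0.1, 0.9, 0.00],
                     [0.4, 1.8, 0.03],
                     [0.2, 1.1, 0.01]])        # d(simulated value)/d(parameter), e.g. from perturbation runs
params = np.array([10.0, 0.5, 2.0])            # current parameter values
weights = 1.0 / np.array([0.5, 0.5, 0.8, 0.8, 1.0]) ** 2  # error-based weights (1/variance)

print("Composite scaled sensitivities:", composite_scaled_sensitivity(jacobian, params, weights).round(3))
```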

  11. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 3: Illustrative test problems

    NASA Technical Reports Server (NTRS)

    Bittker, David A.; Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 3 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 3 explains the kinetics and kinetics-plus-sensitivity analysis problems supplied with LSENS and presents sample results. These problems illustrate the various capabilities of, and reaction models that can be solved by, the code and may provide a convenient starting point for the user to construct the problem data file required to execute LSENS. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.
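
    For flavor, here is a toy sketch of the kind of quantity reported, the sensitivity of a computed concentration to a rate coefficient, approximated by central finite differences on a single first-order reaction; LSENS itself computes these coefficients by a different, far more general procedure.

```python
import numpy as np
from scipy.integrate import solve_ivp

def concentration(k, y0=1.0, t_end=2.0):
    """Integrate dy/dt = -k*y and return y(t_end)."""
    sol = solve_ivp(lambda t, y: -k * y, (0.0, t_end), [y0], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

k = 0.7
eps = 1e-6 * k
dy_dk_fd = (concentration(k + eps) - concentration(k - eps)) / (2 * eps)
dy_dk_exact = -2.0 * np.exp(-k * 2.0)  # analytic: d/dk [y0*exp(-k*t)] at t = 2, y0 = 1

print(f"finite-difference sensitivity dy/dk: {dy_dk_fd:.6f}")
print(f"analytic sensitivity dy/dk:          {dy_dk_exact:.6f}")
```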

  12. Prevalence of potent skin sensitizers in oxidative hair dye products in Korea.

    PubMed

    Kim, Hyunji; Kim, Kisok

    2016-09-01

    The objective of the present study was to elucidate the prevalence of potent skin sensitizers in oxidative hair dye products manufactured by Korean domestic companies. A database on hair dye products made by domestic companies and sold in the Korean market in 2013 was used to obtain information on company name, brand name, quantity of production, and ingredients. The prevalence of substances categorized as potent skin sensitizers was calculated using the hair dye ingredient database, and the pattern of concomitant presence of hair dye ingredients was analyzed using network analysis software. A total of 19 potent skin sensitizers were identified from a database that included 99 hair dye products manufactured by Korean domestic companies. Among the 19 potent skin sensitizers, the four most frequent were resorcinol, m-aminophenol, p-phenylenediamine (PPD), and p-aminophenol; these four skin-sensitizing ingredients were found in more than 50% of the products studied. Network analysis showed that resorcinol, m-aminophenol, and PPD existed together in many hair dye products. Across the 99 products examined, the average product contained 4.4 potent sensitizers, and 82% of the products contained four or more skin sensitizers. The present results demonstrate that oxidative hair dye products made by Korean domestic manufacturers contain various numbers and types of potent skin sensitizers. Furthermore, these results suggest that some hair dye products should be used with caution to prevent adverse effects on the skin, including allergic contact dermatitis.

  13. Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu

    2014-03-28

    In this paper we propose a new class of coupling methods for the sensitivity analysis of high-dimensional stochastic systems, and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated (“coupled”) stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz–Kalos–Lebowitz algorithm's philosophy, where events are divided into classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, in supplementary MATLAB source code.
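
    A much-simplified, hedged illustration of why coupling the perturbed and unperturbed processes helps: a central-difference sensitivity estimate for a toy Gillespie-style birth-death model, comparing independent samples with a Common Random Number coupling (the goal-oriented coupling in the paper is more elaborate than this).

```python
import numpy as np

def birth_death_final_count(birth, death, t_end=5.0, x0=20, rng=None):
    """Gillespie simulation of a birth-death process; returns the population at t_end."""
    if rng is None:
        rng = np.random.default_rng()
    t, x = 0.0, x0
    while True:
        rates = np.array([birth * x, death * x])
        total = rates.sum()
        if total == 0:
            return x
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

def fd_sensitivity(birth, death, h, n_samples, coupled):
    """Central finite-difference estimate of d(mean final count)/d(birth rate)."""
    ests = []
    for i in range(n_samples):
        if coupled:  # Common Random Numbers: same seed for the perturbed and unperturbed runs
            up = birth_death_final_count(birth + h, death, rng=np.random.default_rng(i))
            lo = birth_death_final_count(birth - h, death, rng=np.random.default_rng(i))
        else:
            up = birth_death_final_count(birth + h, death)
            lo = birth_death_final_count(birth - h, death)
        ests.append((up - lo) / (2 * h))
    return np.mean(ests), np.var(ests)

for coupled in (False, True):
    mean, var = fd_sensitivity(0.9, 1.0, h=0.05, n_samples=500, coupled=coupled)
    print(f"coupled={coupled}: estimate {mean:.1f}, estimator variance {var:.1f}")
```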

  14. Structural optimization: Status and promise

    NASA Astrophysics Data System (ADS)

    Kamat, Manohar P.

    Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)

  15. Fish oil supplementation and insulin sensitivity: a systematic review and meta-analysis.

    PubMed

    Gao, Huanqing; Geng, Tingting; Huang, Tao; Zhao, Qinghua

    2017-07-03

    Fish oil supplementation has been shown to be associated with a lower risk of metabolic syndrome and to benefit a wide range of chronic diseases, such as cardiovascular disease, type 2 diabetes and several types of cancer. However, the evidence on the effect of fish oil supplementation on glucose metabolism and insulin sensitivity is still controversial. This meta-analysis summarized the existing evidence on the relationship between fish oil supplementation and insulin sensitivity and aimed to evaluate whether fish oil supplementation could improve insulin sensitivity. We searched the Cochrane Library, PubMed, and Embase databases for relevant studies up to December 2016. Two researchers screened the literature independently against the selection and exclusion criteria. Studies were pooled using random-effects models to estimate a pooled SMD and corresponding 95% CI. This meta-analysis was performed with Stata 13.1 software. A total of 17 studies with 672 participants were included in this meta-analysis after screening the 498 published articles found in the initial search. In the pooled analysis, fish oil supplementation had no effect on insulin sensitivity compared with placebo (SMD 0.17, 95% CI -0.15 to 0.48, p = 0.292). In subgroup analysis, fish oil supplementation benefited insulin sensitivity among people who were experiencing at least one symptom of metabolic disorders (SMD 0.53, 95% CI 0.17 to 0.88, p < 0.001). There were no significant differences between subgroups defined by method of measuring insulin sensitivity, dose of omega-3 polyunsaturated fatty acids (n-3 PUFA) in the fish oil supplement, or duration of the intervention. The sensitivity analysis indicated that the results were robust. Short-term fish oil supplementation is associated with improved insulin sensitivity among people with metabolic disorders.
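
    A sketch of the pooling step, a DerSimonian-Laird random-effects combination of standardized mean differences; the per-trial effect sizes and variances below are made up, and the study's own numbers are not reproduced.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate and 95% CI via the DerSimonian-Laird estimator of tau^2."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                       # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)    # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance
    w_star = 1.0 / (variances + tau2)         # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-trial SMDs (fish oil vs placebo) and their variances
smd = [0.35, -0.10, 0.20, 0.05, 0.50]
var = [0.04, 0.06, 0.05, 0.03, 0.08]
pooled, ci = dersimonian_laird(smd, var)
print(f"pooled SMD {pooled:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```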

  16. VARS-TOOL: A Comprehensive, Efficient, and Robust Sensitivity Analysis Toolbox

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Sheikholeslami, R.; Haghnegahdar, A.; Esfahbod, B.

    2016-12-01

    VARS-TOOL is an advanced sensitivity and uncertainty analysis toolbox, applicable to the full range of computer simulation models, including Earth and Environmental Systems Models (EESMs). The toolbox was developed originally around VARS (Variogram Analysis of Response Surfaces), which is a general framework for Global Sensitivity Analysis (GSA) that utilizes the variogram/covariogram concept to characterize the full spectrum of sensitivity-related information, thereby providing a comprehensive set of "global" sensitivity metrics with minimal computational cost. VARS-TOOL is unique in that, with a single sample set (set of simulation model runs), it generates simultaneously three philosophically different families of global sensitivity metrics, including (1) variogram-based metrics called IVARS (Integrated Variogram Across a Range of Scales - VARS approach), (2) variance-based total-order effects (Sobol approach), and (3) derivative-based elementary effects (Morris approach). VARS-TOOL is also enabled with two novel features; the first one being a sequential sampling algorithm, called Progressive Latin Hypercube Sampling (PLHS), which allows progressively increasing the sample size for GSA while maintaining the required sample distributional properties. The second feature is a "grouping strategy" that adaptively groups the model parameters based on their sensitivity or functioning to maximize the reliability of GSA results. These features in conjunction with bootstrapping enable the user to monitor the stability, robustness, and convergence of GSA with the increase in sample size for any given case study. VARS-TOOL has been shown to achieve robust and stable results within 1-2 orders of magnitude smaller sample sizes (fewer model runs) than alternative tools. VARS-TOOL, available in MATLAB and Python, is under continuous development and new capabilities and features are forthcoming.

  17. What Constitutes a "Good" Sensitivity Analysis? Elements and Tools for a Robust Sensitivity Analysis with Reduced Computational Cost

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin; Haghnegahdar, Amin

    2016-04-01

    Global sensitivity analysis (GSA) is a systems theoretic approach to characterizing the overall (average) sensitivity of one or more model responses across the factor space, by attributing the variability of those responses to different controlling (but uncertain) factors (e.g., model parameters, forcings, and boundary and initial conditions). GSA can be very helpful to improve the credibility and utility of Earth and Environmental System Models (EESMs), as these models are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. However, conventional approaches to GSA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we identify several important sensitivity-related characteristics of response surfaces that must be considered when investigating and interpreting the ''global sensitivity'' of a model response (e.g., a metric of model performance) to its parameters/factors. Accordingly, we present a new and general sensitivity and uncertainty analysis framework, Variogram Analysis of Response Surfaces (VARS), based on an analogy to 'variogram analysis', that characterizes a comprehensive spectrum of information on sensitivity. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are special cases of VARS, and that their SA indices are contained within the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.

  18. LSENS, A General Chemical Kinetics and Sensitivity Analysis Code for Homogeneous Gas-Phase Reactions. Part 2; Code Description and Usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part II of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part II describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part I (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part III (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  19. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
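
    A toy sketch of the computationally efficient discrimination statistics named above (AICc and BIC) for least-squares-calibrated models; the alternative models and numbers below are placeholders, not the Maggia Valley models.

```python
import numpy as np

def information_criteria(ssr, n_obs, n_params):
    """AICc and BIC for a least-squares model fit (smaller is better)."""
    k = n_params + 1                               # +1 for the error variance
    aic = n_obs * np.log(ssr / n_obs) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n_obs - k - 1)
    bic = n_obs * np.log(ssr / n_obs) + k * np.log(n_obs)
    return aicc, bic

# Placeholder alternative models: (weighted sum of squared residuals, number of parameters), 60 observations
models = {"homogeneous K": (120.0, 2), "two-zone K": (95.0, 4), "five-zone K": (90.0, 8)}
for name, (ssr, k) in models.items():
    aicc, bic = information_criteria(ssr, n_obs=60, n_params=k)
    print(f"{name:15s}  AICc = {aicc:7.2f}   BIC = {bic:7.2f}")
```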

  20. Phenotyping asthma, rhinitis and eczema in MeDALL population-based birth cohorts: an allergic comorbidity cluster.

    PubMed

    Garcia-Aymerich, J; Benet, M; Saeys, Y; Pinart, M; Basagaña, X; Smit, H A; Siroux, V; Just, J; Momas, I; Rancière, F; Keil, T; Hohmann, C; Lau, S; Wahn, U; Heinrich, J; Tischer, C G; Fantini, M P; Lenzi, J; Porta, D; Koppelman, G H; Postma, D S; Berdel, D; Koletzko, S; Kerkhof, M; Gehring, U; Wickman, M; Melén, E; Hallberg, J; Bindslev-Jensen, C; Eller, E; Kull, I; Lødrup Carlsen, K C; Carlsen, K-H; Lambrecht, B N; Kogevinas, M; Sunyer, J; Kauffmann, F; Bousquet, J; Antó, J M

    2015-08-01

    Asthma, rhinitis and eczema often co-occur in children, but their interrelationships at the population level have been poorly addressed. We assessed co-occurrence of childhood asthma, rhinitis and eczema using unsupervised statistical techniques. We included 17 209 children at 4 years and 14 585 at 8 years from seven European population-based birth cohorts (MeDALL project). At each age period, children were grouped, using partitioning cluster analysis, according to the distribution of 23 variables covering symptoms 'ever' and 'in the last 12 months', doctor diagnosis, age of onset and treatments of asthma, rhinitis and eczema; immunoglobulin E sensitization; weight; and height. We tested the sensitivity of our estimates to subject and variable selections, and to different statistical approaches, including latent class analysis and self-organizing maps. Two groups were identified as the optimal way to cluster the data at both age periods and in all sensitivity analyses. The first (reference) group at 4 and 8 years (including 70% and 79% of children, respectively) was characterized by a low prevalence of symptoms and sensitization, whereas the second (symptomatic) group exhibited more frequent symptoms and sensitization. Ninety-nine percent of children with comorbidities (co-occurrence of asthma, rhinitis and/or eczema) were included in the symptomatic group at both ages. The children's characteristics in both groups were consistent in all sensitivity analyses. At 4 and 8 years, at the population level, asthma, rhinitis and eczema can be classified together as an allergic comorbidity cluster. Future research including time-repeated assessments and biological data will help in understanding the interrelationships between these diseases. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Computer program for analysis of imperfection sensitivity of ring stiffened shells of revolution

    NASA Technical Reports Server (NTRS)

    Cohen, G. A.

    1971-01-01

    A FORTRAN 4 digital computer program is presented for the initial postbuckling and imperfection sensitivity analysis of bifurcation buckling modes for ring-stiffened orthotropic multilayered shells of revolution. The boundary value problem for the second-order contribution to the buckled state was solved by the forward integration technique using the Runge-Kutta method. The effects of nonlinear prebuckling states and live pressure loadings are included.

  2. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Cost, Gaussian (LQG) optimal control laws, which predicts changes in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if the parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
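
    A hedged toy illustration of the validation idea described above, comparing an optimal (here LQR) gain recomputed after a discrete parameter change with the nominal gain; the paper's analytical LQG sensitivity expressions and aircraft model are not reproduced, and the plant below is invented.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(a, b, q, r):
    """Optimal state-feedback gain K for dx/dt = A x + B u with cost integral of x'Qx + u'Ru."""
    p = solve_continuous_are(a, b, q, r)
    return np.linalg.solve(r, b.T @ p)

# Toy second-order plant; the parameter of interest (alpha) scales the stiffness-like entry of A
def plant(alpha):
    a = np.array([[0.0, 1.0], [-4.0 * alpha, -0.5]])
    b = np.array([[0.0], [1.0]])
    return a, b

q, r = np.eye(2), np.array([[1.0]])
alpha0, d_alpha = 1.0, 1e-4

k0 = lqr_gain(*plant(alpha0), q, r)
k1 = lqr_gain(*plant(alpha0 + d_alpha), q, r)
dk_dalpha = (k1 - k0) / d_alpha   # finite-difference sensitivity of the optimal gain
print("nominal gain K:", k0.round(4))
print("dK/dalpha (finite difference):", dk_dalpha.round(4))
```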

  3. The Efficacy of Guanxinning Injection in Treating Angina Pectoris: Systematic Review and Meta-Analysis of Randomized Controlled Trials

    PubMed Central

    Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha

    2013-01-01

    Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, with longer follow-up periods, larger sample sizes, and multicentre/multicountry designs for more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167

  4. Advances in ultrasensitive mass spectrometry of organic molecules.

    PubMed

    Kandiah, Mathivathani; Urban, Pawel L

    2013-06-21

    Ultrasensitive mass spectrometric analysis of organic molecules is important for various branches of chemistry and other fields, including physics, earth and environmental sciences, archaeology, biomedicine, and materials science. It finds applications, as an enabling tool, in systems biology, biological imaging, clinical analysis, and forensics. Although there are a number of technical obstacles associated with the analysis of samples by mass spectrometry at the ultratrace level (for example, analyte losses during sample preparation, insufficient sensitivity, ion suppression), several noteworthy developments have been made over the years. They include sensitive ion sources, loss-free interfaces, ion optics components, efficient mass analyzers and detectors, as well as "smart" sample preparation strategies. Some of the mass spectrometric methods published to date can achieve sensitivity that is several orders of magnitude higher than that of alternative approaches. Femto- and attomole level limits of detection are nowadays common, while zepto- and yoctomole level limits of detection have also been reported. We envision that ultrasensitive mass spectrometric assays will soon contribute to new discoveries in bioscience and other areas.

  5. A meta-analysis of confocal laser endomicroscopy for the detection of neoplasia in patients with Barrett's esophagus.

    PubMed

    Xiong, Yi-Quan; Ma, Shu-Juan; Zhou, Jun-Hua; Zhong, Xue-Shan; Chen, Qing

    2016-06-01

    Barrett's esophagus (BE) is considered the most important risk factor for development of esophageal adenocarcinoma. Confocal laser endomicroscopy (CLE) is a recently developed technique used to diagnose neoplasia in BE. This meta-analysis was performed to assess the accuracy of CLE for diagnosis of neoplasia in BE. We searched EMBASE, PubMed, Cochrane Library, and Web of Science to identify relevant studies among all articles published in English up to June 27, 2015. The quality of included studies was assessed using QUADAS-2. Per-patient and per-lesion pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with 95% confidence intervals (CIs) were calculated. In total, 14 studies were included in the final analysis, covering 789 patients with 4047 lesions. Seven studies were included in the per-patient analysis. Pooled sensitivity and specificity were 89% (95% CI: 0.82-0.94) and 83% (95% CI: 0.78-0.86), respectively. Ten studies were included in the per-lesion analysis. Compared with the per-patient analysis, the corresponding pooled sensitivity declined to 77% (95% CI: 0.73-0.81) and specificity increased to 89% (95% CI: 0.87-0.90). Subgroup analysis showed that probe-based CLE (pCLE) was superior to endoscope-based CLE (eCLE) in pooled specificity [91.4% (95% CI: 89.7-92.9) vs 86.1% (95% CI: 84.3-87.8)] and AUC for the sROC (0.885 vs 0.762). Confocal laser endomicroscopy is a valid method to accurately differentiate neoplasms from non-neoplasms in BE. It can be applied to BE surveillance and early diagnosis of esophageal adenocarcinoma. © 2015 Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

  6. Extending 'Deep Blue' aerosol retrieval coverage to cases of absorbing aerosols above clouds: sensitivity analysis and first case studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sayer, Andrew M.; Hsu, C.; Bettenhausen, Corey

    Cases of absorbing aerosols above clouds (AAC), such as smoke or mineral dust, are omitted from most routinely-processed space-based aerosol optical depth (AOD) data products, including those from the Moderate Resolution Imaging Spectroradiometer (MODIS). This study presents a sensitivity analysis and preliminary algorithm to retrieve above-cloud AOD and liquid cloud optical depth (COD) for AAC cases from MODIS or similar

  7. Imaging modalities for characterising focal pancreatic lesions.

    PubMed

    Best, Lawrence Mj; Rawji, Vishal; Pereira, Stephen P; Davidson, Brian R; Gurusamy, Kurinchi Selvan

    2017-04-17

    Increasing numbers of incidental pancreatic lesions are being detected each year. Accurate characterisation of pancreatic lesions into benign, precancerous, and cancer masses is crucial in deciding whether to use treatment or surveillance. Distinguishing benign lesions from precancerous and cancerous lesions can prevent patients from undergoing unnecessary major surgery. Despite the importance of accurately classifying pancreatic lesions, there is no clear algorithm for management of focal pancreatic lesions. To determine and compare the diagnostic accuracy of various imaging modalities in detecting cancerous and precancerous lesions in people with focal pancreatic lesions. We searched the CENTRAL, MEDLINE, Embase, and Science Citation Index until 19 July 2016. We searched the references of included studies to identify further studies. We did not restrict studies based on language or publication status, or whether data were collected prospectively or retrospectively. We planned to include studies reporting cross-sectional information on the index test (CT (computed tomography), MRI (magnetic resonance imaging), PET (positron emission tomography), EUS (endoscopic ultrasound), EUS elastography, and EUS-guided biopsy or FNA (fine-needle aspiration)) and reference standard (confirmation of the nature of the lesion was obtained by histopathological examination of the entire lesion by surgical excision, or histopathological examination for confirmation of precancer or cancer by biopsy and clinical follow-up of at least six months in people with negative index tests) in people with pancreatic lesions irrespective of language or publication status or whether the data were collected prospectively or retrospectively. Two review authors independently searched the references to identify relevant studies and extracted the data. We planned to use the bivariate analysis to calculate the summary sensitivity and specificity with their 95% confidence intervals and the hierarchical summary receiver operating characteristic (HSROC) to compare the tests and assess heterogeneity, but used simpler models (such as univariate random-effects model and univariate fixed-effect model) for combining studies when appropriate because of the sparse data. We were unable to compare the diagnostic performance of the tests using formal statistical methods because of sparse data. We included 54 studies involving a total of 3,196 participants evaluating the diagnostic accuracy of various index tests. In these 54 studies, eight different target conditions were identified with different final diagnoses constituting benign, precancerous, and cancerous lesions. None of the studies was of high methodological quality. None of the comparisons in which single studies were included was of sufficiently high methodological quality to warrant highlighting of the results. For differentiation of cancerous lesions from benign or precancerous lesions, we identified only one study per index test. The second analysis, of studies differentiating cancerous versus benign lesions, provided three tests in which meta-analysis could be performed. The sensitivities and specificities for diagnosing cancer were: EUS-FNA: sensitivity 0.79 (95% confidence interval (CI) 0.07 to 1.00), specificity 1.00 (95% CI 0.91 to 1.00); EUS: sensitivity 0.95 (95% CI 0.84 to 0.99), specificity 0.53 (95% CI 0.31 to 0.74); PET: sensitivity 0.92 (95% CI 0.80 to 0.97), specificity 0.65 (95% CI 0.39 to 0.84). 
The third analysis, of studies differentiating precancerous or cancerous lesions from benign lesions, only provided one test (EUS-FNA) in which meta-analysis was performed. EUS-FNA had moderate sensitivity for diagnosing precancerous or cancerous lesions (sensitivity 0.73 (95% CI 0.01 to 1.00) and high specificity 0.94 (95% CI 0.15 to 1.00), the extremely wide confidence intervals reflecting the heterogeneity between the studies). The fourth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (dysplasia), provided three tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing invasive carcinoma were: CT: sensitivity 0.72 (95% CI 0.50 to 0.87), specificity 0.92 (95% CI 0.81 to 0.97); EUS: sensitivity 0.78 (95% CI 0.44 to 0.94), specificity 0.91 (95% CI 0.61 to 0.98); EUS-FNA: sensitivity 0.66 (95% CI 0.03 to 0.99), specificity 0.92 (95% CI 0.73 to 0.98). The fifth analysis, of studies differentiating cancerous (high-grade dysplasia or invasive carcinoma) versus precancerous (low- or intermediate-grade dysplasia), provided six tests in which meta-analysis was performed. The sensitivities and specificities for diagnosing cancer (high-grade dysplasia or invasive carcinoma) were: CT: sensitivity 0.87 (95% CI 0.00 to 1.00), specificity 0.96 (95% CI 0.00 to 1.00); EUS: sensitivity 0.86 (95% CI 0.74 to 0.92), specificity 0.91 (95% CI 0.83 to 0.96); EUS-FNA: sensitivity 0.47 (95% CI 0.24 to 0.70), specificity 0.91 (95% CI 0.32 to 1.00); EUS-FNA carcinoembryonic antigen 200 ng/mL: sensitivity 0.58 (95% CI 0.28 to 0.83), specificity 0.51 (95% CI 0.19 to 0.81); MRI: sensitivity 0.69 (95% CI 0.44 to 0.86), specificity 0.93 (95% CI 0.43 to 1.00); PET: sensitivity 0.90 (95% CI 0.79 to 0.96), specificity 0.94 (95% CI 0.81 to 0.99). The sixth analysis, of studies differentiating cancerous (invasive carcinoma) from precancerous (low-grade dysplasia), provided no tests in which meta-analysis was performed. The seventh analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia), provided two tests in which meta-analysis was performed. The sensitivity and specificity for diagnosing cancer were: CT: sensitivity 0.83 (95% CI 0.68 to 0.92), specificity 0.83 (95% CI 0.64 to 0.93) and MRI: sensitivity 0.80 (95% CI 0.58 to 0.92), specificity 0.81 (95% CI 0.53 to 0.95), respectively. The eighth analysis, of studies differentiating precancerous or cancerous (intermediate- or high-grade dysplasia or invasive carcinoma) from precancerous (low-grade dysplasia) or benign lesions, provided no test in which meta-analysis was performed. There were no major alterations in the subgroup analysis of cystic pancreatic focal lesions (42 studies; 2086 participants). None of the included studies evaluated EUS elastography or sequential testing. We were unable to arrive at any firm conclusions because of the differences in the way that study authors classified focal pancreatic lesions into cancerous, precancerous, and benign lesions; the inclusion of few studies with wide confidence intervals for each comparison; poor methodological quality in the studies; and heterogeneity in the estimates within comparisons.

  8. Design sensitivity analysis and optimization tool (DSO) for sizing design applications

    NASA Technical Reports Server (NTRS)

    Chang, Kuang-Hua; Choi, Kyung K.; Perng, Jyh-Hwa

    1992-01-01

    The DSO tool, a structural design software system that provides the designer with a graphics-based menu-driven design environment to perform easy design optimization for general applications, is presented. Three design stages, preprocessing, design sensitivity analysis, and postprocessing, are implemented in the DSO to allow the designer to carry out the design process systematically. A framework, including data base, user interface, foundation class, and remote module, has been designed and implemented to facilitate software development for the DSO. A number of dedicated commercial software/packages have been integrated in the DSO to support the design procedures. Instead of parameterizing an FEM, design parameters are defined on a geometric model associated with physical quantities, and the continuum design sensitivity analysis theory is implemented to compute design sensitivity coefficients using postprocessing data from the analysis codes. A tracked vehicle road wheel is given as a sizing design application to demonstrate the DSO's easy and convenient design optimization process.

  9. Comparative Sensitivity Analysis of Muscle Activation Dynamics

    PubMed Central

    Günther, Michael; Götz, Thomas

    2015-01-01

    We mathematically compared two models of mammalian striated muscle activation dynamics proposed by Hatze and Zajac. Both models are representative of a broad variety of biomechanical models formulated as ordinary differential equations (ODEs). These models incorporate parameters that directly represent known physiological properties. Other parameters have been introduced to reproduce empirical observations. We used sensitivity analysis to investigate the influence of model parameters on the ODE solutions. In addition, we expanded an existing approach to treating initial conditions as parameters and to calculating second-order sensitivities. Furthermore, we used a global sensitivity analysis approach to include finite ranges of parameter values. Hence, a theoretician striving for model reduction could use the method for identifying particularly low sensitivities to detect superfluous parameters. An experimenter could use it for identifying particularly high sensitivities to improve parameter estimation. Hatze's nonlinear model incorporates some parameters to which activation dynamics is clearly more sensitive than to any parameter in Zajac's linear model. Unlike Zajac's model, however, Hatze's model can reproduce measured shifts in optimal muscle length with varied muscle activity. Accordingly, we extracted a specific parameter set for Hatze's model that combines best with a particular muscle force-length relation. PMID:26417379

  10. Accuracy of computed tomographic features in differentiating intestinal tuberculosis from Crohn's disease: a systematic review with meta-analysis.

    PubMed

    Kedia, Saurabh; Sharma, Raju; Sreenivas, Vishnubhatla; Madhusudhan, Kumble Seetharama; Sharma, Vishal; Bopanna, Sawan; Pratap Mouli, Venigalla; Dhingra, Rajan; Yadav, Dawesh Prakash; Makharia, Govind; Ahuja, Vineet

    2017-04-01

    Abdominal computed tomography (CT) can noninvasively image the entire gastrointestinal tract and assess extraintestinal features that are important in differentiating Crohn's disease (CD) and intestinal tuberculosis (ITB). The present meta-analysis pooled the results of all studies on the role of abdominal CT in differentiating between CD and ITB. We searched PubMed and Embase for all publications in English that analyzed the features differentiating between CD and ITB on abdominal CT. The features included comb sign, necrotic lymph nodes, asymmetric bowel wall thickening, skip lesions, fibrofatty proliferation, mural stratification, ileocaecal area, long segment, and left colonic involvements. Sensitivity, specificity, positive and negative likelihood ratios, and diagnostic odds ratio (DOR) were calculated for all the features. A symmetric receiver operating characteristic curve was plotted for features present in >3 studies. Heterogeneity and publication bias were assessed, and sensitivity analysis was performed by excluding studies that compared features on conventional abdominal CT instead of CT enterography (CTE). We included 6 studies (4 CTE, 1 conventional abdominal CT, and 1 CTE+conventional abdominal CT) involving 417 and 195 patients with CD and ITB, respectively. Necrotic lymph nodes had the highest diagnostic accuracy (sensitivity, 23%; specificity, 100%; DOR, 30.2) for ITB diagnosis, and comb sign (sensitivity, 82%; specificity, 81%; DOR, 21.5) followed by skip lesions (sensitivity, 86%; specificity, 74%; DOR, 16.5) had the highest diagnostic accuracy for CD diagnosis. On sensitivity analysis, the diagnostic accuracy of other features excluding asymmetric bowel wall thickening remained similar. Necrotic lymph nodes and comb sign on abdominal CT had the best diagnostic accuracy in differentiating CD and ITB.
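
    For reference, the diagnostic odds ratio follows directly from sensitivity and specificity; a minimal helper is sketched below, using the rounded comb-sign figures from the abstract (so the result differs slightly from the pooled DOR of 21.5, which is based on unrounded study data).

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = (sens/(1-sens)) / ((1-spec)/spec); also returns the likelihood ratios."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos / lr_neg, lr_pos, lr_neg

# Rounded comb-sign inputs: DOR comes out near 19 rather than the pooled 21.5 reported above
dor, lr_pos, lr_neg = diagnostic_odds_ratio(0.82, 0.81)
print(f"comb sign: DOR {dor:.1f}, LR+ {lr_pos:.2f}, LR- {lr_neg:.2f}")
```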

  11. The effects of abdominal lipectomy in metabolic syndrome components and insulin sensitivity in females: A systematic review and meta-analysis.

    PubMed

    Seretis, Konstantinos; Goulis, Dimitrios G; Koliakos, Georgios; Demiri, Efterpi

    2015-12-01

    Adipose tissue is an endocrine organ, which is implicated in the pathogenesis of obesity, metabolic syndrome and diabetes. Lipectomy offers a unique opportunity to permanently reduce the absolute number of fat cells, though its functional role remains unclear. This systematic review and meta-analysis aims to assess the effect of abdominal lipectomy on metabolic syndrome components and insulin sensitivity in women. A predetermined protocol, established according to the Cochrane Handbook's recommendations, was used. An electronic search in the MEDLINE, Scopus, Cochrane Library and CENTRAL electronic databases was conducted from inception to May 14, 2015. This search was supplemented by a review of reference lists of potentially eligible studies and a manual search of key journals in the field of plastic surgery. Eligible studies were prospective studies with ≥1 month of follow-up that included only females who underwent abdominal lipectomy and reported on parameters of metabolic syndrome and insulin sensitivity. The systematic review included 11 studies with a total of 271 individuals. Conflicting results were revealed, though most studies showed no significant metabolic effects after lipectomy. The meta-analysis included 4 studies with 140 subjects. No significant changes were revealed between lipectomy and control groups. This meta-analysis provides evidence that abdominal lipectomy in females does not significantly affect the components of metabolic syndrome and insulin sensitivity. Further high-quality studies are needed to elucidate the potential metabolic effects of abdominal lipectomy. Systematic review registration PROSPERO CRD42015017564 (www.crd.york.ac.uk/PROSPERO). Copyright © 2015 Elsevier Inc. All rights reserved.

  12. LSENS, a general chemical kinetics and sensitivity analysis code for homogeneous gas-phase reactions. 2: Code description and usage

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 2 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 2 describes the code, how to modify it, and its usage, including preparation of the problem data file required to execute LSENS. Code usage is illustrated by several example problems, which further explain preparation of the problem data file and show how to obtain desired accuracy in the computed results. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions. Part 1 (NASA RP-1328) derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved by LSENS. Part 3 (NASA RP-1330) explains the kinetics and kinetics-plus-sensitivity-analysis problems supplied with LSENS and presents sample results.

  13. An Equal Employment Opportunity Sensitivity Workshop

    ERIC Educational Resources Information Center

    Patten, Thomas H., Jr.; Dorey, Lester E.

    1972-01-01

    The equal employment opportunity sensitivity workshop seems to be a useful training device for getting an organization started on developing black and white change agents. This document reports on the establishment of such a workshop at the U.S. Army Tank Automotive Command (TACOM) and includes charts of the design, characteristics, analysis of results, program…

  14. The association between paternal sensitivity and infant-father attachment security: a meta-analysis of three decades of research.

    PubMed

    Lucassen, Nicole; Tharner, Anne; Van Ijzendoorn, Marinus H; Bakermans-Kranenburg, Marian J; Volling, Brenda L; Verhulst, Frank C; Lambregtse-Van den Berg, Mijke P; Tiemeier, Henning

    2011-12-01

    For almost three decades, the association between paternal sensitivity and infant-father attachment security has been studied. The first wave of studies on the correlates of infant-father attachment showed a weak association between paternal sensitivity and infant-father attachment security (r = .13, p < .001, k = 8, N = 546). In the current paper, a meta-analysis of the association between paternal sensitivity and infant-father attachment based on all studies currently available is presented, and the change over time of the association between paternal sensitivity and infant-father attachment is investigated. Studies using an observational measure of paternal interactive behavior with the infant, and the Strange Situation Procedure to observe the attachment relationship were included. Paternal sensitivity is differentiated from paternal sensitivity combined with stimulation in the interaction with the infant. Higher levels of paternal sensitivity were associated with more infant-father attachment security (r = .12, p < .001, k = 16, N = 1,355). Fathers' sensitive play combined with stimulation was not more strongly associated with attachment security than sensitive interactions without stimulation of play. Despite possible changes in paternal role patterns, we did not find stronger associations between paternal sensitivity and infant attachment in more recent years.

  15. Modeling, design, packaging and experimental analysis of liquid-phase shear-horizontal surface acoustic wave sensors

    NASA Astrophysics Data System (ADS)

    Pollard, Thomas B

    Recent advances in microbiology, computational capabilities, and microelectromechanical-system fabrication techniques permit modeling, design, and fabrication of low-cost, miniature, sensitive and selective liquid-phase sensors and lab-on-a-chip systems. Such devices are expected to replace expensive, time-consuming, and bulky laboratory-based testing equipment. Potential applications for devices include: fluid characterization for material science and industry; chemical analysis in medicine and pharmacology; study of biological processes; food analysis; chemical kinetics analysis; and environmental monitoring. When combined with liquid-phase packaging, sensors based on surface-acoustic-wave (SAW) technology are considered strong candidates. For this reason, such devices are the focus of this work, with emphasis placed on device modeling and packaging for liquid-phase operation. Regarding modeling, topics considered include mode excitation efficiency of transducers; mode sensitivity based on guiding structure materials/geometries; and use of new piezoelectric materials. On packaging, topics considered include package interfacing with SAW devices, and minimization of packaging effects on device performance. In this work, novel numerical models are theoretically developed and implemented to study propagation and transduction characteristics of sensor designs using wave/constitutive equations, Green's functions, and boundary/finite element methods. Using developed simulation tools that consider the finite thickness of all device electrodes, transduction efficiency for SAW transducers with neighboring uniform or periodic guiding electrodes is reported for the first time. Results indicate that finite electrode thickness strongly affects efficiency. Using dense electrodes, efficiency is shown to approach 92% and 100% for uniform and periodic electrode guiding, respectively, yielding improved sensor detection limits. A numerical sensitivity analysis is presented targeting viscosity using uniform-electrode and shear-horizontal mode configurations on potassium-niobate, langasite, and quartz substrates. Optimum configurations are determined yielding maximum sensitivity. Results show mode propagation-loss and sensitivity to viscosity are correlated by a factor independent of substrate material. The analysis is useful for designing devices meeting sensitivity and signal level requirements. A novel, rapid and precise microfluidic chamber alignment/bonding method was developed for SAW platforms. The package is shown to have little effect on device performance and permits simple macrofluidic interfacing. Lastly, prototypes were designed, fabricated, and tested for viscosity and biosensor applications; results show the ability to detect as little as 1% glycerol in water and surface-bound DNA crosslinking.

  16. Surrogacy of progression-free survival (PFS) for overall survival (OS) in esophageal cancer trials with preoperative therapy: Literature-based meta-analysis.

    PubMed

    Kataoka, K; Nakamura, K; Mizusawa, J; Kato, K; Eba, J; Katayama, H; Shibata, T; Fukuda, H

    2017-10-01

    There have been no reports evaluating progression-free survival (PFS) as a surrogate endpoint in resectable esophageal cancer. This study was conducted to evaluate the trial-level correlations between PFS and overall survival (OS) in resectable esophageal cancer with preoperative therapy and to explore the potential benefit of PFS as a surrogate endpoint for OS. A systematic literature search of randomized trials with preoperative chemotherapy or preoperative chemoradiotherapy for esophageal cancer reported from January 1990 to September 2014 was conducted using PubMed and the Cochrane Library. Weighted linear regression, using the sample size of each trial as the weight, was used to estimate the coefficient of determination (R²) between PFS and OS. The primary analysis included trials in which the HR for both PFS and OS was reported. The sensitivity analysis included trials in which either the HR or the median survival time of PFS and OS was reported. In the sensitivity analysis, HR was estimated from the median survival time of PFS and OS, assuming an exponential distribution. Of 614 articles, 10 trials were selected for the primary analysis and 15 for the sensitivity analysis. The primary analysis did not show a correlation between treatment effects on PFS and OS (R² = 0.283, 95% CI [0.00-0.90]). The sensitivity analysis did not show an association between PFS and OS (R² = 0.084, 95% CI [0.00-0.70]). Although the number of randomized controlled trials evaluating preoperative therapy for esophageal cancer is limited at the moment, PFS is not suitable as a surrogate endpoint for OS for use as a primary endpoint. Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
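
    The trial-level surrogacy measure described above is a weighted coefficient of determination from a regression of OS treatment effects on PFS treatment effects, weighted by trial sample size. The following sketch shows one way to compute such a weighted R² with NumPy; the log hazard ratios and sample sizes are invented for illustration and are not the trial data used in the study.

```python
import numpy as np

# Hypothetical per-trial data: log hazard ratios for PFS and OS, and sample sizes.
log_hr_pfs = np.array([-0.25, -0.10, 0.05, -0.30, -0.15])
log_hr_os  = np.array([-0.20, -0.05, 0.10, -0.18, -0.12])
n          = np.array([120, 250, 90, 300, 180], dtype=float)  # weights

# Weighted least-squares fit of the OS effect on the PFS effect.
W = np.diag(n)
X = np.column_stack([np.ones_like(log_hr_pfs), log_hr_pfs])
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_hr_os)

# Weighted coefficient of determination R^2.
fitted = X @ beta
ybar_w = np.average(log_hr_os, weights=n)
ss_res = np.sum(n * (log_hr_os - fitted) ** 2)
ss_tot = np.sum(n * (log_hr_os - ybar_w) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"slope = {beta[1]:.3f}, weighted R^2 = {r2:.3f}")
```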

  17. Is High Resolution Melting Analysis (HRMA) Accurate for Detection of Human Disease-Associated Mutations? A Meta Analysis

    PubMed Central

    Ma, Feng-Li; Jiang, Bo; Song, Xiao-Xiao; Xu, An-Gao

    2011-01-01

    Background High Resolution Melting Analysis (HRMA) is becoming the preferred method for mutation detection. However, its accuracy in the individual clinical diagnostic setting is variable. To assess the diagnostic accuracy of HRMA for human mutations in comparison to DNA sequencing in different routine clinical settings, we have conducted a meta-analysis of published reports. Methodology/Principal Findings Out of 195 publications obtained from the initial search criteria, thirty-four studies assessing the accuracy of HRMA were included in the meta-analysis. We found that HRMA was a highly sensitive test for detecting disease-associated mutations in humans. Overall, the summary sensitivity was 97.5% (95% confidence interval (CI): 96.8–98.5; I² = 27.0%). Subgroup analysis showed even higher sensitivity for non-HR-1 instruments (sensitivity 98.7% (95% CI: 97.7–99.3; I² = 0.0%)) and an eligible sample size subgroup (sensitivity 99.3% (95% CI: 98.1–99.8; I² = 0.0%)). HRMA specificity showed considerable heterogeneity between studies. Sensitivity of the techniques was influenced by sample size and instrument type but not by sample source or dye type. Conclusions/Significance These findings show that HRMA is a highly sensitive, simple and low-cost test to detect human disease-associated mutations, especially for samples with mutations of low incidence. The burden on DNA sequencing could be significantly reduced by the implementation of HRMA, but it should be recognized that its sensitivity varies according to the number of samples with/without mutations, and positive results require DNA sequencing for confirmation. PMID:22194806
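
    As a rough illustration of how per-study sensitivities and the I² heterogeneity statistic reported above can be pooled, the sketch below uses inverse-variance (fixed-effect) pooling of logit-transformed sensitivities. This is a generic approach, not necessarily the exact model used by the authors, and the study counts are hypothetical.

```python
import numpy as np

# Hypothetical per-study counts: true positives and total mutation-positive samples.
tp  = np.array([48, 95, 60, 120])
pos = np.array([50, 98, 61, 124])

# Logit-transformed sensitivities with a 0.5 continuity correction,
# pooled by inverse-variance (fixed-effect) weighting.
p = (tp + 0.5) / (pos + 1.0)
logit = np.log(p / (1 - p))
var = 1.0 / (tp + 0.5) + 1.0 / (pos - tp + 0.5)
w = 1.0 / var
pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))

# Cochran's Q and the I^2 heterogeneity statistic.
Q = np.sum(w * (logit - pooled_logit) ** 2)
df = len(tp) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"pooled sensitivity = {pooled_sens:.3f}, I^2 = {I2:.1f}%")
```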

  18. Could ginseng-based medicines be better than nitrates in treating ischemic heart disease? A systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Jia, Yongliang; Zhang, Shikai; Huang, Fangyi; Leung, Siu-wai

    2012-06-01

    Ginseng-based medicines and nitrates are commonly used in treating ischemic heart disease (IHD) angina pectoris in China. Hundreds of randomized controlled trials (RCTs) reported in the Chinese language have claimed that ginseng-based medicines can relieve the symptoms of IHD. This study provides the first PRISMA-compliant systematic review with sensitivity and subgroup analyses to evaluate the RCTs comparing the efficacies of ginseng-based medicines and nitrates in treating ischemic heart disease, particularly angina pectoris. Past RCTs published up to 2010 on ginseng versus nitrates in treating IHD for 14 or more days were retrieved from major English and Chinese databases, including PubMed, Science Direct, Cochrane Library, WangFang Data, and Chinese National Knowledge Infrastructure. The qualities of included RCTs were assessed with the Jadad scale, a refined Jadad scale called the M scale, the CONSORT 2010 checklist, and the Cochrane risk of bias tool. Meta-analysis was performed on the primary outcomes, including the improvement of symptoms and electrocardiography (ECG). Subgroup analysis, sensitivity analysis, and meta-regression were performed to evaluate the effects of study characteristics of RCTs, including quality, follow-up periods, and efficacy definitions, on the overall effect size of ginseng. Eighteen RCTs with 1549 participants were included. Overall odds ratios for comparing ginseng-based medicines with nitrates were 3.00 (95% CI: 2.27-3.96) in symptom improvement (n=18) and 1.61 (95% CI: 1.20-2.15) in ECG improvement (n=10). Subgroup analysis, sensitivity analysis, and meta-regression found no significant difference in overall effects among all study characteristics, indicating that the overall effects were stable. The meta-analysis of 18 eligible RCTs demonstrates moderate evidence that ginseng is more effective than nitrates for treating angina pectoris. However, further RCTs of higher quality, with longer follow-up periods, larger sample sizes, and multi-center/multi-country designs are still required to verify this efficacy. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  19. Nursing-sensitive indicators: a concept analysis

    PubMed Central

    Heslop, Liza; Lu, Sai

    2014-01-01

    Aim To report a concept analysis of nursing-sensitive indicators within the applied context of the acute care setting. Background The concept of ‘nursing sensitive indicators’ is valuable to elaborate nursing care performance. The conceptual foundation, theoretical role, meaning, use and interpretation of the concept tend to differ. The elusiveness of the concept and the ambiguity of its attributes may have hindered research efforts to advance its application in practice. Design Concept analysis. Data sources Using ‘clinical indicators’ or ‘quality of nursing care’ as subject headings and incorporating keyword combinations of ‘acute care’ and ‘nurs*’, CINAHL and MEDLINE with full text in EBSCOhost databases were searched for English language journal articles published between 2000–2012. Only primary research articles were selected. Methods A hybrid approach was undertaken, incorporating traditional strategies as per Walker and Avant and a conceptual matrix based on Holzemer's Outcomes Model for Health Care Research. Results The analysis revealed two main attributes of nursing-sensitive indicators. Structural attributes related to health service operation included: hours of nursing care per patient day, nurse staffing. Outcome attributes related to patient care included: the prevalence of pressure ulcer, falls and falls with injury, nosocomial selective infection and patient/family satisfaction with nursing care. Conclusion This concept analysis may be used as a basis to advance understandings of the theoretical structures that underpin both research and practical application of quality dimensions of nursing care performance. PMID:25113388

  20. LASER BIOLOGY AND MEDICINE: Application of tunable diode lasers for a highly sensitive analysis of gaseous biomarkers in exhaled air

    NASA Astrophysics Data System (ADS)

    Stepanov, E. V.; Milyaev, Varerii A.

    2002-11-01

    The application of tunable diode lasers for a highly sensitive analysis of gaseous biomarkers in exhaled air in biomedical diagnostics is discussed. The principle of operation and the design of a laser analyser for studying the composition of exhaled air are described. The results of detection of gaseous biomarkers in exhaled air, including clinical studies, which demonstrate the diagnostic possibilities of the method, are presented.

  1. LSENS, a general chemical kinetics and sensitivity analysis code for gas-phase reactions: User's guide

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan; Bittker, David A.

    1993-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system; steady, one-dimensional, inviscid flow; shock-initiated reaction; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.

  2. Detector Development for the MARE Neutrino Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Galeazzi, M.; Bogorin, D.; Molina, R.

    2009-12-16

    The MARE experiment is designed to measure the mass of the neutrino with sub-eV sensitivity by measuring the beta decay of 187Re with cryogenic microcalorimeters. A preliminary analysis shows that, to achieve the necessary statistics, between 10,000 and 50,000 detectors are likely necessary. We have fabricated and characterized Iridium transition edge sensors with high reproducibility and uniformity for such a large scale experiment. We have also started a full scale simulation of the experimental setup for MARE, including thermalization in the absorber, detector response, and optimum filter analysis, to understand the issues related to reaching a sub-eV sensitivity and to optimize the design of the MARE experiment. We present our characterization of the Ir devices, including reproducibility, uniformity, and sensitivity, and we discuss the implementation and capabilities of our full scale simulation.

  3. An Evaluation of the Potential for Shifting of Freight from Truck to Rail and Its Impacts on Energy Use and GHG Emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Yan; Vyas, Anant D.; Guo, Zhaomiao

    This report summarizes our evaluation of the potential energy-use and GHG-emissions reduction achieved by shifting freight from truck to rail under a most-likely scenario. A sensitivity analysis is also included. The sensitivity analysis shows changes in energy use and GHG emissions when key parameters are varied. The major contribution and distinction from previous studies is that this study considers the rail level of service (LOS) and commodity movements at the origin-destination (O-D) level. In addition, this study considers the fragility and time sensitivity of each commodity type.

  4. Satellite services system analysis study. Volume 2: Satellite and services user model

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Satellite services needs are analyzed. Topics include methodology: a satellite user model; representative servicing scenarios; potential service needs; manned, remote, and automated involvement; and inactive satellites/debris. Satellite and services user model development is considered. Groundrules and assumptions, servicing, events, and sensitivity analysis are included. Selection of references satellites is also discussed.

  5. Colonic lesion characterization in inflammatory bowel disease: A systematic review and meta-analysis

    PubMed Central

    Lord, Richard; Burr, Nicholas E; Mohammed, Noor; Subramanian, Venkataraman

    2018-01-01

    AIM To perform a systematic review and meta-analysis for the diagnostic accuracy of in vivo lesion characterization in colonic inflammatory bowel disease (IBD), using optical imaging techniques, including virtual chromoendoscopy (VCE), dye-based chromoendoscopy (DBC), magnification endoscopy and confocal laser endomicroscopy (CLE). METHODS We searched Medline, Embase and the Cochrane library. We performed a bivariate meta-analysis to calculate the pooled estimates of sensitivity, specificity, positive and negative likelihood ratios (+LHR, -LHR), diagnostic odds ratios (DOR), and area under the SROC curve (AUSROC) for each technology group. A subgroup analysis was performed to investigate differences in real-time non-magnified Kudo pit patterns (with VCE and DBC) and real-time CLE. RESULTS We included 22 studies [1491 patients; 4674 polyps, of which 539 (11.5%) were neoplastic]. Real-time CLE had a pooled sensitivity of 91% (95%CI: 66%-98%), specificity of 97% (95%CI: 94%-98%), and an AUSROC of 0.98 (95%CI: 0.97-0.99). Magnification endoscopy had a pooled sensitivity of 90% (95%CI: 77%-96%) and specificity of 87% (95%CI: 81%-91%). VCE had a pooled sensitivity of 86% (95%CI: 62%-95%) and specificity of 87% (95%CI: 72%-95%). DBC had a pooled sensitivity of 67% (95%CI: 44%-84%) and specificity of 86% (95%CI: 72%-94%). CONCLUSION Real-time CLE is a highly accurate technology for differentiating neoplastic from non-neoplastic lesions in patients with colonic IBD. However, most CLE studies were performed by single expert users within tertiary centres, potentially confounding these results. PMID:29563760

  6. Simulation analysis of an integrated model for dynamic cellular manufacturing system

    NASA Astrophysics Data System (ADS)

    Hao, Chunfeng; Luan, Shichao; Kong, Jili

    2017-05-01

    Application of a dynamic cellular manufacturing system (DCMS) is a well-known strategy to improve manufacturing efficiency in production environments with high variety and low volume of production. Often, neither the trade-off of inter- and intra-cell material movements nor the trade-off of hiring and firing of operators is examined in detail. This paper presents simulation results of an integrated mixed-integer model, including sensitivity analysis, for several numerical examples. The comprehensive model includes cell formation, inter- and intra-cell material handling, inventory and backorder holding, operator assignment (including resource adjustment) and flexible production routing. The model considers multi-period production planning with flexible resources (machines and operators) where each period has different demands. The results verify the validity and sensitivity of the proposed model using a genetic algorithm.

  7. Design sensitivity analysis of rotorcraft airframe structures for vibration reduction

    NASA Technical Reports Server (NTRS)

    Murthy, T. Sreekanta

    1987-01-01

    Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.

  8. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
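
    The variance-based measures referred to above (main and total contributions of each input to the output variance) can be estimated with a Saltelli-style Monte Carlo scheme. The sketch below does this with plain NumPy on a toy nonlinear function standing in for an exposure model; the function, sample size, and input ranges are illustrative assumptions, not the SHEDS testbed.

```python
import numpy as np

def model(x):
    """Toy nonlinear model with an interaction term (stand-in for an exposure model)."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + x[:, 0] * x[:, 2]

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(0, 1, (n, d))
B = rng.uniform(0, 1, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]                                 # A with column i taken from B
    fAB = model(AB)
    S_first = np.mean(fB * (fAB - fA)) / var           # first-order (Saltelli 2010) estimator
    S_total = 0.5 * np.mean((fA - fAB) ** 2) / var     # total-order (Jansen) estimator
    print(f"x{i}: first-order = {S_first:.3f}, total = {S_total:.3f}")
```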

  9. Uncertainty and Sensitivity Analysis of Afterbody Radiative Heating Predictions for Earth Entry

    NASA Technical Reports Server (NTRS)

    West, Thomas K., IV; Johnston, Christopher O.; Hosder, Serhat

    2016-01-01

    The objective of this work was to perform sensitivity analysis and uncertainty quantification for afterbody radiative heating predictions of the Stardust capsule during Earth entry at peak afterbody radiation conditions. The radiation environment in the afterbody region poses significant challenges for accurate uncertainty quantification and sensitivity analysis due to the complexity of the flow physics, computational cost, and large number of uncertain variables. In this study, first a sparse collocation non-intrusive polynomial chaos approach along with global non-linear sensitivity analysis was used to identify the most significant uncertain variables and reduce the dimensions of the stochastic problem. Then, a total order stochastic expansion was constructed over only the important parameters for an efficient and accurate estimate of the uncertainty in radiation. Based on previous work, 388 uncertain parameters were considered in the radiation model, which came from the thermodynamics, flow field chemistry, and radiation modeling. The sensitivity analysis showed that only four of these variables contributed significantly to afterbody radiation uncertainty, accounting for almost 95% of the uncertainty. These included the electronic-impact excitation rate for N between level 2 and level 5 and the rates of three chemical reactions influencing N, N(+), O, and O(+) number densities in the flow field.

  10. The dream of a one-stop-shop: Meta-analysis on myocardial perfusion CT.

    PubMed

    Pelgrim, Gert Jan; Dorrius, Monique; Xie, Xueqian; den Dekker, Martijn A M; Schoepf, U Joseph; Henzler, Thomas; Oudkerk, Matthijs; Vliegenthart, Rozemarijn

    2015-12-01

    To determine the diagnostic performance of computed tomography (CT) perfusion techniques for the detection of functionally relevant coronary artery disease (CAD) in comparison to reference standards, including invasive coronary angiography (ICA), single photon emission computed tomography (SPECT), and magnetic resonance imaging (MRI). PubMed, Web of Knowledge and Embase were searched from January 1, 1998 until July 1, 2014. The search yielded 9475 articles. After duplicate removal, 6041 were screened on title and abstract. The resulting 276 articles were independently analyzed in full-text by two reviewers, and included if the inclusion criteria were met. The articles reporting diagnostic parameters including true positive, true negative, false positive and false negative were subsequently evaluated for the meta-analysis. Results were pooled according to CT perfusion technique, namely the snapshot techniques: single-phase rest, single-phase stress, single-phase dual-energy stress and combined coronary CT angiography [rest] and single-phase stress, as well as the dynamic technique: dynamic stress CT perfusion. Twenty-two articles were included in the meta-analysis (1507 subjects). Pooled per-patient sensitivity and specificity of single-phase rest CT compared to rest SPECT were 89% (95% confidence interval [CI], 82-94%) and 88% (95% CI, 78-94%), respectively. Vessel-based sensitivity and specificity of single-phase stress CT compared to ICA-based >70% stenosis were 82% (95% CI, 64-92%) and 78% (95% CI, 61-89%). Segment-based sensitivity and specificity of single-phase dual-energy stress CT in comparison to stress MRI were 75% (95% CI, 60-85%) and 95% (95% CI, 80-99%). Segment-based sensitivity and specificity of dynamic stress CT perfusion compared to stress SPECT were 77% (95% CI, 67-85) and 89% (95% CI, 78-95%). For combined coronary CT angiography and single-phase stress CT, vessel-based sensitivity and specificity in comparison to ICA-based >50% stenosis were 84% (95% CI, 67-93%) and 93% (95% CI, 89-96%). This meta-analysis shows considerable variation in techniques and reference standards for CT of myocardial blood supply. While CT seems sensitive and specific for evaluation of hemodynamically relevant CAD, studies so far are limited in size. Standardization of myocardial perfusion CT technique is essential. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Cost-effectiveness analysis of EGFR mutation testing in patients with non-small cell lung cancer (NSCLC) with gefitinib or carboplatin-paclitaxel.

    PubMed

    Arrieta, Oscar; Anaya, Pablo; Morales-Oyarvide, Vicente; Ramírez-Tirado, Laura Alejandra; Polanco, Ana C

    2016-09-01

    To assess the cost-effectiveness of an EGFR-mutation testing strategy for advanced NSCLC in first-line therapy with either gefitinib or carboplatin-paclitaxel in Mexican institutions. A cost-effectiveness analysis was performed using a discrete event simulation (DES) model to simulate two therapeutic strategies in patients with advanced NSCLC. Strategy one included patients tested for EGFR mutation, with therapy given accordingly. Strategy two included chemotherapy for all patients without testing. All results are presented in 2014 US dollars. The analysis used data on the Mexican frequency of EGFR mutation. A univariate sensitivity analysis was conducted on EGFR-mutation prevalence. Progression-free survival (PFS) transition probabilities were estimated from IPASS data and simulated with a Weibull distribution, with parallel trials run to perform a probabilistic sensitivity analysis. PFS of patients in the testing strategy was 6.76 months (95% CI 6.10-7.44) vs 5.85 months (95% CI 5.43-6.29) in the non-testing group. The one-way sensitivity analysis showed that PFS has a direct relationship with EGFR-mutation prevalence, while the ICER and testing cost have an inverse relationship with EGFR-mutation prevalence. The probabilistic sensitivity analysis showed that all iterations had incremental costs and incremental PFS for strategy 1 in comparison with strategy 2. There is a direct relationship between the ICER and the cost of EGFR testing, with an inverse relationship with the prevalence of EGFR mutation. When prevalence is >10%, the ICER remains constant. This study could impact Mexican and Latin American health policies regarding mutation detection testing and treatment for advanced NSCLC.
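
    The probabilistic sensitivity analysis described above repeatedly re-draws uncertain model inputs and recomputes the incremental cost-effectiveness ratio (ICER). The sketch below is a highly simplified, hypothetical version of that idea using Weibull mean PFS under parameter uncertainty; all shapes, scales, and costs are invented and do not reproduce the published model.

```python
import math
import numpy as np

rng = np.random.default_rng(42)
GAMMA = math.gamma(1 + 1 / 1.3)             # Weibull mean factor for an assumed shape of 1.3

def draw_mean_pfs(scale_mu, scale_sd):
    """One probabilistic draw of mean PFS (months) from a Weibull time-to-progression model."""
    return max(rng.normal(scale_mu, scale_sd), 0.1) * GAMMA

# Hypothetical inputs: Weibull scale (months) and per-patient cost for each strategy.
testing    = dict(scale_mu=7.3, scale_sd=0.4, cost=18_000.0)   # EGFR testing + guided therapy
no_testing = dict(scale_mu=6.3, scale_sd=0.3, cost=15_000.0)   # chemotherapy for all

icers = []
for _ in range(5_000):                       # probabilistic sensitivity iterations
    d_pfs  = draw_mean_pfs(testing['scale_mu'], testing['scale_sd']) \
           - draw_mean_pfs(no_testing['scale_mu'], no_testing['scale_sd'])
    d_cost = testing['cost'] - no_testing['cost']
    if d_pfs > 0:
        icers.append(d_cost / d_pfs)         # incremental cost per PFS month gained
print(f"median ICER = {np.median(icers):,.0f} USD per PFS month gained")
```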

  12. LSENS: A General Chemical Kinetics and Sensitivity Analysis Code for homogeneous gas-phase reactions. Part 1: Theory and numerical solution procedures

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    1994-01-01

    LSENS, the Lewis General Chemical Kinetics and Sensitivity Analysis Code, has been developed for solving complex, homogeneous, gas-phase chemical kinetics problems and contains sensitivity analysis for a variety of problems, including nonisothermal situations. This report is part 1 of a series of three reference publications that describe LSENS, provide a detailed guide to its usage, and present many example problems. Part 1 derives the governing equations and describes the numerical solution procedures for the types of problems that can be solved. The accuracy and efficiency of LSENS are examined by means of various test problems, and comparisons with other methods and codes are presented. LSENS is a flexible, convenient, accurate, and efficient solver for chemical reaction problems such as static system; steady, one-dimensional, inviscid flow; reaction behind incident shock wave, including boundary layer correction; and perfectly stirred (highly backmixed) reactor. In addition, the chemical equilibrium state can be computed for the following assigned states: temperature and pressure, enthalpy and pressure, temperature and volume, and internal energy and volume. For static problems the code computes the sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of the dependent variables and/or the three rate coefficient parameters of the chemical reactions.

  13. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis.

    PubMed

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement.

  14. Designing novel cellulase systems through agent-based modeling and global sensitivity analysis

    PubMed Central

    Apte, Advait A; Senger, Ryan S; Fong, Stephen S

    2014-01-01

    Experimental techniques allow engineering of biological systems to modify functionality; however, there still remains a need to develop tools to prioritize targets for modification. In this study, agent-based modeling (ABM) was used to build stochastic models of complexed and non-complexed cellulose hydrolysis, including enzymatic mechanisms for endoglucanase, exoglucanase, and β-glucosidase activity. Modeling results were consistent with experimental observations of higher efficiency in complexed systems than non-complexed systems and established relationships between specific cellulolytic mechanisms and overall efficiency. Global sensitivity analysis (GSA) of model results identified key parameters for improving overall cellulose hydrolysis efficiency including: (1) the cellulase half-life, (2) the exoglucanase activity, and (3) the cellulase composition. Overall, the following parameters were found to significantly influence cellulose consumption in a consolidated bioprocess (CBP): (1) the glucose uptake rate of the culture, (2) the bacterial cell concentration, and (3) the nature of the cellulase enzyme system (complexed or non-complexed). Broadly, these results demonstrate the utility of combining modeling and sensitivity analysis to identify key parameters and/or targets for experimental improvement. PMID:24830736

  15. Comparison of the diagnostic ability of Moorfield’s regression analysis and glaucoma probability score using Heidelberg retinal tomograph III in eyes with primary open angle glaucoma

    PubMed Central

    Jindal, Shveta; Dada, Tanuj; Sreenivas, V; Gupta, Viney; Sihota, Ramanjit; Panda, Anita

    2010-01-01

    Purpose: To compare the diagnostic performance of the Heidelberg retinal tomograph (HRT) glaucoma probability score (GPS) with that of Moorfield's regression analysis (MRA). Materials and Methods: The study included 50 eyes of normal subjects and 50 eyes of subjects with early-to-moderate primary open angle glaucoma. Images were obtained by using HRT version 3.0. Results: The agreement coefficient (weighted k) for the overall MRA and GPS classification was 0.216 (95% CI: 0.119 – 0.315). The sensitivity and specificity were evaluated using the most specific (borderline results included as test negatives) and least specific criteria (borderline results included as test positives). The MRA sensitivity and specificity were 30.61 and 98% (most specific) and 57.14 and 98% (least specific). The GPS sensitivity and specificity were 81.63 and 73.47% (most specific) and 95.92 and 34.69% (least specific). The MRA gave a higher positive likelihood ratio (28.57 vs. 3.08) and the GPS gave a lower negative likelihood ratio (0.25 vs. 0.44). The sensitivity increased with increasing disc size for both MRA and GPS. Conclusions: There was a poor agreement between the overall MRA and GPS classifications. GPS tended to have higher sensitivities, lower specificities, and lower likelihood ratios than the MRA. The disc size should be taken into consideration when interpreting the results of HRT, as both the GPS and MRA showed decreased sensitivity for smaller discs and the GPS showed decreased specificity for larger discs. PMID:20952832
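
    The likelihood ratios quoted above follow directly from sensitivity and specificity. The short sketch below shows the standard definitions computed from a 2x2 table; the counts are hypothetical and not taken from the study.

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    plr = sens / (1 - spec)      # positive likelihood ratio
    nlr = (1 - sens) / spec      # negative likelihood ratio
    return sens, spec, plr, nlr

# Hypothetical counts for one test against the clinical reference standard.
print(diagnostic_metrics(tp=15, fn=34, tn=49, fp=1))
```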

  16. What Do We Mean By Sensitivity Analysis? The Need For A Comprehensive Characterization Of Sensitivity In Earth System Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2014-12-01

    Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.

  17. Pain Sensitivity Subgroups in Individuals With Spine Pain: Potential Relevance to Short-Term Clinical Outcome

    PubMed Central

    Bialosky, Joel E.; Robinson, Michael E.

    2014-01-01

    Background Cluster analysis can be used to identify individuals similar in profile based on response to multiple pain sensitivity measures. There are limited investigations into how empirically derived pain sensitivity subgroups influence clinical outcomes for individuals with spine pain. Objective The purposes of this study were: (1) to investigate empirically derived subgroups based on pressure and thermal pain sensitivity in individuals with spine pain and (2) to examine subgroup influence on 2-week clinical pain intensity and disability outcomes. Design A secondary analysis of data from 2 randomized trials was conducted. Methods Baseline and 2-week outcome data from 157 participants with low back pain (n=110) and neck pain (n=47) were examined. Participants completed demographic, psychological, and clinical information and were assessed using pain sensitivity protocols, including pressure (suprathreshold pressure pain) and thermal pain sensitivity (thermal heat threshold and tolerance, suprathreshold heat pain, temporal summation). A hierarchical agglomerative cluster analysis was used to create subgroups based on pain sensitivity responses. Differences in data for baseline variables, clinical pain intensity, and disability were examined. Results Three pain sensitivity cluster groups were derived: low pain sensitivity, high thermal static sensitivity, and high pressure and thermal dynamic sensitivity. There were differences in the proportion of individuals meeting a 30% change in pain intensity, where fewer individuals within the high pressure and thermal dynamic sensitivity group (adjusted odds ratio=0.3; 95% confidence interval=0.1, 0.8) achieved successful outcomes. Limitations Only 2-week outcomes are reported. Conclusions Distinct pain sensitivity cluster groups for individuals with spine pain were identified, with the high pressure and thermal dynamic sensitivity group showing worse clinical outcome for pain intensity. Future studies should aim to confirm these findings. PMID:24764070

  18. Methods of determining complete sensor requirements for autonomous mobility

    NASA Technical Reports Server (NTRS)

    Curtis, Steven A. (Inventor)

    2012-01-01

    A method of determining complete sensor requirements for autonomous mobility of an autonomous system includes computing a time variation of each behavior of a set of behaviors of the autonomous system, determining mobility sensitivity to each behavior of the autonomous system, and computing a change in mobility based upon the mobility sensitivity to each behavior and the time variation of each behavior. The method further includes determining the complete sensor requirements of the autonomous system through analysis of the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior, wherein the relative magnitude of the change in mobility, the mobility sensitivity to each behavior, and the time variation of each behavior are characteristic of the stability of the autonomous system.

  19. Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators

    NASA Astrophysics Data System (ADS)

    Sykes, J. F.; Wilson, J. L.; Andrews, R. W.

    1985-03-01

    Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
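
    The core computational pattern described above (one primary solve plus one adjoint solve yields the sensitivity of a performance measure to every parameter) can be illustrated on a small discrete system. The sketch below uses a 1D steady confined-flow finite-difference model with the head at one node as the performance measure; the grid, boundary heads, and conductivities are made up for illustration, and the assembly derivatives are approximated numerically.

```python
import numpy as np

def assemble(K, h_left=10.0, h_right=2.0):
    """1D steady confined-flow finite differences: cell conductivities K,
    Dirichlet heads at both ends; returns A, b with A @ h_interior = b."""
    n = len(K)                      # number of cells; n-1 interior nodes
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(n - 1):          # interior node i+1, flanked by cells i and i+1
        A[i, i] = K[i] + K[i + 1]
        if i > 0:
            A[i, i - 1] = -K[i]
        if i < n - 2:
            A[i, i + 1] = -K[i + 1]
    b[0] += K[0] * h_left
    b[-1] += K[-1] * h_right
    return A, b

K = np.array([1.0, 2.0, 0.5, 1.5])              # hypothetical cell conductivities
A, b = assemble(K)
h = np.linalg.solve(A, b)                       # primary solve

# Performance measure: head at the middle interior node, J = c^T h.
c = np.zeros_like(h)
c[len(h) // 2] = 1.0
lam = np.linalg.solve(A.T, c)                   # single adjoint solve

# dJ/dK_k = lam^T (db/dK_k - dA/dK_k @ h); assembly derivatives by finite differences.
grad = np.zeros_like(K)
eps = 1e-7
for k in range(len(K)):
    Kp = K.copy()
    Kp[k] += eps
    Ap, bp = assemble(Kp)
    grad[k] = lam @ ((bp - b) / eps - (Ap - A) / eps @ h)
print("adjoint sensitivities dJ/dK:", grad)
```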

  20. 38 CFR 75.115 - Risk analysis.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... preparation of the risk analysis may include data mining if necessary for the development of relevant...) INFORMATION SECURITY MATTERS Data Breaches § 75.115 Risk analysis. If a data breach involving sensitive... possible after the data breach, a non-VA entity with relevant expertise in data breach assessment and risk...

  1. 38 CFR 75.115 - Risk analysis.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... preparation of the risk analysis may include data mining if necessary for the development of relevant...) INFORMATION SECURITY MATTERS Data Breaches § 75.115 Risk analysis. If a data breach involving sensitive... possible after the data breach, a non-VA entity with relevant expertise in data breach assessment and risk...

  2. 38 CFR 75.115 - Risk analysis.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... preparation of the risk analysis may include data mining if necessary for the development of relevant...) INFORMATION SECURITY MATTERS Data Breaches § 75.115 Risk analysis. If a data breach involving sensitive... possible after the data breach, a non-VA entity with relevant expertise in data breach assessment and risk...

  3. 38 CFR 75.115 - Risk analysis.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... preparation of the risk analysis may include data mining if necessary for the development of relevant...) INFORMATION SECURITY MATTERS Data Breaches § 75.115 Risk analysis. If a data breach involving sensitive... possible after the data breach, a non-VA entity with relevant expertise in data breach assessment and risk...

  4. Potential diagnostic value of serum p53 antibody for detecting colorectal cancer: A meta-analysis.

    PubMed

    Meng, Rongqin; Wang, Yang; He, Liang; He, Yuanqing; Du, Zedong

    2018-04-01

    Numerous studies have assessed the diagnostic value of serum p53 (s-p53) antibody in patients with colorectal cancer (CRC); however, results remain controversial. The present study aimed to comprehensively and quantitatively summarize the potential diagnostic value of s-p53 antibody in CRC. Databases including PubMed and EmBase were systematically searched for studies regarding s-p53 antibody diagnosis in CRC published on or prior to 31 July 2016. The quality of all the included studies was assessed using the quality assessment of studies of diagnostic accuracy (QUADAS) tool. Pooled sensitivity, pooled specificity, positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were analyzed, and overall accuracy was summarized using diagnostic odds ratios (DORs) and area under the curve (AUC) analysis. Publication bias and heterogeneity were also assessed. A total of 11 trials that enrolled a combined 3,392 participants were included in the meta-analysis. Approximately 72.73% (8/11) of the included studies were of high quality (QUADAS score >7), and all were retrospective case-control studies. The pooled sensitivity was 0.19 [95% confidence interval (CI), 0.18-0.21] and pooled specificity was 0.93 (95% CI, 0.92-0.94). Results also demonstrated a PLR of 4.56 (95% CI, 3.27-6.34), NLR of 0.78 (95% CI, 0.71-0.85) and DOR of 6.70 (95% CI, 4.59-9.76). The area under the symmetrical summary receiver operating characteristic curve was 0.73. Furthermore, no evidence of publication bias or heterogeneity was observed in the meta-analysis. Meta-analysis data indicated that s-p53 antibody possesses potential diagnostic value for CRC. However, discrimination power was somewhat limited due to the low sensitivity.

  5. SCALE Continuous-Energy Eigenvalue Sensitivity Coefficient Calculations

    DOE PAGES

    Perfetti, Christopher M.; Rearden, Bradley T.; Martin, William R.

    2016-02-25

    Sensitivity coefficients describe the fractional change in a system response that is induced by changes to system parameters and nuclear data. The Tools for Sensitivity and UNcertainty Analysis Methodology Implementation (TSUNAMI) code within the SCALE code system makes use of eigenvalue sensitivity coefficients for an extensive number of criticality safety applications, including quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different critical systems, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a methodology for calculating sensitivity coefficients in continuous-energy (CE) Monte Carlo applications. The Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Tracklength importance CHaracterization (CLUTCH) and Iterated Fission Probability (IFP) eigenvalue sensitivity methods were recently implemented in the CE-KENO framework of the SCALE code system to enable TSUNAMI-3D to perform eigenvalue sensitivity calculations using continuous-energy Monte Carlo methods. This work provides a detailed description of the theory behind the CLUTCH method and describes in detail its implementation. This work explores the improvements in eigenvalue sensitivity coefficient accuracy that can be gained through the use of continuous-energy sensitivity methods and also compares several sensitivity methods in terms of computational efficiency and memory requirements.

  6. The impact of standard and hard-coded parameters on the hydrologic fluxes in the Noah-MP land surface model

    NASA Astrophysics Data System (ADS)

    Thober, S.; Cuntz, M.; Mai, J.; Samaniego, L. E.; Clark, M. P.; Branch, O.; Wulfmeyer, V.; Attinger, S.

    2016-12-01

    Land surface models incorporate a large number of processes, described by physical, chemical and empirical equations. The agility of the models to react to different meteorological conditions is artificially constrained by having hard-coded parameters in their equations. Here we searched for hard-coded parameters in the computer code of the land surface model Noah with multiple process options (Noah-MP) to assess the model's agility during parameter estimation. We found 139 hard-coded values in all Noah-MP process options in addition to the 71 standard parameters. We performed a Sobol' global sensitivity analysis to variations of the standard and hard-coded parameters. The sensitivities of the hydrologic output fluxes latent heat and total runoff, their component fluxes, as well as photosynthesis and sensible heat were evaluated at twelve catchments of the Eastern United States with very different hydro-meteorological regimes. Noah-MP's output fluxes are sensitive to two thirds of its standard parameters. The most sensitive parameter is, however, a hard-coded value in the formulation of soil surface resistance for evaporation, which proved to be oversensitive in other land surface models as well. Latent heat and total runoff show very similar sensitivities towards standard and hard-coded parameters. They are sensitive to both soil and plant parameters, which means that model calibrations of hydrologic or land surface models should take both soil and plant parameters into account. Sensible and latent heat exhibit almost the same sensitivities so that calibration or sensitivity analysis can be performed with either of the two. Photosynthesis has almost the same sensitivities as transpiration, which are different from the sensitivities of latent heat. Including photosynthesis and latent heat in model calibration might therefore be beneficial. Surface runoff is sensitive to almost all hard-coded snow parameters. These sensitivities get, however, diminished in total runoff. It is thus recommended to include the most sensitive hard-coded model parameters that were exposed in this study when calibrating Noah-MP.

  7. SCALE Code System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  8. SCALE Code System 6.2.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely-used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor and lattice physics, radiation shielding, spent fuel and radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including three deterministic and three Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results.

  9. Comparison between Surrogate Indexes of Insulin Sensitivity/Resistance and Hyperinsulinemic Euglycemic Glucose Clamps in Rhesus Monkeys

    PubMed Central

    Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.

    2011-01-01

    The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
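
    For reference, the surrogate indexes evaluated above are simple closed-form functions of fasting glucose and insulin. The sketch below computes HOMA-IR and QUICKI under the common convention of glucose in mg/dL and insulin in microU/mL; the fasting values are illustrative only, and unit conventions should be checked against the specific assay.

```python
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR with glucose in mg/dL and insulin in microU/mL."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI = 1 / (log10(I0) + log10(G0)), same fasting units as above."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Example fasting values (illustrative only).
g, i = 95.0, 12.0
print(f"HOMA-IR = {homa_ir(g, i):.2f}, QUICKI = {quicki(g, i):.3f}, "
      f"1/HOMA = {1 / homa_ir(g, i):.2f}, log HOMA = {math.log10(homa_ir(g, i)):.3f}")
```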

  10. Variogram Analysis of Response surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested in the introduction of many different factors into EESMs (i.e., model parameters, forcings, boundary conditions, etc.) that need to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
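
    The central ingredient of the variogram analogy above is the directional variogram of the model response, γ(h) = ½ E[(y(x+h) − y(x))²], evaluated across perturbation scales h in the factor space. The sketch below estimates such a variogram along one factor of a toy two-factor response surface; it illustrates the ingredient only and is not an implementation of the VARS or STAR-VARS algorithms.

```python
import numpy as np

def model(x1, x2):
    """Toy response surface standing in for an EESM output."""
    return np.sin(3 * x1) + 0.3 * x2 ** 2

def directional_variogram(hs, n=20_000, seed=0):
    """gamma(h) = 0.5 * E[(y(x1+h, x2) - y(x1, x2))^2] along factor x1,
    estimated from random base points in the unit square."""
    rng = np.random.default_rng(seed)
    out = []
    for h in hs:
        x1 = rng.uniform(0, 1 - h, n)
        x2 = rng.uniform(0, 1, n)
        d = model(x1 + h, x2) - model(x1, x2)
        out.append(0.5 * np.mean(d ** 2))
    return np.array(out)

hs = np.array([0.01, 0.05, 0.1, 0.25, 0.5])
print(dict(zip(hs.tolist(), directional_variogram(hs).round(4).tolist())))
```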

  11. [Study on the current situation and influential factors of anxiety sensitivity among middle school students in Chongqing].

    PubMed

    Li, Qian-Qian; Zhang, Da-Jun; Guo, Lan-Ting; Feng, Zheng-Zhi; Wu, Ming-Xia

    2007-09-01

    To explore the status and influencing factors of anxiety sensitivity among middle school students in Chongqing. 58 classes from 12 schools were randomly selected in four administrative districts of Chongqing city. A total of 2,700 students were included in the final analysis, of whom 48.5% were junior high school and 51.5% senior high school students, with 49.2% boys and 50.8% girls. The Chinese version of the Anxiety Sensitivity Index-Revision, the Adolescent Self-Rating Life Events Check List (ASLEC) and the State-Trait Anxiety Inventory (STAI) were used. (1) There was no significant difference in anxiety sensitivity between grade groups (P = 0.49). (2) Girls' anxiety sensitivity was consistently higher than boys' (P < 0.001). (3) Multiple linear regression showed that the factors influencing the degree of anxiety sensitivity were state anxiety, trait anxiety, life events, sex, stress from learning, etc. (standardized regression coefficients were 0.258, 0.163, 0.112, 0.093, 0.124, -0.096, 0.096). The major influential factors of anxiety sensitivity include sex, stress from learning, life events, interpersonal relationships, state anxiety, and trait anxiety.

  12. Computational Analysis of Candidate Disease Genes and Variants for Salt-Sensitive Hypertension in Indigenous Southern Africans

    PubMed Central

    Tiffin, Nicki; Meintjes, Ayton; Ramesar, Rajkumar; Bajic, Vladimir B.; Rayner, Brian

    2010-01-01

    Multiple factors underlie susceptibility to essential hypertension, including a significant genetic and ethnic component, and environmental effects. Blood pressure response of hypertensive individuals to salt is heterogeneous, but salt sensitivity appears more prevalent in people of indigenous African origin. The underlying genetics of salt-sensitive hypertension, however, are poorly understood. In this study, computational methods including text- and data-mining have been used to select and prioritize candidate aetiological genes for salt-sensitive hypertension. Additionally, we have compared allele frequencies and copy number variation for single nucleotide polymorphisms in candidate genes between indigenous Southern African and Caucasian populations, with the aim of identifying candidate genes with significant variability between the population groups: identifying genetic variability between population groups can exploit ethnic differences in disease prevalence to aid with prioritisation of good candidate genes. Our top-ranking candidate genes include parathyroid hormone precursor (PTH) and type-1 angiotensin II receptor (AGTR1). We propose that the candidate genes identified in this study warrant further investigation as potential aetiological genes for salt-sensitive hypertension. PMID:20886000

  13. Integrated analysis of rice transcriptomic and metabolomic responses to elevated night temperatures identifies sensitivity- and tolerance-related profiles.

    PubMed

    Glaubitz, Ulrike; Li, Xia; Schaedel, Sandra; Erban, Alexander; Sulpice, Ronan; Kopka, Joachim; Hincha, Dirk K; Zuther, Ellen

    2017-01-01

    Transcript and metabolite profiling were performed on leaves from six rice cultivars under high night temperature (HNT) conditions. Six genes were identified as central to the HNT response, encoding proteins involved in transcription regulation, signal transduction, protein-protein interactions, jasmonate response and the biosynthesis of secondary metabolites. Sensitive cultivars showed specific changes in transcript abundance, including abiotic stress responses and changes in cell wall-related genes, ABA signaling and secondary metabolism. Additionally, metabolite profiles revealed a highly activated TCA cycle under HNT and concomitantly increased levels in pathways branching off the cycle, which could be corroborated by enzyme activity measurements. Integrated data analysis using clustering based on one-dimensional self-organizing maps identified two profiles highly correlated with HNT sensitivity. The sensitivity profile included genes of the functional bins abiotic stress, hormone metabolism, cell wall, signaling, redox state, transcription factors, secondary metabolites and defence genes. In the tolerance profile, similar bins were affected with slight differences in hormone metabolism and transcription factor responses. Metabolites of the two profiles revealed involvement of GABA signaling, thus providing a link to the TCA cycle status in sensitive cultivars and of myo-inositol as precursor for inositol phosphates linking jasmonate signaling to the HNT response specifically in tolerant cultivars. © 2016 John Wiley & Sons Ltd.

  14. Compliance and stress sensitivity of spur gear teeth

    NASA Technical Reports Server (NTRS)

    Cornell, R. W.

    1983-01-01

    The magnitude and variation of tooth pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time history, interactive, closed form solution for the dynamic tooth loads for both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity for three involute tooth forms as a function of load position. The compliance analysis incorporates an improved treatment of the fillet/foundation region. The stress sensitivity analysis is a modified version of the Heywood method but with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation using test, finite element, and analytic transformation results, which showed good agreement.

  15. Sensitive and comprehensive analysis of O-glycosylation in biotherapeutics: a case study of novel erythropoiesis stimulating protein.

    PubMed

    Kim, Unyong; Oh, Myung Jin; Seo, Youngsuk; Jeon, Yinae; Eom, Joon-Ho; An, Hyun Joo

    2017-09-01

    Glycosylation of recombinant human erythropoietins (rhEPOs) is significantly associated with the drug's quality and potency. Thus, comprehensive characterization of glycosylation is vital to assess biotherapeutic quality and establish the equivalency of biosimilar rhEPOs. However, current glycan analysis mainly focuses on N-glycans due to the absence of analytical tools that liberate O-glycans with high sensitivity. We developed a selective and sensitive method to profile native O-glycans on rhEPOs. O-glycosylation on rhEPO, including O-acetylation on a sialic acid, was comprehensively characterized. Details such as O-glycan structure and the O-acetyl-modification site were obtained from tandem MS. This method may be applied to QC and batch analysis of not only rhEPOs but also other biotherapeutics bearing multiple O-glycosylations.

  16. Breast Lesions: Diagnosis Using Diffusion Weighted Imaging at 1.5T and 3.0T-Systematic Review and Meta-analysis.

    PubMed

    Shi, Ruo-Yang; Yao, Qiu-Ying; Wu, Lian-Ming; Xu, Jian-Rong

    2018-06-01

    We compared the diagnostic performance of diffusion weighted imaging (DWI) acquired with 1.5T and 3.0T magnetic resonance (MR) units in differentiating malignant breast lesions from benign ones. A comprehensive search of the PubMed and Embase databases was performed for studies reported from January 1, 2000 to February 19, 2016. The quality of the included studies was assessed. Statistical analysis included pooling of diagnostic sensitivity and specificity and assessing data inhomogeneity and publication bias. A total of 61 studies were included after a full-text review. These included 4778 patients and 5205 breast lesions. The overall sensitivity and specificity were 90% (95% confidence interval [CI], 88%-92%) and 86% (95% CI, 82%-89%), respectively. The pooled diagnostic odds ratio was 53 (95% CI, 37-74). For breast cancer versus benign lesions, the area under the curve was 0.94 (95% CI, 0.92-0.96). For the 44 studies that used a 1.5T MR unit, the pooled sensitivity and specificity were 91% (95% CI, 89%-92%) and 86% (95% CI, 81%-90%), respectively. For the 17 studies that used a 3.0T MR unit, the pooled sensitivity and specificity were 88% (95% CI, 83%-91%) and 84% (95% CI, 78%-89%), respectively. Publication bias and significant heterogeneity were observed; however, no threshold effect was found among the 61 studies. No significant difference was found in the sensitivity or specificity between the subgroups. The results of the comparison between the subgroups that had used either a 1.5T or 3.0T MR unit suggest that the diagnostic accuracy for breast cancer compared with benign lesions is not significantly different. Copyright © 2017 Elsevier Inc. All rights reserved.
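
    The per-study inputs to a meta-analysis like this are 2x2 counts of index-test calls against the reference standard. A minimal sketch of how sensitivity, specificity, and the diagnostic odds ratio follow from such a table is shown below; the counts are illustrative, not drawn from the included studies, and the bivariate pooling model itself is not reproduced.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-study accuracy measures from a 2x2 table of index-test calls vs. reference standard."""
    sensitivity = tp / (tp + fn)   # malignant lesions correctly called positive
    specificity = tn / (tn + fp)   # benign lesions correctly called negative
    dor = (tp / fn) / (fp / tn)    # diagnostic odds ratio
    return sensitivity, specificity, dor

# Illustrative counts for a single hypothetical study
print(diagnostic_metrics(tp=90, fp=14, fn=10, tn=86))   # (0.90, 0.86, ~55)
```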

  17. Logistic Map for Cancellable Biometrics

    NASA Astrophysics Data System (ADS)

    Supriya, V. G., Dr; Manjunatha, Ramachandra, Dr

    2017-08-01

    This paper presents the design and implementation of a secure biometric template protection system. The biometric template is transformed using binary chaotic signals and three different key streams to obtain another form of the template. The system's efficiency is demonstrated by the results, and its security is investigated through analysis including key space analysis, information entropy, and key sensitivity analysis.
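
    A minimal sketch of the general idea: a logistic map iterated in its chaotic regime is thresholded into a binary keystream, and key sensitivity is probed by perturbing the seed. This is an illustrative stand-in under assumed parameters, not the paper's exact transformation or keystream construction.

```python
import numpy as np

def logistic_keystream(x0, r=3.99, n=256, burn_in=100):
    """Binary keystream from the logistic map x_{k+1} = r * x_k * (1 - x_k),
    thresholded at 0.5 after discarding a burn-in transient."""
    x = x0
    bits = []
    for k in range(burn_in + n):
        x = r * x * (1.0 - x)
        if k >= burn_in:
            bits.append(1 if x > 0.5 else 0)
    return np.array(bits, dtype=np.uint8)

template = np.random.randint(0, 2, 256).astype(np.uint8)   # stand-in for a binary biometric template
protected = template ^ logistic_keystream(0.123456789)     # one simple, revocable transformation (XOR)

# Key sensitivity: a tiny change in the key (seed) should give an almost uncorrelated keystream
k1 = logistic_keystream(0.123456789)
k2 = logistic_keystream(0.123456790)
print(np.mean(k1 != k2))   # fraction of differing bits, expected near 0.5
```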

  18. Pyrotechnic hazards classification and evaluation program. Phase 3, segments 1-4: Investigation of sensitivity test methods and procedures for pyrotechnic hazards evaluation and classification, part A

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The findings, conclusions, and recommendations relative to the investigations conducted to evaluate tests for classifying pyrotechnic materials and end items as to their hazard potential are presented. Information required to establish an applicable means of determining the potential hazards of pyrotechnics is described. Hazard evaluations are based on the peak overpressure or impulse resulting from the explosion as a function of distance from the source. Other hazard classification tests include dust ignition sensitivity, impact ignition sensitivity, spark ignition sensitivity, and differential thermal analysis.

  19. Sensitivity and specificity of indocyanine green near-infrared fluorescence imaging in detection of metastatic lymph nodes in colorectal cancer: Systematic review and meta-analysis.

    PubMed

    Emile, Sameh H; Elfeki, Hossam; Shalaby, Mostafa; Sakr, Ahmad; Sileri, Pierpaolo; Laurberg, Søren; Wexner, Steven D

    2017-11-01

    This review aimed to determine the overall sensitivity and specificity of indocyanine green (ICG) near-infrared (NIR) fluorescence in sentinel lymph node (SLN) detection in colorectal cancer (CRC). A systematic search in electronic databases was conducted. Twelve studies including 248 patients were reviewed. The median sensitivity, specificity, and accuracy rates were 73.7%, 100%, and 75.7%, respectively. The pooled sensitivity and specificity rates were 71% and 84.6%. In conclusion, ICG-NIR fluorescence is a promising technique for detecting SLNs in CRC. © 2017 Wiley Periodicals, Inc.

  20. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The correlation is termed "partial" because adjustments are made for the linear effects of all the other input values when calculating the correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the 100 IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
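
    A minimal PRCC sketch consistent with the description above: rank-transform the inputs and the output, regress out all other ranked inputs, and correlate the residuals. This is illustrative only and not the IMM team's implementation; the toy data are arbitrary.

```python
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y.
    Ranks are used so that monotone nonlinear relationships are captured."""
    Xr = np.column_stack([rankdata(col) for col in X.T])
    yr = rankdata(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        others = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        # Residuals after removing the linear effect of all other ranked inputs
        res_x = Xr[:, j] - others @ np.linalg.lstsq(others, Xr[:, j], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out[j] = np.corrcoef(res_x, res_y)[0, 1]
    return out

# Toy example: output depends strongly on x0, weakly on x1, not at all on x2
rng = np.random.default_rng(1)
X = rng.uniform(size=(500, 3))
y = np.exp(X[:, 0]) + 0.2 * X[:, 1] + rng.normal(scale=0.05, size=500)
print(prcc(X, y))
```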

  1. Sensitization to common allergens among patients with allergies in major Iranian cities: a systematic review and meta-analysis.

    PubMed

    Moghtaderi, Mozhgan; Hosseini Teshnizi, Saeed; Farjadian, Shirin

    2017-01-01

    Various allergens are implicated in the pathogenesis of allergic diseases in different regions. This study attempted to identify the most common allergens among patients with allergies based on the results of skin prick tests in different parts of Iran. Relevant studies conducted from 2000 to 2016 were identified from the MEDLINE database. Six common groups of allergen types, including animal, cockroach, food, fungus, house dust mite, and pollen were considered. Subgroup analysis was performed to determine the prevalence of each type of allergen. The Egger test was used to assess publication bias. We included 44 studies in this meta-analysis. The overall prevalence of positive skin test results for at least one allergen was estimated to be 59% in patients with allergies in various parts of Iran. The number of patients was 11,646 (56% male and 44% female), with a mean age of 17.46±11.12 years. The most common allergen sources were pollen (47.0%), mites (35.2%), and food (15.3%). The prevalence of sensitization to food and cockroach allergens among children was greater than among adults. Pollen is the most common allergen sensitization in cities of Iran with a warm and dry climate; however, sensitization to house dust mites is predominant in northern and southern coastal areas of Iran.

  2. Sensitization to common allergens among patients with allergies in major Iranian cities: a systematic review and meta-analysis

    PubMed Central

    Farjadian, Shirin

    2017-01-01

    Various allergens are implicated in the pathogenesis of allergic diseases in different regions. This study attempted to identify the most common allergens among patients with allergies based on the results of skin prick tests in different parts of Iran. Relevant studies conducted from 2000 to 2016 were identified from the MEDLINE database. Six common groups of allergen types, including animal, cockroach, food, fungus, house dust mite, and pollen were considered. Subgroup analysis was performed to determine the prevalence of each type of allergen. The Egger test was used to assess publication bias. We included 44 studies in this meta-analysis. The overall prevalence of positive skin test results for at least one allergen was estimated to be 59% in patients with allergies in various parts of Iran. The number of patients was 11,646 (56% male and 44% female), with a mean age of 17.46±11.12 years. The most common allergen sources were pollen (47.0%), mites (35.2%), and food (15.3%). The prevalence of sensitization to food and cockroach allergens among children was greater than among adults. Pollen is the most common allergen sensitization in cities of Iran with a warm and dry climate; however, sensitization to house dust mites is predominant in northern and southern coastal areas of Iran. PMID:28171712

  3. Biosensing Technologies for Mycobacterium tuberculosis Detection: Status and New Developments

    PubMed Central

    Zhou, Lixia; He, Xiaoxiao; He, Dinggeng; Wang, Kemin; Qin, Dilan

    2011-01-01

    Biosensing technologies promise to improve Mycobacterium tuberculosis (M. tuberculosis) detection and management in clinical diagnosis, food analysis, bioprocess, and environmental monitoring. A variety of portable, rapid, and sensitive biosensors with immediate “on-the-spot” interpretation have been developed for M. tuberculosis detection based on different biological recognition elements and basic signal transducer principles. Here, we present a synopsis of current developments of biosensing technologies for M. tuberculosis detection, which are classified on the basis of basic signal transducer principles, including piezoelectric quartz crystal biosensors, electrochemical biosensors, and magnetoelastic biosensors. Special attention is paid to the methods for improving the framework and analytical parameters of the biosensors, including sensitivity and analysis time as well as automation of analysis procedures. Challenges and perspectives of biosensing technologies development for M. tuberculosis detection are also discussed in the final part of this paper. PMID:21437177

  4. Additional EIPC Study Analysis: Interim Report on High Priority Topics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Stanton W

    Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.

  5. High Sensitivity Analysis of Nanoliter Volumes of Volatile and Nonvolatile Compounds using Matrix Assisted Ionization (MAI) Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Hoang, Khoa; Pophristic, Milan; Horan, Andrew J.; Johnston, Murray V.; McEwen, Charles N.

    2016-10-01

    First results are reported using a simple, fast, and reproducible matrix-assisted ionization (MAI) sample introduction method that provides substantial improvements relative to previously published MAI methods. The sensitivity of the new MAI method, which requires no laser, high voltage, or nebulizing gas, is comparable to that reported for MALDI-TOF and n-ESI. High resolution full acquisition mass spectra having low chemical background are acquired from low nanoliters of solution using only a few femtomoles of analyte. The limit-of-detection for angiotensin II is less than 50 amol on an Orbitrap Exactive mass spectrometer. Analysis of peptides, including a bovine serum albumin digest, and drugs, including drugs in urine without a purification step, are reported using a 1 μL zero dead volume syringe in which only the analyte solution wetting the walls of the syringe needle is used in the analysis.

  6. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE PAGES

    Dai, Heng; Ye, Ming; Walker, Anthony P.; ...

    2017-03-28

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  7. A three-dimensional cohesive sediment transport model with data assimilation: Model development, sensitivity analysis and parameter estimation

    NASA Astrophysics Data System (ADS)

    Wang, Daosheng; Cao, Anzhou; Zhang, Jicai; Fan, Daidu; Liu, Yongzhi; Zhang, Yue

    2018-06-01

    Based on the theory of inverse problems, a three-dimensional sigma-coordinate cohesive sediment transport model with the adjoint data assimilation is developed. In this model, the physical processes of cohesive sediment transport, including deposition, erosion and advection-diffusion, are parameterized by corresponding model parameters. These parameters are usually poorly known and have traditionally been assigned empirically. By assimilating observations into the model, the model parameters can be estimated using the adjoint method; meanwhile, the data misfit between model results and observations can be decreased. The model developed in this work contains numerous parameters; therefore, it is necessary to investigate the parameter sensitivity of the model, which is assessed by calculating a relative sensitivity function and the gradient of the cost function with respect to each parameter. The results of parameter sensitivity analysis indicate that the model is sensitive to the initial conditions, inflow open boundary conditions, suspended sediment settling velocity and resuspension rate, while the model is insensitive to horizontal and vertical diffusivity coefficients. A detailed explanation of the pattern of sensitivity analysis is also given. In ideal twin experiments, constant parameters are estimated by assimilating 'pseudo' observations. The results show that the sensitive parameters are estimated more easily than the insensitive parameters. The conclusions of this work can provide guidance for the practical applications of this model to simulate sediment transport in the study area.

  8. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    A hydrological model consists of multiple process level submodels, and each submodel represents a process key to the operation of the simulated system. Global sensitivity analysis methods have been widely used to identify important processes for system model development and improvement. The existing methods of global sensitivity analysis only consider parametric uncertainty, and are not capable of handling model uncertainty caused by multiple process models that arise from competing hypotheses about one or more processes. To address this problem, this study develops a new method to probe model output sensitivity to competing process models by integrating model averaging methods with variance-based global sensitivity analysis. A process sensitivity index is derived as a single summary measure of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and their parameters. Here, for demonstration, the new index is used to assign importance to the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is simulated by two models of hydraulic conductivity. Each process model has its own random parameters. Finally, the new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.

  9. Comparison of Fixed-Item and Response-Sensitive Versions of an Online Tutorial

    ERIC Educational Resources Information Center

    Grant, Lyle K.; Courtoreille, Marni

    2007-01-01

    This study is a comparison of 2 versions of an Internet-based tutorial that teaches the behavior-analysis concept of positive reinforcement. A fixed-item group of students studied a version of the tutorial that included 14 interactive examples and nonexamples of the concept. A response-sensitive group of students studied a different version of the…

  10. Recent approaches in sensitive enantioseparations by CE.

    PubMed

    Sánchez-Hernández, Laura; Castro-Puyana, María; Marina, María Luisa; Crego, Antonio L

    2012-01-01

    The latest strategies and instrumental improvements for enhancing the detection sensitivity in chiral analysis by CE are reviewed in this work. Following the previous reviews by García-Ruiz et al. (Electrophoresis 2006, 27, 195-212) and Sánchez-Hernández et al. (Electrophoresis 2008, 29, 237-251; Electrophoresis 2010, 31, 28-43), this review includes those papers that were published during the period from June 2009 to May 2011. These works describe the use of offline and online sample treatment techniques, online sample preconcentration techniques based on electrophoretic principles, and alternative detection systems to UV-Vis to increase the detection sensitivity. The application of the above-mentioned strategies, either alone or combined, to improve the sensitivity in the enantiomeric analysis of a broad range of samples, such as pharmaceutical, biological, food and environmental samples, makes it possible to decrease the limits of detection to as low as 10⁻¹² M. The use of microchips to achieve sensitive chiral separations is also discussed. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Solid state SPS microwave generation and transmission study. Volume 1: Phase 2

    NASA Technical Reports Server (NTRS)

    Maynard, O. E.

    1980-01-01

    The solid state sandwich concept for the Solar Power Station (SPS) was investigated. The design effort concentrated on the spacetenna, but did include some system analysis for parametric comparison reasons. The study specifically included definition and math modeling of basic solid state microwave devices, an initial conceptual subsystem and system design, sidelobe control and system selection, an assessment of the selected system concept, and parametric solid state microwave power transmission system data relevant to the SPS concept. Although device efficiency was not a goal, the sensitivity of this efficiency to design parameters was treated parametrically. Sidelobe control consisted of various single step tapers, multistep tapers, and Gaussian tapers. A preliminary assessment of a hybrid concept using tubes and solid state is also included. A considerable amount of thermal analysis is provided, with emphasis on sensitivities to waste heat radiator form factor, emissivity, absorptivity, amplifier efficiency, material, and junction temperature.

  12. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  13. Sensitivity of GC-EI/MS, GC-EI/MS/MS, LC-ESI/MS/MS, LC-Ag(+) CIS/MS/MS, and GC-ESI/MS/MS for analysis of anabolic steroids in doping control.

    PubMed

    Cha, Eunju; Kim, Sohee; Kim, Ho Jun; Lee, Kang Mi; Kim, Ki Hun; Kwon, Oh-Seung; Lee, Jaeick

    2015-01-01

    This study compared the sensitivity of various separation and ionization methods, including gas chromatography with an electron ionization source (GC-EI), liquid chromatography with an electrospray ionization source (LC-ESI), and liquid chromatography with a silver ion coordination ion spray source (LC-Ag(+) CIS), coupled to a mass spectrometer (MS) for steroid analysis. Chromatographic conditions, mass spectrometric transitions, and ion source parameters were optimized. The majority of steroids in GC-EI/MS/MS and LC-Ag(+) CIS/MS/MS analysis showed higher sensitivities than those obtained with other analytical methods. The limits of detection (LODs) of 65 steroids by GC-EI/MS/MS, 68 steroids by LC-Ag(+) CIS/MS/MS, 56 steroids by GC-EI/MS, 54 steroids by LC-ESI/MS/MS, and 27 steroids by GC-ESI/MS/MS were below the cut-off value of 2.0 ng/mL. LODs of steroids that formed protonated ions in LC-ESI/MS/MS analysis were all lower than the cut-off value. Several steroids, such as those with an unconjugated C3-hydroxyl with C17-hydroxyl structure, showed higher sensitivities in GC-EI/MS/MS analysis relative to those obtained using the LC-based methods. The steroids containing 4, 9, 11-triene structures showed relatively poor sensitivities in GC-EI/MS and GC-ESI/MS/MS analysis. The results of this study provide information that may be useful for selecting suitable analytical methods for confirmatory analysis of steroids. Copyright © 2015 John Wiley & Sons, Ltd.

  14. Proteomic differential display analysis for TS-1-resistant and -sensitive pancreatic cancer cells using two-dimensional gel electrophoresis and mass spectrometry.

    PubMed

    Yoshida, Kanako; Kuramitsu, Yasuhiro; Murakami, Kohei; Ryozawa, Shomei; Taba, Kumiko; Kaino, Seiji; Zhang, Xiulian; Sakaida, Isao; Nakamura, Kazuyuki

    2011-06-01

    TS-1 is an oral anticancer agent containing two biochemical modulators for 5-fluorouracil (5-FU) and tegafur (FT), a metabolically activated prodrug of 5-FU. TS-1 has been recognized as an effective anticancer drug in standard therapies for patients with advanced pancreatic cancer, along with gemcitabine. However, a high level of inherent and acquired tumor resistance to TS-1 makes treatment difficult. To identify proteins linked to the TS-1-resistance of pancreatic cancer, we profiled protein expression levels in samples of TS-1-resistant and -sensitive pancreatic cancer cell lines by using two-dimensional gel electrophoresis (2-DE) and liquid chromatography-tandem mass spectrometry (LC-MS/MS). The cytotoxicity of a 5-FU/5-chloro-2,4-dihydroxypyridine (CDHP) combination towards pancreatic cancer cell lines was evaluated by MTS assay. Panc-1, BxPC-3, MiaPaCa-2 and PK59 showed high sensitivity to the 5-FU/CDHP combination (TS-1-sensitive), whereas PK45p and KLM-1 were much less sensitive (TS-1-resistant). Proteomic analysis showed that eleven spots, including T-complex protein 1 subunit beta, ribonuclease inhibitor, elongation factor 1-delta, peroxiredoxin-2 and superoxide dismutase (Cu-Zn), appeared to be down-regulated, and 29 spots, including hypoxia up-regulated protein 1, lamin-A/C, endoplasmin, fascin and annexin A1, appeared to be up-regulated in TS-1-resistant cells compared with -sensitive cells. These results suggest that the identified proteins showing different expression between TS-1-sensitive and -resistant pancreatic cancer cells are possibly related to TS-1 sensitivity. These findings could be useful for overcoming the TS-1-resistance of pancreatic cancer cells.

  15. Using sensitivity analysis in model calibration efforts

    USGS Publications Warehouse

    Tiedeman, Claire; Hill, Mary C.

    2003-01-01

    In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
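
    As a minimal sketch, one common form of the composite scaled sensitivity combines the sensitivity (Jacobian) matrix, current parameter values, and observation weights; the exact definitions used for the DVRFS model are in the cited references, so the form and all values below are assumptions for illustration only.

```python
import numpy as np

def composite_scaled_sensitivity(jacobian, params, weights):
    """CSS_j = sqrt( (1/ND) * sum_i [ (dy_i/db_j) * b_j * sqrt(w_i) ]^2 )
    jacobian: (ND, NP) derivatives of simulated observation equivalents w.r.t. parameters
    params:   (NP,) current parameter values b_j
    weights:  (ND,) observation weights w_i"""
    nd = jacobian.shape[0]
    scaled = jacobian * params[np.newaxis, :] * np.sqrt(weights)[:, np.newaxis]
    return np.sqrt(np.sum(scaled ** 2, axis=0) / nd)

# Illustrative numbers only (not the DVRFS model): two parameters, three observations
J = np.array([[2.0, 0.1],
              [1.5, 0.3],
              [0.5, 0.2]])
print(composite_scaled_sensitivity(J, params=np.array([10.0, 1.0]), weights=np.ones(3)))
```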

  16. Validation of a next-generation sequencing assay for clinical molecular oncology.

    PubMed

    Cottrell, Catherine E; Al-Kateb, Hussam; Bredemeyer, Andrew J; Duncavage, Eric J; Spencer, David H; Abel, Haley J; Lockwood, Christina M; Hagemann, Ian S; O'Guin, Stephanie M; Burcea, Lauren C; Sawyer, Christopher S; Oschwald, Dayna M; Stratman, Jennifer L; Sher, Dorie A; Johnson, Mark R; Brown, Justin T; Cliften, Paul F; George, Bijoy; McIntosh, Leslie D; Shrivastava, Savita; Nguyen, Tudung T; Payton, Jacqueline E; Watson, Mark A; Crosby, Seth D; Head, Richard D; Mitra, Robi D; Nagarajan, Rakesh; Kulkarni, Shashikant; Seibert, Karen; Virgin, Herbert W; Milbrandt, Jeffrey; Pfeifer, John D

    2014-01-01

    Currently, oncology testing includes molecular studies and cytogenetic analysis to detect genetic aberrations of clinical significance. Next-generation sequencing (NGS) allows rapid analysis of multiple genes for clinically actionable somatic variants. The WUCaMP assay uses targeted capture for NGS analysis of 25 cancer-associated genes to detect mutations at actionable loci. We present clinical validation of the assay and a detailed framework for design and validation of similar clinical assays. Deep sequencing of 78 tumor specimens (≥ 1000× average unique coverage across the capture region) achieved high sensitivity for detecting somatic variants at low allele fraction (AF). Validation revealed sensitivities and specificities of 100% for detection of single-nucleotide variants (SNVs) within coding regions, compared with SNP array sequence data (95% CI = 83.4-100.0 for sensitivity and 94.2-100.0 for specificity) or whole-genome sequencing (95% CI = 89.1-100.0 for sensitivity and 99.9-100.0 for specificity) of HapMap samples. Sensitivity for detecting variants at an observed 10% AF was 100% (95% CI = 93.2-100.0) in HapMap mixes. Analysis of 15 masked specimens harboring clinically reported variants yielded concordant calls for 13/13 variants at AF of ≥ 15%. The WUCaMP assay is a robust and sensitive method to detect somatic variants of clinical significance in molecular oncology laboratories, with reduced time and cost of genetic analysis allowing for strategic patient management. Copyright © 2014 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.

  17. Seizure semiology identifies patients with bilateral temporal lobe epilepsy.

    PubMed

    Loesch, Anna Mira; Feddersen, Berend; Tezer, F Irsel; Hartl, Elisabeth; Rémi, Jan; Vollmar, Christian; Noachtar, Soheyl

    2015-01-01

    Laterality in temporal lobe epilepsy is usually defined by EEG and imaging results. We investigated whether the analysis of seizure semiology including lateralizing seizure phenomena identifies bilateral independent temporal lobe seizure onset. We investigated the seizure semiology in 17 patients in whom invasive EEG-video-monitoring documented bilateral temporal seizure onset. The results were compared to 20 left and 20 right consecutive temporal lobe epilepsy (TLE) patients who were seizure free after anterior temporal lobe resection. The seizure semiology was analyzed using the semiological seizure classification with particular emphasis on the sequence of seizure phenomena over time and lateralizing seizure phenomena. Statistical analysis included chi-square test or Fisher's exact test. Bitemporal lobe epilepsy patients had more frequently different seizure semiology (100% vs. 40%; p<0.001) and significantly more often lateralizing seizure phenomena pointing to bilateral seizure onset compared to patients with unilateral TLE (67% vs. 11%; p<0.001). The sensitivity of identical vs. different seizure semiology for the identification of bilateral TLE was high (100%) with a specificity of 60%. Lateralizing seizure phenomena had a low sensitivity (59%) but a high specificity (89%). The combination of lateralizing seizure phenomena and different seizure semiology showed a high specificity (94%) but a low sensitivity (59%). The analysis of seizure semiology including lateralizing seizure phenomena adds important clinical information to identify patients with bilateral TLE. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Bacteriophage-based assays for the rapid detection of rifampicin resistance in Mycobacterium tuberculosis: a meta-analysis.

    PubMed

    Pai, Madhukar; Kalantri, Shriprakash; Pascopella, Lisa; Riley, Lee W; Reingold, Arthur L

    2005-10-01

    To summarize, using meta-analysis, the accuracy of bacteriophage-based assays for the detection of rifampicin resistance in Mycobacterium tuberculosis. By searching multiple databases and sources we identified a total of 21 studies eligible for meta-analysis. Of these, 14 studies used phage amplification assays (including eight studies on the commercial FASTPlaque-TB kits), and seven used luciferase reporter phage (LRP) assays. Sensitivity, specificity, and agreement between phage assay and reference standard (e.g. agar proportion method or BACTEC 460) results were the main outcomes of interest. When performed on culture isolates (N=19 studies), phage assays appear to have relatively high sensitivity and specificity. Eleven of 19 (58%) studies reported sensitivity and specificity estimates > or =95%, and 13 of 19 (68%) studies reported > or =95% agreement with reference standard results. Specificity estimates were slightly lower and more variable than sensitivity; 5 of 19 (26%) studies reported specificity <90%. Only two studies performed phage assays directly on sputum specimens; although one study reported sensitivity and specificity of 100 and 99%, respectively, another reported sensitivity of 86% and specificity of 73%. Current evidence is largely restricted to the use of phage assays for the detection of rifampicin resistance in culture isolates. When used on culture isolates, these assays appear to have high sensitivity, but variable and slightly lower specificity. In contrast, evidence is lacking on the accuracy of these assays when they are directly applied to sputum specimens. If phage-based assays can be directly used on clinical specimens and if they are shown to have high accuracy, they have the potential to improve the diagnosis of MDR-TB. However, before phage assays can be successfully used in routine practice, several concerns have to be addressed, including unexplained false positives in some studies, potential for contamination and indeterminate results.

  19. Comparison between evaporative light scattering detection and charged aerosol detection for the analysis of saikosaponins.

    PubMed

    Eom, Han Young; Park, So-Young; Kim, Min Kyung; Suh, Joon Hyuk; Yeom, Hyesun; Min, Jung Won; Kim, Unyong; Lee, Jeongmi; Youm, Jeong-Rok; Han, Sang Beom

    2010-06-25

    Saikosaponins are triterpene saponins derived from the roots of Bupleurum falcatum L. (Umbelliferae), which has been traditionally used to treat fever, inflammation, liver diseases, and nephritis. It is difficult to analyze saikosaponins using HPLC-UV due to the lack of chromophores. Therefore, evaporative light scattering detection (ELSD) is used as a valuable alternative to UV detection. More recently, a charged aerosol detection (CAD) method has been developed to improve the sensitivity and reproducibility of ELSD. In this study, we compared CAD and ELSD methods in the simultaneous analysis of 10 saikosaponins, including saikosaponins-A, -B(1), -B(2), -B(3), -B(4), -C, -D, -G, -H and -I. A mixture of the 10 saikosaponins was injected into the Ascentis Express C18 column (100 mm x 4.6 mm, 2.7 µm) with gradient elution and detection with CAD and ELSD by splitting. We examined various factors that could affect the sensitivity of the detectors, including various concentrations of additives, pH and flow rate of the mobile phase, purity of nitrogen gas and the CAD range. The sensitivity was determined based on the signal-to-noise ratio. The best sensitivity for CAD was achieved with 0.1 mM ammonium acetate at pH 4.0 in the mobile phase with a flow rate of 1.0 mL/min, and the CAD range at 100 pA, whereas that for ELSD was achieved with 0.01% acetic acid in the mobile phase with a flow rate of 0.8 mL/min. The purity of the nitrogen gas had only minor effects on the sensitivities of both detectors. Finally, the sensitivity for CAD was two to six times better than that of ELSD. Taken together, these results suggest that CAD provides a more sensitive analysis of the 10 saikosaponins than does ELSD. Copyright 2010 Elsevier B.V. All rights reserved.

  20. Micropollutants throughout an integrated urban drainage model: Sensitivity and uncertainty analysis

    NASA Astrophysics Data System (ADS)

    Mannina, Giorgio; Cosenza, Alida; Viviani, Gaspare

    2017-11-01

    The paper presents the sensitivity and uncertainty analysis of an integrated urban drainage model which includes micropollutants. Specifically, a bespoke integrated model developed in previous studies has been modified in order to include the micropollutant assessment (namely, sulfamethoxazole - SMX). The model also takes into account the interactions between the three components of the system: sewer system (SS), wastewater treatment plant (WWTP) and receiving water body (RWB). The analysis has been applied to an experimental catchment nearby Palermo (Italy): the Nocella catchment. Overall, five scenarios, each characterized by different uncertainty combinations of sub-systems (i.e., SS, WWTP and RWB), have been considered applying, for the sensitivity analysis, the Extended-FAST method in order to select the key factors affecting the RWB quality and to design a reliable/useful experimental campaign. Results have demonstrated that sensitivity analysis is a powerful tool for increasing operator confidence in the modelling results. The approach adopted here can be used for blocking some non-identifiable factors, thus wisely modifying the structure of the model and reducing the related uncertainty. The model factors related to the SS have been found to be the most relevant factors affecting the SMX modeling in the RWB when all model factors (scenario 1) or model factors of SS (scenarios 2 and 3) are varied. If only the factors related to the WWTP are changed (scenarios 4 and 5), the SMX concentration in the RWB is mainly influenced (up to 95% of the total variance for SSMX,max) by the aerobic sorption coefficient. A progressive uncertainty reduction from upstream to downstream was found for the soluble fraction of SMX in the RWB.

  1. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)-A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes.

    PubMed

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes.

  2. Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq)—A Method for High-Throughput Analysis of Differentially Methylated CCGG Sites in Plants with Large Genomes

    PubMed Central

    Chwialkowska, Karolina; Korotko, Urszula; Kosinska, Joanna; Szarejko, Iwona; Kwasniewski, Miroslaw

    2017-01-01

    Epigenetic mechanisms, including histone modifications and DNA methylation, mutually regulate chromatin structure, maintain genome integrity, and affect gene expression and transposon mobility. Variations in DNA methylation within plant populations, as well as methylation in response to internal and external factors, are of increasing interest, especially in the crop research field. Methylation Sensitive Amplification Polymorphism (MSAP) is one of the most commonly used methods for assessing DNA methylation changes in plants. This method involves gel-based visualization of PCR fragments from selectively amplified DNA that are cleaved using methylation-sensitive restriction enzymes. In this study, we developed and validated a new method based on the conventional MSAP approach called Methylation Sensitive Amplification Polymorphism Sequencing (MSAP-Seq). We improved the MSAP-based approach by replacing the conventional separation of amplicons on polyacrylamide gels with direct, high-throughput sequencing using Next Generation Sequencing (NGS) and automated data analysis. MSAP-Seq allows for global sequence-based identification of changes in DNA methylation. This technique was validated in Hordeum vulgare. However, MSAP-Seq can be straightforwardly implemented in different plant species, including crops with large, complex and highly repetitive genomes. The incorporation of high-throughput sequencing into MSAP-Seq enables parallel and direct analysis of DNA methylation in hundreds of thousands of sites across the genome. MSAP-Seq provides direct genomic localization of changes and enables quantitative evaluation. We have shown that the MSAP-Seq method specifically targets gene-containing regions and that a single analysis can cover three-quarters of all genes in large genomes. Moreover, MSAP-Seq's simplicity, cost effectiveness, and high-multiplexing capability make this method highly affordable. Therefore, MSAP-Seq can be used for DNA methylation analysis in crop plants with large and complex genomes. PMID:29250096

  3. [The role of endotracheal aspirate culture in the diagnosis of ventilator-associated pneumonia: a meta analysis].

    PubMed

    Wang, Fei; He, Bei

    2013-01-01

    To investigate the role of endotracheal aspirate (EA) culture in the diagnosis and antibiotic management of ventilator-associated pneumonia (VAP). We searched the CNKI, Wanfang, PUBMED and EMBASE databases for studies published from January 1990 to December 2011 on VAP microbiological diagnostic techniques including EA and bronchoalveolar lavage (BALF). The following key words were used: ventilator associated pneumonia, diagnosis and adult. Meta-analysis was performed and the sensitivity and specificity of EA for VAP diagnosis were calculated. Our literature search identified 1665 potential articles, 8 of which fulfilled our selection criteria, including 561 patients with paired cultures. Using BALF quantitative culture as the reference standard, the sensitivity and specificity of EA were 72% and 71%. When considering quantitative culture of EA only, the sensitivity and specificity improved to 90% and 65%, while the positive and negative predictive values were 68% and 89%, respectively. However, the sensitivity and specificity of semi-quantitative culture of EA were only 50% and 80%, with a positive predictive value of 77% and a negative predictive value of 58%, respectively. EA culture had relatively poor sensitivity and specificity, although quantitative culture of EA could improve the sensitivity. Initiating therapy on the basis of EA quantitative culture may still result in excessive antibiotic usage. Our data suggested that EA could provide some information for clinical decisions but could not replace the role of BALF quantitative culture in VAP diagnosis.
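
    The predictive values quoted above depend on the prevalence of VAP in the pooled sample, which the abstract does not state. A minimal sketch of the standard relationships follows, with an assumed prevalence of 0.45 chosen only to illustrate how the reported figures relate; it is not a value taken from the study.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive values from Bayes' rule."""
    ppv = sensitivity * prevalence / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = specificity * (1 - prevalence) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# With the quantitative-EA figures above (sens 0.90, spec 0.65) and an assumed
# prevalence of 0.45, PPV and NPV come out near the reported 68% and 89%.
print(predictive_values(0.90, 0.65, 0.45))   # approx (0.68, 0.89)
```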

  4. Evaluating the Minimal Specimens From Endoscopic Ultrasound-Guided Fine-Needle Aspiration in Pancreatic Masses

    PubMed Central

    Park, Joo Kyung; Kang, Ki Joo; Oh, Cho Rong; Lee, Jong Kyun; Lee, Kyu Taek; Jang, Kee Taek; Park, Sang-Mo; Lee, Kwang Hyuck

    2016-01-01

    Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has become one of the most useful diagnostic modalities for the diagnosis of pancreatic mass. The aim of this study was to investigate the role of analyzing the minimal specimens obtained by EUS-FNA for the diagnosis of solid masses of pancreas. This study consisted of retrospective and prospective analyses. The retrospective study was performed on 116 patients who underwent EUS-FNA of solid masses for cytological smear, histological analysis, and combined analysis including immunohistochemical (IHC) staining. In the prospective study, 79 patients were enrolled to evaluate the quality and accuracy of EUS-FNA histological analysis and feasibility of IHC staining. The final diagnoses of all patients included pancreatic cancer (n = 126), nonpancreatic cancer (n = 21), other neoplasm (n = 27), and benign lesions (n = 21). In our retrospective study, the combined analysis was more sensitive than cytological analysis alone (P < 0.01). The overall sensitivity of cytology, histology, and combined analysis was 69.8%, 67.2%, and 81.8%, respectively. In the prospective analysis, 64.2% of all punctures were helpful for determining the diagnosis and 40.7% provided sufficient tissue for IHC staining. Histological analysis was helpful for diagnosis in 74.7% of patients. IHC staining was necessary for a definite diagnosis in 11.4% of patients, especially in the cases of nonmalignant pancreatic mass. Histological analysis and IHC study of EUS-FNA specimens was useful for the accurate diagnosis of pancreatic and peripancreatic lesions. Combined analysis showed significantly higher sensitivity than cytology alone because IHC staining was helpful for a diagnosis in some patients. PMID:27227937

  5. The Relationship of Mean Platelet Volume/Platelet Distribution Width and Duodenal Ulcer Perforation.

    PubMed

    Fan, Zhe; Zhuang, Chengjun

    2017-03-01

    Duodenal ulcer perforation (DUP) is a severe acute abdominal disease. Mean platelet volume (MPV) and platelet distribution width (PDW) are two platelet parameters involved in many inflammatory processes. This study aims to investigate the relationship between MPV/PDW and DUP. A total of 165 patients were studied retrospectively, including 21 females and 144 males. The study included two groups: 87 normal patients (control group) and 78 duodenal ulcer perforation patients (DUP group). Routine blood parameters were collected for analysis, including white blood cell count (WBC), neutrophil ratio (NR), platelet count (PLT), MPV and PDW. Receiver operating characteristic (ROC) curve analysis was applied to evaluate the parameters' sensitivity. No significant differences were observed between the control group and DUP group in age and gender. WBC, NR and PDW were significantly increased in the DUP group (P < 0.001, respectively); PLT and MPV were significantly decreased in the DUP group (P < 0.001, respectively) compared to controls. MPV showed the highest sensitivity. Our results suggest a potential association between MPV/PDW and disease activity in DUP patients, and high sensitivity of MPV. © 2017 by the Association of Clinical Scientists, Inc.

  6. Global sensitivity analysis of a filtration model for submerged anaerobic membrane bioreactors (AnMBR).

    PubMed

    Robles, A; Ruano, M V; Ribes, J; Seco, A; Ferrer, J

    2014-04-01

    The results of a global sensitivity analysis of a filtration model for submerged anaerobic MBRs (AnMBRs) are assessed in this paper. This study aimed to (1) identify the less- (or non-) influential factors of the model in order to facilitate model calibration and (2) validate the modelling approach (i.e. to determine the need for each of the proposed factors to be included in the model). The sensitivity analysis was conducted using a revised version of the Morris screening method. The dynamic simulations were conducted using long-term data obtained from an AnMBR plant fitted with industrial-scale hollow-fibre membranes. Of the 14 factors in the model, six were identified as influential, i.e. those calibrated using off-line protocols. A dynamic calibration (based on optimisation algorithms) of these influential factors was conducted. The resulting estimated model factors accurately predicted membrane performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
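
    The Morris screening method ranks factors by elementary effects computed along one-at-a-time trajectories. A crude sketch of that idea follows; the revised version of the method used in the study is not reproduced, the sampling details are simplified, and the toy model is illustrative.

```python
import numpy as np

def morris_elementary_effects(model, k, r=20, levels=4, seed=0):
    """Crude Morris screening: r random one-at-a-time trajectories in [0, 1]^k.
    Returns mu* (mean absolute elementary effect) and sigma for each factor."""
    rng = np.random.default_rng(seed)
    delta = levels / (2.0 * (levels - 1))
    effects = np.zeros((r, k))
    for t in range(r):
        x = rng.integers(0, levels - 1, size=k) / (levels - 1)   # random grid point
        y0 = model(x)
        for j in rng.permutation(k):                              # perturb one factor at a time
            x_new = x.copy()
            x_new[j] = x[j] + delta if x[j] + delta <= 1.0 else x[j] - delta
            y1 = model(x_new)
            effects[t, j] = (y1 - y0) / (x_new[j] - x[j])
            x, y0 = x_new, y1
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# Toy model: factor 0 strong and nonlinear, factor 1 weak, factor 2 inert
f = lambda x: 10.0 * x[0] ** 2 + 0.5 * x[1]
mu_star, sigma = morris_elementary_effects(f, k=3)
print(mu_star, sigma)
```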

  7. Sensitivity Analysis of OECD Benchmark Tests in BISON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Gamble, Kyle; Schmidt, Rodney C.

    2015-09-01

    This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on sensitivity analysis of a fuels performance benchmark problem. The benchmark problem was defined by the Uncertainty Analysis in Modeling working group of the Nuclear Science Committee, part of the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). The benchmark problem involved steady-state behavior of a fuel pin in a Pressurized Water Reactor (PWR). The problem was created in the BISON Fuels Performance code. Dakota was used to generate and analyze 300 samples of 17 input parameters defining core boundary conditions, manufacturing tolerances, and fuel properties. There were 24 responses of interest, including fuel centerline temperatures at a variety of locations and burnup levels, fission gas released, axial elongation of the fuel pin, etc. Pearson and Spearman correlation coefficients and Sobol' variance-based indices were used to perform the sensitivity analysis. This report summarizes the process and presents results from this study.
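    As a rough illustration of the correlation-based part of such a study (not the actual Dakota/BISON workflow or data), the Python sketch below computes Pearson and Spearman coefficients between each of 17 sampled inputs and one response; the random inputs and synthetic response are placeholders.

```python
# Illustrative correlation screening over a sampled input matrix.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n_samples, n_params = 300, 17
X = rng.uniform(size=(n_samples, n_params))                 # stand-in for sampled input parameters
y = 3.0 * X[:, 0] + X[:, 4] ** 2 + rng.normal(scale=0.1, size=n_samples)  # stand-in response

for j in range(n_params):
    r_p, _ = pearsonr(X[:, j], y)    # linear association
    r_s, _ = spearmanr(X[:, j], y)   # monotonic (rank-based) association
    print(f"param {j:2d}: Pearson = {r_p:+.2f}, Spearman = {r_s:+.2f}")
```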

  8. Surface flaw reliability analysis of ceramic components with the SCARE finite element postprocessor program

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, John P.; Nemeth, Noel N.

    1987-01-01

    The SCARE (Structural Ceramics Analysis and Reliability Evaluation) computer program on statistical fast fracture reliability analysis with quadratic elements for volume distributed imperfections is enhanced to include the use of linear finite elements and the capability of designing against concurrent surface flaw induced ceramic component failure. The SCARE code is presently coupled as a postprocessor to the MSC/NASTRAN general purpose, finite element analysis program. The improved version now includes the Weibull and Batdorf statistical failure theories for both surface and volume flaw based reliability analysis. The program uses the two-parameter Weibull fracture strength cumulative failure probability distribution model with the principle of independent action for poly-axial stress states, and Batdorf's shear-sensitive as well as shear-insensitive statistical theories. The shear-sensitive surface crack configurations include the Griffith crack and Griffith notch geometries, using the total critical coplanar strain energy release rate criterion to predict mixed-mode fracture. Weibull material parameters based on both surface and volume flaw induced fracture can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and grouped fracture data. The statistical fast fracture theories for surface flaw induced failure, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.

  9. High-performance liquid chromatography with fluorescence detection for the rapid analysis of pheophytins and pyropheophytins in virgin olive oil.

    PubMed

    Li, Xueqi; Woodman, Michael; Wang, Selina C

    2015-08-01

    Pheophytins and pyropheophytins are degradation products of chlorophyll pigments, and their ratios can be used as a sensitive indicator of stress during the manufacturing and storage of olive oil. They increase over time depending on the storage conditions and on whether the oil is exposed to heat treatments during the refining process. The traditional analysis method includes solvent- and time-consuming solid-phase extraction steps followed by analysis by high-performance liquid chromatography with ultraviolet detection. We developed an improved dilute/fluorescence method in which multi-step sample preparation was replaced by a simple isopropanol dilution before the high-performance liquid chromatography injection. A quaternary solvent gradient method was used to include a fourth, strong wash solvent on a quaternary gradient pump, which avoided the need to premix any solvents and greatly reduced the oil residues left on the column from previous analyses. This new method not only reduces analysis cost and time but also shows reliability, repeatability, and improved sensitivity, which is especially important for low-level samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Analysis of Publically Available Skin Sensitization Data from REACH Registrations 2008–2014

    PubMed Central

    Luechtefeld, Thomas; Maertens, Alexandra; Russo, Daniel P.; Rovida, Costanza; Zhu, Hao; Hartung, Thomas

    2017-01-01

    Summary The public data on skin sensitization from REACH registrations already included 19,111 studies on skin sensitization in December 2014, making it the largest repository of such data so far (1,470 substances with mouse LLNA, 2,787 with GPMT, 762 with both in vivo and in vitro and 139 with only in vitro data). 21% were classified as sensitizers. The extracted skin sensitization data were analyzed to identify relationships in skin sensitization guidelines, visualize structural relationships of sensitizers, and build models to predict sensitization. A chemical with molecular weight > 500 Da is generally considered non-sensitizing owing to low bioavailability, but 49 sensitizing chemicals with a molecular weight > 500 Da were found. A chemical similarity map was produced using PubChem's 2D Tanimoto similarity metric and Gephi force layout visualization. Nine clusters of chemicals were identified by Blondel's module recognition algorithm, revealing wide module-dependent variation. Approximately 31% of mapped chemicals are Michael acceptors, but this alone does not imply skin sensitization. A simple sensitization model using molecular weight and five ToxTree structural alerts showed a balanced accuracy of 65.8% (specificity 80.4%, sensitivity 51.4%), demonstrating that structural alerts have information value. A simple variant of k-nearest neighbors outperformed the ToxTree approach even at a 75% similarity threshold (82% balanced accuracy at the 0.95 threshold). At higher thresholds, the balanced accuracy increased; lowering the similarity threshold decreased sensitivity faster than specificity. This analysis scopes the landscape of chemical skin sensitization, demonstrating the value of large public datasets for health hazard prediction. PMID:26863411
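    For reference, the balanced accuracy quoted above is consistent with the usual definition when the reported sensitivity and specificity of the structural-alert model are plugged in:

```latex
\mathrm{BA} = \frac{\mathrm{sensitivity} + \mathrm{specificity}}{2}
            = \frac{0.514 + 0.804}{2} \approx 0.659
```

    which matches the reported 65.8% up to rounding.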

  11. Microbial Diagnostic Microarrays for the Detection and Typing of Food- and Water-Borne (Bacterial) Pathogens

    PubMed Central

    Kostić, Tanja; Sessitsch, Angela

    2011-01-01

    Reliable and sensitive pathogen detection in clinical and environmental (including food and water) samples is of greatest importance for public health. Standard microbiological methods have several limitations and improved alternatives are needed. Most important requirements for reliable analysis include: (i) specificity; (ii) sensitivity; (iii) multiplexing potential; (iv) robustness; (v) speed; (vi) automation potential; and (vii) low cost. Microarray technology can, through its very nature, fulfill many of these requirements directly and the remaining challenges have been tackled. In this review, we attempt to compare performance characteristics of the microbial diagnostic microarrays developed for the detection and typing of food and water pathogens, and discuss limitations, points still to be addressed and issues specific for the analysis of food, water and environmental samples. PMID:27605332

  12. Can Automated Imaging for Optic Disc and Retinal Nerve Fiber Layer Analysis Aid Glaucoma Detection?

    PubMed

    Banister, Katie; Boachie, Charles; Bourne, Rupert; Cook, Jonathan; Burr, Jennifer M; Ramsay, Craig; Garway-Heath, David; Gray, Joanne; McMeekin, Peter; Hernández, Rodolfo; Azuara-Blanco, Augusto

    2016-05-01

    To compare the diagnostic performance of automated imaging for glaucoma. Prospective, direct comparison study. Adults with suspected glaucoma or ocular hypertension referred to hospital eye services in the United Kingdom. We evaluated 4 automated imaging test algorithms: the Heidelberg Retinal Tomography (HRT; Heidelberg Engineering, Heidelberg, Germany) glaucoma probability score (GPS), the HRT Moorfields regression analysis (MRA), scanning laser polarimetry (GDx enhanced corneal compensation; Glaucoma Diagnostics (GDx), Carl Zeiss Meditec, Dublin, CA) nerve fiber indicator (NFI), and Spectralis optical coherence tomography (OCT; Heidelberg Engineering) retinal nerve fiber layer (RNFL) classification. We defined abnormal tests as an automated classification of outside normal limits for HRT and OCT or NFI ≥ 56 (GDx). We conducted a sensitivity analysis using borderline abnormal image classifications. The reference standard was clinical diagnosis by a masked glaucoma expert, including standardized clinical assessment and automated perimetry. We analyzed 1 eye per patient (the one with more advanced disease). We also evaluated the performance according to severity and using a combination of 2 technologies. The main outcome measures were sensitivity and specificity, likelihood ratios, diagnostic odds ratio, and proportion of indeterminate tests. We recruited 955 participants, and 943 were included in the analysis. The average age was 60.5 years (standard deviation, 13.8 years); 51.1% were women. Glaucoma was diagnosed in at least 1 eye in 16.8%; 32% of participants had no glaucoma-related findings. The HRT MRA had the highest sensitivity (87.0%; 95% confidence interval [CI], 80.2%-92.1%), but the lowest specificity (63.9%; 95% CI, 60.2%-67.4%); GDx had the lowest sensitivity (35.1%; 95% CI, 27.0%-43.8%), but the highest specificity (97.2%; 95% CI, 95.6%-98.3%). The HRT GPS sensitivity was 81.5% (95% CI, 73.9%-87.6%), and specificity was 67.7% (95% CI, 64.2%-71.2%); OCT sensitivity was 76.9% (95% CI, 69.2%-83.4%), and specificity was 78.5% (95% CI, 75.4%-81.4%). Including only eyes with severe glaucoma, sensitivity increased: HRT MRA, HRT GPS, and OCT would miss 5% of eyes, and GDx would miss 21% of eyes. A combination of 2 different tests did not improve the accuracy substantially. Automated imaging technologies can aid clinicians in diagnosing glaucoma, but may not replace current strategies because they can miss some cases of severe glaucoma. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  13. Behavioral profiles of feline breeds in Japan.

    PubMed

    Takeuchi, Yukari; Mori, Yuji

    2009-08-01

    To clarify the behavioral profiles of 9 feline purebreds, 2 Persian subbreeds and the Japanese domestic cat, a questionnaire survey was distributed to 67 small-animal veterinarians. We found significant differences among breeds in all behavioral traits examined except for "inappropriate elimination". In addition, sexual differences were observed in certain behaviors, including "aggression toward cats", "general activity", "novelty-seeking", and "excitability". These behaviors were more common in males than females, whereas "nervousness" and "inappropriate elimination" were rated higher in females. When all breeds were categorized into four groups on the basis of a cluster analysis using the scores of two behavioral trait factors called "aggressiveness/sensitivity" and "vivaciousness", the group including Abyssinian, Russian Blue, Somali, Siamese, and Chinchilla breeds showed high aggressiveness/sensitivity and low vivaciousness. In contrast, the group including the American Shorthair and Japanese domestic cat displayed low aggressiveness/sensitivity and high vivaciousness, and the Himalayan and Persian group showed mild aggressiveness/sensitivity and very low vivaciousness. Finally, the group containing Maine Coon, Ragdoll, and Scottish Fold breeds displayed very low aggressiveness/sensitivity and low vivaciousness. The present results demonstrate that some feline behavioral traits vary by breed and/or sex.

  14. Resonance ionization for analytical spectroscopy

    DOEpatents

    Hurst, George S.; Payne, Marvin G.; Wagner, Edward B.

    1976-01-01

    This invention relates to a method for the sensitive and selective analysis of an atomic or molecular component of a gas. According to this method, the desired neutral component is ionized by one or more resonance photon absorptions, and the resultant ions are measured in a sensitive counter. Numerous energy pathways are described for accomplishing the ionization including the use of one or two tunable pulsed dye lasers.

  15. An Analysis of the Relationship between Educational Aspiration, Cross-Cultural Sensitivity, and Field of Study of Chinese Student-Teachers at the University of Macau.

    ERIC Educational Resources Information Center

    Koo, Ramsey D.

    This study examined the relationship among educational aspiration, cross-cultural sensitivity, and field of study of 196 Chinese student teachers enrolled in the Faculty of Education for Fall 1994 and Spring 1995 at the University of Macau (China). The study investigated other patterns of cross-cultural experience and activities, including average…

  16. A Transactional Analysis of the Relation between Maternal Sensitivity and Child Vagal Regulation

    ERIC Educational Resources Information Center

    Perry, Nicole B.; Mackler, Jennifer S.; Calkins, Susan D.; Keane, Susan P.

    2014-01-01

    A transactional model examining the longitudinal association between vagal regulation (as indexed by vagal withdrawal) and maternal sensitivity from age 2.5 to age 5.5 was assessed. The sample included 356 children (171 male, 185 female) and their mothers who participated in a laboratory visit at age 2.5, 4.5, and 5.5. Cardiac vagal tone was…

  17. LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    2000-01-01

    A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).

  18. Ignoring correlation in uncertainty and sensitivity analysis in life cycle assessment: what is the risk?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groen, E.A., E-mail: Evelyne.Groen@gmail.com; Heijungs, R.; Leiden University, Einsteinweg 2, Leiden 2333 CC

    Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict whether including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. - Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
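    The effect quantified in this record can be seen in the simplest analytical case: for an output that depends approximately linearly on two correlated inputs, the covariance term is exactly what is dropped when correlations are ignored. A minimal sketch with generic symbols (not the paper's notation):

```latex
Y = a X_1 + b X_2, \qquad
\operatorname{Var}(Y) = a^2 \sigma_1^2 + b^2 \sigma_2^2 + 2\,a\,b\,\rho\,\sigma_1 \sigma_2
```

    With positive coefficients, a positive correlation rho inflates the output variance and a negative one reduces it, so ignoring rho under- or overestimates the output uncertainty accordingly.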

  19. Ultrasound for Distal Forearm Fracture: A Systematic Review and Diagnostic Meta-Analysis

    PubMed Central

    Douma-den Hamer, Djoke; Blanker, Marco H.; Edens, Mireille A.; Buijteweg, Lonneke N.; Boomsma, Martijn F.; van Helden, Sven H.; Mauritz, Gert-Jan

    2016-01-01

    Study Objective To determine the diagnostic accuracy of ultrasound for detecting distal forearm fractures. Methods A systematic review and diagnostic meta-analysis was performed according to the PRISMA statement. We searched MEDLINE, Web of Science and the Cochrane Library from inception to September 2015. All prospective studies of the diagnostic accuracy of ultrasound versus radiography as the reference standard were included. We excluded studies with a retrospective design and those with evidence of verification bias. We assessed the methodological quality of the included studies with the QUADAS-2 tool. We performed a meta-analysis of studies evaluating ultrasound to calculate the pooled sensitivity and specificity with 95% confidence intervals (CI95%) using a bivariate model with random effects. Subgroup and sensitivity analyses were used to examine the effect of methodological differences and other study characteristics. Results Out of 867 publications we included 16 studies with 1,204 patients and 641 fractures. The pooled test characteristics for ultrasound were: sensitivity 97% (CI95% 93–99%), specificity 95% (CI95% 89–98%), positive likelihood ratio (LR) 20.0 (8.5–47.2) and negative LR 0.03 (0.01–0.08). The corresponding pooled diagnostic odds ratio (DOR) was 667 (142–3,133). Apparent differences were shown for method of viewing, with the 6-view method showing higher specificity, positive LR, and DOR, compared to the 4-view method. Conclusion The present meta-analysis showed that ultrasound has a high accuracy for the diagnosis of distal forearm fractures in children when a proper viewing method is used. Based on this, ultrasound should be considered a reliable alternative, which has the advantage of being radiation-free. PMID:27196439
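    As a consistency check (not part of the original analysis), the standard definitions of the likelihood ratios and diagnostic odds ratio give values close to the pooled estimates when the pooled sensitivity and specificity are plugged in; the small differences likely arise because the published pooled LRs and DOR come from the bivariate model rather than from these point estimates:

```latex
LR^{+} = \frac{Se}{1 - Sp} = \frac{0.97}{0.05} \approx 19.4, \qquad
LR^{-} = \frac{1 - Se}{Sp} = \frac{0.03}{0.95} \approx 0.032, \qquad
DOR = \frac{LR^{+}}{LR^{-}} \approx 614
```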

  20. Monte Carlo capabilities of the SCALE code system

    DOE PAGES

    Rearden, Bradley T.; Petrie, Jr., Lester M.; Peplow, Douglas E.; ...

    2014-09-12

    SCALE is a broadly used suite of tools for nuclear systems modeling and simulation that provides comprehensive, verified and validated, user-friendly capabilities for criticality safety, reactor physics, radiation shielding, and sensitivity and uncertainty analysis. For more than 30 years, regulators, licensees, and research institutions around the world have used SCALE for nuclear safety analysis and design. SCALE provides a “plug-and-play” framework that includes three deterministic and three Monte Carlo radiation transport solvers that can be selected based on the desired solution, including hybrid deterministic/Monte Carlo simulations. SCALE includes the latest nuclear data libraries for continuous-energy and multigroup radiation transport as well as activation, depletion, and decay calculations. SCALE’s graphical user interfaces assist with accurate system modeling, visualization, and convenient access to desired results. SCALE 6.2 will provide several new capabilities and significant improvements in many existing features, especially with expanded continuous-energy Monte Carlo capabilities for criticality safety, shielding, depletion, and sensitivity and uncertainty analysis. Finally, an overview of the Monte Carlo capabilities of SCALE is provided here, with emphasis on new features for SCALE 6.2.

  1. The diagnostic value of narrow-band imaging for early and invasive lung cancer: a meta-analysis.

    PubMed

    Zhu, Juanjuan; Li, Wei; Zhou, Jihong; Chen, Yuqing; Zhao, Chenling; Zhang, Ting; Peng, Wenjia; Wang, Xiaojing

    2017-07-01

    This study aimed to compare the ability of narrow-band imaging to detect early and invasive lung cancer with that of conventional pathological analysis and white-light bronchoscopy. We searched the PubMed, EMBASE, Sinomed, and China National Knowledge Infrastructure databases for relevant studies. Meta-DiSc software was used to perform data analysis, meta-regression analysis, sensitivity analysis, and heterogeneity testing, and STATA software was used to determine whether publication bias was present, as well as to calculate the relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer. A random-effects model was used to assess the diagnostic efficacy of the above modalities in cases in which a high degree of between-study heterogeneity was noted with respect to their diagnostic efficacies. The database search identified six studies including 578 patients. The pooled sensitivity and specificity of narrow-band imaging were 86% (95% confidence interval: 83-88%) and 81% (95% confidence interval: 77-84%), respectively, and the pooled sensitivity and specificity of white-light bronchoscopy were 70% (95% confidence interval: 66-74%) and 66% (95% confidence interval: 62-70%), respectively. The pooled relative risks for the sensitivity and specificity of narrow-band imaging vs those of white-light bronchoscopy for the detection of early and invasive lung cancer were 1.33 (95% confidence interval: 1.07-1.67) and 1.09 (95% confidence interval: 0.84-1.42), respectively, and sensitivity analysis showed that narrow-band imaging exhibited good diagnostic efficacy with respect to detecting early and invasive lung cancer and that the results of the study were stable. Narrow-band imaging was superior to white-light bronchoscopy with respect to detecting early and invasive lung cancer; however, the specificities of the two modalities did not differ significantly.

  2. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection

    PubMed Central

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close “neighborhood” of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa. PMID:26327290

  3. Sensitivity Analysis of an ENteric Immunity SImulator (ENISI)-Based Model of Immune Responses to Helicobacter pylori Infection.

    PubMed

    Alam, Maksudul; Deng, Xinwei; Philipson, Casandra; Bassaganya-Riera, Josep; Bisset, Keith; Carbo, Adria; Eubank, Stephen; Hontecillas, Raquel; Hoops, Stefan; Mei, Yongguo; Abedi, Vida; Marathe, Madhav

    2015-01-01

    Agent-based models (ABM) are widely used to study immune systems, providing a procedural and interactive view of the underlying system. The interaction of components and the behavior of individual objects is described procedurally as a function of the internal states and the local interactions, which are often stochastic in nature. Such models typically have complex structures and consist of a large number of modeling parameters. Determining the key modeling parameters which govern the outcomes of the system is very challenging. Sensitivity analysis plays a vital role in quantifying the impact of modeling parameters in massively interacting systems, including large complex ABM. The high computational cost of executing simulations impedes running experiments with exhaustive parameter settings. Existing techniques of analyzing such a complex system typically focus on local sensitivity analysis, i.e. one parameter at a time, or a close "neighborhood" of particular parameter settings. However, such methods are not adequate to measure the uncertainty and sensitivity of parameters accurately because they overlook the global impacts of parameters on the system. In this article, we develop novel experimental design and analysis techniques to perform both global and local sensitivity analysis of large-scale ABMs. The proposed method can efficiently identify the most significant parameters and quantify their contributions to outcomes of the system. We demonstrate the proposed methodology for ENteric Immune SImulator (ENISI), a large-scale ABM environment, using a computational model of immune responses to Helicobacter pylori colonization of the gastric mucosa.

  4. Rapid Debris Analysis Project Task 3 Final Report - Sensitivity of Fallout to Source Parameters, Near-Detonation Environment Material Properties, Topography, and Meteorology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldstein, Peter

    2014-01-24

    This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.

  5. Polymorphisms of three genes (ACE, AGT and CYP11B2) in the renin-angiotensin-aldosterone system are not associated with blood pressure salt sensitivity: A systematic meta-analysis.

    PubMed

    Sun, Jiahong; Zhao, Min; Miao, Song; Xi, Bo

    2016-01-01

    Many studies have suggested that polymorphisms of three key genes (ACE, AGT and CYP11B2) in the renin-angiotensin-aldosterone system (RAAS) play important roles in the development of blood pressure (BP) salt sensitivity, but they have revealed inconsistent results. Thus, we performed a meta-analysis to clarify the association. PubMed and Embase databases were searched for eligible published articles. Fixed- or random-effect models were used to pool odds ratios and 95% confidence intervals based on whether there was significant heterogeneity between studies. In total, seven studies [237 salt-sensitive (SS) cases and 251 salt-resistant (SR) controls] for the ACE gene I/D polymorphism, three studies (130 SS cases and 221 SR controls) for the AGT gene M235T polymorphism and three studies (113 SS cases and 218 SR controls) for the CYP11B2 gene C344T polymorphism were included in this meta-analysis. The results showed that there was no significant association between these three polymorphisms in the RAAS and BP salt sensitivity under three genetic models (all p > 0.05). The meta-analysis suggested that the three polymorphisms (ACE gene I/D, AGT gene M235T, CYP11B2 gene C344T) in the RAAS have no significant effect on BP salt sensitivity.

  6. Optical modeling of waveguide coupled TES detectors towards the SAFARI instrument for SPICA

    NASA Astrophysics Data System (ADS)

    Trappe, N.; Bracken, C.; Doherty, S.; Gao, J. R.; Glowacka, D.; Goldie, D.; Griffin, D.; Hijmering, R.; Jackson, B.; Khosropanah, P.; Mauskopf, P.; Morozov, D.; Murphy, A.; O'Sullivan, C.; Ridder, M.; Withington, S.

    2012-09-01

    The next generation of space missions targeting far-infrared wavelengths will require large-format arrays of extremely sensitive detectors. Transition Edge Sensor (TES) array technology is being developed for future Far-Infrared (FIR) space applications such as the SAFARI instrument for SPICA, where low noise and high sensitivity are required to achieve ambitious science goals. In this paper we describe a modal analysis of multi-moded horn antennas feeding integrating cavities that house TES detectors with superconducting film absorbers. In high-sensitivity TES detector technology the ability to control the electromagnetic and thermo-mechanical environment of the detector is critical. Simulating and understanding the optical behaviour of such detectors at far-IR wavelengths is difficult and requires further development of existing analysis tools. The proposed modal approach offers a computationally efficient technique to describe the partially coherent response of the full pixel in terms of optical efficiency and power leakage between pixels. Initial work carried out as part of an ESA technical research project on optical analysis is described, and a prototype SAFARI pixel design is analyzed in which the optical coupling between the incoming field and the pixel (horn, cavity with an air gap, and thin absorber layer) is fully included in the model to allow a comprehensive optical characterization. The modal approach described is based on the mode-matching technique, where the horn and cavity are described in the traditional way while a technique to include the absorber was developed. Radiation leakage between pixels is also included, making this a powerful analysis tool.

  7. Cost–effectiveness analysis of quadrivalent influenza vaccine in Spain

    PubMed Central

    García, Amos; Ortiz de Lejarazu, Raúl; Reina, Jordi; Callejo, Daniel; Cuervo, Jesús; Morano Larragueta, Raúl

    2016-01-01

    Influenza has a major impact on healthcare systems and society, but can be prevented by vaccination. The World Health Organization (WHO) currently recommends that influenza vaccines should include at least two virus A and one virus B lineage (trivalent vaccine; TIV). A new quadrivalent vaccine (QIV), which includes an additional B virus strain, received regulatory approval and is now recommended by several countries. The present study estimates the cost-effectiveness of replacing TIVs with QIV for risk groups and the elderly population in Spain. A static, lifetime, multi-cohort Markov model with a one-year cycle time was adapted to assess the costs and health outcomes associated with a switch from TIV to QIV. The model followed a cohort vaccinated each year according to health authority recommendations, for the duration of their lives. National epidemiological data allowed the determination of whether the B strain included in TIVs matched the circulating one. The societal perspective was considered, costs and outcomes were discounted at 3%, and one-way and probabilistic sensitivity analyses were performed. Compared with TIVs, QIV prevented more influenza cases, influenza-related complications, and deaths during periods in which the B strain included in the TIV was mismatched. The incremental cost-effectiveness ratio (ICER) was 8,748€/quality-adjusted life year (QALY). One-way sensitivity analysis showed that mismatch with the B lineage included in the TIV was the main driver of the ICER. Probabilistic sensitivity analysis showed an ICER below 30,000€/QALY in 96% of simulations. Replacing TIVs with QIV in Spain could improve influenza prevention by avoiding B virus mismatch and provide a cost-effective healthcare intervention. PMID:27184622

  8. Cost-effectiveness analysis of quadrivalent influenza vaccine in Spain.

    PubMed

    García, Amos; Ortiz de Lejarazu, Raúl; Reina, Jordi; Callejo, Daniel; Cuervo, Jesús; Morano Larragueta, Raúl

    2016-09-01

    Influenza has a major impact on healthcare systems and society, but can be prevented by vaccination. The World Health Organization (WHO) currently recommends that influenza vaccines should include at least two virus A and one virus B lineage (trivalent vaccine; TIV). A new quadrivalent vaccine (QIV), which includes an additional B virus strain, received regulatory approval and is now recommended by several countries. The present study estimates the cost-effectiveness of replacing TIVs with QIV for risk groups and the elderly population in Spain. A static, lifetime, multi-cohort Markov model with a one-year cycle time was adapted to assess the costs and health outcomes associated with a switch from TIV to QIV. The model followed a cohort vaccinated each year according to health authority recommendations, for the duration of their lives. National epidemiological data allowed the determination of whether the B strain included in TIVs matched the circulating one. The societal perspective was considered, costs and outcomes were discounted at 3%, and one-way and probabilistic sensitivity analyses were performed. Compared with TIVs, QIV prevented more influenza cases, influenza-related complications, and deaths during periods in which the B strain included in the TIV was mismatched. The incremental cost-effectiveness ratio (ICER) was 8,748€/quality-adjusted life year (QALY). One-way sensitivity analysis showed that mismatch with the B lineage included in the TIV was the main driver of the ICER. Probabilistic sensitivity analysis showed an ICER below 30,000€/QALY in 96% of simulations. Replacing TIVs with QIV in Spain could improve influenza prevention by avoiding B virus mismatch and provide a cost-effective healthcare intervention.
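    The headline figure in both records above is an incremental cost-effectiveness ratio, which is the cost difference between the two vaccination strategies divided by their difference in quality-adjusted life years. The abstracts report only the resulting ratio, so the numerator and denominator below are left symbolic rather than filled with invented values:

```latex
\mathrm{ICER} = \frac{C_{\mathrm{QIV}} - C_{\mathrm{TIV}}}{\mathrm{QALY}_{\mathrm{QIV}} - \mathrm{QALY}_{\mathrm{TIV}}}
              = 8{,}748\ \text{EUR/QALY}
```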

  9. Synovial Fluid α-Defensin as a Biomarker for Peri-Prosthetic Joint Infection: A Systematic Review and Meta-Analysis.

    PubMed

    Li, Bin; Chen, Fei; Liu, Yi; Xu, Guokang

    Total joint arthroplasty (TJA) has been one of the most beneficial interventions for treating patients suffering from joint disorders. However, peri-prosthetic joint infection (PJI) is a serious complication that often accompanies TJA, and the diagnosis of PJI remains difficult. Questions remain regarding whether certain biomarkers can be valuable in the diagnosis of PJI. We conducted our systematic review by searching PubMed, Embase, Web of Science, the Cochrane Library, and Science Direct with the key words "periprosthetic joint infection," "synovial fluid," and "α-defensin." Studies that provided sufficient data to construct 2 × 2 contingency tables were chosen based on inclusion and exclusion criteria. The quality of included studies was assessed according to the revised Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) criteria. The pooled sensitivity, specificity, and diagnostic odds ratio (DOR) were calculated for the included studies. The summary receiver operating characteristic (SROC) curve and the area under the summary receiver operating characteristic curve (AUSROC) were used to evaluate the overall diagnostic performance. Eight studies were included in this systematic review. Among them, four articles were included in the meta-analysis, in which a total of 421 participants were studied. The pooled sensitivity, specificity, and DOR were 0.98 (95% confidence interval [CI]: 0.94-1.00), 0.97 (95% CI: 0.95-0.99), and 1095.49 (95% CI: 283.68.58-4230.45), respectively. The AUSROC was 0.9949 (standard error [SE] 0.0095). Synovial fluid α-defensin is a biomarker of high sensitivity and specificity for the diagnosis of PJI.

  10. Program Helps To Determine Chemical-Reaction Mechanisms

    NASA Technical Reports Server (NTRS)

    Bittker, D. A.; Radhakrishnan, K.

    1995-01-01

    General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code developed for use in solving complex, homogeneous, gas-phase, chemical-kinetics problems. Provides for efficient and accurate chemical-kinetics computations and for sensitivity analysis for a variety of problems, including problems involving nonisothermal conditions. Incorporates mathematical models for static system, steady one-dimensional inviscid flow, reaction behind incident shock wave (with boundary-layer correction), and perfectly stirred reactor. Computations of equilibrium properties performed for following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. Written in FORTRAN 77 with exception of NAMELIST extensions used for input.

  11. 15 CFR 990.52 - Injury assessment-quantification.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., the time for natural recovery without restoration, but including any response actions. The analysis of... injury; (2) The sensitivity and vulnerability of the injured natural resource and/or service; (3) The...

  12. New features and improved uncertainty analysis in the NEA nuclear data sensitivity tool (NDaST)

    NASA Astrophysics Data System (ADS)

    Dyrda, J.; Soppera, N.; Hill, I.; Bossant, M.; Gulliford, J.

    2017-09-01

    Following the release and initial testing period of the NEA's Nuclear Data Sensitivity Tool [1], new features have been designed and implemented in order to expand its uncertainty analysis capabilities. The aim is to provide a free online tool for integral benchmark testing that is both efficient and comprehensive, meeting the needs of the nuclear data and benchmark testing communities. New features include access to P1 sensitivities for the neutron scattering angular distribution [2] and constrained Chi sensitivities for the prompt fission neutron energy sampling. Both of these are compatible with covariance data accessed via the JANIS nuclear data software, enabling propagation of the resultant uncertainties in keff to a large series of integral experiment benchmarks. These capabilities are available using a number of different covariance libraries, e.g., ENDF/B, JEFF, JENDL and TENDL, allowing comparison of the broad range of results it is possible to obtain. The IRPhE database of reactor physics measurements is now also accessible within the tool, in addition to the criticality benchmarks from ICSBEP. Other improvements include the ability to determine and visualise the energy dependence of a given calculated result in order to better identify specific regions of importance or high uncertainty contribution. Sorting and statistical analysis of the selected benchmark suite is now also provided. Examples of the plots generated by the software are included to illustrate such capabilities. Finally, a number of analytical expressions, for example Maxwellian and Watt fission spectra, will be included. This will allow the analyst to determine the impact of varying such distributions within the data evaluation, either through adjustment of parameters within the expressions, or by comparison to a more general probability distribution fitted to measured data. The impact of such changes is verified through calculations which are compared to a 'direct' measurement found by adjustment of the original ENDF format file.

  13. 75 FR 1785 - Agrium Inc. and CF Industries Holding, Inc.; Analysis of the Agreement Containing Consent Orders...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-13

    ... number. Comments also should not include any sensitive health information, such as medical records or other individually identifiable health information. In addition, comments should not include any ``[t... fertilizers: nitrogen, phosphate, and potash, as well as control release fertilizers and micronutrients...

  14. Birth weight, current anthropometric markers, and high sensitivity C-reactive protein in Brazilian school children.

    PubMed

    Boscaini, Camile; Pellanda, Lucia Campos

    2015-01-01

    Studies have shown associations of birth weight with increased concentrations of high sensitivity C-reactive protein. This study assessed the relationship between birth weight, anthropometric and metabolic parameters during childhood, and high sensitivity C-reactive protein. A total of 612 Brazilian school children aged 5-13 years were included in the study. High sensitivity C-reactive protein was measured by particle-enhanced immunonephelometry. Nutritional status was assessed by body mass index, waist circumference, and skinfolds. Total cholesterol and fractions, triglycerides, and glucose were measured by enzymatic methods. Insulin sensitivity was determined by the homeostasis model assessment method. Statistical analysis included the chi-square test, the General Linear Model, and the General Linear Model for the Gamma Distribution. Body mass index, waist circumference, and skinfolds were directly associated with birth weight (P < 0.001, P = 0.001, and P = 0.015, respectively). Large for gestational age children showed higher high sensitivity C-reactive protein levels (P < 0.001) than small for gestational age children. High birth weight is associated with higher levels of high sensitivity C-reactive protein, body mass index, waist circumference, and skinfolds. Being large for gestational age was associated with altered high sensitivity C-reactive protein and represents an additional risk factor for atherosclerosis in these school children, independent of current nutritional status.

  15. Phase 1 of the automated array assembly task of the low cost silicon solar array project

    NASA Technical Reports Server (NTRS)

    Pryor, R. A.; Grenon, L. A.; Coleman, M. G.

    1978-01-01

    The results of a study of process variables and solar cell variables are presented. Interactions between variables and their effects upon control ranges of the variables are identified. The results of a cost analysis for manufacturing solar cells are discussed. The cost analysis includes a sensitivity analysis of a number of cost factors.

  16. System Sensitivity Analysis Applied to the Conceptual Design of a Dual-Fuel Rocket SSTO

    NASA Technical Reports Server (NTRS)

    Olds, John R.

    1994-01-01

    This paper reports the results of initial efforts to apply the System Sensitivity Analysis (SSA) optimization method to the conceptual design of a single-stage-to-orbit (SSTO) launch vehicle. SSA is an efficient, calculus-based MDO technique for generating sensitivity derivatives in a highly multidisciplinary design environment. The method has been successfully applied to conceptual aircraft design and has been proven to have advantages over traditional direct optimization methods. The method is applied to the optimization of an advanced, piloted SSTO design similar to vehicles currently being analyzed by NASA as possible replacements for the Space Shuttle. Powered by a derivative of the Russian RD-701 rocket engine, the vehicle employs a combination of hydrocarbon, hydrogen, and oxygen propellants. Three primary disciplines are included in the design - propulsion, performance, and weights & sizing. A complete, converged vehicle analysis depends on the use of three standalone conceptual analysis computer codes. Efforts to minimize vehicle dry (empty) weight are reported in this paper. The problem consists of six system-level design variables and one system-level constraint. Using SSA in a 'manual' fashion to generate gradient information, six system-level iterations were performed from each of two different starting points. The results showed a good pattern of convergence for both starting points. A discussion of the advantages and disadvantages of the method, possible areas of improvement, and future work is included.

  17. The diagnostic performance of shear wave elastography for malignant cervical lymph nodes: A systematic review and meta-analysis.

    PubMed

    Suh, Chong Hyun; Choi, Young Jun; Baek, Jung Hwan; Lee, Jeong Hyun

    2017-01-01

    To evaluate the diagnostic performance of shear wave elastography for malignant cervical lymph nodes. We searched the Ovid-MEDLINE and EMBASE databases for published studies regarding the use of shear wave elastography for diagnosing malignant cervical lymph nodes. The diagnostic performance of shear wave elastography was assessed using bivariate modelling and hierarchical summary receiver operating characteristic modelling. Meta-regression analysis and subgroup analysis according to acoustic radiation force impulse imaging (ARFI) and Supersonic shear imaging (SSI) were also performed. Eight eligible studies, which included a total of 481 patients with 647 cervical lymph nodes, were included. Shear wave elastography showed a summary sensitivity of 81 % (95 % CI: 72-88 %) and specificity of 85 % (95 % CI: 70-93 %). The results of meta-regression analysis revealed that the prevalence of malignant lymph nodes was a significant factor affecting study heterogeneity (p < .01). According to the subgroup analysis, the summary estimates of sensitivity and specificity did not differ between ARFI and SSI (p = .93). Shear wave elastography is an acceptable imaging modality for diagnosing malignant cervical lymph nodes. We believe that both ARFI and SSI may have a complementary role for diagnosing malignant cervical lymph nodes. • Shear wave elastography is an acceptable modality for diagnosing malignant cervical lymph nodes. • Shear wave elastography demonstrated a summary sensitivity of 81 % and specificity of 85 %. • ARFI and SSI have complementary roles for diagnosing malignant cervical lymph nodes.

  18. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, provided also new insights in the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  19. The BCL2 antagonist of cell death pathway influences endometrial cancer cell sensitivity to cisplatin.

    PubMed

    Chon, Hye Sook; Marchion, Douglas C; Xiong, Yin; Chen, Ning; Bicaku, Elona; Stickles, Xiaomang Ba; Bou Zgheib, Nadim; Judson, Patricia L; Hakam, Ardeshir; Gonzalez-Bosquet, Jesus; Wenham, Robert M; Apte, Sachin M; Lancaster, Johnathan M

    2012-01-01

    To identify pathways that influence endometrial cancer (EC) cell sensitivity to cisplatin and to characterize the BCL2 antagonist of cell death (BAD) pathway as a therapeutic target to increase cisplatin sensitivity. Eight EC cell lines (Ishikawa, MFE296, RL 95-2, AN3CA, KLE, MFE280, MFE319, HEC-1-A) were subjected to Affymetrix Human U133A GeneChip expression analysis of approximately 22,000 probe sets. In parallel, endometrial cell line sensitivity to cisplatin was quantified by MTS assay, and IC(50) values were calculated. Pearson's correlation test was used to identify genes associated with response to cisplatin. Genes associated with cisplatin responsiveness were subjected to pathway analysis. The BAD pathway was identified and subjected to targeted modulation, and the effect on cisplatin sensitivity was evaluated. Pearson's correlation analysis identified 1443 genes associated with cisplatin resistance (P<0.05), which included representation of the BAD-apoptosis pathway. Small interfering RNA (siRNA) knockdown of BAD pathway protein phosphatase PP2C expression was associated with increased phosphorylated BAD (serine-155) levels and a parallel increase in cisplatin resistance in Ishikawa (P=0.004) and HEC-1-A (P=0.02) cell lines. In contrast, siRNA knockdown of protein kinase A expression increased cisplatin sensitivity in the Ishikawa (P=0.02) cell line. The BAD pathway influences EC cell sensitivity to cisplatin, likely via modulation of the phosphorylation status of the BAD protein. The BAD pathway represents an appealing therapeutic target to increase EC cell sensitivity to cisplatin. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Sensitivity Analysis in Engineering

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)

    1987-01-01

    The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.

  1. Evaluating aquatic invertebrate vulnerability to insecticides based on intrinsic sensitivity, biological traits, and toxic mode of action.

    PubMed

    Rico, Andreu; Van den Brink, Paul J

    2015-08-01

    In the present study, the authors evaluated the vulnerability of aquatic invertebrates to insecticides based on their intrinsic sensitivity and their population-level recovery potential. The relative sensitivity of invertebrates to 5 different classes of insecticides was calculated at the genus, family, and order levels using the acute toxicity data available in the US Environmental Protection Agency ECOTOX database. Biological trait information was linked to the calculated relative sensitivity to evaluate correlations between traits and sensitivity and to calculate a vulnerability index, which combines intrinsic sensitivity and traits describing the recovery potential of populations partially exposed to insecticides (e.g., voltinism, flying strength, occurrence in drift). The analysis shows that the relative sensitivity of arthropods depends on the insecticide mode of action. Traits such as degree of sclerotization, size, and respiration type showed good correlation to sensitivity and can be used to make predictions for invertebrate taxa without a priori sensitivity knowledge. The vulnerability analysis revealed that some of the Ephemeroptera, Plecoptera, and Trichoptera taxa were vulnerable to all insecticide classes and indicated that particular gastropod and bivalve species were potentially vulnerable. Microcrustaceans (e.g., daphnids, copepods) showed low potential vulnerability, particularly in lentic ecosystems. The methods described in the present study can be used for the selection of focal species to be included as part of ecological scenarios and higher tier risk assessments. © 2015 SETAC.

  2. Sensitivity analysis of periodic errors in heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.
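    The variance-based part of such a study can be sketched in a few lines of Python with the SALib library. This is a hedged illustration only: the factor names, bounds, and toy response below are assumptions standing in for the analytical periodic-error model, not the model or code used in the paper.

```python
# Sketch of variance-based (Sobol') sensitivity analysis via Saltelli sampling.
# The response function is a toy stand-in for a first-order periodic-error amplitude.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["beam_splitter_misalignment_deg", "freq_non_orthogonality_deg", "polarization_ellipticity"],
    "bounds": [[0.0, 2.0], [0.0, 2.0], [0.0, 0.1]],
}

def first_order_error(x):
    # Toy response combining the three imperfections.
    return np.sin(np.radians(x[1])) + 0.1 * np.radians(x[0]) * x[2]

X = saltelli.sample(problem, 1024)                  # Saltelli sampling scheme for Sobol' indices
Y = np.apply_along_axis(first_order_error, 1, X)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    # S1: first-order contribution to output variance; ST: total effect including interactions.
    print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")
```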

  3. Accuracy of screening women at familial risk of breast cancer without a known gene mutation: Individual patient data meta-analysis.

    PubMed

    Phi, Xuan-Anh; Houssami, Nehmat; Hooning, Maartje J; Riedl, Christopher C; Leach, Martin O; Sardanelli, Francesco; Warner, Ellen; Trop, Isabelle; Saadatmand, Sepideh; Tilanus-Linthorst, Madeleine M A; Helbich, Thomas H; van den Heuvel, Edwin R; de Koning, Harry J; Obdeijn, Inge-Marie; de Bock, Geertruida H

    2017-11-01

    Women with a strong family history of breast cancer (BC) and without a known gene mutation have an increased risk of developing BC. We aimed to investigate the accuracy of screening using annual mammography with or without magnetic resonance imaging (MRI) for these women outside the general population screening program. An individual patient data (IPD) meta-analysis was conducted using IPD from six prospective screening trials that had included women at increased risk for BC: only women with a strong familial risk for BC and without a known gene mutation were included in this analysis. A generalised linear mixed model was applied to estimate and compare screening accuracy (sensitivity, specificity and predictive values) for annual mammography with or without MRI. There were 2226 women (median age: 41 years, interquartile range 35-47) with 7478 woman-years of follow-up, and a BC rate of 12 (95% confidence interval 9.3-14) per 1000 woman-years. Mammography screening had a sensitivity of 55% (standard error of mean [SE] 7.0) and a specificity of 94% (SE 1.3). Screening with MRI alone had a sensitivity of 89% (SE 4.6) and a specificity of 83% (SE 2.8). Adding MRI to mammography increased sensitivity to 98% (SE 1.8, P < 0.01 compared to mammography alone) but lowered specificity to 79% (SE 2.7, P < 0.01 compared with mammography alone). In this population of women with strong familial BC risk but without a known gene mutation, in whom BC incidence was high both before and after age 50, adding MRI to mammography substantially increased screening sensitivity but also decreased its specificity. Copyright © 2017 Elsevier Ltd. All rights reserved.
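    Purely as an illustration of what these operating characteristics imply (not a result reported in the study), the positive predictive value of the combined strategy can be approximated with Bayes' rule if the annual detection rate of roughly 1.2% is treated as the pre-test probability:

```latex
\mathrm{PPV} = \frac{Se \cdot p}{Se \cdot p + (1 - Sp)(1 - p)}
             = \frac{0.98 \times 0.012}{0.98 \times 0.012 + 0.21 \times 0.988} \approx 0.05
```

    That is, under this rough assumption only about one in twenty positive screens would correspond to a cancer, which is the usual trade-off of the higher sensitivity and lower specificity noted above.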

  4. Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.

    PubMed

    Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael

    2012-07-01

    To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both a higher sensitivity and specificity when compared to 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided the context and direction for the diagnosis of early-stage ovarian cancer. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Diagnostic performance of matrix-assisted laser desorption ionisation time-of-flight mass spectrometry in blood bacterial infections: a systematic review and meta-analysis.

    PubMed

    Scott, Jamie S; Sterling, Sarah A; To, Harrison; Seals, Samantha R; Jones, Alan E

    2016-07-01

    Matrix-assisted laser desorption ionisation time-of-flight mass spectrometry (MALDI-TOF MS) has shown promise in decreasing time to identification of causative organisms compared to traditional methods; however, the utility of MALDI-TOF MS in a heterogeneous clinical setting is uncertain. To perform a systematic review on the operational performance of the Bruker MALDI-TOF MS system and evaluate published cut-off values compared to traditional blood cultures. A comprehensive literature search was performed. Studies were included if they performed direct MALDI-TOF MS analysis of blood culture specimens in human patients with suspected bacterial infections using the Bruker Biotyper software. Sensitivities and specificities of the combined studies were estimated using a hierarchical random effects linear model (REML) incorporating cut-off scores of ≥1.7 and ≥2.0. Fifty publications were identified, with 11 studies included after final review. The estimated sensitivity utilising a cut-off of ≥2.0 from the combined studies was 74.6% (95% CI = 67.9-89.3%), with an estimated specificity of 88.0% (95% CI = 74.8-94.7%). When assessing a cut-off of ≥1.7, the combined sensitivity increased to 92.8% (95% CI = 87.4-96.0%), but the estimated specificity decreased to 81.2% (95% CI = 61.9-96.6%). In this analysis, MALDI-TOF MS showed acceptable sensitivity and specificity in bacterial speciation with the current recommended cut-off point compared to blood cultures; however, lowering the cut-off point from ≥2.0 to ≥1.7 would increase the sensitivity of the test without a significant detrimental effect on the specificity, which could improve clinician confidence in the results.

  6. Neurobehavioral deficits, diseases, and associated costs of exposure to endocrine-disrupting chemicals in the European Union.

    PubMed

    Bellanger, Martine; Demeneix, Barbara; Grandjean, Philippe; Zoeller, R Thomas; Trasande, Leonardo

    2015-04-01

    Epidemiological studies and animal models demonstrate that endocrine-disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities. The objective was to estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union. An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder, and annual costs for attention-deficit hyperactivity disorder. Cost estimation was carried out from a societal perspective, ie, including direct costs (eg, treatment costs) and indirect costs such as productivity loss. The panel identified a 70-100% probability that polybrominated diphenyl ether and organophosphate exposures contribute to IQ loss in the European population. Polybrominated diphenyl ether exposures were associated with 873,000 (sensitivity analysis, 148,000 to 2.02 million) lost IQ points and 3290 (sensitivity analysis, 3290 to 8080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis, €1.58 billion to €22.4 billion). Organophosphate exposures were associated with 13.0 million (sensitivity analysis, 4.24 million to 17.1 million) lost IQ points and 59,300 (sensitivity analysis, 16,500 to 84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis, €46.8 billion to €194 billion). Autism spectrum disorder causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis, 126-631) attributable cases at a cost of €199 million (sensitivity analysis, €79.7 million to €399 million). Attention-deficit hyperactivity disorder causation by multiple EDCs was assigned a 20-69% probability, with 19,300 to 31,200 attributable cases at a cost of €1.21 billion to €2.86 billion. EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with a high probability of >€150 billion costs/year. These results emphasize the advantages of controlling EDC exposure.

  7. Analysis strategies for high-resolution UHF-fMRI data.

    PubMed

    Polimeni, Jonathan R; Renvall, Ville; Zaretskaya, Natalia; Fischl, Bruce

    2018-03-01

    Functional MRI (fMRI) benefits from both increased sensitivity and specificity with increasing magnetic field strength, making it a key application for Ultra-High Field (UHF) MRI scanners. Most UHF-fMRI studies utilize the dramatic increases in sensitivity and specificity to acquire high-resolution data reaching sub-millimeter scales, which enable new classes of experiments to probe the functional organization of the human brain. This review article surveys advanced data analysis strategies developed for high-resolution fMRI at UHF. These include strategies designed to mitigate distortion and artifacts associated with higher fields in ways that attempt to preserve spatial resolution of the fMRI data, as well as recently introduced analysis techniques that are enabled by these extremely high-resolution data. Particular focus is placed on anatomically-informed analyses, including cortical surface-based analysis, which are powerful techniques that can guide each step of the analysis from preprocessing to statistical analysis to interpretation and visualization. New intracortical analysis techniques for laminar and columnar fMRI are also reviewed and discussed. Prospects for single-subject individualized analyses are also presented and discussed. Altogether, there are both specific challenges and opportunities presented by UHF-fMRI, and the use of proper analysis strategies can help these valuable data reach their full potential. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Sensitivity studies of pediatric material properties on juvenile lumbar spine responses using finite element analysis.

    PubMed

    Jebaseelan, D Davidson; Jebaraj, C; Yoganandan, Narayan; Rajasekaran, S; Kanna, Rishi M

    2012-05-01

    The objective of the study was to determine the sensitivity of material properties of the juvenile spine to its external and internal responses using a finite element model under compression, and flexion-extension bending moments. The methodology included exercising the 8-year-old juvenile lumbar spine using parametric procedures. The model included the vertebral centrum, growth plates, laminae, pedicles, transverse processes and spinous processes; disc annulus and nucleus; and various ligaments. The sensitivity analysis was conducted by varying the modulus of elasticity for various components. The first simulation was done using mean material properties. Additional simulations were done for each component corresponding to low and high material property variations. External displacement/rotation and internal stress-strain responses were determined under compression and flexion-extension bending. Results indicated that, under compression, disc properties were more sensitive than bone properties, implying an elevated role of the disc under this mode. Under flexion-extension moments, ligament properties were more dominant than the other components, suggesting that various ligaments of the juvenile spine play a key role in modulating bending behaviors. Changes in the growth plate stress associated with ligament properties explained the importance of the growth plate in the pediatric spine with potential implications in progressive deformities.
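
    A minimal sketch of the one-at-a-time (low/mean/high) material-property study described above is given below. The response function is a toy analytic stand-in for the finite element simulation, and the component names, property values, and variation range are assumptions for illustration.

```python
import numpy as np

# Minimal sketch of a low/mean/high one-at-a-time material-property sensitivity study.
# The "model" is a toy analytic stand-in; a real study would call the FE solver instead.

def spine_response(E):
    # toy compressive displacement (mm) that decreases as component stiffnesses increase
    return 10.0 / (0.8 * E["disc_annulus"] + 0.0001 * E["cortical_bone"]
                   + 0.02 * E["growth_plate"])

mean_props = {"disc_annulus": 4.2, "cortical_bone": 10_000.0, "growth_plate": 25.0}  # MPa (assumed)
variation = 0.25                                         # assumed +/-25% low/high bounds

baseline = spine_response(mean_props)
for name in mean_props:
    out = {}
    for label, factor in (("low", 1 - variation), ("high", 1 + variation)):
        props = dict(mean_props)
        props[name] *= factor                            # perturb one property at a time
        out[label] = spine_response(props)
    swing = abs(out["high"] - out["low"]) / baseline     # normalized output swing
    print(f"{name:>14}: displacement {out['low']:.2f}-{out['high']:.2f} mm "
          f"(relative swing {swing:.1%})")
```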

  9. Visual Resource Analysis for Solar Energy Zones in the San Luis Valley

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sullivan, Robert; Abplanalp, Jennifer M.; Zvolanek, Emily

    This report summarizes the results of a study conducted by Argonne National Laboratory’s (Argonne’s) Environmental Science Division for the U.S. Department of the Interior Bureau of Land Management (BLM). The study analyzed the regional effects of potential visual impacts of solar energy development on three BLM-designated solar energy zones (SEZs) in the San Luis Valley (SLV) in Colorado, and, based on the analysis, made recommendations for or against regional compensatory mitigation to compensate residents and other stakeholders for the potential visual impacts to the SEZs. The analysis was conducted as part of the solar regional mitigation strategy (SRMS) task conducted by BLM Colorado with assistance from Argonne. Two separate analyses were performed. The first analysis, referred to as the VSA Analysis, analyzed the potential visual impacts of solar energy development in the SEZs on nearby visually sensitive areas (VSAs), and, based on the impact analyses, made recommendations for or against regional compensatory mitigation. VSAs are locations for which some type of visual sensitivity has been identified, either because the location is an area of high scenic value or because it is a location from which people view the surrounding landscape and attach some level of importance or sensitivity to what is seen from the location. The VSA analysis included both BLM-administered lands in Colorado and in the Taos FO in New Mexico. The second analysis, referred to as the SEZ Analysis, used BLM visual resource inventory (VRI) and other data on visual resources in the former Saguache and La Jara Field Offices (FOs), now contained within the San Luis Valley FO (SLFO), to determine whether the changes in scenic values that would result from the development of utility-scale solar energy facilities in the SEZs would affect the quality and quantity of valued scenic resources in the SLV region as a whole. If the regional effects were judged to be significant, regional compensatory mitigation was recommended. VRI data was not available for the Taos FO and it was not included in the SEZ analysis; the SEZ analysis includes BLM-administered lands in Colorado only.

  10. Novel and Practical Scoring Systems for the Diagnosis of Thyroid Nodules

    PubMed Central

    Wei, Ying; Zhou, Xinrong; Liu, Siyue; Wang, Hong; Liu, Limin; Liu, Renze; Kang, Jinsong; Hong, Kai; Wang, Daowen; Yuan, Gang

    2016-01-01

    Objective The clinical management of patients with thyroid nodules that are biopsied by fine-needle aspiration cytology and yield indeterminate results remains unsettled. The BRAF V600E mutation has dubious diagnostic value due to its low sensitivity. Novel strategies are urgently needed to distinguish thyroid malignancies from thyroid nodules. Design This prospective study included 504 thyroid nodules diagnosed by ultrasonography from 468 patients, and fine-needle aspiration cytology was performed under ultrasound guidance. Cytology and molecular analysis, including BRAF V600E, RET/PTC1 and RET/PTC3, were conducted simultaneously. The cytology, ultrasonography results, and mutational status were gathered and analyzed together. Predictive scoring systems were designed using a combination of diagnostic parameters for ultrasonography, cytology and genetic analysis. The utility of the scoring systems was analyzed and compared to detection using the individual methods alone or combined. Result The sensitivity of scoring system A (ultrasonography, cytology, BRAF V600E, RET/PTC) was nearly identical to that of scoring system B (ultrasonography, cytology, BRAF V600E); these were 91.0% and 90.2%, respectively. These sensitivities were significantly higher than those obtained using FNAC, genetic analysis and US alone or combined; their sensitivities were 63.9%, 70.7% and 87.2%, respectively. Scoring system C (ultrasonography, cytology) was slightly inferior to the former two scoring systems but still had relatively high sensitivity and specificity (80.5% and 95.1%, respectively), which were significantly superior to those of single cytology, ultrasonography or genetic analysis. In nodules with uncertain cytology, scoring systems A, B and C could elevate the malignancy detection rates to 69.7%, 69.7% and 63.6%, respectively. Conclusion These three scoring systems were quick for clinicians to master and could provide quantified information to predict the probability of malignant nodules. Scoring system B is recommended for improving the detection rate among nodules of uncertain cytology. PMID:27654865

  11. Comparison of Heidelberg Retina Tomograph-3 glaucoma probability score and Moorfields regression analysis of optic nerve head in glaucoma patients and healthy individuals.

    PubMed

    Caglar, Çagatay; Gul, Adem; Batur, Muhammed; Yasar, Tekin

    2017-01-01

    To compare the sensitivity and specificity of Moorfields regression analysis (MRA) and glaucoma probability score (GPS) between healthy and glaucomatous eyes with Heidelberg Retinal Tomograph 3 (HRT-3). The study included 120 eyes of 75 glaucoma patients and 138 eyes of 73 normal subjects, for a total of 258 eyes of 148 individuals. All measurements were performed with the HRT-3. Diagnostic test criteria (sensitivity, specificity, etc.) were used to evaluate how efficiently GPS and MRA algorithms in the HRT-3 discriminated between the glaucoma and control groups. The GPS showed 88 % sensitivity and 66 % specificity, whereas MRA had 71.5 % sensitivity and 82.5 % specificity. There was 71 % agreement between the final results of MRA and GPS in the glaucoma group. Excluding borderline patients from both analyses resulted in 91.6 % agreement. In the control group the level of agreement between MRA and GPS was 64 % including borderline patients and 84.1 % after excluding borderline patients. The accuracy rate was 92 % for MRA and 91 % for GPS in the glaucoma group excluding borderline patients; the difference was not statistically significant. In both cases (with and without borderline patients), agreement between MRA and GPS was higher in the glaucoma group. We found that both sensitivity and specificity increased with disc size for MRA, while the sensitivity increased and specificity decreased with larger disc sizes for GPS. HRT is able to quantify and clearly reveal structural changes in the ONH and RNFL in glaucoma.

  12. Identifying significant environmental features using feature recognition.

    DOT National Transportation Integrated Search

    2015-10-01

    The Department of Environmental Analysis at the Kentucky Transportation Cabinet has expressed an interest in feature-recognition capability because it may help analysts identify environmentally sensitive features in the landscape, : including those r...

  13. Performance of Polymerase Chain Reaction Analysis of the Amniotic Fluid of Pregnant Women for Diagnosis of Congenital Toxoplasmosis: A Systematic Review and Meta-Analysis.

    PubMed

    de Oliveira Azevedo, Christianne Terra; do Brasil, Pedro Emmanuel A A; Guida, Letícia; Lopes Moreira, Maria Elizabeth

    2016-01-01

    Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. A total of 1,269 summaries were obtained from the electronic database and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests' use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests' sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and specificity was 99%. The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I²). The tests show low evidence of heterogeneity with a sensitivity of 87% and specificity of 99% when performed up to five weeks after maternal diagnosis. The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis, when there is suspicion of fetal toxoplasmosis.

  14. Performance of Polymerase Chain Reaction Analysis of the Amniotic Fluid of Pregnant Women for Diagnosis of Congenital Toxoplasmosis: A Systematic Review and Meta-Analysis

    PubMed Central

    2016-01-01

    Introduction Congenital infection caused by Toxoplasma gondii can cause serious damage that can be diagnosed in utero or at birth, although most infants are asymptomatic at birth. Prenatal diagnosis of congenital toxoplasmosis considerably improves the prognosis and outcome for infected infants. For this reason, an assay for the quick, sensitive, and safe diagnosis of fetal toxoplasmosis is desirable. Goal To systematically review the performance of polymerase chain reaction (PCR) analysis of the amniotic fluid of pregnant women with recent serological toxoplasmosis diagnoses for the diagnosis of fetal toxoplasmosis. Method A systematic literature review was conducted via a search of electronic databases; the literature included primary studies of the diagnostic accuracy of PCR analysis of amniotic fluid from pregnant women who seroconverted during pregnancy. The PCR test was compared to a gold standard for diagnosis. Results A total of 1,269 summaries were obtained from the electronic database and reviewed, and 20 studies, comprising 4,171 samples, met the established inclusion criteria and were included in the review. The following results were obtained: studies about PCR assays for fetal toxoplasmosis are generally susceptible to bias; reports of the tests’ use lack critical information; the protocols varied among studies; the heterogeneity among studies was concentrated in the tests’ sensitivity; there was evidence that the sensitivity of the tests increases with time, as represented by the trimester; and there was more heterogeneity among studies in which there was more time between maternal diagnosis and fetal testing. The sensitivity of the method, if performed up to five weeks after maternal diagnosis, was 87% and specificity was 99%. Conclusion The global sensitivity heterogeneity of the PCR test in this review was 66.5% (I²). The tests show low evidence of heterogeneity with a sensitivity of 87% and specificity of 99% when performed up to five weeks after maternal diagnosis. The test has a known performance and could be recommended for use up to five weeks after maternal diagnosis, when there is suspicion of fetal toxoplasmosis. PMID:27055272

  15. A method for high-throughput, sensitive analysis of IgG Fc and Fab glycosylation by capillary electrophoresis.

    PubMed

    Mahan, Alison E; Tedesco, Jacquelynne; Dionne, Kendall; Baruah, Kavitha; Cheng, Hao D; De Jager, Philip L; Barouch, Dan H; Suscovich, Todd; Ackerman, Margaret; Crispin, Max; Alter, Galit

    2015-02-01

    The N-glycan of the IgG constant region (Fc) plays a central role in tuning and directing multiple antibody functions in vivo, including antibody-dependent cellular cytotoxicity, complement deposition, and the regulation of inflammation, among others. However, traditional methods of N-glycan analysis, including HPLC and mass spectrometry, are technically challenging and ill suited to handle the large numbers of low concentration samples analyzed in clinical or animal studies of the N-glycosylation of polyclonal IgG. Here we describe a capillary electrophoresis-based technique to analyze plasma-derived polyclonal IgG-glycosylation quickly and accurately in a cost-effective, sensitive manner that is well suited for high-throughput analyses. Additionally, because a significant fraction of polyclonal IgG is glycosylated on both Fc and Fab domains, we developed an approach to separate and analyze domain-specific glycosylation in polyclonal human, rhesus and mouse IgGs. Overall, this protocol allows for the rapid, accurate, and sensitive analysis of Fc-specific IgG glycosylation, which is critical for population-level studies of how antibody glycosylation may vary in response to vaccination or infection, and across disease states ranging from autoimmunity to cancer in both clinical and animal studies. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Meta-Analysis of Predictive Significance of the Black Hole Sign for Hematoma Expansion in Intracerebral Hemorrhage.

    PubMed

    Zheng, Jun; Yu, Zhiyuan; Guo, Rui; Li, Hao; You, Chao; Ma, Lu

    2018-04-27

    Hematoma expansion is related to unfavorable prognosis in intracerebral hemorrhage (ICH). The black hole sign is a novel marker on non-contrast computed tomography for predicting hematoma expansion. However, its predictive values are different in previous studies. Thus, this meta-analysis was conducted to evaluate the predictive significance of the black hole sign for hematoma expansion in ICH. A systematic literature search was performed. Original researches on the association between the black hole sign and hematoma expansion in ICH were included. Sensitivity and specificity were pooled to assess the predictive accuracy. A summary receiver operating characteristic (SROC) curve was developed. Deeks' funnel plot asymmetry test was used to assess the publication bias. Five studies with a total of 1495 patients were included in this study. The pooled sensitivity and specificity of the black hole sign for predicting hematoma expansion were 0.30 and 0.91, respectively. The area under the SROC curve was 0.78. There was no significant publication bias. This meta-analysis shows that the black hole sign is a helpful imaging marker for predicting hematoma expansion in ICH. Although the black hole sign has a relatively low sensitivity, its specificity is relatively high. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool found that input variables such as moments, mass, thrust dispersions, and date of launch were significant factors for the success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
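
    The sketch below illustrates, under assumptions, the kind of success-probability sensitivity measure described above: dispersed Monte Carlo inputs are split at their medians and the conditional probabilities of meeting a requirement are compared. The input names, the toy performance metric, and the pass/fail threshold are placeholders, not the actual Orion GNC variables or the CFT implementation.

```python
import numpy as np

# Minimal sketch: estimate how the probability of meeting a requirement changes
# with each dispersed Monte Carlo input. All inputs and the requirement are
# assumed placeholders.

rng = np.random.default_rng(1)
n = 20_000
inputs = {
    "mass_offset": rng.normal(0.0, 1.0, n),
    "thrust_disp": rng.normal(0.0, 1.0, n),
    "launch_day":  rng.integers(0, 30, n).astype(float),
}
# toy performance metric and requirement (touchdown miss distance below a threshold)
miss = 2.0 + 1.5 * inputs["thrust_disp"] + 0.2 * inputs["mass_offset"] + rng.normal(0, 0.5, n)
success = miss < 3.5

print(f"overall success probability: {success.mean():.3f}")
for name, x in inputs.items():
    # split each input at its median and compare conditional success probabilities;
    # a large gap flags the variable as a driving factor for this requirement
    lo = success[x <= np.median(x)].mean()
    hi = success[x > np.median(x)].mean()
    print(f"{name:>12}: P(success | low)={lo:.3f}  P(success | high)={hi:.3f}  gap={abs(hi - lo):.3f}")
```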

  18. Sensitivity Analysis Reveals Critical Factors that Affect Wetland Methane Emissions using Soil Biogeochemistry Model

    NASA Astrophysics Data System (ADS)

    Alonso-Contes, C.; Gerber, S.; Bliznyuk, N.; Duerr, I.

    2017-12-01

    Wetlands contribute approximately 20 to 40% of global methane emissions. We built a methane model for tropical and subtropical forests that allows inundated conditions, following the approaches used in more complex global biogeochemical emission models (LPJWhyMe and CLM4Me). The model was designed to replace model formulations with field and remotely sensed data for two essential drivers: plant productivity and hydrology. This allows us to focus directly on the central processes of methane production, consumption and transport. One of our long-term goals is to make the model available to scientists interested in including methane modeling in their location of study. Sensitivity analysis results help in focusing field data collection efforts. Here, we present results from a pilot global sensitivity analysis of the model in order to determine which parameters and processes contribute most to the model's uncertainty in methane emissions. Results show that parameters related to water table behavior, carbon input (in the form of plant productivity) and rooting depth affect simulated methane emissions the most. Current efforts include repeating the sensitivity analysis on methane emission outputs from an updated model that incorporates a soil heat flux routine, in order to determine the extent to which the soil temperature parameters affect CH4 emissions. We are currently conducting field data collection during Summer 2017 for comparison among three different landscapes located in the Ordway-Swisher Biological Station in Melrose, FL. We are collecting soil moisture and CH4 emission data from four different wetland types; having data from four wetland types allows for calibration of the model to diverse soil, water and vegetation characteristics.

  19. Screening Performance Characteristic of Ultrasonography and Radiography in Detection of Pleural Effusion; a Meta-Analysis.

    PubMed

    Yousefifard, Mahmoud; Baikpour, Masoud; Ghelichkhani, Parisa; Asady, Hadi; Shahsavari Nia, Kavous; Moghadas Jafari, Ali; Hosseini, Mostafa; Safari, Saeed

    2016-01-01

    The role of ultrasonography in detection of pleural effusion has long been a subject of interest but controversial results have been reported. Accordingly, this study aims to conduct a systematic review of the available literature on diagnostic value of ultrasonography and radiography in detection of pleural effusion through a meta-analytic approach. An extended search was done in databases of Medline, EMBASE, ISI Web of Knowledge, Scopus, Cochrane Library, and ProQuest. Two reviewers independently extracted the data and assessed the quality of the articles. Meta-analysis was performed using a mixed-effects binary regression model. Finally, subgroup analysis was carried out in order to find the sources of heterogeneity between the included studies. 12 studies were included in this meta-analysis (1554 subjects, 58.6% male). Pooled sensitivity of ultrasonography in detection of pleural effusion was 0.94 (95% CI: 0.88-0.97; I²= 84.23, p<0.001) and its pooled specificity was calculated to be 0.98 (95% CI: 0.92-1.0; I²= 88.65, p<0.001), while sensitivity and specificity of chest radiography were 0.51 (95% CI: 0.33-0.68; I²= 91.76, p<0.001) and 0.91 (95% CI: 0.68-0.98; I²= 92.86, p<0.001), respectively. Sensitivity of ultrasonography was found to be higher when the procedure was carried out by an intensivist or a radiologist using 5-10 MHz transducers. Chest ultrasonography, as a screening tool, has a higher diagnostic accuracy in identification of pleural effusion compared to radiography. The sensitivity of this imaging modality was found to be higher when performed by a radiologist or an intensivist and using 5-10 MHz probes.

  20. Genetics and clinical response to warfarin and edoxaban in patients with venous thromboembolism

    PubMed Central

    Vandell, Alexander G; Walker, Joseph; Brown, Karen S; Zhang, George; Lin, Min; Grosso, Michael A; Mercuri, Michele F

    2017-01-01

    Objective The aim of this study was to investigate whether genetic variants can identify patients with venous thromboembolism (VTE) at an increased risk of bleeding with warfarin. Methods Hokusai-venous thromboembolism (Hokusai VTE), a randomised, multinational, double-blind, non-inferiority trial, evaluated the safety and efficacy of edoxaban versus warfarin in patients with VTE initially treated with heparin. In this subanalysis of Hokusai VTE, patients genotyped for variants in CYP2C9 and VKORC1 genes were divided into three warfarin sensitivity types (normal, sensitive and highly sensitive) based on their genotypes. An exploratory analysis was also conducted comparing normal responders to pooled sensitive responders (ie, sensitive and highly sensitive responders). Results The analysis included 47.7% (3956/8292) of the patients in Hokusai VTE. Among 1978 patients randomised to warfarin, 63.0% (1247) were normal responders, 34.1% (675) were sensitive responders and 2.8% (56) were highly sensitive responders. Compared with normal responders, sensitive and highly sensitive responders had heparin therapy discontinued earlier (p<0.001), had a decreased final weekly warfarin dose (p<0.001), spent more time overanticoagulated (p<0.001) and had an increased bleeding risk with warfarin (sensitive responders HR 1.38 [95% CI 1.11 to 1.71], p=0.0035; highly sensitive responders 1.79 [1.09 to 2.99]; p=0.0252). Conclusion In this study, CYP2C9 and VKORC1 genotypes identified patients with VTE at increased bleeding risk with warfarin. Trial registration number NCT00986154. PMID:28689179

  1. Additional EIPC Study Analysis. Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadley, Stanton W; Gotham, Douglas J.; Luciani, Ralph L.

    Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 14 topics was developed for further analysis. This paper brings together the earlier interim reports of the first 13 topics plus one additional topic into a single final report.

  2. The Relationship Between Serum Endocan Levels With the Presence of Slow Coronary Flow: A Cross-Sectional Study.

    PubMed

    Kundi, Harun; Gok, Murat; Kiziltunc, Emrullah; Topcuoglu, Canan; Cetin, Mustafa; Cicekcioglu, Hulya; Ugurlu, Burcu; Ulusoy, Feridun Vasfi

    2017-07-01

    The aim of this study was to investigate the relationship between endocan levels and the presence of slow coronary flow (SCF). In this cross-sectional study, a total of 88 patients who were admitted to our hospital were included. Of these, 53 patients with SCF and 35 patients with normal coronary flow were included in the final analysis. Coronary flow rates of all patients were determined by the TIMI frame count (TFC) method. In correlation analysis, endocan levels revealed a significantly positive correlation with high-sensitivity C-reactive protein and corrected TFC. In multivariate logistic regression analysis, endocan levels were found to be independently associated with the presence of SCF. Finally, using a cut-off level of 2.3, the endocan level predicted the presence of SCF with a sensitivity of 77.2% and specificity of 75.2%. In conclusion, our study showed that higher endocan levels were significantly and independently related to the presence of SCF.
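
    As an illustration of how a single diagnostic cut-off with an associated sensitivity and specificity can be chosen, the sketch below sweeps candidate thresholds and selects the one maximizing Youden's J. The simulated endocan values are placeholders, and the study's actual cut-off derivation may have differed.

```python
import numpy as np

# Minimal sketch of choosing a biomarker cut-off by maximizing Youden's J.
# Values are simulated placeholders, not the study's measurements.

rng = np.random.default_rng(7)
endocan_scf    = rng.lognormal(mean=1.0, sigma=0.4, size=53)   # assumed: patients with SCF
endocan_normal = rng.lognormal(mean=0.6, sigma=0.4, size=35)   # assumed: normal coronary flow

values = np.concatenate([endocan_scf, endocan_normal])
labels = np.concatenate([np.ones(53, bool), np.zeros(35, bool)])

best = None
for cut in np.unique(values):
    sens = np.mean(values[labels] >= cut)        # true positives above the cut-off
    spec = np.mean(values[~labels] < cut)        # true negatives below the cut-off
    j = sens + spec - 1.0                        # Youden's J statistic
    if best is None or j > best[0]:
        best = (j, cut, sens, spec)

j, cut, sens, spec = best
print(f"optimal cut-off {cut:.2f}: sensitivity {sens:.1%}, specificity {spec:.1%} (J={j:.2f})")
```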

  3. Sensitivity to Mental Effort and Test-Retest Reliability of Heart Rate Variability Measures in Healthy Seniors

    PubMed Central

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P.; Oken, Barry S.

    2011-01-01

    Objectives To determine 1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and 2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Methods Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings two weeks later. Traditional and 13 non-linear indices of HRV including Poincaré, entropy and detrended fluctuation analysis (DFA) were determined. Results Time domain (especially mean R-R interval/RRI), frequency domain and, among nonlinear parameters, Poincaré and DFA were the most reliable indices. Mean RRI, time domain and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect size. Conclusions Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. Significance A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. PMID:21459665

  4. Recent approaches for enhancing sensitivity in enantioseparations by CE.

    PubMed

    Sánchez-Hernández, Laura; García-Ruiz, Carmen; Luisa Marina, María; Luis Crego, Antonio

    2010-01-01

    This article reviews the latest methodological and instrumental improvements for enhancing sensitivity in chiral analysis by CE. The review covers literature from March 2007 until May 2009, that is, the works published after the appearance of the latest review article on the same topic by Sánchez-Hernández et al. [Electrophoresis 2008, 29, 237-251]. Off-line and on-line sample treatment techniques, on-line sample preconcentration strategies based on electrophoretic and chromatographic principles, and alternative detection systems to the widely employed UV/Vis detection in CE are the most relevant approaches discussed for improving sensitivity. Microchip technologies are also included since they can open up great possibilities to achieve sensitive and fast enantiomeric separations.

  5. [Dealing with sensitive interview topics--insights into the research project "Everyday life of people with urinary incontinence"].

    PubMed

    Hayder, Daniela; Cintron, Alexa; Schnell, Martin W; Schnepp, Wilfried

    2009-10-01

    This article has been written as part of a research project investigating the experiences of people with urinary incontinence. In this article a systematic literature analysis combined with excerpts from the study was used to describe and reflect on the best way to conduct interviews on sensitive topics. Ethical aspects are emphasised. These include informed and process consent, different types, places, and phases of such an interview, and reasons for people to participate in such interviews. It is shown that grappling with sensitive and shameful topics can promote recruitment of potential candidates and add depth to qualitative research. As such, sensitive interview topics constitute important quality indicators for qualitative research.

  6. HPAEC-PAD for oligosaccharide analysis-novel insights into analyte sensitivity and response stability.

    PubMed

    Mechelke, Matthias; Herlet, Jonathan; Benz, J Philipp; Schwarz, Wolfgang H; Zverlov, Vladimir V; Liebl, Wolfgang; Kornberger, Petra

    2017-12-01

    The rising importance of accurately detecting oligosaccharides in biomass hydrolyzates or as ingredients in food, such as in beverages and infant milk products, demands for the availability of tools to sensitively analyze the broad range of available oligosaccharides. Over the last decades, HPAEC-PAD has been developed into one of the major technologies for this task and represents a popular alternative to state-of-the-art LC-MS oligosaccharide analysis. This work presents the first comprehensive study which gives an overview of the separation of 38 analytes as well as enzymatic hydrolyzates of six different polysaccharides focusing on oligosaccharides. The high sensitivity of the PAD comes at the cost of its stability due to recession of the gold electrode. By an in-depth analysis of the sensitivity drop over time for 35 analytes, including xylo- (XOS), arabinoxylo- (AXOS), laminari- (LOS), manno- (MOS), glucomanno- (GMOS), and cellooligosaccharides (COS), we developed an analyte-specific one-phase decay model for this effect over time. Using this model resulted in significantly improved data normalization when using an internal standard. Our results thereby allow a quantification approach which takes the inevitable and analyte-specific PAD response drop into account. Graphical abstract: HPAEC-PAD analysis of oligosaccharides and determination of PAD response drop leading to an improved data normalization.
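
    A minimal sketch of the analyte-specific one-phase decay idea is shown below: an exponential decay is fitted to the PAD response of a repeatedly injected standard and the fitted curve is then used to correct later responses. The response values are simulated, and the functional form is only the generic plateau-plus-exponential model named in the abstract, not the published calibration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: fit a one-phase exponential decay to a drifting detector response
# and use it for drift correction. Data are simulated placeholders.

def one_phase_decay(run, plateau, span, k):
    # response = plateau + span * exp(-k * injection_index)
    return plateau + span * np.exp(-k * run)

runs = np.arange(40)                                   # injection index over the sequence
true = one_phase_decay(runs, 0.55, 0.45, 0.06)         # assumed "true" drifting response
resp = true + np.random.default_rng(3).normal(0, 0.01, runs.size)

popt, _ = curve_fit(one_phase_decay, runs, resp, p0=(0.5, 0.5, 0.05))
correction = one_phase_decay(0, *popt) / one_phase_decay(runs, *popt)
normalized = resp * correction                         # drift-corrected responses

print(f"fitted plateau={popt[0]:.3f}, span={popt[1]:.3f}, k={popt[2]:.3f}")
print(f"relative SD before: {resp.std()/resp.mean():.2%}, after: {normalized.std()/normalized.mean():.2%}")
```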

  7. A review of optimization and quantification techniques for chemical exchange saturation transfer (CEST) MRI toward sensitive in vivo imaging

    PubMed Central

    Guo, Yingkun; Zheng, Hairong; Sun, Phillip Zhe

    2015-01-01

    Chemical exchange saturation transfer (CEST) MRI is a versatile imaging method that probes the chemical exchange between bulk water and exchangeable protons. CEST imaging indirectly detects dilute labile protons via bulk water signal changes following selective saturation of exchangeable protons, which offers substantial sensitivity enhancement and has sparked numerous biomedical applications. Over the past decade, CEST imaging techniques have rapidly evolved due to contributions from multiple domains, including the development of CEST mathematical models, innovative contrast agent designs, sensitive data acquisition schemes, efficient field inhomogeneity correction algorithms, and quantitative CEST (qCEST) analysis. The CEST system that underlies the apparent CEST-weighted effect, however, is complex. The experimentally measurable CEST effect depends not only on parameters such as CEST agent concentration, pH and temperature, but also on relaxation rate, magnetic field strength and more importantly, experimental parameters including repetition time, RF irradiation amplitude and scheme, and image readout. Thorough understanding of the underlying CEST system using qCEST analysis may augment the diagnostic capability of conventional imaging. In this review, we provide a concise explanation of CEST acquisition methods and processing algorithms, including their advantages and limitations, for optimization and quantification of CEST MRI experiments. PMID:25641791

  8. Diagnostic Performance of DNA Hypermethylation Markers in Peripheral Blood for the Detection of Colorectal Cancer: A Meta-Analysis and Systematic Review

    PubMed Central

    Li, Bingsheng; Gan, Aihua; Chen, Xiaolong; Wang, Xinying; He, Weifeng; Zhang, Xiaohui; Huang, Renxiang; Zhou, Shuzhu; Song, Xiaoxiao; Xu, Angao

    2016-01-01

    DNA hypermethylation in blood is becoming an attractive candidate marker for colorectal cancer (CRC) detection. To assess the diagnostic accuracy of blood hypermethylation markers for CRC in different clinical settings, we conducted a meta-analysis of published reports. Of 485 publications obtained in the initial literature search, 39 studies were included in the meta-analysis. Hypermethylation markers in peripheral blood showed a high degree of accuracy for the detection of CRC. The summary sensitivity was 0.62 [95% confidence interval (CI), 0.56–0.67] and specificity was 0.91 (95% CI, 0.89–0.93). Subgroup analysis showed significantly greater sensitivity for the methylated Septin 9 gene (SEPT9) subgroup (0.75; 95% CI, 0.67–0.81) than for the non-methylated SEPT9 subgroup (0.58; 95% CI, 0.52–0.64). Sensitivity and specificity were not affected significantly by target gene number, CRC staging, study region, or methylation analysis method. These findings show that hypermethylation markers in blood are highly sensitive and specific for CRC detection, with methylated SEPT9 being particularly robust. The diagnostic performance of hypermethylation markers, which have varied across different studies, can be improved by marker optimization. Future research should examine variation in diagnostic accuracy according to non-neoplastic factors. PMID:27158984

  9. Effect of a single session of muscle-biased therapy on pain sensitivity: a systematic review and meta-analysis of randomized controlled trials

    PubMed Central

    Gay, Charles W; Alappattu, Meryl J; Coronado, Rogelio A; Horn, Maggie E; Bishop, Mark D

    2013-01-01

    Background Muscle-biased therapies (MBT) are commonly used to treat pain, yet several reviews suggest evidence for the clinical effectiveness of these therapies is lacking. Inadequate treatment parameters have been suggested to account for inconsistent effects across studies. Pain sensitivity may serve as an intermediate physiologic endpoint helping to establish optimal MBT treatment parameters. The purpose of this review was to summarize the current literature investigating the short-term effect of a single dose of MBT on pain sensitivity in both healthy and clinical populations, with particular attention to specific MBT parameters of intensity and duration. Methods A systematic search for articles meeting our prespecified criteria was conducted using Cumulative Index to Nursing and Allied Health Literature (CINAHL) and MEDLINE from the inception of each database until July 2012, in accordance with guidelines from the Preferred Reporting Items for Systematic reviews and Meta-Analysis. Relevant characteristics from studies included type, intensity, and duration of MBT and whether short-term changes in pain sensitivity and clinical pain were noted with MBT application. Study results were pooled using a random-effects model to estimate the overall effect size of a single dose of MBT on pain sensitivity as well as the effect of MBT, dependent on comparison group and population type. Results Reports from 24 randomized controlled trials (23 articles) were included, representing 36 MBT treatment arms and 29 comparative groups, where 10 groups received active agents, 11 received sham/inert treatments, and eight received no treatment. MBT demonstrated a favorable and consistent ability to modulate pain sensitivity. Short-term modulation of pain sensitivity was associated with short-term beneficial effects on clinical pain. Intensity of MBT, but not duration, was linked with change in pain sensitivity. A meta-analysis was conducted on 17 studies that assessed the effect of MBT on pressure pain thresholds. The results suggest that MBT had a favorable effect on pressure pain thresholds when compared with no-treatment and sham/inert groups, and effects comparable with those of other active treatments. Conclusion The evidence supports the use of pain sensitivity measures by future research to help elucidate optimal therapeutic parameters for MBT as an intermediate physiologic marker. PMID:23403507
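
    For orientation, the sketch below shows a DerSimonian-Laird random-effects pooling step of the kind used to combine effect sizes in reviews such as this one. The per-study effect sizes and standard errors are invented placeholders, not data from the included trials.

```python
import numpy as np

# Minimal sketch of DerSimonian-Laird random-effects pooling of effect sizes.
# Effect sizes and standard errors below are made-up placeholders.

yi  = np.array([0.45, 0.30, 0.62, 0.10, 0.55])   # per-study standardized mean differences
sei = np.array([0.20, 0.15, 0.25, 0.18, 0.22])   # their standard errors

wi = 1.0 / sei**2                                # fixed-effect (inverse-variance) weights
y_fe = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - y_fe) ** 2)                # Cochran's Q heterogeneity statistic
df = len(yi) - 1
c = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
tau2 = max(0.0, (Q - df) / c)                    # between-study variance (DL estimator)

wi_re = 1.0 / (sei**2 + tau2)                    # random-effects weights
y_re = np.sum(wi_re * yi) / np.sum(wi_re)
se_re = np.sqrt(1.0 / np.sum(wi_re))
print(f"pooled effect {y_re:.3f} "
      f"(95% CI {y_re - 1.96*se_re:.3f} to {y_re + 1.96*se_re:.3f}), tau^2={tau2:.3f}")
```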

  10. Endobronchial Ultrasound for Nodal Staging of Non-Small Cell Lung Cancer Patients with Radiologically Normal Mediastinum: A Meta-Analysis.

    PubMed

    El-Osta, Hazem; Jani, Pushan; Mansour, Ali; Rascoe, Philip; Jafri, Syed

    2018-04-23

    An accurate assessment of the mediastinal lymph nodes status is essential in the staging and treatment planning of potentially resectable non-small cell lung cancer (NSCLC). We performed this meta-analysis to evaluate the role of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) in detecting occult mediastinal disease in NSCLC with no radiologic mediastinal involvement. The PubMed, Embase, and Cochrane libraries were searched for studies describing the role of EBUS-TBNA in lung cancer patients with radiologically negative mediastinum. The individual and pooled sensitivity, prevalence, negative predictive value (NPV), and diagnostic odds ratio (DOR) were calculated using the random effects model. Metaregression analysis, heterogeneity, and publication bias were also assessed. A total of 13 studies that met the inclusion criteria were included in the meta-analysis. The pooled effect size of the different diagnostic parameters were estimated as follows: prevalence, 12.8% (95% CI, 10.4%-15.7%); sensitivity, 49.5% (95% confidence interval [CI], 36.4%-62.6%); NPV, 93.0% (95% CI, 90.3%-95.0%); and log DOR, 5.069 (95% CI, 4.212-5.925). Significant heterogeneity was noticeable for the sensitivity, disease prevalence, and NPV, but not observed for log DOR. Publication bias was detected for sensitivity, NPV and log DOR but not for prevalence. Bivariate meta-regression analysis showed no significant association between the pooled calculated parameters and the type of anesthesia, imaging utilized to define negative mediastinum, rapid on-site test usage, and presence of bias by QUADAS-2 tool. Interestingly, we observed a greater sensitivity, NPV and log DOR for studies published prior to 2010, and for prospective multicenter studies. Among NSCLC patients with a radiologically normal mediastinum, the prevalence of mediastinal disease is 12.8% and the sensitivity of EBUS-TBNA is 49.5%. Despite the low sensitivity, the resulting NPV of 93.0% for EBUS-TBNA suggests that mediastinal metastasis is uncommon in such patients.

  11. Bringing gender sensitivity into healthcare practice: a systematic review.

    PubMed

    Celik, Halime; Lagro-Janssen, Toine A L M; Widdershoven, Guy G A M; Abma, Tineke A

    2011-08-01

    Despite the body of literature on gender dimensions and disparities between the sexes in health, practical improvements will not be realized effectively as long as we lack an overview of how to implement these ideas. This systematic review provides a content analysis of literature on the implementation of gender sensitivity in health care. Literature was identified from CINAHL, PsycINFO, Medline, EBSCO and Cochrane (1998-2008) and the reference lists of relevant articles. The quality and relevance of 752 articles were assessed and finally 11 original studies were included. Our results demonstrate that implementing gender sensitivity involves addressing opportunities and barriers at the professional, organizational and policy levels. As gender disparities are embedded in healthcare, a multiple-track approach to implementing gender sensitivity is needed to change gendered healthcare systems. Conventional approaches that take into account only one barrier and/or opportunity fail to prevent gender inequality in health care. For gender-sensitive health care we need to change systems and structures, but also to enhance understanding, raise awareness and develop skills among health professionals. To bring gender sensitivity into healthcare practice, interventions should address a range of factors. Copyright © 2010. Published by Elsevier Ireland Ltd.

  12. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures they contain to a single link. Because such an error model fails to reflect the error behavior of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model embodies the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and several sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Atlases are then plotted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have the greatest impact on the accuracy of the moving platform are identified, and regions where the pose errors of the moving platform are extremely sensitive to the geometric errors are also identified. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  13. Sum over Histories Representation for Kinetic Sensitivity Analysis: How Chemical Pathways Change When Reaction Rate Coefficients Are Varied

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bai, Shirong; Davis, Michael J.; Skodje, Rex T.

    2015-11-12

    The sensitivity of kinetic observables is analyzed using a newly developed sum over histories representation of chemical kinetics. In the sum over histories representation, the concentrations of the chemical species are decomposed into the sum of probabilities for chemical pathways that follow molecules from reactants to products or intermediates. Unlike static flux methods for reaction path analysis, the sum over histories approach includes the explicit time dependence of the pathway probabilities. Using the sum over histories representation, the sensitivity of an observable with respect to a kinetic parameter such as a rate coefficient is then analyzed in terms of how that parameter affects the chemical pathway probabilities. The method is illustrated for species concentration target functions in H2 combustion where the rate coefficients are allowed to vary over their associated uncertainty ranges. It is found that large sensitivities are often associated with rate limiting steps along important chemical pathways or by reactions that control the branching of reactive flux.
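
    The sketch below illustrates the pathway-probability idea on a toy two-step mechanism A -> B -> C: the exact time-dependent probability that a molecule starting as A has completed the A->B->C history, and its sensitivity to each rate coefficient by finite differences. This is an illustrative stand-in, not the H2 mechanism analyzed in the paper.

```python
import numpy as np

# Minimal sketch of a time-dependent pathway probability and its sensitivity to
# rate coefficients, for the toy sequential mechanism A -k1-> B -k2-> C.

def pathway_prob_ABC(t, k1, k2):
    # exact probability that a molecule starting as A has reached C by time t
    # (two sequential first-order steps, k1 != k2)
    return 1.0 + (k1 * np.exp(-k2 * t) - k2 * np.exp(-k1 * t)) / (k2 - k1)

t = 2.0
k1, k2 = 1.0, 0.4                     # assumed rate coefficients
p = pathway_prob_ABC(t, k1, k2)

eps = 1e-6                            # finite-difference step for local sensitivities
dp_dk1 = (pathway_prob_ABC(t, k1 + eps, k2) - p) / eps
dp_dk2 = (pathway_prob_ABC(t, k1, k2 + eps) - p) / eps
print(f"P(A->B->C by t={t}) = {p:.4f}")
print(f"normalized sensitivities: k1*dP/dk1 = {k1*dp_dk1:.4f}, k2*dP/dk2 = {k2*dp_dk2:.4f}")
```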

  14. Sensitivity analysis of reactive ecological dynamics.

    PubMed

    Verdy, Ariane; Caswell, Hal

    2008-08-01

    Ecological systems with asymptotically stable equilibria may exhibit significant transient dynamics following perturbations. In some cases, these transient dynamics include the possibility of excursions away from the equilibrium before the eventual return; systems that exhibit such amplification of perturbations are called reactive. Reactivity is a common property of ecological systems, and the amplification can be large and long-lasting. The transient response of a reactive ecosystem depends on the parameters of the underlying model. To investigate this dependence, we develop sensitivity analyses for indices of transient dynamics (reactivity, the amplification envelope, and the optimal perturbation) in both continuous- and discrete-time models written in matrix form. The sensitivity calculations require expressions, some of them new, for the derivatives of equilibria, eigenvalues, singular values, and singular vectors, obtained using matrix calculus. Sensitivity analysis provides a quantitative framework for investigating the mechanisms leading to transient growth. We apply the methodology to a predator-prey model and a size-structured food web model. The results suggest predator-driven and prey-driven mechanisms for transient amplification resulting from multispecies interactions.
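
    Two of the transient indices discussed above are straightforward to compute for a linearized system dx/dt = Ax, as sketched below: reactivity (the largest eigenvalue of the symmetric part of the Jacobian) and the amplification envelope (the spectral norm of the matrix exponential). The Jacobian used here is an arbitrary stable, non-normal example, not the paper's predator-prey or food web model.

```python
import numpy as np
from scipy.linalg import expm

# Minimal sketch of reactivity and the amplification envelope for dx/dt = A x.
# The Jacobian A is an arbitrary stable but non-normal example.

A = np.array([[-0.5,  2.0],
              [ 0.0, -0.6]])          # eigenvalues -0.5, -0.6 (asymptotically stable)

# reactivity = largest eigenvalue of the symmetric part (A + A^T)/2
reactivity = np.max(np.linalg.eigvalsh((A + A.T) / 2.0))

# amplification envelope rho(t) = ||exp(A t)||_2, the worst-case growth of a
# unit perturbation at time t
ts = np.linspace(0.0, 10.0, 200)
rho = [np.linalg.norm(expm(A * t), 2) for t in ts]

print(f"reactivity: {reactivity:.3f} (positive => reactive system)")
print(f"peak amplification: {max(rho):.3f} at t = {ts[int(np.argmax(rho))]:.2f}")
```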

  15. Surrogate-based Analysis and Optimization

    NASA Technical Reports Server (NTRS)

    Queipo, Nestor V.; Haftka, Raphael T.; Shyy, Wei; Goel, Tushar; Vaidyanathan, Raj; Tucker, P. Kevin

    2005-01-01

    A major challenge to the successful full-scale development of modern aerospace systems is to address competing objectives such as improved performance, reduced costs, and enhanced safety. Accurate, high-fidelity models are typically time consuming and computationally expensive. Furthermore, informed decisions should be made with an understanding of the impact (global sensitivity) of the design variables on the different objectives. In this context, the so-called surrogate-based approach for analysis and optimization can play a very valuable role. The surrogates are constructed using data drawn from high-fidelity models, and provide fast approximations of the objectives and constraints at new design points, thereby making sensitivity and optimization studies feasible. This paper provides a comprehensive discussion of the fundamental issues that arise in surrogate-based analysis and optimization (SBAO), highlighting concepts, methods, techniques, as well as practical implications. The issues addressed include the selection of the loss function and regularization criteria for constructing the surrogates, design of experiments, surrogate selection and construction, sensitivity analysis, convergence, and optimization. The multi-objective optimal design of a liquid rocket injector is presented to highlight the state of the art and to help guide future efforts.
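
    A minimal sketch of the surrogate-based workflow is given below: sample an "expensive" model at a small design of experiments, fit a cheap quadratic response surface by least squares, then use the surrogate for fast sweeps and optimization. The objective function is a placeholder, not a rocket-injector model, and a polynomial surrogate is only one of the surrogate types such papers discuss.

```python
import numpy as np

# Minimal sketch of surrogate-based analysis: design of experiments, quadratic
# response-surface fit, and cheap optimization on the surrogate.

def expensive_model(x1, x2):
    # placeholder for a costly high-fidelity simulation
    return (x1 - 0.3) ** 2 + 2.0 * (x2 + 0.1) ** 2 + 0.1 * np.sin(5 * x1)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))             # design of experiments (30 samples)
y = expensive_model(X[:, 0], X[:, 1])

def features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)   # quadratic surrogate

def surrogate(X):
    return features(X) @ coef

# cheap "optimization" on the surrogate: dense grid search over the design space
g = np.linspace(-1, 1, 201)
G1, G2 = np.meshgrid(g, g)
P = np.column_stack([G1.ravel(), G2.ravel()])
best = P[np.argmin(surrogate(P))]
print(f"surrogate minimum near x1={best[0]:.2f}, x2={best[1]:.2f} "
      f"(true model value there: {expensive_model(*best):.4f})")
```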

  16. SCALE Code System 6.2.2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rearden, Bradley T.; Jessee, Matthew Anderson

    The SCALE Code System is a widely used modeling and simulation suite for nuclear safety analysis and design that is developed, maintained, tested, and managed by the Reactor and Nuclear Systems Division (RNSD) of Oak Ridge National Laboratory (ORNL). SCALE provides a comprehensive, verified and validated, user-friendly tool set for criticality safety, reactor physics, radiation shielding, radioactive source term characterization, and sensitivity and uncertainty analysis. Since 1980, regulators, licensees, and research institutions around the world have used SCALE for safety analysis and design. SCALE provides an integrated framework with dozens of computational modules including 3 deterministic and 3 Monte Carlo radiation transport solvers that are selected based on the desired solution strategy. SCALE includes current nuclear data libraries and problem-dependent processing tools for continuous-energy (CE) and multigroup (MG) neutronics and coupled neutron-gamma calculations, as well as activation, depletion, and decay calculations. SCALE includes unique capabilities for automated variance reduction for shielding calculations, as well as sensitivity and uncertainty analysis. SCALE’s graphical user interfaces assist with accurate system modeling, visualization of nuclear data, and convenient access to desired results. SCALE 6.2 represents one of the most comprehensive revisions in the history of SCALE, providing several new capabilities and significant improvements in many existing features.

  17. Systematic Review and Meta-Analysis of Studies Evaluating Diagnostic Test Accuracy: A Practical Review for Clinical Researchers-Part II. Statistical Methods of Meta-Analysis

    PubMed Central

    Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi

    2015-01-01

    Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and could be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
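
    To make the pooling mechanics concrete, the deliberately simplified sketch below pools logit-transformed sensitivity and specificity separately with a DerSimonian-Laird random-effects step. This is not the bivariate or hierarchical SROC model recommended above, which models the two outcomes jointly together with their correlation; the study counts are hypothetical.

    ```python
    import numpy as np

    def logit_pool(events, totals):
        """DerSimonian-Laird random-effects pooling of a proportion on the logit
        scale. A simplification: the full bivariate model pools sensitivity and
        specificity jointly and models their correlation, which this ignores."""
        p = (events + 0.5) / (totals + 1.0)             # continuity correction
        theta = np.log(p / (1 - p))                     # logit per study
        var = 1.0 / (events + 0.5) + 1.0 / (totals - events + 0.5)
        w = 1.0 / var
        theta_fixed = np.sum(w * theta) / np.sum(w)
        Q = np.sum(w * (theta - theta_fixed) ** 2)      # heterogeneity statistic
        k = len(events)
        tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
        w_re = 1.0 / (var + tau2)
        pooled = np.sum(w_re * theta) / np.sum(w_re)
        return 1.0 / (1.0 + np.exp(-pooled))            # back-transform

    # Hypothetical per-study counts: true positives / diseased, true negatives / non-diseased.
    tp = np.array([45, 30, 80]);  diseased = np.array([60, 40, 100])
    tn = np.array([70, 55, 150]); healthy  = np.array([80, 60, 170])
    print("pooled sensitivity =", round(logit_pool(tp, diseased), 3))
    print("pooled specificity =", round(logit_pool(tn, healthy), 3))
    ```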

  18. Mechanical performance and parameter sensitivity analysis of 3D braided composites joints.

    PubMed

    Wu, Yue; Nan, Bo; Chen, Liang

    2014-01-01

    3D braided composite joints are important components in CFRP trusses, with significant influence on the reliability and weight of the structures. To investigate the mechanical performance of 3D braided composite joints, a numerical method based on microscopic mechanics is put forward; the modeling technologies, including the selection of material constants, element type, grid size, and boundary conditions, are discussed in detail. Secondly, a method for determining the ultimate bearing capacity is established that accounts for strength failure. Finally, the effect of load, geometric, and process parameters on the ultimate bearing capacity of the joints is analyzed with a global sensitivity analysis method. The results show that the ultimate bearing capacity N is most sensitive to the main pipe diameter-to-thickness ratio γ, the main pipe diameter D, and the braid angle α.

  19. Climate Risk Assessment: Technical Guidance Manual for DoD Installations and Built Environment

    DTIC Science & Technology

    2016-09-06

    climate change risks to DoD installations and the built environment. The approach, which we call “decision-scaling,” reveals the core sensitivity of ... DoD installations to climate change. It is designed to illuminate the sensitivity of installations and their supporting infrastructure systems ... including water and energy, to climate changes and other uncertainties without dependence on climate change projections. In this way the analysis and

  20. Tourniquet Test for Dengue Diagnosis: Systematic Review and Meta-analysis of Diagnostic Test Accuracy.

    PubMed

    Grande, Antonio Jose; Reid, Hamish; Thomas, Emma; Foster, Charlie; Darton, Thomas C

    2016-08-01

    Dengue fever is a ubiquitous arboviral infection in tropical and sub-tropical regions, whose incidence has increased over recent decades. In the absence of a rapid point-of-care test, the clinical diagnosis of dengue is complex. The World Health Organisation has outlined diagnostic criteria for making the diagnosis of dengue infection, which include the use of the tourniquet test (TT). To assess the quality of the evidence supporting the use of the TT and perform a diagnostic accuracy meta-analysis comparing the TT to antibody response measured by ELISA. A comprehensive literature search was conducted in the following databases to April, 2016: MEDLINE (PubMed), EMBASE, Cochrane Central Register of Controlled Trials, BIOSIS, Web of Science, SCOPUS. Studies comparing the diagnostic accuracy of the tourniquet test with ELISA for the diagnosis of dengue were included. Two independent authors extracted data using a standardized form. A total of 16 studies with 28,739 participants were included in the meta-analysis. Pooled sensitivity for dengue diagnosis by TT was 58% (95% Confidence Interval (CI), 43%-71%) and the specificity was 71% (95% CI, 60%-80%). In the subgroup analysis, sensitivity for non-severe dengue diagnosis was 55% (95% CI, 52%-59%) and the specificity was 63% (95% CI, 60%-66%), whilst sensitivity for dengue hemorrhagic fever diagnosis was 62% (95% CI, 53%-71%) and the specificity was 60% (95% CI, 48%-70%). Receiver-operating characteristic analysis demonstrated a test accuracy (AUC) of 0.70 (95% CI, 0.66-0.74). The tourniquet test is widely used in resource-poor settings despite currently available evidence demonstrating only a marginal benefit in making a diagnosis of dengue infection alone. The protocol for this systematic review was registered at CRD42015020323.

  1. Is the Timed Up and Go test a useful predictor of risk of falls in community dwelling older adults: a systematic review and meta- analysis

    PubMed Central

    2014-01-01

    Background The Timed Up and Go test (TUG) is a commonly used screening tool to assist clinicians to identify patients at risk of falling. The purpose of this systematic review and meta-analysis is to determine the overall predictive value of the TUG in community-dwelling older adults. Methods A literature search was performed to identify all studies that validated the TUG test. The methodological quality of the selected studies was assessed using the QUADAS-2 tool, a validated tool for the quality assessment of diagnostic accuracy studies. A TUG score of ≥13.5 seconds was used to identify individuals at higher risk of falling. All included studies were combined using a bivariate random effects model to generate pooled estimates of sensitivity and specificity at ≥13.5 seconds. Heterogeneity was assessed using the variance of logit transformed sensitivity and specificity. Results Twenty-five studies were included in the systematic review and 10 studies were included in meta-analysis. The TUG test was found to be more useful at ruling in rather than ruling out falls in individuals classified as high risk (>13.5 sec), with a higher pooled specificity (0.74, 95% CI 0.52-0.88) than sensitivity (0.31, 95% CI 0.13-0.57). Logistic regression analysis indicated that the TUG score is not a significant predictor of falls (OR = 1.01, 95% CI 1.00-1.02, p = 0.05). Conclusion The Timed Up and Go test has limited ability to predict falls in community dwelling elderly and should not be used in isolation to identify individuals at high risk of falls in this setting. PMID:24484314

  2. Assessment of nodal involvement in non-small-cell lung cancer with 18F-FDG-PET/CT: mediastinal blood pool cut-off has the highest sensitivity and tumour SUVmax/2 has the highest specificity.

    PubMed

    Mallorie, Amy; Goldring, James; Patel, Anant; Lim, Eric; Wagner, Thomas

    2017-08-01

    Lymph node involvement in non-small-cell lung cancer (NSCLC) is a major factor in determining management and prognosis. We aimed to evaluate the accuracy of fluorine-18-fluorodeoxyglucose-PET/computed tomography (CT) for the assessment of nodal involvement in patients with NSCLC. In this retrospective study, we included 61 patients with suspected or confirmed resectable NSCLC over a 2-year period from April 2013 to April 2015. 221 nodes with pathological staging from surgery or endobronchial ultrasound-guided transbronchial needle aspiration were assessed using a nodal station-based analysis with original clinical reports and three different cut-offs: mediastinal blood pool (MBP), liver background and tumour standardized uptake value maximal (SUVmax)/2. Using nodal station-based analysis for activity more than tumour SUVmax/2, the sensitivity was 45%, the specificity was 89% and the negative predictive value (NPV) was 87%. For activity more than MBP, the sensitivity was 93%, the specificity was 72% and NPV was 98%. For activity more than liver background, the sensitivity was 83%, the specificity was 84% and NPV was 96%. Using a nodal staging-based analysis for accuracy at detecting N2/3 disease, for activity more than tumour SUVmax/2, the sensitivity was 59%, the specificity was 85% and NPV was 80%. For activity more than MBP, the sensitivity was 95%, the specificity was 61% and NPV was 96%. For activity more than liver background, the sensitivity was 86%, the specificity was 81% and NPV was 92%. Receiver-operating characteristic analysis showed the optimal nodal SUVmax to be more than 6.4 with a sensitivity of 45% and a specificity of 95%, with an area under the curve of 0.85. Activity more than MBP was the most sensitive cut-off with the highest sensitivity and NPV. Activity more than primary tumour SUVmax/2 was the most specific cut-off. Nodal SUVmax more than 6.4 has a high specificity of 95%.

  3. Photoacoustic Spectroscopy Analysis of Traditional Chinese Medicine

    NASA Astrophysics Data System (ADS)

    Chen, Lu; Zhao, Bin-xing; Xiao, Hong-tao; Tong, Rong-sheng; Gao, Chun-ming

    2013-09-01

    Chinese medicine is a historic cultural legacy of China. It has made a significant contribution to medicine and healthcare for generations. The development of Chinese herbal medicine analysis is emphasized by the Chinese pharmaceutical industry. This study has carried out the experimental analysis of ten kinds of Chinese herbal powder including Fritillaria powder, etc., based on the photoacoustic spectroscopy (PAS) method. First, a photoacoustic spectroscopy system was designed and constructed, especially a highly sensitive solid photoacoustic cell was established. Second, the experimental setup was verified through the characteristic emission spectrum of the light source, obtained by using carbon as a sample in the photoacoustic cell. Finally, as the photoacoustic spectroscopy analysis of Fritillaria, etc., was completed, the specificity of the Chinese herb medicine analysis was verified. This study shows that the PAS can provide a valid, highly sensitive analytical method for the specificity of Chinese herb medicine without preparing and damaging samples.

  4. A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia

    NASA Astrophysics Data System (ADS)

    Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM

    2017-07-01

    Conventionally, electricity in hospital buildings is supplied by the utility grid, which is fed by a mix of fuels including coal and gas. With advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to exploit the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes the economic analysis based on the Levelized Cost of Energy (LCOE) and the Total Net Present Cost (TNPC) with respect to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which provides both optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are discussed, and the impact of the grid-connected PV system is thereby evaluated. In addition, the benefit of the Net Metering (NeM) mechanism is also discussed.
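
    The LCOE and net present cost quantities named above reduce to simple discounted-cash-flow arithmetic. The sketch below uses hypothetical capital, O&M, and energy-yield figures (not the Malaysian hospital data or HOMER itself) and sweeps the interest rate, mimicking the sensitivity analysis described in the abstract.

    ```python
    def npc(costs_by_year, rate):
        """Total net present cost: discounted sum of annual costs (capital in year 0)."""
        return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs_by_year))

    def lcoe(costs_by_year, energy_by_year, rate):
        """Levelized cost of energy: discounted costs divided by discounted energy."""
        disc_energy = sum(e / (1.0 + rate) ** t for t, e in enumerate(energy_by_year))
        return npc(costs_by_year, rate) / disc_energy

    # Hypothetical grid-connected PV project: capital in year 0, then O&M for 20
    # years, delivering 150 MWh/yr; interest rate swept as in a sensitivity analysis.
    costs = [120_000] + [2_000] * 20          # USD
    energy = [0] + [150_000] * 20             # kWh
    for rate in (0.03, 0.05, 0.08):
        print(f"rate={rate:.0%}  NPC={npc(costs, rate):,.0f} USD  "
              f"LCOE={lcoe(costs, energy, rate):.3f} USD/kWh")
    ```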

  5. Resolution of VTI anisotropy with elastic full-waveform inversion: theory and basic numerical examples

    NASA Astrophysics Data System (ADS)

    Podgornova, O.; Leaney, S.; Liang, L.

    2018-07-01

    Extracting medium properties from seismic data faces some limitations due to the finite frequency content of the data and restricted spatial positions of the sources and receivers. Some distributions of the medium properties make low impact on the data (including none). If these properties are used as the inversion parameters, then the inverse problem becomes overparametrized, leading to ambiguous results. We present an analysis of multiparameter resolution for the linearized inverse problem in the framework of elastic full-waveform inversion. We show that the spatial and multiparameter sensitivities are intertwined and non-sensitive properties are spatial distributions of some non-trivial combinations of the conventional elastic parameters. The analysis accounts for the Hessian information and frequency content of the data; it is semi-analytical (in some scenarios analytical), easy to interpret and enhances results of the widely used radiation pattern analysis. Single-type scattering is shown to have limited sensitivity, even for full-aperture data. Finite-frequency data lose multiparameter sensitivity at smooth and fine spatial scales. Also, we establish ways to quantify a spatial-multiparameter coupling and demonstrate that the theoretical predictions agree well with the numerical results.

  6. The effect of alcohol consumption on insulin sensitivity and glycemic status: a systematic review and meta-analysis of intervention studies.

    PubMed

    Schrieks, Ilse C; Heil, Annelijn L J; Hendriks, Henk F J; Mukamal, Kenneth J; Beulens, Joline W J

    2015-04-01

    Moderate alcohol consumption is associated with a reduced risk of type 2 diabetes. This reduced risk might be explained by improved insulin sensitivity or improved glycemic status, but results of intervention studies on this relation are inconsistent. The purpose of this study was to conduct a systematic review and meta-analysis of intervention studies investigating the effect of alcohol consumption on insulin sensitivity and glycemic status. PubMed and Embase were searched up to August 2014. Intervention studies on the effect of alcohol consumption on biological markers of insulin sensitivity or glycemic status of at least 2 weeks' duration were included. Investigators extracted data on study characteristics, outcome measures, and methodological quality. Fourteen intervention studies were included in a meta-analysis of six glycemic end points. Alcohol consumption did not influence estimated insulin sensitivity (standardized mean difference [SMD] 0.08 [-0.09 to 0.24]) or fasting glucose (SMD 0.07 [-0.11 to 0.24]) but reduced HbA1c (SMD -0.62 [-1.01 to -0.23]) and fasting insulin concentrations (SMD -0.19 [-0.35 to -0.02]) compared with the control condition. Alcohol consumption among women reduced fasting insulin (SMD -0.23 [-0.41 to -0.04]) and tended to improve insulin sensitivity (SMD 0.16 [-0.04 to 0.37]) but not among men. Results were similar after excluding studies with high alcohol dosages (>40 g/day) and were not influenced by dosage and duration of the intervention. Although the studies had small sample sizes and were of short duration, the current evidence suggests that moderate alcohol consumption may decrease fasting insulin and HbA1c concentrations among nondiabetic subjects. Alcohol consumption might improve insulin sensitivity among women but did not do so overall. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  7. Sensitivity analysis of discrete structural systems: A survey

    NASA Technical Reports Server (NTRS)

    Adelman, H. M.; Haftka, R. T.

    1984-01-01

    Methods for calculating sensitivity derivatives for discrete structural systems are surveyed, primarily covering literature published during the past two decades. Methods are described for calculating derivatives of static displacements and stresses, eigenvalues and eigenvectors, transient structural response, and derivatives of optimum structural designs with respect to problem parameters. The survey is focused on publications addressed to structural analysis, but also includes a number of methods developed in nonstructural fields such as electronics, controls, and physical chemistry which are directly applicable to structural problems. Most notable among the nonstructural-based methods are the adjoint variable technique from control theory, and the Green's function and FAST methods from physical chemistry.
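
    Of the techniques surveyed above, the adjoint variable method is easy to state for a discrete structural system K(p) u = f with a response J = c^T u: one extra linear solve K^T lambda = c gives all design derivatives as dJ/dp_i = -lambda^T (dK/dp_i) u (when the load is independent of p). The sketch below applies this to a hypothetical two-degree-of-freedom spring assembly and checks the result against central finite differences.

    ```python
    import numpy as np

    def stiffness(p):
        # Hypothetical 2-DOF spring assembly; p = (k1, k2) are design variables.
        k1, k2 = p
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]])

    f = np.array([0.0, 1.0])          # load vector
    c = np.array([0.0, 1.0])          # response J = c @ u (tip displacement)
    p = np.array([100.0, 50.0])

    u = np.linalg.solve(stiffness(p), f)
    lam = np.linalg.solve(stiffness(p).T, c)   # adjoint vector: K.T lam = c

    # dJ/dp_i = -lam @ (dK/dp_i) @ u ; dK/dp_i assembled analytically per variable.
    dK = [np.array([[1.0,  0.0], [ 0.0, 0.0]]),        # dK/dk1
          np.array([[1.0, -1.0], [-1.0, 1.0]])]        # dK/dk2
    grad_adjoint = np.array([-lam @ (dKi @ u) for dKi in dK])

    # Finite-difference check (the comparison the survey discusses).
    eps = 1e-6
    grad_fd = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2); dp[i] = eps
        Jp = c @ np.linalg.solve(stiffness(p + dp), f)
        Jm = c @ np.linalg.solve(stiffness(p - dp), f)
        grad_fd[i] = (Jp - Jm) / (2 * eps)
    print("adjoint:", grad_adjoint, " finite difference:", grad_fd)
    ```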

  8. Implementation of structural response sensitivity calculations in a large-scale finite-element analysis system

    NASA Technical Reports Server (NTRS)

    Giles, G. L.; Rogers, J. L., Jr.

    1982-01-01

    The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.

  9. Are quantitative sensitivity analysis methods always reliable?

    NASA Astrophysics Data System (ADS)

    Huang, X.

    2016-12-01

    Physical parameterizations developed to represent subgrid-scale processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity Analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of an evaluation metric can be apportioned to each parameter; it can also identify the most influential parameters and thereby reduce the dimensionality of the parameter space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the sensitive group is retained and the insensitive group is eliminated from further study. However, these approaches ignore the loss of the interactive effects between the retained parameters and the eliminated ones, which also contribute to the total sensitivity indices. As a result, traditional SA approaches and tools may identify the wrong set of sensitive parameters. In this study, we propose a dynamic global sensitivity analysis method (DGSAM) that iteratively removes the least important parameter until only two parameters remain. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7,000 to 280,000. The results show that DGSAM identifies more influential parameters, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved roughly a 10% improvement over Sobol', and the computational cost of calibration was reduced to one sixth of the original. In the future, alternative SA methods that emphasize parameter interactions should be explored.
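
    For contrast with the dynamic method proposed above, the sketch below runs a conventional variance-based (Sobol') analysis on the Ishigami benchmark function using the SALib package; SALib and the benchmark are assumptions for illustration and are not the tools or the CLM-CASA model used in the study. The gap between first-order and total-order indices is exactly the interaction contribution the abstract argues can be lost when parameters are screened out.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Ishigami test function, a standard GSA benchmark (not the CLM-CASA model).
    def ishigami(X, a=7.0, b=0.1):
        return (np.sin(X[:, 0]) + a * np.sin(X[:, 1]) ** 2
                + b * X[:, 2] ** 4 * np.sin(X[:, 0]))

    problem = {
        "num_vars": 3,
        "names": ["x1", "x2", "x3"],
        "bounds": [[-np.pi, np.pi]] * 3,
    }

    X = saltelli.sample(problem, 1024)      # Saltelli sampling scheme
    Y = ishigami(X)
    Si = sobol.analyze(problem, Y)

    # First-order indices ignore interactions; total-order indices include them.
    for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
        print(f"{name}: first-order = {s1:.3f}, total-order = {st:.3f}")
    ```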

  10. Results and lessons learned from MODIS polarization sensitivity characterization

    NASA Astrophysics Data System (ADS)

    Sun, J.; Xiong, X.; Wang, X.; Qiu, S.; Xiong, S.; Waluschka, E.

    2006-08-01

    In addition to radiometric, spatial, and spectral calibration requirements, MODIS design specifications include polarization sensitivity requirements of less than 2% for all Reflective Solar Bands (RSB) except for the band centered at 412nm. To the best of our knowledge, MODIS was the first imaging radiometer that went through comprehensive system level (end-to-end) polarization characterization. MODIS polarization sensitivity was measured pre-launch at a number of sensor view angles using a laboratory Polarization Source Assembly (PSA) that consists of a rotatable source, a polarizer (Ahrens prism design), and a collimator. This paper describes MODIS polarization characterization approaches used by MODIS Characterization Support Team (MCST) at NASA/GSFC and addresses issues and concerns in the measurements. Results (polarization factor and phase angle) using different analyzing methods are discussed. Also included in this paper is a polarization characterization comparison between Terra and Aqua MODIS. Our previous and recent analysis of MODIS RSB polarization sensitivity could provide useful information for future Earth-observing sensor design, development, and characterization.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonen, E.P.; Johnson, K.I.; Simonen, F.A.

    The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw, and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. The flaw sensitivities examined included flaw shape, flaw through-wall position, and flaw inspection. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation, and added error in estimating irradiated properties caused by the trend curve correlation error.

  12. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
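
    As a minimal illustration of how TSM parameter sensitivities can be probed numerically, the sketch below integrates a single-reach, well-mixed transient storage model with first-order decay and computes finite-difference sensitivities of the breakthrough curve to the exchange coefficient and the decay rate. The equations, parameter values, and pulse injection are simplified assumptions and do not reproduce the multi-reach OTIS-type model analyzed in the study.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def tsm_rhs(t, y, alpha, As_over_A, lam, lam_s, c_in, tau):
        """Single-reach transient storage model (well-mixed approximation):
        c = in-stream concentration, cs = storage-zone concentration.
        alpha: exchange coefficient, lam/lam_s: first-order decay rates,
        tau: hydraulic residence time of the reach, c_in: upstream input."""
        c, cs = y
        dc  = (c_in(t) - c) / tau - lam * c + alpha * (cs - c)
        dcs = -lam_s * cs + (alpha / As_over_A) * (c - cs)
        return [dc, dcs]

    def breakthrough(params, t_eval):
        alpha, lam = params
        pulse = lambda t: 1.0 if 0.0 <= t <= 0.5 else 0.0   # square pulse injection
        sol = solve_ivp(tsm_rhs, (0, 6), [0.0, 0.0], t_eval=t_eval,
                        args=(alpha, 0.2, lam, 0.05, pulse, 0.3), max_step=0.01)
        return sol.y[0]

    t = np.linspace(0, 6, 200)
    base = np.array([0.5, 0.1])          # (alpha, lambda), hypothetical values
    c0 = breakthrough(base, t)

    # Local finite-difference sensitivities of the breakthrough curve to each parameter.
    for i, name in enumerate(["alpha", "lambda"]):
        dp = np.zeros(2); dp[i] = 1e-4
        sens = (breakthrough(base + dp, t) - c0) / 1e-4
        print(f"max |dc/d{name}| over the curve: {np.abs(sens).max():.3f}")
    ```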

  13. Association between atopic dermatitis and contact sensitization: A systematic review and meta-analysis.

    PubMed

    Hamann, Carsten R; Hamann, Dathan; Egeberg, Alexander; Johansen, Jeanne D; Silverberg, Jonathan; Thyssen, Jacob P

    2017-07-01

    It is unclear whether patients with atopic dermatitis (AD) have an altered prevalence or risk for contact sensitization. Increased exposure to chemicals in topical products together with impaired skin barrier function suggest a higher risk, whereas the immune profile suggests a lower risk. To perform a systematic review and meta-analysis of the association between AD and contact sensitization. The PubMed/Medline, Embase, and Cochrane databases were searched for articles that reported on contact sensitization in individuals with and without AD. The literature search yielded 10,083 citations; 417 were selected based on title and abstract screening and 74 met inclusion criteria. In a pooled analysis, no significant difference in contact sensitization between AD and controls was evident (random effects model odds ratio [OR] = 0.891; 95% confidence interval [CI] = 0.771-1.03). There was a positive correlation in studies that compared AD patients with individuals from the general population (OR 1.50, 95% CI 1.23-1.93) but an inverse association when comparing with referred populations (OR 0.753, 95% CI 0.63-0.90). Included studies used different tools to diagnose AD and did not always provide information on current or past disease. Patch test allergens varied between studies. No overall relationship between AD and contact sensitization was found. We recommend that clinicians consider patch testing AD patients when allergic contact dermatitis is suspected. Copyright © 2017 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved.

  14. Analysis of Factors Influencing Diagnostic Accuracy of T-SPOT.TB for Active Tuberculosis in Clinical Practice.

    PubMed

    Zhang, Lifan; Shi, Xiaochun; Zhang, Yueqiu; Zhang, Yao; Huo, Feifei; Zhou, Baotong; Deng, Guohua; Liu, Xiaoqing

    2017-08-10

    T-SPOT.TB does not provide a perfect diagnosis of active tuberculosis (ATB), and several factors may influence its results. We performed this study to evaluate possible factors associated with the sensitivity and specificity of T-SPOT.TB, and the diagnostic parameters under varied conditions. Patients with suspected ATB were enrolled prospectively. Influencing factors of the sensitivity and specificity of T-SPOT.TB were evaluated using logistic regression models. Sensitivity, specificity, predictive values (PV), and likelihood ratios (LR) were calculated with consideration of relevant factors. Of the 865 participants, 205 (23.7%) had ATB, including 58 (28.3%) with microbiologically confirmed TB and 147 (71.7%) with clinically diagnosed TB; 615 (71.7%) were non-TB. Forty-five (5.2%) cases were clinically indeterminate and excluded from the final analysis. In multivariate analysis, serous effusion was the only independent risk factor related to lower sensitivity (OR = 0.39, 95% CI: 0.18-0.81) among patients with ATB. Among non-TB patients, age, TB history, immunosuppressive agents/glucocorticoid treatment and lymphocyte count were the independent risk factors related to the specificity of T-SPOT.TB. Sensitivity, specificity, PV+, PV-, LR+ and LR- of T-SPOT.TB for diagnosis of ATB were 78.5%, 74.1%, 50.3%, 91.2%, 3.0 and 0.3, respectively. This study suggests that the factors influencing the sensitivity and specificity of T-SPOT.TB should be considered in the interpretation of T-SPOT.TB results.
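
    The diagnostic parameters reported above all follow from a 2×2 table of test results against the reference diagnosis. The sketch below computes them from hypothetical counts (not the study's cohort).

    ```python
    def diagnostic_measures(tp, fp, fn, tn):
        """Standard 2x2 diagnostic accuracy measures."""
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        ppv  = tp / (tp + fp)
        npv  = tn / (tn + fn)
        lr_pos = sens / (1 - spec)
        lr_neg = (1 - sens) / spec
        return dict(sensitivity=sens, specificity=spec, PPV=ppv, NPV=npv,
                    LR_plus=lr_pos, LR_minus=lr_neg)

    # Hypothetical counts for illustration only (not the T-SPOT.TB cohort).
    print(diagnostic_measures(tp=161, fp=159, fn=44, tn=456))
    ```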

  15. Approaches to answering critical CER questions.

    PubMed

    Kinnier, Christine V; Chung, Jeanette W; Bilimoria, Karl Y

    2015-01-01

    While randomized controlled trials (RCTs) are the gold standard for research, many research questions cannot be ethically and practically answered using an RCT. Comparative effectiveness research (CER) techniques are often better suited than RCTs to address the effects of an intervention under routine care conditions, an outcome otherwise known as effectiveness. CER research techniques covered in this section include: effectiveness-oriented experimental studies such as pragmatic trials and cluster randomized trials, treatment response heterogeneity, observational and database studies including adjustment techniques such as sensitivity analysis and propensity score analysis, systematic reviews and meta-analysis, decision analysis, and cost effectiveness analysis. Each section describes the technique and covers the strengths and weaknesses of the approach.

  16. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics.

    PubMed

    Arampatzis, Georgios; Katsoulakis, Markos A; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.
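
    A toy version of the likelihood-ratio (score-function) idea behind these estimators is sketched below for a static exponential model: the parameter derivative of an expectation is estimated as the Monte Carlo average of the observable times the score. The centered, covariance/Fisher-information formulation for stochastic dynamics developed in the paper is more involved and is not reproduced here.

    ```python
    import numpy as np

    # Toy model: X ~ Exponential(rate=theta); observable f(X) = X.
    # Likelihood-ratio (score-function) estimator of d/dtheta E[f(X)]:
    #   d/dtheta E[f(X)] = E[ f(X) * d/dtheta log p(X; theta) ],
    # with d/dtheta log p(x; theta) = 1/theta - x for the exponential density.
    rng = np.random.default_rng(1)
    theta = 2.0
    x = rng.exponential(scale=1.0 / theta, size=200_000)

    score = 1.0 / theta - x
    lr_estimate = np.mean(x * score)

    # Exact value for comparison: E[X] = 1/theta, so d/dtheta E[X] = -1/theta**2.
    print("LR estimate:", lr_estimate, " exact:", -1.0 / theta ** 2)
    ```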

  17. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    NASA Astrophysics Data System (ADS)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-01

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  18. Efficient estimators for likelihood ratio sensitivity indices of complex stochastic dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arampatzis, Georgios; Katsoulakis, Markos A.; Rey-Bellet, Luc

    2016-03-14

    We demonstrate that centered likelihood ratio estimators for the sensitivity indices of complex stochastic dynamics are highly efficient with low, constant in time variance and consequently they are suitable for sensitivity analysis in long-time and steady-state regimes. These estimators rely on a new covariance formulation of the likelihood ratio that includes as a submatrix a Fisher information matrix for stochastic dynamics and can also be used for fast screening of insensitive parameters and parameter combinations. The proposed methods are applicable to broad classes of stochastic dynamics such as chemical reaction networks, Langevin-type equations and stochastic models in finance, including systems with a high dimensional parameter space and/or disparate decorrelation times between different observables. Furthermore, they are simple to implement as a standard observable in any existing simulation algorithm without additional modifications.

  19. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
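
    One way to realize the moment-based GSA described above is to compare unconditional output moments with moments conditioned on (approximately) fixing one parameter at a time, here via simple binning of Monte Carlo samples. The model, sampling choices, and normalization below are illustrative assumptions rather than the indices or test cases of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def model(x):
        # Hypothetical nonlinear model of three uncertain parameters.
        return x[:, 0] ** 2 + 0.5 * np.exp(x[:, 1]) + 0.1 * x[:, 0] * x[:, 2]

    n, d, n_bins = 200_000, 3, 20
    X = rng.uniform(0.0, 1.0, size=(n, d))
    Y = model(X)

    def moments(y):
        # mean, variance, skewness, kurtosis of a sample
        m, v = y.mean(), y.var()
        return np.array([m, v,
                         ((y - m) ** 3).mean() / v ** 1.5,
                         ((y - m) ** 4).mean() / v ** 2])

    M_full = moments(Y)

    # Moment-based index for parameter i: average absolute shift of each output
    # moment when x_i is (approximately) fixed, normalized by the unconditional
    # moment. Conditional moments are approximated by binning the samples.
    for i in range(d):
        bins = np.digitize(X[:, i], np.linspace(0, 1, n_bins + 1)[1:-1])
        cond = np.array([moments(Y[bins == b]) for b in range(n_bins)])
        index = np.mean(np.abs(cond - M_full), axis=0) / np.abs(M_full)
        print(f"x{i + 1}: mean={index[0]:.3f} var={index[1]:.3f} "
              f"skew={index[2]:.3f} kurt={index[3]:.3f}")
    ```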

  20. Diagnosis of tuberculosis pleurisy with adenosine deaminase (ADA): a systematic review and meta-analysis.

    PubMed

    Gui, Xuwei; Xiao, Heping

    2014-01-01

    This systematic review and meta-analysis was performed to determine the accuracy and usefulness of adenosine deaminase (ADA) in the diagnosis of tuberculosis pleurisy. The Medline, Google Scholar and Web of Science databases were searched to identify related studies until 2014. Two reviewers independently assessed the quality of the included studies according to standard Quality Assessment of Diagnosis Accuracy Studies (QUADAS) criteria. The sensitivity, specificity, diagnostic odds ratio and other parameters of ADA in the diagnosis of tuberculosis pleurisy were analyzed with Meta-DiSc 1.4 software and pooled using the random effects model. Twelve studies including 865 tuberculosis pleurisy patients and 1379 non-tuberculosis pleurisy subjects were identified from 110 studies for this meta-analysis. The sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR) and diagnostic odds ratio (DOR) of ADA in the diagnosis of tuberculosis pleurisy were 0.86 (95% CI 0.84-0.88), 0.88 (95% CI 0.86-0.90), 6.32 (95% CI 4.83-8.26), 0.15 (95% CI 0.11-0.22) and 45.25 (95% CI 27.63-74.08), respectively. The area under the summary receiver operating characteristic curve (SROC) was 0.9340. Our results demonstrate that the sensitivity and specificity of ADA are high in the diagnosis of tuberculosis pleurisy, especially when ADA ≥ 50 U/L. Thus, ADA is a relatively sensitive and specific marker for tuberculosis pleurisy diagnosis. However, caution is warranted in applying these results because of the heterogeneity in the design of the included studies. Further studies are required to confirm the optimal cut-off value of ADA.

  1. Sensitivity and Specificity of Eustachian Tube Function Tests in Adults

    PubMed Central

    Doyle, William J.; Swarts, J. Douglas; Banks, Julianne; Casselbrant, Margaretha L; Mandel, Ellen M; Alper, Cuneyt M.

    2013-01-01

    Objective Determine if Eustachian Tube (ET) function (ETF) tests can identify ears with physician-diagnosed ET dysfunction (ETD) in a mixed population at high sensitivity and specificity and define the inter-relatedness of ETF test parameters. Methods ETF was evaluated using the Forced-Response, Inflation-Deflation, Valsalva and Sniffing tests in 15 control ears of adult subjects after unilateral myringotomy (Group I) and in 23 ears of 19 adult subjects with ventilation tubes inserted for ETD (Group II). Data were analyzed using logistic regression including each parameter independently and then a step-down Discriminant Analysis including all ETF test parameters to predict group assignment. Factor Analysis operating over all parameters was used to explore relatedness. Results The Discriminant Analysis identified 4 ETF test parameters (Valsalva, ET opening pressure, dilatory efficiency and % positive pressure equilibrated) that together correctly assigned ears to Group II at a sensitivity of 95% and a specificity of 83%. Individual parameters representing the efficiency of ET opening during swallowing showed moderately accurate assignments of ears to their respective groups. Three factors captured approximately 98% of the variance among parameters, the first had negative loadings of the ETF structural parameters, the second had positive loadings of the muscle-assisted ET opening parameters and the third had negative loadings of the muscle-assisted ET opening parameters and positive loadings of the structural parameters. Discussion These results show that ETF tests can correctly assign individual ears to physician-diagnosed ETD with high sensitivity and specificity and that ETF test parameters can be grouped into structural-functional categories. PMID:23868429

  2. Rock-dwelling lizards exhibit less sensitivity of sprint speed to increases in substrate rugosity.

    PubMed

    Collins, Clint E; Self, Jessica D; Anderson, Roger A; McBrayer, Lance D

    2013-06-01

    Effectively moving across variable substrates is important to all terrestrial animals. The effects of substrates on lizard performance have ecological ramifications including the partitioning of habitat according to sprinting ability on different surfaces. This phenomenon is known as sprint sensitivity, or the decrease in sprint speed due to change in substrate. However, sprint sensitivity has been characterized only in arboreal Anolis lizards. Our study measured sensitivity to substrate rugosity among six lizard species that occupy rocky, sandy, and/or arboreal habitats. Lizards that use rocky habitats are less sensitive to changes in substrate rugosity, followed by arboreal lizards, and then by lizards that use sandy habitats. We infer from comparative phylogenetic analysis that forelimb, chest, and tail dimensions are important external morphological features related to sensitivity to changes in substrate rugosity. Copyright © 2013 Elsevier GmbH. All rights reserved.

  3. Detection of biological particles by the use of circular dichroism measurements improved by scattering theory

    NASA Astrophysics Data System (ADS)

    Rosen, David L.; Pendleton, J. David

    1995-09-01

    Light scattered from optically active spheres was theoretically analyzed for biodetection. The circularly polarized signal of near-forward scattering from circularly dichroic spheres was calculated. Both remote and point biodetection were considered. The analysis included the effect of a circular aperture and beam block at the detector. If the incident light is linearly polarized, a false signal would limit the sensitivity of the biodetector. If the incident light is randomly polarized, shot noise would limit the sensitivity. Suggested improvements to current techniques include a beam block, precise angular measurements, randomly polarized light, index-matching fluid, and larger apertures for large particles.

  4. Assessment of cognitive safety in clinical drug development

    PubMed Central

    Roiser, Jonathan P.; Nathan, Pradeep J.; Mander, Adrian P.; Adusei, Gabriel; Zavitz, Kenton H.; Blackwell, Andrew D.

    2016-01-01

    Cognitive impairment is increasingly recognised as an important potential adverse effect of medication. However, many drug development programmes do not incorporate sensitive cognitive measurements. Here, we review the rationale for cognitive safety assessment, and explain several basic methodological principles for measuring cognition during clinical drug development, including study design and statistical analysis, from Phase I through to postmarketing. The crucial issue of how cognition should be assessed is emphasized, especially the sensitivity of measurement. We also consider how best to interpret the magnitude of any identified effects, including comparison with benchmarks. We conclude by discussing strategies for the effective communication of cognitive risks. PMID:26610416

  5. Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy

    PubMed Central

    Cook, Michael J; Puri, Basant K

    2016-01-01

    The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID:27920571

  6. AIAA/USAF/NASA/OAI Symposium on Multidisciplinary Analysis and Optimization, 4th, Cleveland, OH, Sept. 21-23, 1992, Technical Papers. Pts. 1 & 2

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The papers presented at the symposium cover aerodynamics, design applications, propulsion systems, high-speed flight, structures, controls, sensitivity analysis, optimization algorithms, and space structures applications. Other topics include helicopter rotor design, artificial intelligence/neural nets, and computational aspects of optimization. Papers are included on flutter calculations for a system with interacting nonlinearities, optimization in solid rocket booster application, improving the efficiency of aerodynamic shape optimization procedures, nonlinear control theory, and probabilistic structural analysis of space truss structures for nonuniform thermal environmental effects.

  7. Radiolysis Model Sensitivity Analysis for a Used Fuel Storage Canister

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wittman, Richard S.

    2013-09-20

    This report fulfills the M3 milestone (M3FT-13PN0810027) to report on a radiolysis computer model analysis that estimates the generation of radiolytic products for a storage canister. The analysis considers radiolysis outside storage canister walls and within the canister fill gas over a possible 300-year lifetime. Previous work relied on estimates based directly on a water radiolysis G-value. This work also includes that effect with the addition of coupled kinetics for 111 reactions for 40 gas species to account for radiolytic-induced chemistry, which includes water recombination and reactions with air.

  8. 15 CFR Supplement No. 6 to Part 774 - Sensitive List

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... filtering and beamforming using Fast Fourier or other transforms or processes. (vi) 6A001.a.2.d. (vii) 6A001... processing and correlation, including spectral analysis, digital filtering and beamforming using Fast Fourier...

  9. MAMMALIAN DNA IN PCR REAGENTS

    EPA Science Inventory

    Ancient DNA analysis is becoming widespread. These studies use polymerase chain reaction (PCR) to amplify minute quantities of heavily damaged template. Unusual steps are taken to achieve the sensitivity necessary to detect ancient DNA, including high-cycle PCR amplification t...

  10. Image encryption based on a delayed fractional-order chaotic logistic system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na

    2012-05-01

    A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
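
    The key-sensitivity property mentioned above can be demonstrated even for the ordinary (integer-order, undelayed) logistic map: a perturbation of the key (initial condition) on the order of 1e-12 produces an almost completely different keystream. The sketch below measures this divergence at the byte and bit level; it is a simplified stand-in, not the delayed fractional-order system of the paper.

    ```python
    import numpy as np

    def logistic_keystream(x0, mu, n, burn_in=1000):
        """Byte keystream from the classic logistic map x_{k+1} = mu*x_k*(1 - x_k).
        Simplification: the paper uses a delayed fractional-order logistic system."""
        x = x0
        out = np.empty(n, dtype=np.uint8)
        for _ in range(burn_in):
            x = mu * x * (1.0 - x)
        for i in range(n):
            x = mu * x * (1.0 - x)
            out[i] = int(x * 256) % 256
        return out

    n = 100_000
    ks1 = logistic_keystream(x0=0.345678, mu=3.99, n=n)
    ks2 = logistic_keystream(x0=0.345678 + 1e-12, mu=3.99, n=n)   # tiny key change

    byte_diff = np.mean(ks1 != ks2)
    bit_diff = np.mean(np.unpackbits(ks1) != np.unpackbits(ks2))
    print(f"bytes changed by a 1e-12 key perturbation: {byte_diff:.3f}")
    print(f"bits changed:  {bit_diff:.3f}")
    ```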

  11. Sensitive glow discharge ion source for aerosol and gas analysis

    DOEpatents

    Reilly, Peter T. A. [Knoxville, TN

    2007-08-14

    A high sensitivity glow discharge ion source system for analyzing particles includes an aerodynamic lens having a plurality of constrictions for receiving an aerosol including at least one analyte particle in a carrier gas and focusing the analyte particles into a collimated particle beam. A separator separates the carrier gas from the analyte particle beam, wherein the analyte particle beam or vapors derived from the analyte particle beam are selectively transmitted out of from the separator. A glow discharge ionization source includes a discharge chamber having an entrance orifice for receiving the analyte particle beam or analyte vapors, and a target electrode and discharge electrode therein. An electric field applied between the target electrode and discharge electrode generates an analyte ion stream from the analyte vapors, which is directed out of the discharge chamber through an exit orifice, such as to a mass spectrometer. High analyte sensitivity is obtained by pumping the discharge chamber exclusively through the exit orifice and the entrance orifice.

  12. Preoperative localization strategies for primary hyperparathyroidism: an economic analysis.

    PubMed

    Lubitz, Carrie C; Stephen, Antonia E; Hodin, Richard A; Pandharipande, Pari

    2012-12-01

    Strategies for localizing parathyroid pathology preoperatively vary in cost and accuracy. Our purpose was to compute and compare comprehensive costs associated with common localization strategies. A decision-analytic model was developed to evaluate comprehensive, short-term costs of parathyroid localization strategies for patients with primary hyperparathyroidism. Eight strategies were compared. Probabilities of accurate localization were extracted from the literature, and costs associated with each strategy were based on 2011 Medicare reimbursement schedules. Differential cost considerations included outpatient versus inpatient surgeries, operative time, and costs of imaging. Sensitivity analyses were performed to determine effects of variability in key model parameters upon model results. Ultrasound (US) followed by 4D-CT was the least expensive strategy ($5,901), followed by US alone ($6,028), and 4D-CT alone ($6,110). Strategies including sestamibi (SM) were more expensive, with associated expenditures of up to $6,329 for contemporaneous US and SM. Four-gland, bilateral neck exploration (BNE) was the most expensive strategy ($6,824). Differences in cost were dependent upon differences in the sensitivity of each strategy for detecting single-gland disease, which determined the proportion of patients able to undergo outpatient minimally invasive parathyroidectomy. In sensitivity analysis, US alone was preferred over US followed by 4D-CT only when both the sensitivity of US alone for detecting an adenoma was ≥ 94 %, and the sensitivity of 4D-CT following negative US was ≤ 39 %. 4D-CT alone was the least costly strategy when US sensitivity was ≤ 31 %. Among commonly used strategies for preoperative localization of parathyroid pathology, US followed by selective 4D-CT is the least expensive.
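
    The economic comparison above rests on a decision-analytic expected-cost calculation: a strategy's sensitivity for single-gland disease determines how many patients can have outpatient minimally invasive surgery rather than bilateral neck exploration. The toy sketch below reproduces only that logic, with hypothetical costs and sensitivities rather than the paper's 2011 Medicare figures or model structure.

    ```python
    def expected_cost(imaging_cost, sensitivity, p_single_gland=0.85,
                      cost_mip=4000.0, cost_bne=6500.0):
        """Expected short-term cost of a localization strategy (toy decision tree).
        Patients with correctly localized single-gland disease get minimally
        invasive parathyroidectomy (MIP); everyone else gets bilateral neck
        exploration (BNE). All inputs are hypothetical placeholders."""
        p_mip = p_single_gland * sensitivity
        return imaging_cost + p_mip * cost_mip + (1.0 - p_mip) * cost_bne

    strategies = {
        "US alone":         expected_cost(imaging_cost=150.0, sensitivity=0.76),
        "4D-CT alone":      expected_cost(imaging_cost=400.0, sensitivity=0.85),
        # 4D-CT ordered only after a negative US (hypothetical 24% of patients)
        "US then 4D-CT":    expected_cost(imaging_cost=150.0 + 0.24 * 400.0,
                                          sensitivity=0.90),
        "BNE (no imaging)": expected_cost(imaging_cost=0.0, sensitivity=0.0),
    }
    for name, cost in sorted(strategies.items(), key=lambda kv: kv[1]):
        print(f"{name:18s} expected cost = ${cost:,.0f}")
    ```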

  13. Depression Case Finding in Individuals with Dementia: A Systematic Review and Meta-Analysis.

    PubMed

    Goodarzi, Zahra S; Mele, Bria S; Roberts, Derek J; Holroyd-Leduc, Jayna

    2017-05-01

    To compare the diagnostic accuracy of depression case finding tools with a criterion standard in the outpatient setting among adults with dementia. Systematic review and meta-analysis. Studies of older outpatients with dementia. Elderly outpatients (clinic and long-term care) with dementia (N = 3,035). Prevalence of major depression and diagnostic accuracy measures including sensitivity, specificity, and likelihood ratios. From the 11,539 citations, 20 studies were included for qualitative synthesis and 15 for a meta-analysis. Tools included were the Montgomery Åsberg Depression Rating Scale, Cornell Scale for Depression in Dementia (CSDD), Geriatric Depression Scale (GDS), Center for Epidemiologic Studies Depression Scale (CES-D), Hamilton Depression Rating Scale (HDRS), Single Question, Nijmegen Observer-Rated Depression Scale, and Even Briefer Assessment Scale-Depression. The pooled prevalence of depression in individuals with dementia was 30.3% (95% CI = 22.1-38.5). The average age was 75.2 (95% CI = 71.7-78.7), and mean Mini-Mental State Examination scores ranged from 11.2 to 24. The diagnostic accuracy of the individual tools was pooled for the best-reported cutoffs and for each cutoff, if available. The CSDD had a sensitivity of 0.84 (95% CI = 0.73-0.91) and a specificity of 0.80 (95% CI = 0.65-0.90), the 30-item GDS (GDS-30) had a sensitivity of 0.62 (95% CI = 0.45-0.76) and a specificity 0.81 (95% CI = 0.75-0.85), and the HDRS had a sensitivity of 0.86 (95% CI = 0.63-0.96) and a specificity of 0.84 (95% CI = 0.76-0.90). Summary statistics for all tools across best-reported cutoffs had significant heterogeneity. There are many validated tools for the detection of depression in individuals with dementia. Tools that incorporate a physician interview with patient and collateral histories, the CSDD and HDRS, have higher sensitivities, which would ensure fewer false-negatives. © 2017, Copyright the Authors Journal compilation © 2017, The American Geriatrics Society.

  14. Genetics and clinical response to warfarin and edoxaban in patients with venous thromboembolism.

    PubMed

    Vandell, Alexander G; Walker, Joseph; Brown, Karen S; Zhang, George; Lin, Min; Grosso, Michael A; Mercuri, Michele F

    2017-11-01

    The aim of this study was to investigate whether genetic variants can identify patients with venous thromboembolism (VTE) at an increased risk of bleeding with warfarin. Hokusai-venous thromboembolism (Hokusai VTE), a randomised, multinational, double-blind, non-inferiority trial, evaluated the safety and efficacy of edoxaban versus warfarin in patients with VTE initially treated with heparin. In this subanalysis of Hokusai VTE, patients genotyped for variants in CYP2C9 and VKORC1 genes were divided into three warfarin sensitivity types (normal, sensitive and highly sensitive) based on their genotypes. An exploratory analysis was also conducted comparing normal responders to pooled sensitive responders (ie, sensitive and highly sensitive responders). The analysis included 47.7% (3956/8292) of the patients in Hokusai VTE. Among 1978 patients randomised to warfarin, 63.0% (1247) were normal responders, 34.1% (675) were sensitive responders and 2.8% (56) were highly sensitive responders. Compared with normal responders, sensitive and highly sensitive responders had heparin therapy discontinued earlier (p<0.001), had a decreased final weekly warfarin dose (p<0.001), spent more time overanticoagulated (p<0.001) and had an increased bleeding risk with warfarin (sensitive responders HR 1.38 [95% CI 1.11 to 1.71], p=0.0035; highly sensitive responders 1.79 [1.09 to 2.99]; p=0.0252). In this study, CYP2C9 and VKORC1 genotypes identified patients with VTE at increased bleeding risk with warfarin. NCT00986154. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  15. IgE sensitization in relation to preschool eczema and filaggrin mutation.

    PubMed

    Johansson, Emma Kristin; Bergström, Anna; Kull, Inger; Lind, Tomas; Söderhäll, Cilla; van Hage, Marianne; Wickman, Magnus; Ballardini, Natalia; Wahlgren, Carl-Fredrik

    2017-12-01

    Eczema (atopic dermatitis) is associated with an increased risk of having IgE antibodies. IgE sensitization can occur through an impaired skin barrier. Filaggrin gene (FLG) mutation is associated with eczema and possibly also with IgE sensitization. We sought to explore the longitudinal relation between preschool eczema (PSE), FLG mutation, or both and IgE sensitization in childhood. A total of 3201 children from the BAMSE (Children Allergy Milieu Stockholm Epidemiology) birth cohort recruited from the general population were included. Regular parental questionnaires identified children with eczema. Blood samples were collected at 4, 8, and 16 years of age for analysis of specific IgE. FLG mutation analysis was performed on 1890 of the children. PSE was associated with IgE sensitization to both food allergens and aeroallergens up to age 16 years (overall adjusted odds ratio, 2.30; 95% CI, 2.00-2.66). This association was even stronger among children with persistent PSE. FLG mutation was associated with IgE sensitization to peanut at age 4 years (adjusted odds ratio, 1.88; 95% CI, 1.03-3.44) but not to other allergens up to age 16 years. FLG mutation and PSE were not effect modifiers for the association between IgE sensitization and PSE or FLG mutation, respectively. Sensitized children with PSE were characterized by means of polysensitization, but no other specific IgE sensitization patterns were found. PSE is associated with IgE sensitization to both food allergens and aeroallergens up to 16 years of age. FLG mutation is associated with IgE sensitization to peanut but not to other allergens. Sensitized children with preceding PSE are more often polysensitized. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  16. Introduction to session on materials and structures

    NASA Technical Reports Server (NTRS)

    Vosteen, L. F.

    1978-01-01

    A review was given of the development of composites for aircraft. Supporting base technology and the Aircraft Energy Efficiency Composites Program are included. Specific topics discussed include: (1) environmental effects on materials; (2) material quality and chemical characterization; (3) design and analysis methods; (4) structural durability; (5) impact sensitivity; (6) carbon fiber electrical effects; and (7) composite components.

  17. 77 FR 26009 - CoStar Group, Inc., Lonestar Acquisition Sub, Inc., and LoopNet, Inc.; Analysis of Agreement...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-02

    ... responsible for making sure that your comment does not include any sensitive health information, like medical records or other individually identifiable health information. In addition, do not include any "[t]rade... between CoStar and Xceligent, Inc. ("Xceligent"), and increasing the likelihood that CoStar will...

  18. A highly sensitive method for analysis of 7-dehydrocholesterol for the study of Smith-Lemli-Opitz syndrome

    PubMed Central

    Liu, Wei; Xu, Libin; Lamberson, Connor; Haas, Dorothea; Korade, Zeljka; Porter, Ned A.

    2014-01-01

    We describe a highly sensitive method for the detection of 7-dehydrocholesterol (7-DHC), the biosynthetic precursor of cholesterol, based on its reactivity with 4-phenyl-1,2,4-triazoline-3,5-dione (PTAD) in a Diels-Alder cycloaddition reaction. Samples of biological tissues and fluids with added deuterium-labeled internal standards were derivatized with PTAD and analyzed by LC-MS. This protocol permits fast processing of samples, short chromatography times, and high sensitivity. We applied this method to the analysis of cells, blood, and tissues from several sources, including human plasma. Another innovative aspect of this study is that it provides a reliable and highly reproducible measurement of 7-DHC in 7-dehydrocholesterol reductase (Dhcr7)-HET mouse (a model for Smith-Lemli-Opitz syndrome) samples, showing regional differences in the brain tissue. We found that the levels of 7-DHC are consistently higher in Dhcr7-HET mice than in controls, with the spinal cord and peripheral nerve showing the biggest differences. In addition to 7-DHC, sensitive analysis of desmosterol in tissues and blood was also accomplished with this PTAD method by assaying adducts formed from the PTAD “ene” reaction. The method reported here may provide a highly sensitive and high throughput way to identify at-risk populations having errors in cholesterol biosynthesis. PMID:24259532

  19. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

    Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage that the common definition of flow entropy does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed using other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
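
    The flow-entropy idea can be illustrated with a minimal sketch. Assuming the common Shannon-entropy form applied to the outflow fractions at a single node (the paper's network-level and diameter-sensitive extensions are not reproduced here), a rough Python example might look like this:

```python
import math

def node_flow_entropy(outflows):
    """Shannon entropy of the outflow distribution at a single node.

    A simplified, single-node version of the flow-entropy idea described in
    the abstract: uniform outflows maximize entropy, strongly skewed outflows
    reduce it. The network-level and diameter-sensitive extensions used in
    the paper are not reproduced here.
    """
    total = sum(outflows)
    if total <= 0:
        return 0.0
    fractions = [q / total for q in outflows if q > 0]
    return -sum(p * math.log(p) for p in fractions)

# Example: a node splitting 100 L/s evenly vs. unevenly over three pipes
print(node_flow_entropy([33.3, 33.3, 33.4]))  # close to ln(3) ~ 1.10
print(node_flow_entropy([90.0, 5.0, 5.0]))    # much lower entropy
```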

  20. Value and impact factors of multidetector computed tomography in diagnosis of preoperative lymph node metastasis in gastric cancer: A PRISMA-compliant systematic review and meta-analysis.

    PubMed

    Luo, Mingxu; Lv, You; Guo, Xiuyu; Song, Hongmei; Su, Guoqiang; Chen, Bo

    2017-08-01

    Multidetector computed tomography (MDCT) exhibited wide ranges of sensitivities and specificities for lymph node assessment of gastric cancer (GC) in several individual studies. This present meta-analysis was carried out to evaluate the value of MDCT in diagnosis of preoperative lymph node metastasis (LNM) and to explore the impact factors that might explain the heterogeneity of its diagnostic accuracy in GC. A comprehensive search was conducted to collect all the relevant studies about the value of MDCT in assessing LNM of GC within the PubMed, Cochrane library and Embase databases up to Feb 2, 2016. Two investigators independently screened the studies, extracted data, and evaluated the quality of included studies. The sensitivity, specificity, and area under ROC curve (AUC) were pooled to estimate the overall accuracy of MDCT. Meta-regression and subgroup analysis were carried out to identify the possible factors influencing the heterogeneity of the accuracy. A total of 27 studies with 6519 subjects were finally included. Overall, the pooled sensitivity, specificity, and AUC were 0.67 (95% CI: 0.56-0.77), 0.86 (95% CI: 0.81-0.90), and 0.86 (95% CI: 0.83-0.89), respectively. Meta-regression revealed that MDCT section thickness, proportion of serosal invasion, and publication year were the main significant impact factors in sensitivity, and MDCT section thickness, multiplanar reformation (MPR), and reference standard were the main significant impact factors in specificity. After the included studies were divided into 2 groups (Group A: studies with proportion of serosa-invasive GC subjects ≥50%; Group B: studies with proportion of serosa-invasive GC subjects <50%), the pooled sensitivity in Group A was significantly higher than in Group B (0.84 [95% CI: 0.75-0.90] vs 0.55 [95% CI: 0.41-0.68], P < .01). For early gastric cancer (EGC), the pooled sensitivity, specificity, and AUC were 0.34 (95% CI: 0.15-0.61), 0.91 (95% CI: 0.84-0.95), and 0.83 (95% CI: 0.80-0.86), respectively. To summarize, MDCT tends to be adequate to assess preoperative LNM in serosa-invasive GC, but insufficient for non-serosa-invasive GC (particularly for EGC) owing to its low sensitivity. Proportion of serosa-invasive GC subjects, MDCT section thickness, MPR, and reference standard are the main factors influencing its diagnostic accuracy.

  1. An Investigation of the Dynamic Response of a Seismically Stable Platform

    DTIC Science & Technology

    1982-08-01

    PAD. The controls on the system are of two types. A low frequency tilt control, with a 10 arc second sensitivity, 2-axis tiltmeter as sensor ... Inertial Sensors, Structural Analysis, Holloman AFB, NM. Support to this effort includes structural analyses toward active servo frequency band. This report ... controlled to maintain a null position of a sensitive height sensor. The 6-degree-of-freedom high frequency controls are based on seismometers as sensors

  2. Radiation imaging apparatus

    DOEpatents

    Anger, Hal O.; Martin, Donn C.; Lampton, Michael L.

    1983-01-01

    A radiation imaging system using a charge multiplier and a position sensitive anode in the form of periodically arranged sets of interconnected anode regions for detecting the position of the centroid of a charge cloud arriving thereat from the charge multiplier. Various forms of improved position sensitive anodes having single plane electrode connections are disclosed. Various analog and digital signal processing systems are disclosed, including systems which use the fast response of microchannel plates, anodes and preamps to perform scintillation pulse height analysis digitally.

  3. Adult vector control, mosquito ecology and malaria transmission

    PubMed Central

    Brady, Oliver J.; Godfray, H. Charles J.; Tatem, Andrew J.; Gething, Peter W.; Cohen, Justin M.; McKenzie, F. Ellis; Alex Perkins, T.; Reiner, Robert C.; Tusting, Lucy S.; Scott, Thomas W.; Lindsay, Steven W.; Hay, Simon I.; Smith, David L.

    2015-01-01

    Background Standard advice regarding vector control is to prefer interventions that reduce the lifespan of adult mosquitoes. The basis for this advice is a decades-old sensitivity analysis of ‘vectorial capacity’, a concept relevant for most malaria transmission models and based solely on adult mosquito population dynamics. Recent advances in micro-simulation models offer an opportunity to expand the theory of vectorial capacity to include both adult and juvenile mosquito stages in the model. Methods In this study we revisit arguments about transmission and its sensitivity to mosquito bionomic parameters using an elasticity analysis of developed formulations of vectorial capacity. Results We show that reducing adult survival has effects on both adult and juvenile population size, which are significant for transmission and not accounted for in traditional formulations of vectorial capacity. The elasticity of these effects is dependent on various mosquito population parameters, which we explore. Overall, control is most sensitive to methods that affect adult mosquito mortality rates, followed by blood feeding frequency, human blood feeding habit, and lastly, to adult mosquito population density. Conclusions These results emphasise more strongly than ever the sensitivity of transmission to adult mosquito mortality, but also suggest the high potential of combinations of interventions including larval source management. This must be done with caution, however, as policy requires a more careful consideration of costs, operational difficulties and policy goals in relation to baseline transmission. PMID:25733562
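
    For readers unfamiliar with the quantity being analysed, the sketch below evaluates the classical adult-only Ross-Macdonald form of vectorial capacity, V = m a^2 p^n / (-ln p), and estimates parameter elasticities numerically; the parameter values are purely illustrative and the juvenile-stage extensions discussed in the paper are not included.

```python
import numpy as np

def vectorial_capacity(m, a, p, n):
    """Classical (adult-only) Ross-Macdonald vectorial capacity."""
    return m * a**2 * p**n / (-np.log(p))

def elasticity(f, params, name, h=1e-6):
    """Numerical elasticity (theta / f) * df/dtheta via central differences."""
    theta = params[name]
    up = dict(params, **{name: theta * (1 + h)})
    dn = dict(params, **{name: theta * (1 - h)})
    dfdtheta = (f(**up) - f(**dn)) / (2 * h * theta)
    return theta * dfdtheta / f(**params)

# Illustrative values: mosquito density m, biting rate a,
# daily survival p, extrinsic incubation period n (days)
params = dict(m=10.0, a=0.3, p=0.9, n=10)
for name in params:
    print(name, round(elasticity(vectorial_capacity, params, name), 2))
```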

  4. Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations

    NASA Technical Reports Server (NTRS)

    Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.

    2017-01-01

    A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
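
    The complex-variable approach mentioned above is the standard complex-step derivative trick; the toy response function below is hypothetical and only stands in for a structural response to show the mechanics.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """Complex-variable (complex-step) derivative: df/dx ~ Im(f(x + i*h)) / h.

    Unlike finite differences there is no subtractive cancellation, so h can be
    made extremely small and the derivative is accurate to machine precision,
    provided f is implemented with complex-safe operations.
    """
    return np.imag(f(x + 1j * h)) / h

# Toy structural response: tip load -> displacement of a nonlinear spring
def displacement(load):
    return load / (1.0 + 0.05 * load**2)

print(complex_step_derivative(displacement, 2.0))
# Analytic check: d/dL [L / (1 + 0.05 L^2)] at L = 2 is (1 - 0.2) / 1.44 = 0.5556
```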

  5. Nomograms Predicting Platinum Sensitivity, Progression-Free Survival, and Overall Survival Using Pretreatment Complete Blood Cell Counts in Epithelial Ovarian Cancer

    PubMed Central

    Paik, E Sun; Sohn, Insuk; Baek, Sun-Young; Shim, Minhee; Choi, Hyun Jin; Kim, Tae-Joong; Choi, Chel Hun; Lee, Jeong-Won; Kim, Byoung-Gie; Lee, Yoo-Young; Bae, Duk-Soo

    2017-01-01

    Purpose This study was conducted to evaluate the prognostic significance of pre-treatment complete blood cell count (CBC), including white blood cell (WBC) differential, in epithelial ovarian cancer (EOC) patients with primary debulking surgery (PDS) and to develop nomograms for platinum sensitivity, progression-free survival (PFS), and overall survival (OS). Materials and Methods We retrospectively reviewed the records of 757 patients with EOC whose primary treatment consisted of surgical debulking and chemotherapy at Samsung Medical Center from 2002 to 2012. We subsequently created nomograms for platinum sensitivity, 3-year PFS, and 5-year OS as prediction models for prognostic variables including age, stage, grade, cancer antigen 125 level, residual disease after PDS, and pre-treatment WBC differential counts. The models were then validated by 10-fold cross-validation (CV). Results In addition to stage and residual disease after PDS, which are known predictors, lymphocyte and monocyte count were found to be significant prognostic factors for platinum-sensitivity, platelet count for PFS, and neutrophil count for OS on multivariate analysis. The area under the curves of platinum sensitivity, 3-year PFS, and 5-year OS calculated by the 10-fold CV procedure were 0.7405, 0.8159, and 0.815, respectively. Conclusion Prognostic factors including pre-treatment CBC were used to develop nomograms for platinum sensitivity, 3-year PFS, and 5-year OS of patients with EOC. These nomograms can be used to better estimate individual outcomes. PMID:27669704

  6. Nomograms Predicting Platinum Sensitivity, Progression-Free Survival, and Overall Survival Using Pretreatment Complete Blood Cell Counts in Epithelial Ovarian Cancer.

    PubMed

    Paik, E Sun; Sohn, Insuk; Baek, Sun-Young; Shim, Minhee; Choi, Hyun Jin; Kim, Tae-Joong; Choi, Chel Hun; Lee, Jeong-Won; Kim, Byoung-Gie; Lee, Yoo-Young; Bae, Duk-Soo

    2017-07-01

    This study was conducted to evaluate the prognostic significance of pre-treatment complete blood cell count (CBC), including white blood cell (WBC) differential, in epithelial ovarian cancer (EOC) patients with primary debulking surgery (PDS) and to develop nomograms for platinum sensitivity, progression-free survival (PFS), and overall survival (OS). We retrospectively reviewed the records of 757 patients with EOC whose primary treatment consisted of surgical debulking and chemotherapy at Samsung Medical Center from 2002 to 2012. We subsequently created nomograms for platinum sensitivity, 3-year PFS, and 5-year OS as prediction models for prognostic variables including age, stage, grade, cancer antigen 125 level, residual disease after PDS, and pre-treatment WBC differential counts. The models were then validated by 10-fold cross-validation (CV). In addition to stage and residual disease after PDS, which are known predictors, lymphocyte and monocyte count were found to be significant prognostic factors for platinum-sensitivity, platelet count for PFS, and neutrophil count for OS on multivariate analysis. The area under the curves of platinum sensitivity, 3-year PFS, and 5-year OS calculated by the 10-fold CV procedure were 0.7405, 0.8159, and 0.815, respectively. Prognostic factors including pre-treatment CBC were used to develop nomograms for platinum sensitivity, 3-year PFS, and 5-year OS of patients with EOC. These nomograms can be used to better estimate individual outcomes.

  7. Recent advances on multidimensional liquid chromatography-mass spectrometry for proteomics: from qualitative to quantitative analysis--a review.

    PubMed

    Wu, Qi; Yuan, Huiming; Zhang, Lihua; Zhang, Yukui

    2012-06-20

    With the acceleration of proteome research, increasing attention has been paid to multidimensional liquid chromatography-mass spectrometry (MDLC-MS) due to its high peak capacity and separation efficiency. Recently, much effort has been devoted to improving MDLC-based strategies, including "top-down" and "bottom-up" approaches, to enable highly sensitive qualitative and quantitative analysis of proteins and to accelerate the whole analytical procedure. Integrated platforms combining sample pretreatment, multidimensional separations and identification were also developed to achieve high-throughput and sensitive detection of proteomes, facilitating highly accurate and reproducible quantification. This review summarizes recent advances in such techniques and their applications in qualitative and quantitative analysis of proteomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Sensitivity of BRCA1/2 testing in high-risk breast/ovarian/male breast cancer families: little contribution of comprehensive RNA/NGS panel testing.

    PubMed

    Byers, Helen; Wallis, Yvonne; van Veen, Elke M; Lalloo, Fiona; Reay, Kim; Smith, Philip; Wallace, Andrew J; Bowers, Naomi; Newman, William G; Evans, D Gareth

    2016-11-01

    The sensitivity of testing BRCA1 and BRCA2 remains unresolved as the frequency of deep intronic splicing variants has not been defined in high-risk familial breast/ovarian cancer families. This variant category is reported at significant frequency in other tumour predisposition genes, including NF1 and MSH2. We carried out comprehensive whole gene RNA analysis on 45 high-risk breast/ovary and male breast cancer families with no identified pathogenic variant on exonic sequencing and copy number analysis of BRCA1/2. In addition, we undertook variant screening of a 10-gene high/moderate risk breast/ovarian cancer panel by next-generation sequencing. DNA testing identified the causative variant in 50/56 (89%) breast/ovarian/male breast cancer families with Manchester scores of ≥50 with two variants being confirmed to affect splicing on RNA analysis. RNA sequencing of BRCA1/BRCA2 on 45 individuals from high-risk families identified no deep intronic variants and did not suggest loss of RNA expression as a cause of lost sensitivity. Panel testing in 42 samples identified a known RAD51D variant, a high-risk ATM variant in another breast ovary family and a truncating CHEK2 mutation. Current exonic sequencing and copy number analysis variant detection methods of BRCA1/2 have high sensitivity in high-risk breast/ovarian cancer families. Sequence analysis of RNA does not identify any variants undetected by current analysis of BRCA1/2. However, RNA analysis clarified the pathogenicity of variants of unknown significance detected by current methods. The low diagnostic uplift achieved through sequence analysis of the other known breast/ovarian cancer susceptibility genes indicates that further high-risk genes remain to be identified.

  9. Sensitivity to mental effort and test-retest reliability of heart rate variability measures in healthy seniors.

    PubMed

    Mukherjee, Shalini; Yadav, Rajeev; Yung, Iris; Zajdel, Daniel P; Oken, Barry S

    2011-10-01

    To determine (1) whether heart rate variability (HRV) was a sensitive and reliable measure in mental effort tasks carried out by healthy seniors and (2) whether non-linear approaches to HRV analysis, in addition to traditional time and frequency domain approaches, were useful to study such effects. Forty healthy seniors performed two visual working memory tasks requiring different levels of mental effort, while ECG was recorded. They underwent the same tasks and recordings 2 weeks later. Traditional indices and 13 non-linear indices of HRV, including Poincaré, entropy and detrended fluctuation analysis (DFA), were determined. Time domain indices, especially mean R-R interval (RRI), frequency domain indices and, among non-linear parameters, Poincaré and DFA were the most reliable. Mean RRI, time domain indices and Poincaré were also the most sensitive to different mental effort task loads and had the largest effect sizes. Overall, linear measures were the most sensitive and reliable indices of mental effort. Among non-linear measures, Poincaré was the most reliable and sensitive, suggesting possible usefulness as an independent marker in cognitive function tasks in healthy seniors. A large number of HRV parameters were both reliable and sensitive indices of mental effort, although the simple linear methods were the most sensitive. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
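
    As a concrete illustration of one of the non-linear indices named above, the sketch below computes the Poincaré descriptors SD1 and SD2 from an R-R interval series using their usual variance-based formulation; the interval values are made up.

```python
import numpy as np

def poincare_sd1_sd2(rr_ms):
    """SD1/SD2 Poincaré descriptors from a sequence of R-R intervals (ms).

    SD1 reflects short-term (beat-to-beat) variability, SD2 long-term
    variability; both follow from the variance of successive differences
    and the overall variance, which is the usual formulation.
    """
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    var_diff = np.var(diff, ddof=1)
    sd1 = np.sqrt(var_diff / 2.0)
    sd2 = np.sqrt(max(2.0 * np.var(rr, ddof=1) - var_diff / 2.0, 0.0))
    return sd1, sd2

# Example with a short synthetic R-R series (ms)
rr = [820, 810, 830, 795, 805, 840, 815, 800]
print(poincare_sd1_sd2(rr))
```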

  10. Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay

    NASA Astrophysics Data System (ADS)

    Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander

    2018-06-01

    Lateral flow immunoassay (LFIA) is a widely used rapid method that offers advantages such as a short analysis time and simplicity of testing and result evaluation. However, an LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wide application. In this study, spherical nanogold labels along with new types of nanogold labels such as gold nanopopcorns and nanostars were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, a fivefold improvement in sensitivity over the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used to compare the amplification of LFIA for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with gold nanospheres as a label. The proposed LFIA approach based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).

  11. Sensitivity of predicted bioaerosol exposure from open windrow composting facilities to ADMS dispersion model parameters.

    PubMed

    Douglas, P; Tyrrel, S F; Kinnersley, R P; Whelan, M; Longhurst, P J; Walsh, K; Pollard, S J T; Drew, G H

    2016-12-15

    Bioaerosols are released in elevated quantities from composting facilities and are associated with negative health effects, although dose-response relationships are not well understood, and require improved exposure classification. Dispersion modelling has great potential to improve exposure classification, but has not yet been extensively used or validated in this context. We present a sensitivity analysis of the ADMS dispersion model specific to input parameter ranges relevant to bioaerosol emissions from open windrow composting. This analysis provides an aid for model calibration by prioritising parameter adjustment and targeting independent parameter estimation. Results showed that predicted exposure was most sensitive to the wet and dry deposition modules and the majority of parameters relating to emission source characteristics, including pollutant emission velocity, source geometry and source height. This research improves understanding of the accuracy of model input data required to provide more reliable exposure predictions. Copyright © 2016. Published by Elsevier Ltd.

  12. Development of computer-aided design system of elastic sensitive elements of automatic metering devices

    NASA Astrophysics Data System (ADS)

    Kalinkina, M. E.; Kozlov, A. S.; Labkovskaia, R. I.; Pirozhnikova, O. I.; Tkalich, V. L.; Shmakov, N. A.

    2018-05-01

    The object of research is the element base of control and automation system devices, including annular elastic sensitive elements, methods for their modeling, calculation algorithms, and software packages for automating their design. The article is devoted to the development of a computer-aided design system for elastic sensitive elements used in weight- and force-measuring automation devices. Based on mathematical modeling of deformation processes in a solid, as well as the results of static and dynamic analysis, the calculation of the elastic elements is carried out using the capabilities of modern numerical-simulation software. In the simulation, the model was divided into a hexahedral grid of finite elements with a maximum element size not exceeding 2.5 mm. The results of the modal and dynamic analyses are presented in this article.

  13. Sensitivity of the Positive and Negative Syndrome Scale (PANSS) in Detecting Treatment Effects via Network Analysis.

    PubMed

    Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P

    2017-12-01

    Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.

  14. Evaluation of Visual Field Progression in Glaucoma: Quasar Regression Program and Event Analysis.

    PubMed

    Díaz-Alemán, Valentín T; González-Hernández, Marta; Perera-Sanz, Daniel; Armas-Domínguez, Karintia

    2016-01-01

    To determine the sensitivity, specificity and agreement between the Quasar program, glaucoma progression analysis (GPA II) event analysis and expert opinion in the detection of glaucomatous progression. The Quasar program is based on linear regression analysis of both mean defect (MD) and pattern standard deviation (PSD). Each series of visual fields was evaluated by three methods; Quasar, GPA II and four experts. The sensitivity, specificity and agreement (kappa) for each method was calculated, using expert opinion as the reference standard. The study included 439 SITA Standard visual fields of 56 eyes of 42 patients, with a mean of 7.8 ± 0.8 visual fields per eye. When suspected cases of progression were considered stable, sensitivity and specificity of Quasar, GPA II and the experts were 86.6% and 70.7%, 26.6% and 95.1%, and 86.6% and 92.6% respectively. When suspected cases of progression were considered as progressing, sensitivity and specificity of Quasar, GPA II and the experts were 79.1% and 81.2%, 45.8% and 90.6%, and 85.4% and 90.6% respectively. The agreement between Quasar and GPA II when suspected cases were considered stable or progressing was 0.03 and 0.28 respectively. The degree of agreement between Quasar and the experts when suspected cases were considered stable or progressing was 0.472 and 0.507. The degree of agreement between GPA II and the experts when suspected cases were considered stable or progressing was 0.262 and 0.342. The combination of MD and PSD regression analysis in the Quasar program showed better agreement with the experts and higher sensitivity than GPA II.

  15. The countermovement jump to monitor neuromuscular status: A meta-analysis.

    PubMed

    Claudino, João Gustavo; Cronin, John; Mezêncio, Bruno; McMaster, Daniel Travis; McGuigan, Michael; Tricoli, Valmor; Amadio, Alberto Carlos; Serrão, Julio Cerca

    2017-04-01

    The primary objective of this meta-analysis was to compare countermovement jump (CMJ) performance in studies that reported the highest value as opposed to average value for the purposes of monitoring neuromuscular status (i.e., fatigue and supercompensation). The secondary aim was to determine the sensitivity of the dependent variables. Systematic review with meta-analysis. The meta-analysis was conducted on the highest or average of a number of CMJ variables. Multiple literature searches were undertaken in Pubmed, Scopus, and Web of Science to identify articles utilizing CMJ to monitor training status. Effect sizes (ES) with 95% confidence interval (95% CI) were calculated using the mean and standard deviation of the pre- and post-testing data. The coefficient of variation (CV) with 95% CI was also calculated to assess the level of instability of each variable. Heterogeneity was assessed using a random-effects model. 151 articles were included providing a total of 531 ESs for the meta-analyses; 85.4% of articles used highest CMJ height, 13.2% used average and 1.3% used both when reporting changes in CMJ performance. Based on the meta-analysis, average CMJ height was more sensitive than highest CMJ height in detecting CMJ fatigue and supercompensation. Furthermore, other CMJ variables such as peak power, mean power, peak velocity, peak force, mean impulse, and power were sensitive in tracking the supercompensation effects of training. The average CMJ height was more sensitive than highest CMJ height in monitoring neuromuscular status; however, further investigation is needed to determine the sensitivity of other CMJ performance variables. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
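
    To make the quantities concrete, here is a minimal sketch of an effect size (standardized mean difference) and a coefficient of variation computed from hypothetical pre/post CMJ heights; it uses the simple pooled-SD, independent-groups form, so it should be read as an illustration rather than the exact procedure of the meta-analysis.

```python
import numpy as np

def cohens_d(pre, post):
    """Standardized mean difference between pre- and post-test scores using a
    pooled SD (independent-groups form; a paired-design correction, which some
    of the included studies may require, is not applied here)."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    n1, n2 = len(pre), len(post)
    sp = np.sqrt(((n1 - 1) * pre.var(ddof=1) + (n2 - 1) * post.var(ddof=1))
                 / (n1 + n2 - 2))
    return (post.mean() - pre.mean()) / sp

def coefficient_of_variation(values):
    """CV = SD / mean, reported as a percentage."""
    values = np.asarray(values, float)
    return values.std(ddof=1) / values.mean() * 100.0

pre = [38.2, 41.0, 36.5, 40.1, 39.4]   # CMJ height (cm), hypothetical
post = [36.9, 39.8, 35.2, 38.6, 38.0]  # after a fatiguing training block
print(round(cohens_d(pre, post), 2), round(coefficient_of_variation(pre), 1))
```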

  16. Identification of mutant phenotypes associated with loss of individual microRNAs in sensitized genetic backgrounds in Caenorhabditis elegans

    PubMed Central

    Brenner, John L.; Jasiewicz, Kristen L.; Fahley, Alisha F.; Kemp, Benedict J.; Abbott, Allison L.

    2010-01-01

    Summary MicroRNAs (miRNAs) are small, non-coding RNAs that regulate the translation and/or the stability of their mRNA targets. Previous work showed that for most miRNA genes of C. elegans, single gene knockouts did not result in detectable mutant phenotypes [1]. This may be due, in part, to functional redundancy between miRNAs. However, in most cases, worms carrying deletions of all members of a miRNA family do not display strong mutant phenotypes [2]. They may function together with unrelated miRNAs or with non-miRNA genes in regulatory networks, possibly to ensure the robustness of developmental mechanisms. To test this, we examined worms lacking individual miRNAs in genetically sensitized backgrounds. These include genetic backgrounds with reduced processing and activity of all miRNAs or with reduced activity of a wide array of regulatory pathways [3]. Using these two approaches, mutant phenotypes were identified for 25 out of 31 miRNAs included in this analysis. Our findings describe biological roles for individual miRNAs and suggest that use of sensitized genetic backgrounds provides an efficient approach for miRNA functional analysis. PMID:20579881

  17. The Diagnostic Value of Capillary Refill Time for Detecting Serious Illness in Children: A Systematic Review and Meta-Analysis

    PubMed Central

    Fleming, Susannah; Gill, Peter; Jones, Caroline; Taylor, James A.; Van den Bruel, Ann; Heneghan, Carl; Roberts, Nia; Thompson, Matthew

    2015-01-01

    Importance Capillary refill time (CRT) is widely recommended as part of the routine assessment of unwell children. Objective To determine the diagnostic value of capillary refill time for a range of serious outcomes in children. Methods We searched Medline, Embase and CINAHL from inception to June 2014. We included studies that measured both capillary refill time and a relevant clinical outcome such as mortality, dehydration, meningitis, or other serious illnesses in children aged up to 18 years of age. We screened 1,265 references, of which 24 papers were included in this review. Where sufficient studies were available, we conducted meta-analysis and constructed hierarchical summary ROC curves. Results Meta-analysis on the relationship between capillary refill time and mortality resulted in sensitivity of 34.6% (95% CI 23.9 to 47.1%), specificity 92.3% (88.6 to 94.8%), positive likelihood ratio 4.49 (3.06 to 6.57), and negative likelihood ratio 0.71 (0.60 to 0.84). Studies of children attending Emergency Departments with vomiting and diarrhea showed that capillary refill time had specificity of 89 to 94% for identifying 5% dehydration, but sensitivity ranged from 0 to 94%. This level of heterogeneity precluded formal meta-analysis of this outcome. Meta-analysis was not possible for other outcomes due to insufficient data, but we found consistently high specificity for a range of outcomes including meningitis, sepsis, admission to hospital, hypoxia, severity of illness and dengue. Conclusions Our results show that capillary refill time is a specific sign, indicating that it can be used as a “red-flag”: children with prolonged capillary refill time have a four-fold risk of dying compared to children with normal capillary refill time. The low sensitivity means that a normal capillary refill time should not reassure clinicians. PMID:26375953

  18. Neutrophil/lymphocyte ratio and platelet/lymphocyte ratio in mood disorders: A meta-analysis.

    PubMed

    Mazza, Mario Gennaro; Lucchi, Sara; Tringali, Agnese Grazia Maria; Rossetti, Aurora; Botti, Eugenia Rossana; Clerici, Massimo

    2018-06-08

    The immune and inflammatory system is involved in the etiology of mood disorders. Neutrophil/lymphocyte ratio (NLR), platelet/lymphocyte ratio (PLR) and monocyte/lymphocyte ratio (MLR) are inexpensive and reproducible biomarkers of inflammation. This is the first meta-analysis exploring the role of NLR and PLR in mood disorders. We identified 11 studies according to our inclusion criteria from the main electronic databases. Meta-analyses were carried out generating pooled standardized mean differences (SMDs) between index subjects and healthy controls (HC). Heterogeneity was estimated. Relevant sensitivity and meta-regression analyses were conducted. Subjects with bipolar disorder (BD) had higher NLR and PLR as compared with HC (respectively SMD = 0.672; p < 0.001; I² = 82.4% and SMD = 0.425; p = 0.048; I² = 86.53%). Heterogeneity-based sensitivity analyses confirmed these findings. Subgroup analysis evidenced an influence of bipolar phase on the overall estimate, with studies including subjects in manic and any bipolar phase showing a significantly higher NLR and PLR as compared with HC, whereas the effect was not significant among studies including only euthymic bipolar subjects. Meta-regression showed that age and sex influenced the relationship between BD and NLR but not the relationship between BD and PLR. Meta-analysis was not carried out for MLR because our search identified only one study when comparing BD to HC, and only one study when comparing MDD to HC. Subjects with major depressive disorder (MDD) had higher NLR as compared with HC (SMD = 0.670; p = 0.028; I² = 89.931%). Heterogeneity-based sensitivity analyses and meta-regression confirmed these findings. Our meta-analysis supports the hypothesis that an inflammatory activation occurs in mood disorders and that NLR and PLR may be useful to detect this activation. More research comparing NLR, PLR and MLR between different bipolar phases and between BD and MDD is needed. Copyright © 2018 Elsevier Inc. All rights reserved.
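
    The pooled SMDs reported above come from a random-effects model; a minimal sketch of the standard DerSimonian-Laird estimator is shown below with made-up per-study effect sizes and variances, purely to illustrate the computation.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling of study effect sizes (e.g., SMDs) with the
    DerSimonian-Laird estimator of between-study variance tau^2."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w_fixed = 1.0 / v
    y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
    q = np.sum(w_fixed * (y - y_fixed) ** 2)      # Cochran's Q
    df = len(y) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)
    w = 1.0 / (v + tau2)                          # random-effects weights
    pooled = np.sum(w * y) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study SMDs (e.g., NLR in BD vs HC) and their variances
smds = [0.55, 0.80, 0.40, 0.95, 0.62]
variances = [0.04, 0.06, 0.03, 0.08, 0.05]
print(dersimonian_laird(smds, variances))
```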

  19. Instructional Psychology 1976 - 1981,

    DTIC Science & Technology

    1982-06-01

    business it is to carry out applied work in the design of instructional content and delivery. These organizations include specialized divisions of... "learning disabilities" label: An experimental analysis. Contemporary Educational Psychology, 1977, 2, 292-297. Allington, R. L. Sensitivity to

  20. Cost analysis of school-based intermittent screening and treatment of malaria in Kenya

    PubMed Central

    2011-01-01

    Background The control of malaria in schools is receiving increasing attention, but there remains currently no consensus as to the optimal intervention strategy. This paper analyses the costs of intermittent screening and treatment (IST) of malaria in schools, implemented as part of a cluster-randomized controlled trial on the Kenyan coast. Methods Financial and economic costs were estimated using an ingredients approach whereby all resources required in the delivery of IST are quantified and valued. Sensitivity analysis was conducted to investigate how programme variation affects costs and to identify potential cost savings in the future implementation of IST. Results The estimated financial cost of IST per child screened is US$ 6.61 (economic cost US$ 6.24). Key contributors to cost were salary costs (36%) and malaria rapid diagnostic tests (RDT) (22%). Almost half (47%) of the intervention cost comprises redeployment of existing resources including health worker time and use of hospital vehicles. Sensitivity analysis identified changes to intervention delivery that can reduce programme costs by 40%, including use of alternative RDTs and removal of supervised treatment. Cost-effectiveness is also likely to be highly sensitive to the proportion of children found to be RDT-positive. Conclusion In the current context, school-based IST is a relatively expensive malaria intervention, but reducing the complexity of delivery can result in considerable savings in the cost of intervention. (Costs are reported in US$ 2010). PMID:21933376

  1. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
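
    The Critical Factors Tool itself is not described in detail here, so the sketch below only illustrates the general idea with a hypothetical Monte Carlo table: for each dispersed input, compare the requirement pass rate between runs above and below that input's median and rank inputs by the difference. The variable names and the toy requirement are assumptions, not Orion data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Monte Carlo table: dispersed inputs and a pass/fail flag
n = 5000
inputs = {
    "mass_offset": rng.normal(0, 1, n),
    "thrust_dispersion": rng.normal(0, 1, n),
    "wind_speed": rng.normal(0, 1, n),
}
# Toy requirement: touchdown distance driven mostly by thrust and wind
distance = (1.0 + 0.8 * inputs["thrust_dispersion"]
            + 0.5 * inputs["wind_speed"] + rng.normal(0, 0.3, n))
passed = distance < 2.0

def success_probability_sensitivity(x, passed):
    """Difference in requirement pass rate between runs with the input above
    vs. below its median: a crude screen for influential factors."""
    above = x > np.median(x)
    return passed[above].mean() - passed[~above].mean()

for name, x in inputs.items():
    print(name, round(success_probability_sensitivity(x, passed), 3))
```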

  2. The diagnostic accuracy of magnetic resonance venography in the detection of deep venous thrombosis: a systematic review and meta-analysis.

    PubMed

    Abdalla, G; Fawzi Matuk, R; Venugopal, V; Verde, F; Magnuson, T H; Schweitzer, M A; Steele, K E

    2015-08-01

    To search the literature for further evidence on the use of magnetic resonance venography (MRV) in the detection of suspected deep vein thrombosis (DVT) and to re-evaluate the accuracy of MRV for this indication. PubMed, EMBASE, Scopus, Cochrane, and Web of Science were searched. Study quality and the risk of bias were evaluated using QUADAS-2. A random effects meta-analysis, including subgroup and sensitivity analyses, was performed. The search yielded 23 observational studies, all from academic centres. Sixteen articles were included in the meta-analysis. The summary estimates for MRV as a non-invasive diagnostic tool revealed a sensitivity of 93% (95% confidence interval [CI]: 89% to 95%) and specificity of 96% (95% CI: 94% to 97%). The heterogeneity of the studies was high; inconsistency (I²) for sensitivity and specificity was 80.7% and 77.9%, respectively. The studies investigating the use of MRV in the detection of suspected DVT did not offer further evidence to support the replacement of ultrasound with MRV as the first-line investigation. However, MRV may offer an alternative tool for the detection/diagnosis of DVT in patients for whom ultrasound is inadequate or not feasible (such as obese patients). Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  3. High-resolution melting (HRM) re-analysis of a polyposis patients cohort reveals previously undetected heterozygous and mosaic APC gene mutations.

    PubMed

    Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J

    2015-06-01

    Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and sensitivity of the screening method, including sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants were previously missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients, by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte and polyp derived DNA.

  4. A Content Analysis of E-mail Communication between Patients and Their Providers: Patients Get the Message

    PubMed Central

    White, Casey B.; Moyer, Cheryl A.; Stern, David T.; Katz, Steven J.

    2004-01-01

    Objective: E-mail use in the clinical setting has been slow to diffuse for several reasons, including providers' concerns about patients' inappropriate and inefficient use of the technology. This study examined the content of a random sample of patient–physician e-mail messages to determine the validity of those concerns. Design: A qualitative analysis of patient–physician e-mail messages was performed. Measurements: A total of 3,007 patient–physician e-mail messages were collected over 11 months as part of a randomized, controlled trial of a triage-based e-mail system in two primary care centers (including 98 physicians); 10% of messages were randomly selected for review. Messages were coded across such domains as message type, number of requests per e-mail, inclusion of sensitive content, necessity of a physician response, and message tone. Results: The majority (82.8%) of messages addressed a single issue. The most common message types included information updates to the physicians (41.4%), prescription renewals (24.2%), health questions (13.2%), questions about test results (10.9%), referrals (8.8%), “other” (including thank yous, apologies) (8.8%), appointments (5.4%), requests for non-health-related information (4.8%), and billing questions (0.3%). Overall, messages were concise, formal, and medically relevant. Very few (5.1%) included sensitive content, and none included urgent messages. Less than half (43.2%) required a physician response. Conclusion: A triage-based e-mail system promoted e-mail exchanges appropriate for primary care. Most patients adhered to guidelines aimed at focusing content, limiting the number of requests per message, and avoiding urgent requests or highly sensitive content. Thus, physicians' concerns about the content of patients' e-mails may be unwarranted. PMID:15064295

  5. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement

    PubMed Central

    Yang, Bo; Hu, Di; Wu, Lei

    2016-01-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is used to sense and deliver the physical signals of acceleration and air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for acceleration or air flow sensing, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation. All of the above frequencies fall in a reasonable modal distribution and are well separated from interference modes. The input-output analysis of the new hair sensor shows that the scale factor for acceleration is 12.35 Hz/g, the scale factor for angular velocity is 0.404 nm/deg/s and the sensitivity to air flow is 1.075 Hz/(m/s)², which verifies the multifunctional sensing characteristics of the hair sensor. In addition, structural optimization of the hair post is used to improve the sensitivity to air flow rate and acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity to air flow and a II-shaped hair post can increase the sensitivity to acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme of the resonant transducer can largely eliminate the influence of temperature on measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post significantly improves the efficiency of signal transmission. In summary, the structure of the new hair sensor is shown to be feasible by comprehensive simulation and analysis. PMID:27399716

  6. Nursing students' understanding of factors influencing ethical sensitivity: A qualitative study.

    PubMed

    Borhani, Fariba; Abbaszadeh, Abbas; Mohsenpour, Mohaddeseh

    2013-07-01

    Ethical sensitivity is considered a component of the professional competency of nurses. Its effects on improving nurses' ethical performance and the therapeutic relationship between nurses and patients have been reported. However, very few studies have evaluated ethical sensitivity. Since no previous Iranian research has been conducted in this regard, the present study aimed to explore nursing students' understanding of the factors influencing ethical sensitivity. This qualitative study was performed in Kerman, Iran, during 2009. It used semi-structured individual interviews with eight MSc nursing students to assess their viewpoints, and also included two focus groups. Purposive sampling was continued until data saturation. Data were analyzed using manifest content analysis. The students' understanding of factors influencing ethical sensitivity was summarized in five main themes: individual and spiritual characteristics, education, mutual understanding, internal and external controls, and experience of an immoral act. The findings of this study create a unique framework for the sensitization of nurses in professional practice. In human resource management, these factors can be applied to reinforce positive aspects and reduce negative ones; in education, they can inform the setting of educational objectives; and in research, they can guide the design of studies based on this framework and the development of related tools. It is noteworthy that the presented classification was influenced by the students themselves and was described by them as a kind of learning activity.

  7. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
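
    A minimal sketch of the regression metamodeling step: simulate a probabilistic sensitivity analysis sample, regress the outcome on the standardized inputs, and read the intercept as the base-case estimate and the coefficients as importance measures. The parameter names and effect sizes below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PSA sample: standardized input parameters and a model outcome
n = 10_000
names = ["cure_rate", "relapse_rate", "drug_cost"]
params = rng.normal(size=(n, 3))
net_benefit = (5.0 + 2.0 * params[:, 0] - 1.2 * params[:, 1]
               - 0.4 * params[:, 2] + rng.normal(scale=0.5, size=n))

# Regress the outcome on the standardized inputs; the intercept approximates
# the base-case outcome and the coefficients rank parameter importance.
X = np.column_stack([np.ones(n), params])
coef, *_ = np.linalg.lstsq(X, net_benefit, rcond=None)
print("base case ~", round(coef[0], 2))
for name, b in zip(names, coef[1:]):
    print(name, round(b, 2))
```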

  8. MUSiC—An Automated Scan for Deviations between Data and Monte Carlo Simulation

    NASA Astrophysics Data System (ADS)

    Meyer, Arnd

    2010-02-01

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  9. MUSiC - An Automated Scan for Deviations between Data and Monte Carlo Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Arnd

    2010-02-10

    A model independent analysis approach is presented, systematically scanning the data for deviations from the standard model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of event generators. The approach is sensitive to a variety of models of new physics, including those not yet thought of.

  10. A case study of the sensitivity of forecast skill to data and data analysis techniques

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Atlas, R.; Halem, M.; Susskind, J.

    1983-01-01

    A series of experiments have been conducted to examine the sensitivity of forecast skill to various data and data analysis techniques for the 0000 GMT case of January 21, 1979. These include the individual components of the FGGE observing system, the temperatures obtained with different satellite retrieval methods, and the method of vertical interpolation between the mandatory pressure analysis levels and the model sigma levels. It is found that NESS TIROS-N infrared retrievals seriously degrade a rawinsonde-only analysis over land, resulting in a poorer forecast over North America. Less degradation in the 72-hr forecast skill at sea level and some improvement at 500 mb is noted, relative to the control with TIROS-N retrievals produced with a physical inversion method which utilizes a 6-hr forecast first guess. NESS VTPR oceanic retrievals lead to an improved forecast over North America when added to the control.

  11. Uncertainty analysis and global sensitivity analysis of techno-economic assessments for biodiesel production.

    PubMed

    Tang, Zhang-Chun; Zhenzhou, Lu; Zhiwen, Liu; Ningcong, Xiao

    2015-01-01

    There are various uncertain parameters in the techno-economic assessments (TEAs) of biodiesel production, including capital cost, interest rate, feedstock price, maintenance rate, biodiesel conversion efficiency, glycerol price and operating cost. However, fewer studies focus on the influence of these parameters on TEAs. This paper investigated the effects of these parameters on the life cycle cost (LCC) and the unit cost (UC) in the TEAs of biodiesel production. The results show that LCC and UC exhibit variations when involving uncertain parameters. Based on the uncertainty analysis, three global sensitivity analysis (GSA) methods are utilized to quantify the contribution of an individual uncertain parameter to LCC and UC. The GSA results reveal that the feedstock price and the interest rate produce considerable effects on the TEAs. These results can provide a useful guide for entrepreneurs when they plan plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
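
    As a rough sketch of a variance-based global sensitivity measure, the example below estimates Var(E[LCC | X_i]) / Var(LCC) by binning each input and averaging the output within bins, on a made-up cost model; the specific GSA methods used in the paper are not reproduced, and the parameter names and weights are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def first_order_index(x, y, bins=20):
    """Crude first-order sensitivity index Var(E[Y|X_i]) / Var(Y),
    estimated by binning X_i on its quantiles and averaging Y within bins."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

# Hypothetical normalized TEA inputs and a life-cycle-cost response
n = 20_000
feedstock_price = rng.uniform(0, 1, n)
interest_rate = rng.uniform(0, 1, n)
maintenance = rng.uniform(0, 1, n)
lcc = (3.0 * feedstock_price + 1.5 * interest_rate
       + 0.2 * maintenance + rng.normal(0, 0.1, n))

for name, x in [("feedstock_price", feedstock_price),
                ("interest_rate", interest_rate),
                ("maintenance", maintenance)]:
    print(name, round(first_order_index(x, lcc), 3))
```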

  12. Value of high-sensitivity C-reactive protein assays in predicting atrial fibrillation recurrence: a systematic review and meta-analysis.

    PubMed

    Yo, Chia-Hung; Lee, Si-Huei; Chang, Shy-Shin; Lee, Matthew Chien-Hung; Lee, Chien-Chang

    2014-02-20

    We performed a systematic review and meta-analysis of studies on high-sensitivity C-reactive protein (hs-CRP) assays to see whether these tests are predictive of atrial fibrillation (AF) recurrence after cardioversion. Systematic review and meta-analysis. PubMed, EMBASE and Cochrane databases as well as a hand search of the reference lists in the retrieved articles from inception to December 2013. This review selected observational studies in which the measurements of serum CRP were used to predict AF recurrence. An hs-CRP assay was defined as any CRP test capable of measuring serum CRP to below 0.6 mg/dL. We summarised test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic curves and bivariate random effects models. Meta-regression analysis was performed to explore the source of heterogeneity. We included nine qualifying studies comprising a total of 347 patients with AF recurrence and 335 controls. A CRP level higher than the optimal cut-off point was an independent predictor of AF recurrence after cardioversion (summary adjusted OR: 3.33; 95% CI 2.10 to 5.28). The estimated pooled sensitivity and specificity for hs-CRP was 71.0% (95% CI 63% to 78%) and 72.0% (61% to 81%), respectively. Most studies used a CRP cut-off point of 1.9 mg/L to predict long-term AF recurrence (77% sensitivity, 65% specificity), and 3 mg/L to predict short-term AF recurrence (73% sensitivity, 71% specificity). hs-CRP assays are moderately accurate in predicting AF recurrence after successful cardioversion.

  13. Diagnostic accuracy of the aspartate aminotransferase-to-platelet ratio index for the prediction of hepatitis B-related fibrosis: a leading meta-analysis

    PubMed Central

    2012-01-01

    Background The aspartate aminotransferase-to-platelet ratio index (APRI), a tool with limited expense and widespread availability, is a promising noninvasive alternative to liver biopsy for detecting hepatic fibrosis. The objective of this study was to systematically review the performance of the APRI in predicting significant fibrosis and cirrhosis in hepatitis B-related fibrosis. Methods Areas under summary receiver operating characteristic curves (AUROC), sensitivity and specificity were used to examine the accuracy of the APRI for the diagnosis of hepatitis B-related significant fibrosis and cirrhosis. Heterogeneity was explored using meta-regression. Results Nine studies were included in this meta-analysis (n = 1,798). Prevalence of significant fibrosis and cirrhosis were 53.1% and 13.5%, respectively. The summary AUCs of the APRI for significant fibrosis and cirrhosis were 0.79 and 0.75, respectively. For significant fibrosis, an APRI threshold of 0.5 was 84% sensitive and 41% specific. At the cutoff of 1.5, the summary sensitivity and specificity were 49% and 84%, respectively. For cirrhosis, an APRI threshold of 1.0-1.5 was 54% sensitive and 78% specific. At the cutoff of 2.0, the summary sensitivity and specificity were 28% and 87%, respectively. Meta-regression analysis indicated that the APRI accuracy for both significant fibrosis and cirrhosis was affected by the histological classification system used, but was not influenced by the interval between biopsy and APRI testing or by blind biopsy. Conclusion Our meta-analysis suggests that the APRI shows limited value in identifying hepatitis B-related significant fibrosis and cirrhosis. PMID:22333407
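
    The index itself is a one-line calculation; the sketch below computes it and applies one possible reading of the cut-offs discussed above (0.5 and 1.5 for significant fibrosis, 2.0 for cirrhosis). The formula (AST divided by its upper limit of normal, divided by the platelet count in 10^9/L, times 100) is the standard published definition; the patient values are hypothetical.

        # APRI = (AST / AST upper limit of normal) / platelet count (10^9/L) * 100
        def apri(ast_iu_l, ast_uln_iu_l, platelets_10e9_l):
            return (ast_iu_l / ast_uln_iu_l) / platelets_10e9_l * 100.0

        def interpret(score):
            # One possible reading of the thresholds discussed in the meta-analysis.
            fibrosis = ("suggested (APRI >= 1.5)" if score >= 1.5
                        else "possible (APRI 0.5-1.5)" if score >= 0.5
                        else "less likely (APRI < 0.5)")
            cirrhosis = "suggested (APRI >= 2.0)" if score >= 2.0 else "less likely (APRI < 2.0)"
            return f"significant fibrosis {fibrosis}; cirrhosis {cirrhosis}"

        # Hypothetical patient values, for illustration only.
        score = apri(ast_iu_l=80, ast_uln_iu_l=40, platelets_10e9_l=120)
        print(round(score, 2), "->", interpret(score))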

  14. PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.

    PubMed

    Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael

    2015-07-28

    Obesity is a worldwide epidemic, and the high prevalence of type 2 diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk that a young population will progress to DM2 and CVD. The aims were to estimate the MetS prevalence in young Mexicans, and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the México City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each component of MetS as an indicator of the presence of MetS itself. Five statistics from the sensitivity analysis were calculated for each MetS component and for the other parameters included: sensitivity, specificity, positive predictive value (precision), negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% in women; 90.0% in men); blood pressure presented the highest specificity in women (97.7%) and glucose in men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, reaching 75% or more in four of them. Differences by sex were detected in the averages of all MetS components among young people without alterations. Young Mexicans are highly prone to acquiring MetS: 71% have between one and five altered MetS parameters, and 13.4% of them have MetS. Of the five MetS components, waist circumference presented the highest sensitivity as a predictor of MetS, while triglycerides is the best parameter if a single factor is to be used as the sole predictor of MetS in the young Mexican population; triglycerides is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
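
    The five statistics named above are the usual 2x2 confusion-matrix measures. The sketch below computes them for a single component treated as a stand-alone predictor of MetS; the counts are invented for illustration (the abstract does not tabulate the underlying data).

        # The five statistics from the abstract, computed from a 2x2 table that
        # cross-tabulates "component altered?" against "MetS present?".
        def diagnostic_stats(tp, fp, fn, tn):
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv (precision)": tp / (tp + fp),
                "npv": tn / (tn + fn),
                "accuracy": (tp + tn) / (tp + fp + fn + tn),
            }

        # Hypothetical counts for waist circumference as a predictor of MetS
        # (not the study's data, which are not tabulated in the abstract).
        for name, value in diagnostic_stats(tp=780, fp=1900, fn=30, tn=3353).items():
            print(f"{name:16s} {value:.3f}")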

  15. An analysis of sensitivity of CLIMEX parameters in mapping species potential distribution and the broad-scale changes observed with minor variations in parameters values: an investigation using open-field Solanum lycopersicum and Neoleucinodes elegantalis as an example

    NASA Astrophysics Data System (ADS)

    da Silva, Ricardo Siqueira; Kumar, Lalit; Shabani, Farzin; Picanço, Marcelo Coutinho

    2018-04-01

    A sensitivity analysis can categorize levels of parameter influence on a model's output. Identifying the most influential parameters facilitates establishing the best parameter values for models, with useful implications for species modelling of crops and associated insect pests. The aim of this study was to quantify the response of species models through a CLIMEX sensitivity analysis. Using open-field Solanum lycopersicum and Neoleucinodes elegantalis distribution records and 17 fitting parameters, including growth and stress parameters, model performance was compared by altering one parameter value at a time relative to the best-fit parameter values. Parameters found to have a greater effect on the model results are termed "sensitive". Using the two species, we show that even when upward or downward parameter alterations produce a major change in the Ecoclimatic Index, the effect on a species depends on the choice of suitability categories and the region being modelled. Two parameters showed the greatest sensitivity, depending on the suitability categories of each species in the study. The results clarify which climatic factors had the greater impact on both species' modelled distributions, in terms of suitability categories and areas, when parameter values were perturbed above or below the best-fit values. Thus, sensitivity analyses can provide additional information for end users and improve management by identifying the climatic variables to which the model is most sensitive.
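
    CLIMEX itself is a closed package, so the sketch below shows only the generic one-at-a-time (OAT) pattern of the comparison described above: perturb each fitted parameter up and down, re-run the model, and rank parameters by the change they induce in the output (a stand-in for the Ecoclimatic Index). The surrogate model, parameter names and values are all hypothetical.

        # Generic one-at-a-time (OAT) sensitivity sketch: perturb each fitted
        # parameter up/down by a fixed fraction and rank by the induced change
        # in the model output (a stand-in for CLIMEX's Ecoclimatic Index).
        best_fit = {"DV0": 8.0, "DV1": 20.0, "DV2": 28.0, "SM0": 0.1, "SM1": 0.5}  # illustrative

        def ecoclimatic_index(p):
            """Toy surrogate for the real model, used only to make the sketch runnable."""
            growth = max(0.0, p["DV2"] - p["DV1"]) * (1.0 - p["SM0"])
            stress = 0.5 * p["DV0"] + 2.0 * p["SM1"]
            return max(0.0, growth - stress)

        def oat_ranking(model, params, rel_step=0.10):
            base = model(params)
            effects = {}
            for name in params:
                changes = []
                for sign in (-1.0, +1.0):
                    trial = dict(params)
                    trial[name] = params[name] * (1.0 + sign * rel_step)
                    changes.append(abs(model(trial) - base))
                effects[name] = max(changes)
            return sorted(effects.items(), key=lambda kv: kv[1], reverse=True)

        for name, effect in oat_ranking(ecoclimatic_index, best_fit):
            print(f"{name}: max |change in EI| = {effect:.2f}")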

  16. Recent Advances in Multidisciplinary Analysis and Optimization, part 3

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: aircraft design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  17. Recent Advances in Multidisciplinary Analysis and Optimization, part 2

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  18. Recent Advances in Multidisciplinary Analysis and Optimization, part 1

    NASA Technical Reports Server (NTRS)

    Barthelemy, Jean-Francois M. (Editor)

    1989-01-01

    This three-part document contains a collection of technical papers presented at the Second NASA/Air Force Symposium on Recent Advances in Multidisciplinary Analysis and Optimization, held September 28-30, 1988 in Hampton, Virginia. The topics covered include: helicopter design, aeroelastic tailoring, control of aeroelastic structures, dynamics and control of flexible structures, structural design, design of large engineering systems, application of artificial intelligence, shape optimization, software development and implementation, and sensitivity analysis.

  19. GLSENS: A Generalized Extension of LSENS Including Global Reactions and Added Sensitivity Analysis for the Perfectly Stirred Reactor

    NASA Technical Reports Server (NTRS)

    Bittker, David A.

    1996-01-01

    A generalized version of the NASA Lewis general kinetics code, LSENS, is described. The new code allows the use of global reactions as well as molecular processes in a chemical mechanism. The code also incorporates the capability of performing sensitivity analysis calculations for a perfectly stirred reactor rapidly and conveniently at the same time that the main kinetics calculations are being done. The GLSENS code has been extensively tested and has been found to be accurate and efficient. Nine example problems are presented and complete user instructions are given for the new capabilities. This report is to be used in conjunction with the documentation for the original LSENS code.

  20. Static and dynamic structural-sensitivity derivative calculations in the finite-element-based Engineering Analysis Language (EAL) system

    NASA Technical Reports Server (NTRS)

    Camarda, C. J.; Adelman, H. M.

    1984-01-01

    The implementation of static and dynamic structural-sensitivity derivative calculations in a general purpose, finite-element computer program denoted the Engineering Analysis Language (EAL) System is described. Derivatives are calculated with respect to structural parameters, specifically, member sectional properties including thicknesses, cross-sectional areas, and moments of inertia. Derivatives are obtained for displacements, stresses, vibration frequencies and mode shapes, and buckling loads and mode shapes. Three methods for calculating derivatives are implemented (analytical, semianalytical, and finite differences), and comparisons of computer time and accuracy are made. Results are presented for four examples: a swept wing, a box beam, a stiffened cylinder with a cutout, and a space radiometer-antenna truss.
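
    The analytical-versus-finite-difference comparison described above can be illustrated outside any finite-element code on the simplest structural response with a closed-form answer. The sketch below is only such a toy comparison (the semianalytical route is not shown): an axial bar with tip displacement u = P*L/(E*A), its analytical derivative with respect to the cross-sectional area, and a central finite-difference estimate; all numerical values are illustrative.

        # Sensitivity derivative of a structural response with respect to a sectional
        # property: tip displacement of an axial bar, u = P*L/(E*A), differentiated
        # with respect to the cross-sectional area A.  Analytical vs. central finite
        # difference, in the spirit of the comparison described in the abstract.
        P, L, E, A = 10.0e3, 2.0, 70.0e9, 4.0e-4   # N, m, Pa, m^2 (illustrative values)

        def displacement(area):
            return P * L / (E * area)

        analytical = -P * L / (E * A**2)            # du/dA in closed form

        h = 1.0e-6 * A                              # small relative perturbation of the design variable
        finite_diff = (displacement(A + h) - displacement(A - h)) / (2.0 * h)

        print(f"analytical     : {analytical:.6e} m per m^2")
        print(f"central FD     : {finite_diff:.6e} m per m^2")
        print(f"relative error : {abs(finite_diff - analytical) / abs(analytical):.2e}")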

  1. Surface Acoustic Wave Monitor for Deposition and Analysis of Ultra-Thin Films

    NASA Technical Reports Server (NTRS)

    Hines, Jacqueline H. (Inventor)

    2015-01-01

    A surface acoustic wave (SAW) based thin film deposition monitor device and system for monitoring the deposition of ultra-thin films and nanomaterials and the analysis thereof is characterized by acoustic wave device embodiments that include differential delay line device designs, and which can optionally have integral reference devices fabricated on the same substrate as the sensing device, or on a separate device in thermal contact with the film monitoring/analysis device, in order to provide inherently temperature compensated measurements. These deposition monitor and analysis devices can include inherent temperature compensation, higher sensitivity to surface interactions than quartz crystal microbalance (QCM) devices, and the ability to operate at extreme temperatures.

  2. Robust Optimization and Sensitivity Analysis with Multi-Objective Genetic Algorithms: Single- and Multi-Disciplinary Applications

    DTIC Science & Technology

    2007-01-01

    multi-disciplinary optimization with uncertainty. Robust optimization and sensitivity analysis is usually used when an optimization model has... formulation is introduced in Section 2.3. We briefly discuss several definitions used in the sensitivity analysis in Section 2.4. Following in... 2.5. 2.4 SENSITIVITY ANALYSIS In this section, we discuss several definitions used in Chapter 5 for Multi-Objective Sensitivity Analysis. Inner

  3. Clinical evaluation and validation of laboratory methods for the diagnosis of Bordetella pertussis infection: Culture, polymerase chain reaction (PCR) and anti-pertussis toxin IgG serology (IgG-PT).

    PubMed

    Lee, Adria D; Cassiday, Pamela K; Pawloski, Lucia C; Tatti, Kathleen M; Martin, Monte D; Briere, Elizabeth C; Tondella, M Lucia; Martin, Stacey W

    2018-01-01

    The appropriate use of clinically accurate diagnostic tests is essential for the detection of pertussis, a poorly controlled vaccine-preventable disease. The purpose of this study was to estimate the sensitivity and specificity of different diagnostic criteria including culture, multi-target polymerase chain reaction (PCR), anti-pertussis toxin IgG (IgG-PT) serology, and the use of a clinical case definition. An additional objective was to describe the optimal timing of specimen collection for the various tests. Clinical specimens were collected from patients with cough illness at seven locations across the United States between 2007 and 2011. Nasopharyngeal and blood specimens were collected from each patient during the enrollment visit. Patients who had been coughing for ≤ 2 weeks were asked to return in 2-4 weeks for collection of a second, convalescent blood specimen. Sensitivity and specificity of each diagnostic test were estimated using three methods: pertussis culture as the "gold standard," composite reference standard (CRS) analysis, and latent class analysis (LCA). Overall, 868 patients were enrolled and 13.6% were B. pertussis positive by at least one diagnostic test. In a sample of 545 participants with non-missing data on all four diagnostic criteria, culture was 64.0% sensitive, PCR was 90.6% sensitive, and both were 100% specific by LCA. CRS and LCA methods increased the sensitivity estimates for convalescent serology and the clinical case definition over the culture-based estimates. Culture and PCR were most sensitive when performed during the first two weeks of cough; serology was optimally sensitive after the second week of cough. Timing of specimen collection in relation to onset of illness should be considered when ordering diagnostic tests for pertussis. Consideration should be given to including IgG-PT serology as a confirmatory test in the Council of State and Territorial Epidemiologists (CSTE) case definition for pertussis.

  4. Application of the High Resolution Melting analysis for genetic mapping of Sequence Tagged Site markers in narrow-leafed lupin (Lupinus angustifolius L.).

    PubMed

    Kamel, Katarzyna A; Kroc, Magdalena; Święcicki, Wojciech

    2015-01-01

    Sequence tagged site (STS) markers are valuable tools for genetic and physical mapping that can be successfully used in comparative analyses among related species. Current challenges for molecular markers genotyping in plants include the lack of fast, sensitive and inexpensive methods suitable for sequence variant detection. In contrast, high resolution melting (HRM) is a simple and high-throughput assay, which has been widely applied in sequence polymorphism identification as well as in the studies of genetic variability and genotyping. The present study is the first attempt to use the HRM analysis to genotype STS markers in narrow-leafed lupin (Lupinus angustifolius L.). The sensitivity and utility of this method was confirmed by the sequence polymorphism detection based on melting curve profiles in the parental genotypes and progeny of the narrow-leafed lupin mapping population. Application of different approaches, including amplicon size and a simulated heterozygote analysis, has allowed for successful genetic mapping of 16 new STS markers in the narrow-leafed lupin genome.

  5. Clinical color vision testing and correlation with visual function.

    PubMed

    Zhao, Jiawei; Davé, Sarita B; Wang, Jiangxia; Subramanian, Prem S

    2015-09-01

    To determine if Hardy-Rand-Rittler (H-R-R) and Ishihara testing are accurate estimates of color vision in subjects with acquired visual dysfunction. Assessment of diagnostic tools. Twenty-two subjects with optic neuropathy (aged 18-65) and 18 control subjects were recruited prospectively from an outpatient clinic. Individuals with visual acuity (VA) <20/200 or with congenital color blindness were excluded. All subjects underwent a comprehensive eye examination including VA, color vision, and contrast sensitivity testing. Color vision was assessed using H-R-R and Ishihara plates and Farnsworth D-15 (D-15) discs. D-15 is the accepted standard for detecting and classifying color vision deficits. Contrast sensitivity was measured using Pelli-Robson contrast sensitivity charts. No relationship was found between H-R-R and D-15 scores (P = .477). H-R-R score and contrast sensitivity were positively correlated (P = .003). On multivariate analysis, contrast sensitivity (β = 8.61, P < .001) and VA (β = 2.01, P = .022) both showed association with H-R-R scores. Similar to H-R-R, Ishihara score did not correlate with D-15 score (P = .973), but on multivariate analysis was related to contrast sensitivity (β = 8.69, P < .001). H-R-R and Ishihara scores had an equivalent relationship with contrast sensitivity (P = .069). Neither H-R-R nor Ishihara testing appears to assess color identification in patients with optic neuropathy. Both H-R-R and Ishihara testing are correlated with contrast sensitivity, and these tests may be useful clinical surrogates for contrast sensitivity testing. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. The Effect of Regular Exercise on Insulin Sensitivity in Type 2 Diabetes Mellitus: A Systematic Review and Meta-Analysis

    PubMed Central

    Hackett, Daniel A.; Baker, Michael K.

    2016-01-01

    The purpose of this study was to examine the effect of regular exercise training on insulin sensitivity in adults with type 2 diabetes mellitus (T2DM) using the pooled data available from randomised controlled trials. In addition, we sought to determine whether short-term periods of physical inactivity diminish the exercise-induced improvement in insulin sensitivity. Eligible trials included exercise interventions that involved ≥3 exercise sessions, and reported a dynamic measurement of insulin sensitivity. There was a significant pooled effect size (ES) for the effect of exercise on insulin sensitivity (ES, –0.588; 95% confidence interval [CI], –0.816 to –0.359; P<0.001). Of the 14 studies included for meta-analyses, nine studies reported the time of data collection from the last exercise bout. There was a significant improvement in insulin sensitivity in favour of exercise versus control between 48 and 72 hours after exercise (ES, –0.702; 95% CI, –1.392 to –0.012; P=0.046); and this persisted when insulin sensitivity was measured more than 72 hours after the last exercise session (ES, –0.890; 95% CI, –1.675 to –0.105; P=0.026). Regular exercise has a significant benefit on insulin sensitivity in adults with T2DM and this may persist beyond 72 hours after the last exercise session. PMID:27535644
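
    For readers unfamiliar with the pooling step behind a summary effect size such as the one reported above, the sketch below shows a standard DerSimonian-Laird random-effects calculation (inverse-variance weights plus an estimate of the between-study variance). The per-study effect sizes and variances are invented for illustration and are not the trials in this meta-analysis.

        # Random-effects (DerSimonian-Laird) pooling of standardized effect sizes,
        # the kind of calculation behind a summary ES with 95% CI.  The per-study
        # values below are invented for illustration.
        import numpy as np

        es  = np.array([-0.45, -0.80, -0.30, -0.95, -0.55])   # per-study effect sizes (hypothetical)
        var = np.array([0.040, 0.090, 0.060, 0.120, 0.050])   # per-study sampling variances

        w_fixed = 1.0 / var
        y_fixed = np.sum(w_fixed * es) / np.sum(w_fixed)

        # DerSimonian-Laird estimate of the between-study variance tau^2
        Q    = np.sum(w_fixed * (es - y_fixed) ** 2)
        C    = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
        tau2 = max(0.0, (Q - (len(es) - 1)) / C)

        w_re   = 1.0 / (var + tau2)
        pooled = np.sum(w_re * es) / np.sum(w_re)
        se     = np.sqrt(1.0 / np.sum(w_re))
        print(f"pooled ES = {pooled:.3f}  (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")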

  7. ANIMAL DNA IN PCR REAGENTS PLAGUES ANCIENT DNA RESEARCH

    EPA Science Inventory

    Ancient DNA analysis is becoming widespread. These studies use polymerase chain reaction (PCR) to amplify minute quantities of heavily damaged template. Unusual steps are taken to achieve the sensitivity necessary to detect ancient DNA, including high-cycle PCR amplification targ...

  8. Radiation imaging apparatus

    DOEpatents

    Anger, H.O.; Martin, D.C.; Lampton, M.L.

    1983-07-26

    A radiation imaging system using a charge multiplier and a position sensitive anode in the form of periodically arranged sets of interconnected anode regions for detecting the position of the centroid of a charge cloud arriving thereat from the charge multiplier. Various forms of improved position sensitive anodes having single plane electrode connections are disclosed. Various analog and digital signal processing systems are disclosed, including systems which use the fast response of microchannel plates, anodes and preamps to perform scintillation pulse height analysis digitally. 15 figs.

  9. Head-To-Head Comparison Between High- and Standard-b-Value DWI for Detecting Prostate Cancer: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Kim, Sang Youn; Cho, Jeong Yeon; Kim, Seung Hyup

    2018-01-01

    The purpose of this study was to perform a head-to-head comparison between high-b-value (> 1000 s/mm²) and standard-b-value (800-1000 s/mm²) DWI regarding diagnostic performance in the detection of prostate cancer. The MEDLINE and EMBASE databases were searched up to April 1, 2017. The analysis included diagnostic accuracy studies in which high- and standard-b-value DWI were used for prostate cancer detection with histopathologic examination as the reference standard. Methodologic quality was assessed with the revised Quality Assessment of Diagnostic Accuracy Studies tool. Sensitivity and specificity of all studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Meta-regression and multiple-subgroup analyses were performed to compare the diagnostic performances of high- and standard-b-value DWI. Eleven studies (789 patients) were included. High-b-value DWI had greater pooled sensitivity (0.80 [95% CI, 0.70-0.87]) (p = 0.03) and specificity (0.92 [95% CI, 0.87-0.95]) (p = 0.01) than standard-b-value DWI (sensitivity, 0.78 [95% CI, 0.66-0.86]; specificity, 0.87 [95% CI, 0.77-0.93]) (p < 0.01). Multiple-subgroup analyses showed that specificity was consistently higher for high- than for standard-b-value DWI (p ≤ 0.05). Sensitivity was significantly higher for high- than for standard-b-value DWI only in the following subgroups: peripheral zone only, transition zone only, multiparametric protocol (DWI and T2-weighted imaging), visual assessment of DW images, and per-lesion analysis (p ≤ 0.04). In a head-to-head comparison, high-b-value DWI had significantly better sensitivity and specificity for detection of prostate cancer than did standard-b-value DWI. Multiple-subgroup analyses showed that specificity was consistently superior for high-b-value DWI.

  10. Present and future of prophylactic antibiotics for severe acute pancreatitis

    PubMed Central

    Jiang, Kun; Huang, Wei; Yang, Xiao-Nan; Xia, Qing

    2012-01-01

    AIM: To investigate the role of prophylactic antibiotics in reducing the mortality of severe acute pancreatitis (SAP) patients, a role that has been increasingly questioned by randomized controlled trials (RCTs) and meta-analyses. METHODS: An updated meta-analysis was performed. RCTs comparing prophylactic antibiotics for SAP with control or placebo were included for meta-analysis. The mortality outcomes were pooled for estimation, and re-pooled estimation was performed by the sensitivity analysis of an ideal large-scale RCT. RESULTS: The 11 currently available RCTs were included. Subgroup analysis showed that there was a significant reduction of the mortality rate in the period before 2000, but no significant reduction in the period from 2000 onward [risk ratio (RR) = 1.01, P = 0.98]. The funnel plot indicated that there might be apparent publication bias in the period before 2000. Sensitivity analysis showed that the RR of the mortality rate ranged from 0.77 to 1.00 with a relatively narrow confidence interval (P < 0.05). However, the low lower limit of the number-needed-to-treat range (7-5096 patients) implied that certain SAP patients could still potentially avoid death through antibiotic prophylaxis. CONCLUSION: Current evidence does not support prophylactic antibiotics as a routine treatment for SAP, but the potentially benefiting sub-population requires further investigation. PMID:22294832

  11. Meta-analysis of treatment with rabbit and horse antithymocyte globulin for aplastic anemia.

    PubMed

    Hayakawa, Jin; Kanda, Junya; Akahoshi, Yu; Harada, Naonori; Kameda, Kazuaki; Ugai, Tomotaka; Wada, Hidenori; Ishihara, Yuko; Kawamura, Koji; Sakamoto, Kana; Ashizawa, Masahiro; Sato, Miki; Terasako-Saito, Kiriko; Kimura, Shun-Ichi; Kikuchi, Misato; Yamazaki, Rie; Kako, Shinichi; Kanda, Yoshinobu

    2017-05-01

    Aplastic anemia patients who received rabbit antithymocyte globulin exhibited response and survival rates inferior to those who received horse antithymocyte globulin in several studies. Therefore, we conducted a meta-analysis to compare rabbit and horse antithymocyte globulin as immunosuppressive therapy for aplastic anemia. We searched online databases for studies that compared antithymocyte globulin regimens as first-line treatment for aplastic anemia, including both randomized and non-randomized controlled trials. The early mortality rate at 3 months and overall response rate at 6 months were evaluated. Thirteen studies were included in the analysis. The risk ratio (RR) of early mortality for rabbit vs. horse antithymocyte globulin was 1.33 [95% confidence interval (CI) 0.69-2.57; P = 0.39], with significant heterogeneity. A sensitivity analysis suggested higher early mortality rate in patients who received rabbit antithymocyte globulin. The overall response rate was significantly higher in patients who received horse antithymocyte globulin (RR 1.27; 95% CI 1.05-1.54; P = 0.015). In conclusion, in aplastic anemia patients treated with ATG, early mortality rate was not significantly different in patients receiving horse or rabbit ATG, although a sensitivity analysis showed higher early mortality in the rabbit ATG group. Horse ATG was associated with significantly higher response rate than rabbit ATG.

  12. 78 FR 2398 - Motorola Mobility LLC and Google Inc.; Analysis of Proposed Consent Order to Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-11

    ... responsible for making sure that your comment does not include any sensitive health information, like medical records or other individually identifiable health information. In addition, do not include any ``[t]rade... overnight service. Visit the Commission Web site at http://www.ftc.gov to read this Notice and the news...

  13. Detection of somatic mutations by high-resolution DNA melting (HRM) analysis in multiple cancers.

    PubMed

    Gonzalez-Bosquet, Jesus; Calcei, Jacob; Wei, Jun S; Garcia-Closas, Montserrat; Sherman, Mark E; Hewitt, Stephen; Vockley, Joseph; Lissowska, Jolanta; Yang, Hannah P; Khan, Javed; Chanock, Stephen

    2011-01-17

    Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples.

  14. Detection of Somatic Mutations by High-Resolution DNA Melting (HRM) Analysis in Multiple Cancers

    PubMed Central

    Gonzalez-Bosquet, Jesus; Calcei, Jacob; Wei, Jun S.; Garcia-Closas, Montserrat; Sherman, Mark E.; Hewitt, Stephen; Vockley, Joseph; Lissowska, Jolanta; Yang, Hannah P.; Khan, Javed; Chanock, Stephen

    2011-01-01

    Identification of somatic mutations in cancer is a major goal for understanding and monitoring the events related to cancer initiation and progression. High resolution melting (HRM) curve analysis represents a fast, post-PCR high-throughput method for scanning somatic sequence alterations in target genes. The aim of this study was to assess the sensitivity and specificity of HRM analysis for tumor mutation screening in a range of tumor samples, which included 216 frozen pediatric small rounded blue-cell tumors as well as 180 paraffin-embedded tumors from breast, endometrial and ovarian cancers (60 of each). HRM analysis was performed in exons of the following candidate genes known to harbor established commonly observed mutations: PIK3CA, ERBB2, KRAS, TP53, EGFR, BRAF, GATA3, and FGFR3. Bi-directional sequencing analysis was used to determine the accuracy of the HRM analysis. For the 39 mutations observed in frozen samples, the sensitivity and specificity of HRM analysis were 97% and 87%, respectively. There were 67 mutation/variants in the paraffin-embedded samples, and the sensitivity and specificity for the HRM analysis were 88% and 80%, respectively. Paraffin-embedded samples require higher quantity of purified DNA for high performance. In summary, HRM analysis is a promising moderate-throughput screening test for mutations among known candidate genomic regions. Although the overall accuracy appears to be better in frozen specimens, somatic alterations were detected in DNA extracted from paraffin-embedded samples. PMID:21264207

  15. Data challenges in estimating the capacity value of solar photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years’ historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  16. Data Challenges in Estimating the Capacity Value of Solar Photovoltaics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Our analysis also suggests that multiple years' historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  17. Data challenges in estimating the capacity value of solar photovoltaics

    DOE PAGES

    Gami, Dhruv; Sioshansi, Ramteen; Denholm, Paul

    2017-04-30

    We examine the robustness of solar capacity-value estimates to three important data issues. The first is the sensitivity to using hourly averaged as opposed to subhourly solar-insolation data. The second is the sensitivity to errors in recording and interpreting load data. The third is the sensitivity to using modeled as opposed to measured solar-insolation data. We demonstrate that capacity-value estimates of solar are sensitive to all three of these factors, with potentially large errors in the capacity-value estimate in a particular year. If multiple years of data are available, the biases introduced by using hourly averaged solar-insolation can be smoothed out. Multiple years of data will not necessarily address the other data-related issues that we examine. Our analysis calls into question the accuracy of a number of solar capacity-value estimates relying exclusively on modeled solar-insolation data that are reported in the literature (including our own previous works). Lastly, our analysis also suggests that multiple years’ historical data should be used for remunerating solar generators for their capacity value in organized wholesale electricity markets.

  18. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method

    USGS Publications Warehouse

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-01-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
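
    A rough sketch of the surrogate idea behind the Effective Quadratures approach described above: fit a least-squares polynomial to a small number of model runs, then estimate first-order Sobol' indices cheaply on the surrogate. The "wave dissipation" function, parameter names, ranges and sample sizes below are toy stand-ins, not the COAWST vegetation module or the study's actual quadrature rules.

        # Surrogate-based sensitivity sketch: fit a least-squares quadratic polynomial
        # to a handful of "model runs", then estimate first-order Sobol' indices from
        # the cheap surrogate.  The dissipation function and parameter ranges are toy
        # stand-ins for the COAWST vegetation module.
        import numpy as np
        from itertools import combinations_with_replacement

        rng = np.random.default_rng(1)
        names = ["plant_density", "plant_height", "plant_diameter"]

        def model(x):                      # toy stand-in for wave dissipation
            dens, height, diam = x.T
            return dens * height**1.5 + 0.1 * diam + 0.05 * dens * diam

        def design_matrix(x):              # columns: 1, x_i, x_i*x_j (full quadratic basis)
            cols = [np.ones(len(x))] + [x[:, i] for i in range(x.shape[1])]
            cols += [x[:, i] * x[:, j]
                     for i, j in combinations_with_replacement(range(x.shape[1]), 2)]
            return np.column_stack(cols)

        # "Expensive" runs used to fit the surrogate by least squares
        x_train = rng.random((30, 3))
        coef, *_ = np.linalg.lstsq(design_matrix(x_train), model(x_train), rcond=None)

        def surrogate(x):
            return design_matrix(x) @ coef

        # Coarse first-order Sobol' indices: variance of the conditional mean of the
        # surrogate output, conditioning on one input at a time (20 slices per input).
        x_mc = rng.random((200000, 3))
        y = surrogate(x_mc)
        var_y = y.var()
        for i, name in enumerate(names):
            bins = np.digitize(x_mc[:, i], np.linspace(0, 1, 21))
            cond_means = np.array([y[bins == b].mean() for b in range(1, 21)])
            print(f"{name:15s} S1 ~ {cond_means.var() / var_y:.2f}")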

  19. Analysis of Composite Panels Subjected to Thermo-Mechanical Loads

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Peters, Jeanne M.

    1999-01-01

    The results of a detailed study of the effect of cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to combined temperature gradient through-the-thickness combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear deformation, Sanders-Budiansky-type shell theory with the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior included. A mixed formulation is used with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters, as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.

  20. Highly sensitive image-derived indices of water-stressed plants using hyperspectral imaging in SWIR and histogram analysis

    PubMed Central

    Kim, David M.; Zhang, Hairong; Zhou, Haiying; Du, Tommy; Wu, Qian; Mockler, Todd C.; Berezin, Mikhail Y.

    2015-01-01

    The optical signature of leaves is an important monitoring and predictive parameter for a variety of biotic and abiotic stresses, including drought. Such signatures derived from spectroscopic measurements provide vegetation indices – a quantitative method for assessing plant health. However, the commonly used metrics suffer from low sensitivity. Relatively small changes in water content in moderately stressed plants demand high-contrast imaging to distinguish affected plants. We present a new approach in deriving sensitive indices using hyperspectral imaging in a short-wave infrared range from 800 nm to 1600 nm. Our method, based on high spectral resolution (1.56 nm) instrumentation and image processing algorithms (quantitative histogram analysis), enables us to distinguish a moderate water stress equivalent of 20% relative water content (RWC). The identified image-derived indices 15XX nm/14XX nm (i.e. 1529 nm/1416 nm) were superior to common vegetation indices, such as WBI, MSI, and NDWI, with significantly better sensitivity, enabling early diagnostics of plant health. PMID:26531782

  1. SCARE: A post-processor program to MSC/NASTRAN for the reliability analysis of structural ceramic components

    NASA Technical Reports Server (NTRS)

    Gyekenyesi, J. P.

    1985-01-01

    A computer program was developed for calculating the statistical fast fracture reliability and failure probability of ceramic components. The program includes the two-parameter Weibull material fracture strength distribution model, using the principle of independent action for polyaxial stress states and Batdorf's shear-sensitive as well as shear-insensitive crack theories, all for volume distributed flaws in macroscopically isotropic solids. Both penny-shaped cracks and Griffith cracks are included in the Batdorf shear-sensitive crack response calculations, using Griffith's maximum tensile stress or critical coplanar strain energy release rate criteria to predict mixed mode fracture. Weibull material parameters can also be calculated from modulus of rupture bar tests, using the least squares method with known specimen geometry and fracture data. The reliability prediction analysis uses MSC/NASTRAN stress, temperature and volume output, obtained from the use of three-dimensional, quadratic, isoparametric, or axisymmetric finite elements. The statistical fast fracture theories employed, along with selected input and output formats and options, are summarized. An example problem to demonstrate various features of the program is included.
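
    As a minimal illustration of the volume-flaw calculation described above, the sketch below evaluates the two-parameter Weibull, weakest-link failure probability under the principle of independent action (PIA) for a few elements with assumed volumes and principal stresses. SCARE performs the analogous integral element-by-element on MSC/NASTRAN output and adds the Batdorf shear-sensitive crack models, which are not reproduced here; all material values are illustrative.

        # Two-parameter Weibull / weakest-link fast-fracture reliability sketch using
        # the principle of independent action (PIA) over tensile principal stresses.
        import numpy as np

        m       = 10.0       # Weibull modulus (illustrative ceramic)
        sigma_0 = 800.0      # Weibull scale parameter, MPa * mm^(3/m) basis (illustrative)

        # Per-element volumes (mm^3) and principal stresses (MPa) -- stand-ins for FE output.
        volumes    = np.array([120.0, 80.0, 150.0])
        principals = np.array([[350.0, 40.0, -20.0],
                               [300.0, 10.0,   5.0],
                               [380.0, 60.0,  30.0]])

        tensile   = np.clip(principals, 0.0, None)          # only tensile stresses contribute (PIA)
        risk      = np.sum(volumes[:, None] * (tensile / sigma_0) ** m)
        p_survive = np.exp(-risk)                            # weakest-link survival probability
        print(f"failure probability ~ {1.0 - p_survive:.3f}")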

  2. Harnessing Connectivity in a Large-Scale Small-Molecule Sensitivity Dataset.

    PubMed

    Seashore-Ludlow, Brinton; Rees, Matthew G; Cheah, Jaime H; Cokol, Murat; Price, Edmund V; Coletti, Matthew E; Jones, Victor; Bodycombe, Nicole E; Soule, Christian K; Gould, Joshua; Alexander, Benjamin; Li, Ava; Montgomery, Philip; Wawer, Mathias J; Kuru, Nurdan; Kotz, Joanne D; Hon, C Suk-Yee; Munoz, Benito; Liefeld, Ted; Dančík, Vlado; Bittker, Joshua A; Palmer, Michelle; Bradner, James E; Shamji, Alykhan F; Clemons, Paul A; Schreiber, Stuart L

    2015-11-01

    Identifying genetic alterations that prime a cancer cell to respond to a particular therapeutic agent can facilitate the development of precision cancer medicines. Cancer cell-line (CCL) profiling of small-molecule sensitivity has emerged as an unbiased method to assess the relationships between genetic or cellular features of CCLs and small-molecule response. Here, we developed annotated cluster multidimensional enrichment analysis to explore the associations between groups of small molecules and groups of CCLs in a new, quantitative sensitivity dataset. This analysis reveals insights into small-molecule mechanisms of action, and genomic features that associate with CCL response to small-molecule treatment. We are able to recapitulate known relationships between FDA-approved therapies and cancer dependencies and to uncover new relationships, including for KRAS-mutant cancers and neuroblastoma. To enable the cancer community to explore these data, and to generate novel hypotheses, we created an updated version of the Cancer Therapeutic Response Portal (CTRP v2). We present the largest CCL sensitivity dataset yet available, and an analysis method integrating information from multiple CCLs and multiple small molecules to identify CCL response predictors robustly. We updated the CTRP to enable the cancer research community to leverage these data and analyses. ©2015 American Association for Cancer Research.

  3. Methylation-sensitive amplified polymorphism analysis of Verticillium wilt-stressed cotton (Gossypium).

    PubMed

    Wang, W; Zhang, M; Chen, H D; Cai, X X; Xu, M L; Lei, K Y; Niu, J H; Deng, L; Liu, J; Ge, Z J; Yu, S X; Wang, B H

    2016-10-06

    In this study, a methylation-sensitive amplification polymorphism analysis system was used to analyze DNA methylation level in three cotton accessions. Two disease-sensitive near-isogenic lines, PD94042 and IL41, and one disease-resistant Gossypium mustelinum accession were exposed to Verticillium wilt, to investigate molecular disease resistance mechanisms in cotton. We observed multiple different DNA methylation types across the three accessions following Verticillium wilt exposure. These included hypomethylation, hypermethylation, and other patterns. In general, the global DNA methylation level was significantly increased in the disease-resistant accession G. mustelinum following disease exposure. In contrast, there was no significant difference in the disease-sensitive accession PD94042, and a significant decrease was observed in IL41. Our results suggest that disease-resistant cotton might employ a mechanism to increase methylation level in response to disease stress. The differing methylation patterns, together with the increase in global DNA methylation level, might play important roles in tolerance to Verticillium wilt in cotton. Through cloning and analysis of differently methylated DNA sequences, we were also able to identify several genes that may contribute to disease resistance in cotton. Our results revealed the effect of DNA methylation on cotton disease resistance, and also identified genes that played important roles, which may shed light on the future cotton disease-resistant molecular breeding.

  4. Chapter 5: Modulation Excitation Spectroscopy with Phase-Sensitive Detection for Surface Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shulda, Sarah; Richards, Ryan M.

    Advancements in in situ spectroscopic techniques have led to significant progress being made in elucidating heterogeneous reaction mechanisms. The potential of these progressive methods is often limited only by the complexity of the system and noise in the data. Short-lived intermediates can be challenging, if not impossible, to identify with conventional spectra analysis means. Often equally difficult is separating signals that arise from active and inactive species. Modulation excitation spectroscopy combined with phase-sensitive detection analysis is a powerful tool for removing noise from the data while simultaneously revealing the underlying kinetics of the reaction. A stimulus is applied at a constant frequency to the reaction system, for example, a reactant cycled with an inert phase. Through mathematical manipulation of the data, any signal contributing to the overall spectra but not oscillating with the same frequency as the stimulus will be dampened or removed. With phase-sensitive detection, signals oscillating with the stimulus frequency but with various lag times are amplified providing valuable kinetic information. In this chapter, some examples are provided from the literature that have successfully used modulation excitation spectroscopy with phase-sensitive detection to uncover previously unobserved reaction intermediates and kinetics. Examples from a broad range of spectroscopic methods are included to provide perspective to the reader.
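
    The demodulation step can be sketched independently of any instrument: the time-resolved signal is multiplied by a reference sine at the stimulation frequency and a chosen demodulation phase, then averaged over whole periods, so components not oscillating at that frequency drop out. The synthetic signal below is an assumption used only to make the sketch runnable.

        # Phase-sensitive detection sketch: demodulate a time-resolved signal at the
        # stimulation frequency.  Components not oscillating at that frequency (the
        # static background and the noise) are suppressed; the responding species
        # survives with an amplitude that depends on the demodulation phase angle.
        import numpy as np

        period, n_per, n_periods = 60.0, 600, 10            # s, samples/period, averaged periods
        t = np.linspace(0.0, n_periods * period, n_per * n_periods, endpoint=False)
        omega = 2.0 * np.pi / period

        # Synthetic "intensity at one wavenumber": static background + a species that
        # responds to the modulated stimulus with a phase lag + measurement noise.
        rng = np.random.default_rng(2)
        signal = 5.0 + 0.30 * np.sin(omega * t - 0.6) + 0.2 * rng.standard_normal(t.size)

        def demodulate(a, k=1, phase=0.0):
            """A_k(phase) = (2/T) * integral of a(t) * sin(k*omega*t + phase) dt,
            approximated here as 2 * mean(...) over whole modulation periods."""
            return 2.0 * np.mean(a * np.sin(k * omega * t + phase))

        phases = np.deg2rad(np.arange(0, 360, 30))
        amps = np.array([demodulate(signal, phase=p) for p in phases])
        best = phases[np.argmax(np.abs(amps))]
        print(f"max demodulated amplitude {np.abs(amps).max():.3f} at phase {np.degrees(best):.0f} deg")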

  5. Review-of-systems questionnaire as a predictive tool for psychogenic nonepileptic seizures.

    PubMed

    Robles, Liliana; Chiang, Sharon; Haneef, Zulfi

    2015-04-01

    Patients with refractory epilepsy undergo video-electroencephalography for seizure characterization, among whom approximately 10-30% will be discharged with the diagnosis of psychogenic nonepileptic seizures (PNESs). Clinical PNES predictors have been described but in general are not sensitive or specific. We evaluated whether multiple complaints in a routine review-of-system (ROS) questionnaire could serve as a sensitive and specific marker of PNESs. We performed a retrospective analysis of a standardized ROS questionnaire completed by patients with definite PNESs and epileptic seizures (ESs) diagnosed in our adult epilepsy monitoring unit. A multivariate analysis of covariance (MANCOVA) was used to determine whether groups with PNES and ES differed with respect to the percentage of complaints in the ROS questionnaire. Tenfold cross-validation was used to evaluate the predictive error of a logistic regression classifier for PNES status based on the percentage of positive complaints in the ROS questionnaire. A total of 44 patients were included for analysis. Patients with PNESs had a significantly higher number of complaints in the ROS questionnaire compared to patients with epilepsy. A threshold of 17% positive complaints achieved a 78% specificity and 85% sensitivity for discriminating between PNESs and ESs. We conclude that the routine ROS questionnaire may be a sensitive and specific predictive tool for discriminating between PNESs and ESs. Published by Elsevier Inc.
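
    A minimal sketch of the kind of classifier described above: logistic regression on a single predictor (the percentage of positive ROS complaints) with stratified 10-fold cross-validation, here in scikit-learn on a synthetic 44-patient sample. The data, group means and spreads are invented; only the 17% threshold is taken from the abstract.

        # Sketch of the abstract's classifier: logistic regression on the percentage
        # of positive review-of-systems complaints, with stratified 10-fold
        # cross-validation.  The 44-patient dataset below is synthetic.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(3)
        n_pnes, n_es = 20, 24
        pct_positive = np.concatenate([
            rng.normal(25, 8, n_pnes),     # PNES patients: more ROS complaints (synthetic)
            rng.normal(12, 5, n_es),       # ES patients: fewer ROS complaints (synthetic)
        ]).clip(0, 100).reshape(-1, 1)
        is_pnes = np.array([1] * n_pnes + [0] * n_es)

        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, pct_positive, is_pnes,
                              cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0),
                              scoring="accuracy")
        print(f"10-fold CV accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")

        # Simple threshold rule from the abstract: >= 17% positive complaints -> PNES
        pred = (pct_positive.ravel() >= 17.0).astype(int)
        sens = np.mean(pred[is_pnes == 1] == 1)
        spec = np.mean(pred[is_pnes == 0] == 0)
        print(f"17% threshold on this synthetic sample: sensitivity {sens:.2f}, specificity {spec:.2f}")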

  6. Assessing sensitivity and specificity of the Manchester Triage System in the evaluation of acute coronary syndrome in adult patients in emergency care: a systematic review.

    PubMed

    Nishi, Fernanda Ayache; de Oliveira Motta Maia, Flávia; de Souza Santos, Itamar; de Almeida Lopes Monteiro da Cruz, Dina

    2017-06-01

    Triage is the first assessment and sorting process used to prioritize patients arriving in the emergency department (ED). As a triage tool, the Manchester Triage System (MTS) must have a high sensitivity to minimize the occurrence of under-triage, but must not compromise specificity to avoid the occurrence of overtriage. Sensitivity and specificity of the MTS can be calculated using the frequency of appropriately assigned clinical priority levels for patients presenting to the ED. However, although there are well established criteria for the prioritization of patients with suspected acute coronary syndrome (ACS), several studies have reported difficulties when evaluating patients with this condition. The objective of this review was to synthesize the best available evidence on assessing the sensitivity and specificity of the MTS for screening high-level priority adult patients presenting to the ED with ACS. The current review considered studies that evaluated the use of the MTS in the risk classification of adult patients in the ED. In this review, studies that investigated the priority level, as established by the MTS to screen patients under suspicion of ACS or the sensitivity and specificity of the MTS, for screening patients before the medical diagnosis of ACS were included. This review included both experimental and epidemiological study designs. The results were presented in a narrative synthesis. Six studies were appraised by the independent reviewers. All appraised studies enrolled a consecutive or random sample of patients and presented an overall moderate methodological quality, and all of them were included in this review. A total of 54,176 participants were included in the six studies. All studies were retrospective. Studies included in this review varied in content and data reporting. Only two studies reported sensitivity and specificity values or all the necessary data to calculate sensitivity and specificity. The remaining four studies presented either a sensitivity analysis or the number of true positives and false negatives. However, these four studies were conducted considering only data from patients diagnosed with ACS. Sensitivity values were relatively uniform among the studies: 0.70-0.80. A specificity of 0.59 was reported in the study including only patients with non-traumatic chest pain. On the other hand, in the study that included patients with any complaint, the specificity of MTS to screen patients with ACS was 0.97. The current review demonstrates that the MTS has a moderate sensitivity to evaluate patients with ACS. This may compromise time to treatment in the ED, an important variable in the prognosis of ACS. Atypical presentation of ACS, or high specificity, may also explain the moderate sensitivity demonstrated in this review. However, because of minimal data, it is not possible to confirm this hypothesis. It is difficult to determine the acceptable level of sensitivity or specificity to ensure that a certain triage system is safe.

  7. Dose-dependent testosterone sensitivity of the steroidal passport and GC-C-IRMS analysis in relation to the UGT2B17 deletion polymorphism.

    PubMed

    Strahm, Emmanuel; Mullen, Jenny E; Gårevik, Nina; Ericsson, Magnus; Schulze, Jenny J; Rane, Anders; Ekström, Lena

    2015-01-01

    The newly implemented Steroid Module of the Athlete Biological Passport has improved doping tests for steroids. A biomarker included in this passport is the urinary testosterone glucuronide to epitestosterone glucuronide (T/E) ratio, a ratio greatly affected by a deletion polymorphism in UGT2B17. Suspect urine doping tests are further analyzed with gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS) to determine the origin of the androgen. In this study, we investigated the sensitivity of the steroidal module and the IRMS analysis, in subjects administered with three doses of testosterone enanthate (500, 250, and 125 mg), in relation to the UGT2B17 polymorphism. All subjects carrying the UGT2B17 enzyme reached the traditionally used threshold, a T/E ratio of 4, after all three administered doses, whereas none of the subjects devoid of this enzyme reached a T/E of 4. On the other hand, using the athlete biological passport and IRMS analysis, all three doses could be detected to a high degree of sensitivity. The concentrations of all steroids included in the steroidal module were dose dependently increased, except for epitestosterone which decreased independent of dose. The decrease in epitestosterone was significantly associated with circulatory levels of testosterone post dose (rs =0.60 and p=0.007). In conclusion, these results demonstrate that administration of a single dose of 125-500 mg testosterone enanthate could be detected using the athlete biological passport, together with IRMS. Since IRMS is sensitive to testosterone doping independent of UGT2B17 genotype, also very small changes in the steroidal passport should be investigated with IRMS. Copyright © 2015 John Wiley & Sons, Ltd.

  8. Prediction of the severity of acute pancreatitis on admission by urinary trypsinogen activation peptide: A meta-analysis

    PubMed Central

    Huang, Wei; Altaf, Kiran; Jin, Tao; Xiong, Jun-Jie; Wen, Li; Javed, Muhammad A; Johnstone, Marianne; Xue, Ping; Halloran, Christopher M; Xia, Qing

    2013-01-01

    AIM: To undertake a meta-analysis on the value of urinary trypsinogen activation peptide (uTAP) in predicting severity of acute pancreatitis on admission. METHODS: Major databases including Medline, Embase, Science Citation Index Expanded and the Cochrane Central Register of Controlled Trials in the Cochrane Library were searched to identify all relevant studies from January 1990 to January 2013. Pooled sensitivity, specificity and the diagnostic odds ratios (DORs) with 95%CI were calculated for each study and were compared to other systems/biomarkers if mentioned within the same study. Summary receiver-operating curves were conducted and the area under the curve (AUC) was evaluated. RESULTS: In total, six studies of uTAP with a cut-off value of 35 nmol/L were included in this meta-analysis. Overall, the pooled sensitivity and specificity of uTAP for predicting severity of acute pancreatitis, at time of admission, was 71% and 75%, respectively (AUC = 0.83, DOR = 8.67, 95%CI: 3.70-20.33). When uTAP was compared with plasma C-reactive protein, the pooled sensitivity, specificity, AUC and DOR were 0.64 vs 0.67, 0.77 vs 0.75, 0.82 vs 0.79 and 6.27 vs 6.32, respectively. Similarly, the pooled sensitivity, specificity, AUC and DOR of uTAP vs Acute Physiology and Chronic Health Evaluation II within the first 48 h of admission were found to be 0.64 vs 0.69, 0.77 vs 0.61, 0.82 vs 0.73 and 6.27 vs 4.61, respectively. CONCLUSION: uTAP has the potential to act as a stratification marker on admission for differentiating disease severity of acute pancreatitis. PMID:23901239
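
    For a single study, the sensitivity, specificity, likelihood ratios and diagnostic odds ratio that feed a meta-analysis like this come straight from the 2x2 table of test result against outcome; the sketch below uses invented counts for a hypothetical uTAP study. The pooled values in the abstract combine such tables across the six studies (e.g., with a summary ROC model), which is not reproduced here.

        # Per-study diagnostic accuracy from a 2x2 table: uTAP >= 35 nmol/L vs.
        # severe acute pancreatitis.  Counts are invented for illustration; the
        # pooled estimates in the abstract combine such tables across six studies.
        def diagnostic_accuracy(tp, fp, fn, tn):
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            dor = (tp * tn) / (fp * fn)            # diagnostic odds ratio
            lr_pos = sens / (1.0 - spec)           # positive likelihood ratio
            lr_neg = (1.0 - sens) / spec           # negative likelihood ratio
            return sens, spec, dor, lr_pos, lr_neg

        sens, spec, dor, lrp, lrn = diagnostic_accuracy(tp=25, fp=20, fn=10, tn=60)
        print(f"sensitivity {sens:.2f}, specificity {spec:.2f}, DOR {dor:.1f}, "
              f"LR+ {lrp:.2f}, LR- {lrn:.2f}")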

  9. Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhua; Zeng, Haishan; Kalia, Sunil; Lui, Harvey

    2017-02-01

    Background: Raman spectroscopy is a non-invasive optical technique which can measure molecular vibrational modes within tissue. A large-scale clinical study (n = 518) has demonstrated that real-time Raman spectroscopy could distinguish malignant from benign skin lesions with good diagnostic accuracy; this was validated by a follow-up independent study (n = 127). Objective: Most of the previous diagnostic algorithms have typically been based on analyzing the full band of the Raman spectra, either in the fingerprint or high wavenumber regions. Our objective in this presentation is to explore wavenumber selection based analysis in Raman spectroscopy for skin cancer diagnosis. Methods: A wavenumber selection algorithm was implemented using variably-sized wavenumber windows, which were determined by the correlation coefficient between wavenumbers. Wavenumber windows were chosen based on accumulated frequency from leave-one-out cross-validated stepwise regression or the least absolute shrinkage and selection operator (LASSO). The diagnostic algorithms were then generated from the selected wavenumber windows using multivariate statistical analyses, including principal component and general discriminant analysis (PC-GDA) and partial least squares (PLS). A total cohort of 645 confirmed lesions from 573 patients, encompassing skin cancers, precancers and benign skin lesions, was included. Lesion measurements were divided into a training cohort (n = 518) and a testing cohort (n = 127) according to the measurement time. Results: The area under the receiver operating characteristic curve (ROC) improved from 0.861-0.891 to 0.891-0.911 and the diagnostic specificity for sensitivity levels of 0.99-0.90 increased respectively from 0.17-0.65 to 0.20-0.75 by selecting specific wavenumber windows for analysis. Conclusion: Wavenumber selection based analysis in Raman spectroscopy improves skin cancer diagnostic specificity at high sensitivity levels.

  10. The diagnostic performance of perfusion MRI for differentiating glioma recurrence from pseudoprogression: A meta-analysis.

    PubMed

    Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo

    2017-03-01

    The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the quality assessment of diagnostic accuracy studies (QUADAS-2). After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using the Deeks' funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
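
    For reference, the likelihood ratios and diagnostic odds ratio reported above are related to sensitivity and specificity by the identities below. Note that in a meta-analysis each quantity is typically pooled separately, so the pooled values need not satisfy these identities exactly.

```latex
\mathrm{LR}^{+} = \frac{\text{sensitivity}}{1-\text{specificity}}, \qquad
\mathrm{LR}^{-} = \frac{1-\text{sensitivity}}{\text{specificity}}, \qquad
\mathrm{DOR} = \frac{\mathrm{LR}^{+}}{\mathrm{LR}^{-}}
             = \frac{\text{sensitivity}\times\text{specificity}}
                    {(1-\text{sensitivity})\,(1-\text{specificity})}
```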

  11. SPS market analysis

    NASA Astrophysics Data System (ADS)

    Goff, H. C.

    1980-05-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  12. Real-time polymerase chain reaction for diagnosing infectious mononucleosis in pediatric patients: A systematic review and meta-analysis.

    PubMed

    Jiang, Sha-Yi; Yang, Jing-Wei; Shao, Jing-Bo; Liao, Xue-Lian; Lu, Zheng-Hua; Jiang, Hui

    2016-05-01

    In this meta-analysis, we evaluated the diagnostic role of Epstein-Barr virus deoxyribonucleic acid detection and quantitation in the serum of pediatric and young adult patients with infectious mononucleosis. The primary outcome of this meta-analysis was the sensitivity and specificity of Epstein-Barr virus (EBV) deoxyribonucleic acid (DNA) detection and quantitation using polymerase chain reaction (PCR). A systematic review and meta-analysis was performed by searching for articles that were published through September 24, 2014 in the following databases: Medline, Cochrane, EMBASE, and Google Scholar. The following keywords were used for the search: "Epstein-Barr virus," "infectious mononucleosis," "children/young adults/infant/pediatric," and "polymerase chain reaction or PCR." Three studies were included in this analysis. We found that for detection by PCR, the pooled sensitivity for detecting EBV DNA was 77% (95%CI, 66-86%) and the pooled specificity was 98% (95%CI, 93-100%). Our findings indicate that this PCR-based assay has high specificity and good sensitivity for detecting EBV DNA, indicating it may be useful for identifying patients with infectious mononucleosis. This assay may also be helpful to identify young athletic patients or highly physically active pediatric patients who are at risk for splenic rupture due to acute infectious mononucleosis. © 2015 Wiley Periodicals, Inc.

  13. "Doing Trust".

    PubMed

    Guillemin, Marilys; Gillam, Lynn; Barnard, Emma; Stewart, Paul; Walker, Hannah; Rosenthal, Doreen

    2016-10-01

    Trust in research is important but not well understood. We examine the ways that researchers understand and practice trust in research. Using a qualitative research design, we interviewed 19 researchers, including eight researchers involved in Australian Indigenous research. The project design focused on sensitive research including research involving vulnerable participants and sensitive research topics. Thematic analysis was used to analyze the data. We found that researchers' understanding of trust integrates both the conceptual and concrete; researchers understand trust in terms of how it relates to other similar concepts and how they practice trust in research. This provides a sound basis to better understand trust in research, as well as identifying mechanisms to regain trust when it is lost in research.

  14. Comparisons of Fatty Acid Taste Detection Thresholds in People Who Are Lean vs. Overweight or Obese: A Systematic Review and Meta-Analysis.

    PubMed

    Tucker, Robin M; Kaiser, Kathryn A; Parman, Mariel A; George, Brandon J; Allison, David B; Mattes, Richard D

    2017-01-01

    Given the increasing evidence that supports the ability of humans to taste non-esterified fatty acids (NEFA), recent studies have sought to determine if relationships exist between oral sensitivity to NEFA (measured as thresholds), food intake and obesity. Published findings suggest there is either no association or an inverse association. A systematic review and meta-analysis was conducted to determine if differences in fatty acid taste sensitivity or intensity ratings exist between individuals who are lean or obese. A total of 7 studies that reported measurement of taste sensations to non-esterified fatty acids by psychophysical methods (e.g., studies using model systems rather than foods, with detection thresholds measured by a 3-alternative forced-choice ascending methodology) were included in the meta-analysis. Two other studies that measured intensity ratings to graded suprathreshold NEFA concentrations were evaluated qualitatively. No significant differences in fatty acid taste thresholds or intensity were observed. Thus, differences in fatty acid taste sensitivity do not appear to precede or result from obesity.

  15. Comparative analysis of methicillin-sensitive and resistant Staphylococcus aureus exposed to emodin based on proteomic profiling.

    PubMed

    Ji, Xiaoyu; Liu, Xiaoqiang; Peng, Yuanxia; Zhan, Ruoting; Xu, Hui; Ge, Xijin

    2017-12-09

    Emodin has strong antibacterial activity, including against methicillin-resistant Staphylococcus aureus (MRSA). However, the mechanism by which emodin induces growth inhibition against MRSA remains unclear. In this study, the isobaric tags for relative and absolute quantitation (iTRAQ) proteomics approach was used to investigate the modes of action of emodin on a MRSA isolate and methicillin-sensitive S. aureus ATCC29213 (MSSA). Proteomic analysis showed that expression levels of 145 and 122 proteins were changed significantly in MRSA and MSSA, respectively, after emodin treatment. Comparative analysis of the functions of differentially expressed proteins between the two strains was performed via the bioinformatics tool blast2go and the STRING database. Proteins related to pyruvate pathway imbalance induction, protein synthesis inhibition, and DNA synthesis suppression were found in both methicillin-sensitive and resistant strains. Moreover, interference with proteins related to a membrane damage mechanism was also observed in MRSA. Our findings indicate that emodin is a potential antibacterial agent targeting MRSA via multiple mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Phase sensitive spectral domain interferometry for label free biomolecular interaction analysis and biosensing applications

    NASA Astrophysics Data System (ADS)

    Chirvi, Sajal

    Biomolecular interaction analysis (BIA) plays a vital role in a wide variety of fields, which include biomedical research, the pharmaceutical industry, medical diagnostics, and the biotechnology industry. Study and quantification of interactions between natural biomolecules (proteins, enzymes, DNA) and artificially synthesized molecules (drugs) is routinely done using various labeled and label-free BIA techniques. Labeled BIA techniques (chemiluminescence, fluorescence, radioactive) suffer from steric hindrance of labels at the interaction site, difficulty of attaching labels to molecules, and higher cost and time of assay development. Label free techniques with real time detection capabilities have demonstrated advantages over traditional labeled techniques. The gold standard for label free BIA is surface plasmon resonance (SPR), which detects and quantifies the changes in refractive index of the ligand-analyte complex molecule with high sensitivity. Although SPR is a highly sensitive BIA technique, it requires custom-made sensor chips and is not well suited for highly multiplexed BIA required in high throughput applications. Moreover, implementation of SPR on various biosensing platforms is limited. In this research work, spectral domain phase sensitive interferometry (SD-PSI) has been developed for label-free BIA and biosensing applications to address limitations of SPR and other label free techniques. One distinct advantage of SD-PSI compared to other label-free techniques is that it does not require use of custom fabricated biosensor substrates. Laboratory grade, off-the-shelf glass or plastic substrates of suitable thickness with proper surface functionalization are used as biosensor chips. SD-PSI is tested on four separate BIA and biosensing platforms, which include a multi-well plate, flow cell, fiber probe with integrated optics and fiber tip biosensor. A sensitivity of 33 ng/ml for anti-IgG is achieved using the multi-well platform. The principle of coherence multiplexing for multi-channel label-free biosensing applications is introduced. Simultaneous interrogation of multiple biosensors is achievable with a single spectral domain phase sensitive interferometer by coding the individual sensograms in coherence-multiplexed channels. Experimental results demonstrating multiplexed quantitative biomolecular interaction analysis of antibodies binding to antigen coated functionalized biosensor chip surfaces on different platforms are presented.

  17. Three-year financial analysis of pharmacy services at an independent community pharmacy.

    PubMed

    Doucette, William R; McDonough, Randal P; Mormann, Megan M; Vaschevici, Renata; Urmie, Julie M; Patterson, Brandon J

    2012-01-01

    To assess the financial performance of pharmacy services including vaccinations, cholesterol screenings, medication therapy management (MTM), adherence management services, employee health fairs, and compounding services provided by an independent community pharmacy. Three years (2008-10) of pharmacy records were examined to determine the total revenue and costs of each service. Costs included products, materials, labor, marketing, overhead, equipment, reference materials, and fax/phone usage. Costs were allocated to each service using accepted principles (e.g., time for labor). Depending on the service, the total revenue was calculated by multiplying the frequency of the service by the revenue per patient or by adding the total revenue received. A sensitivity analysis was conducted for the adherence management services to account for average dispensing net profit. 7 of 11 pharmacy services showed a net profit each year. Those services include influenza and herpes zoster immunization services, MTM, two adherence management services, employee health fairs, and prescription compounding services. The services that realized a net loss included the pneumococcal immunization service, cholesterol screenings, and two adherence management services. The sensitivity analysis showed that all adherence services had a net gain when average dispensing net profit was included. Most of the pharmacist services had an annual positive net gain. It seems likely that these services can be sustained. Further cost management, such as reducing labor costs, could improve the viability of services with net losses. However, even with greater efficiency, external factors such as competition and reimbursement challenge the sustainability of these services.
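
    The sensitivity analysis described above amounts to recomputing a service's net profit with and without the average dispensing net profit credited to it. A minimal sketch of that calculation is shown below; all figures are hypothetical and are not the pharmacy's actual data.

```python
# Illustrative sketch of the adherence-service sensitivity analysis described
# above (hypothetical numbers, not the pharmacy's actual figures).
def net_profit(revenue, costs, dispensing_net_profit=0.0):
    """Service net profit; optionally credit dispensing profit tied to the service."""
    return revenue + dispensing_net_profit - costs

revenue = 12_000.0    # annual service revenue (hypothetical)
costs = 14_500.0      # labor, materials, overhead, marketing (hypothetical)
dispensing = 3_200.0  # average dispensing net profit attributable to the service

print("base case:       ", net_profit(revenue, costs))              # net loss
print("with dispensing: ", net_profit(revenue, costs, dispensing))  # net gain
```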

  18. Branched-chain amino acids for people with hepatic encephalopathy.

    PubMed

    Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik

    2015-02-25

    Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index on 2 October 2014. We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. Based on the combined Cochrane Hepato-Biliary Group score, we classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. 
We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
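
    The review pools risk ratios with a random-effects model. As a generic illustration of that procedure (not the review's data or software), the sketch below computes a DerSimonian-Laird random-effects pooled risk ratio from hypothetical per-trial event counts.

```python
import numpy as np

# Hypothetical per-trial events/totals in the BCAA and control arms;
# illustrative only, not the trials included in this review.
bcaa_events = np.array([10, 7, 14, 5])
bcaa_total  = np.array([50, 40, 60, 30])
ctrl_events = np.array([16, 9, 20, 8])
ctrl_total  = np.array([48, 42, 58, 31])

# Per-trial log risk ratios and their variances.
rr = (bcaa_events / bcaa_total) / (ctrl_events / ctrl_total)
log_rr = np.log(rr)
var = (1/bcaa_events - 1/bcaa_total) + (1/ctrl_events - 1/ctrl_total)

# DerSimonian-Laird estimate of the between-trial variance (tau^2).
w_fixed = 1 / var
mu_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
Q = np.sum(w_fixed * (log_rr - mu_fixed) ** 2)
C = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - (len(rr) - 1)) / C)

# Random-effects pooled risk ratio with a 95% CI.
w_re = 1 / (var + tau2)
mu_re = np.sum(w_re * log_rr) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
ci = np.exp([mu_re - 1.96 * se_re, mu_re + 1.96 * se_re])
print(f"pooled RR {np.exp(mu_re):.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), tau^2 {tau2:.3f}")
```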

  19. Branched-chain amino acids for people with hepatic encephalopathy.

    PubMed

    Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik

    2017-05-18

    Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Science Citation Index Expanded and Conference Proceedings Citation Index - Science, and LILACS (May 2017). We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. 
We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.

  20. Branched-chain amino acids for people with hepatic encephalopathy.

    PubMed

    Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik

    2015-09-17

    Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index (August 2015). We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. 
We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.

  1. Smoking increases the risk of diabetic foot amputation: A meta-analysis.

    PubMed

    Liu, Min; Zhang, Wei; Yan, Zhaoli; Yuan, Xiangzhen

    2018-02-01

    Accumulating evidence suggests that smoking is associated with diabetic foot amputation. However, the currently available results are inconsistent and controversial. Therefore, the present study performed a meta-analysis to systematically review the association between smoking and diabetic foot amputation and to investigate the risk factors of diabetic foot amputation. Public databases, including PubMed and Embase, were searched prior to 29th February 2016. Heterogeneity was assessed using Cochran's Q statistic and the I² statistic, and odds ratio (OR) and 95% confidence interval (CI) were calculated and pooled appropriately. Sensitivity analysis was performed to evaluate the stability of the results. In addition, Egger's test was applied to assess any potential publication bias. Based on the research, a total of eight studies, including five cohort studies and three case-control studies, were included. The data indicated that smoking significantly increased the risk of diabetic foot amputation (OR=1.65; 95% CI, 1.09-2.50; P<0.0001) compared with non-smoking. Sensitivity analysis demonstrated that the pooled analysis did not vary substantially following the exclusion of any one study. Additionally, there was no evidence of publication bias (Egger's test, t=0.1378; P=0.8958). Furthermore, no significant difference was observed between the minor and major amputation groups in patients who smoked (OR=0.79; 95% CI, 0.24-2.58). The results of the present meta-analysis suggested that smoking is a notable risk factor for diabetic foot amputation. Smoking cessation appears to reduce the risk of diabetic foot amputation.
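
    The abstract applies Egger's test for publication bias. The snippet below sketches the standard form of that test (regressing the standardized effect on precision and checking the intercept) on hypothetical study-level data; it assumes statsmodels is available and is not the analysis performed in the study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study log odds ratios and standard errors
# (not the studies from this meta-analysis).
log_or = np.array([0.55, 0.40, 0.80, 0.30, 0.65, 0.50, 0.20, 0.70])
se     = np.array([0.30, 0.25, 0.45, 0.20, 0.35, 0.28, 0.18, 0.40])

# Egger's regression: standardized effect (log OR / SE) against precision (1 / SE).
# An intercept significantly different from zero suggests funnel plot asymmetry.
y = log_or / se
X = sm.add_constant(1 / se)
fit = sm.OLS(y, X).fit()
print(f"Egger intercept {fit.params[0]:.3f}, p = {fit.pvalues[0]:.3f}")
```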

  2. On approaches to analyze the sensitivity of simulated hydrologic fluxes to model parameters in the community land model

    DOE PAGES

    Bao, Jie; Hou, Zhangshuan; Huang, Maoyi; ...

    2015-12-04

    Here, effective sensitivity analysis approaches are needed to identify important parameters or factors and their uncertainties in complex Earth system models composed of multi-phase multi-component phenomena and multiple biogeophysical-biogeochemical processes. In this study, the impacts of 10 hydrologic parameters in the Community Land Model on simulations of runoff and latent heat flux are evaluated using data from a watershed. Different metrics, including residual statistics, the Nash-Sutcliffe coefficient, and log mean square error, are used as alternative measures of the deviations between the simulated and field observed values. Four sensitivity analysis (SA) approaches, including analysis of variance based on the generalized linear model, generalized cross validation based on the multivariate adaptive regression splines model, standardized regression coefficients based on a linear regression model, and analysis of variance based on support vector machine, are investigated. Results suggest that these approaches show consistent measurement of the impacts of major hydrologic parameters on response variables, but with differences in the relative contributions, particularly for the secondary parameters. The convergence behaviors of the SA with respect to the number of sampling points are also examined with different combinations of input parameter sets and output response variables and their alternative metrics. This study helps identify the optimal SA approach, provides guidance for the calibration of the Community Land Model parameters to improve the model simulations of land surface fluxes, and approximates the magnitudes to be adjusted in the parameter values during parametric model optimization.
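
    One of the four SA approaches named above is standardized regression coefficients (SRC). As a minimal sketch of that approach, the code below samples a toy stand-in model (hypothetical functional form, not the Community Land Model), regresses the standardized output on standardized inputs, and ranks parameters by |SRC|.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Toy stand-in for a land-surface model: an output flux as a nonlinear
# function of three hydrologic parameters (hypothetical, for illustration).
def model(theta):
    p1, p2, p3 = theta.T
    return 50 * np.sqrt(p1) + 10 * p2 + 2 * p1 * p3 + rng.normal(scale=1.0, size=len(theta))

# Sample the parameter space and evaluate the model.
n = 500
theta = rng.uniform(low=[0.1, 0.0, 0.0], high=[1.0, 5.0, 2.0], size=(n, 3))
y = model(theta)

# Standardized regression coefficients: regress the standardized output on
# the standardized inputs; |SRC| is the sensitivity measure.
Xs = (theta - theta.mean(axis=0)) / theta.std(axis=0)
ys = (y - y.mean()) / y.std()
src = LinearRegression().fit(Xs, ys).coef_
for name, s in zip(["p1", "p2", "p3"], src):
    print(f"SRC({name}) = {s:+.2f}")
```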

  3. Breast-specific gamma camera imaging with 99mTc-MIBI has better diagnostic performance than magnetic resonance imaging in breast cancer patients: A meta-analysis.

    PubMed

    Zhang, Aimi; Li, Panli; Liu, Qiufang; Song, Shaoli

    2017-01-01

    This study aimed to evaluate the diagnostic role of breast-specific gamma camera imaging (BSGI) with technetium-99m-methoxy isobutyl isonitrile ( 99m Tc-MIBI) and magnetic resonance imaging (MRI) in patients with breast cancer through a meta-analysis. Three reviewers searched articles published in medical journals before June 2016 in MEDLINE, EMBASE and Springer Databases; the references listed in original articles were also retrieved. We used the quality assessment of diagnostic accuracy studies (QUADAS) tool to assess the quality of the included studies. Heterogeneity, pooled sensitivity and specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio (DOR) and summary receiver operating characteristic (SROC) curves were calculated by Meta-DiSc software to estimate the diagnostic performance of BSGI and MRI. Ten studies with 517 patients were included after meeting the inclusion criteria. We did a subgroup analysis of the same data type. The pooled sensitivities of BSGI and MRI were: 0.84 (95% CI, 0.79-0.88) and 0.89 (95% CI, 0.84-0.92) respectively, and the pooled specificities of BSGI and MRI were: 0.82 (95% CI, 0.74-0.88) and 0.39 (95% CI, 0.30-0.49) respectively. The areas under the SROC curve of BSGI and MRI were 0.93 and 0.72 respectively. The results of our meta-analysis indicated that compared with MRI, BSGI has similar sensitivity, higher specificity, better diagnostic performance, and can be widely used in clinical practice.

  4. HF Propagation sensitivity study and system performance analysis with the Air Force Coverage Analysis Program (AFCAP)

    NASA Astrophysics Data System (ADS)

    Caton, R. G.; Colman, J. J.; Parris, R. T.; Nickish, L.; Bullock, G.

    2017-12-01

    The Air Force Research Laboratory, in collaboration with NorthWest Research Associates, is developing advanced software capabilities for high fidelity simulations of high frequency (HF) sky wave propagation and performance analysis of HF systems. Based on the HiCIRF (High-frequency Channel Impulse Response Function) platform [Nickisch et al., doi:10.1029/2011RS004928], the new Air Force Coverage Analysis Program (AFCAP) provides the modular capabilities necessary for a comprehensive sensitivity study of the large number of variables which define simulations of HF propagation modes. In this paper, we report on an initial exercise of AFCAP to analyze the sensitivities of the tool to various environmental and geophysical parameters. Through examination of the channel scattering function and amplitude-range-Doppler output on two-way propagation paths with injected target signals, we will compare simulated returns over a range of geophysical conditions as well as varying definitions for environmental noise, meteor clutter, and sea state models for Bragg backscatter. We also investigate the impacts of including clutter effects due to field-aligned backscatter from small scale ionization structures at varied levels of severity as defined by the climatological WideBand Model (WBMOD). In the absence of additional user provided information, AFCAP relies on the International Reference Ionosphere (IRI) model to define the ionospheric state for use in 2D ray tracing algorithms. Because the AFCAP architecture includes the option for insertion of a user defined gridded ionospheric representation, we compare output from the tool using the IRI and ionospheric definitions from assimilative models such as GPSII (GPS Ionospheric Inversion).

  5. Nonlinear mathematical modeling and sensitivity analysis of hydraulic drive unit

    NASA Astrophysics Data System (ADS)

    Kong, Xiangdong; Yu, Bin; Quan, Lingxiao; Ba, Kaixian; Wu, Liujie

    2015-09-01

    Previous sensitivity analysis studies are not accurate enough and have limited reference value, because their mathematical models are relatively simple, load changes and the initial displacement of the piston are ignored, and experimental verification is not conducted. In view of these deficiencies, a nonlinear mathematical model is established in this paper that includes the dynamic characteristics of the servo valve, nonlinear pressure-flow characteristics, the initial displacement of the servo cylinder piston, and friction nonlinearity. The transfer function block diagram and the state equations are built for closed-loop position control of the hydraulic drive unit. By deriving the time-varying coefficient matrix and the time-varying free-term matrix of the sensitivity equations, the sensitivity equations based on the nonlinear mathematical model are obtained. Using the structural parameters, working parameters, fluid transmission characteristics and measured friction-velocity curves of the hydraulic drive unit, simulations are carried out on the MATLAB/Simulink platform with displacement steps of 2 mm, 5 mm and 10 mm. Comparison of the experimental and simulated step-response curves under different constant loads indicates that the developed nonlinear mathematical model is adequate. The sensitivity function time-history curves of seventeen parameters are then obtained from the state-vector time histories of the step response. The maximum displacement variation percentage and the sum of the absolute displacement variations over the sampling time are both taken as sensitivity indexes. These sensitivity indexes are calculated and shown in histograms under different working conditions, and their trends are analyzed. The sensitivity indexes of four measurable parameters, namely supply pressure, proportional gain, initial position of the servo cylinder piston and load force, are then verified experimentally on a test platform for the hydraulic drive unit; the experiments show that the sensitivity analysis results obtained through simulation are close to the test results. This research characterizes the sensitivity of each parameter of the hydraulic drive unit and identifies the main and secondary parameters affecting performance under different working conditions, providing a theoretical foundation for control compensation and structural optimization of the hydraulic drive unit.
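
    The two sensitivity indexes named in the abstract (maximum displacement variation percentage and sum of absolute displacement variations) can be computed directly from a nominal and a perturbed displacement time history. The sketch below uses synthetic exponential step responses purely for illustration; in the paper the perturbed responses come from the derived sensitivity equations.

```python
import numpy as np

# Synthetic nominal and perturbed displacement step responses (mm).
t = np.linspace(0.0, 1.0, 1001)
x_nominal = 10.0 * (1 - np.exp(-8.0 * t))
x_perturbed = 10.0 * (1 - np.exp(-7.2 * t))   # one parameter perturbed

dx = x_perturbed - x_nominal
rel = np.abs(dx) / np.maximum(np.abs(x_nominal), 1e-9)  # avoid divide-by-zero at t = 0

max_variation_pct = 100 * rel.max()        # maximum displacement variation (%)
sum_abs_variation = np.abs(dx).sum()       # sum of absolute displacement variations (mm)

print(f"max variation: {max_variation_pct:.2f} %")
print(f"sum of |variation| over samples: {sum_abs_variation:.1f} mm")
```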

  6. Accuracy of early detection of colorectal tumours by stool methylation markers: A meta-analysis

    PubMed Central

    Zhang, Hu; Qi, Jian; Wu, Ya-Qiong; Zhang, Ping; Jiang, Jun; Wang, Qi-Xian; Zhu, You-Qing

    2014-01-01

    AIM: To evaluate the accuracy of methylation of genes in stool samples for diagnosing colorectal tumours. METHODS: Electronic databases including PubMed, Web of Science, Chinese Journals Full-Text Database and Wanfang Journals Full-Text Database were searched to find relevant original articles about methylated genes to be used in diagnosing colorectal tumours. The quality assessment of diagnostic accuracy studies (QUADAS) tool was used to evaluate the quality of the included articles, and the Meta-disc 1.4 and SPSS 13.0 software programs were used for data analysis. RESULTS: Thirty-seven articles met the inclusion criteria, and 4484 patients were included. The sensitivity and specificity for the detection of colorectal cancer (CRC) were 73% (95%CI: 71%-75%) and 92% (95%CI: 90%-93%), respectively. For adenoma, the sensitivity and specificity were 51% (95%CI: 47%-54%) and 92% (95%CI: 90%-93%), respectively. Pooled diagnostic performance of SFRP2 methylation for CRC provided the following results: the sensitivity was 79% (95%CI: 75%-82%), the specificity was 93% (95%CI: 90%-96%), the diagnostic OR was 47.57 (95%CI: 20.08-112.72), the area under the curve was 0.9565. Additionally, the results of accuracy of SFRP2 methylation for detecting colorectal adenomas were as follows: sensitivity was 43% (95%CI: 38%-49%), specificity was 94% (95%CI: 91%-97%), the diagnostic OR was 11.06 (95%CI: 5.77-21.18), and the area under the curve was 0.9563. CONCLUSION: Stool-based DNA testing may be useful for noninvasively diagnosing colorectal tumours and SFRP2 methylation is a promising marker that has great potential in early CRC diagnosis. PMID:25320544

  7. Accuracy of early detection of colorectal tumours by stool methylation markers: a meta-analysis.

    PubMed

    Zhang, Hu; Qi, Jian; Wu, Ya-Qiong; Zhang, Ping; Jiang, Jun; Wang, Qi-Xian; Zhu, You-Qing

    2014-10-14

    To evaluate the accuracy of methylation of genes in stool samples for diagnosing colorectal tumours. Electronic databases including PubMed, Web of Science, Chinese Journals Full-Text Database and Wanfang Journals Full-Text Database were searched to find relevant original articles about methylated genes to be used in diagnosing colorectal tumours. The quality assessment of diagnostic accuracy studies (QUADAS) tool was used to evaluate the quality of the included articles, and the Meta-disc 1.4 and SPSS 13.0 software programs were used for data analysis. Thirty-seven articles met the inclusion criteria, and 4484 patients were included. The sensitivity and specificity for the detection of colorectal cancer (CRC) were 73% (95%CI: 71%-75%) and 92% (95%CI: 90%-93%), respectively. For adenoma, the sensitivity and specificity were 51% (95%CI: 47%-54%) and 92% (95%CI: 90%-93%), respectively. Pooled diagnostic performance of SFRP2 methylation for CRC provided the following results: the sensitivity was 79% (95%CI: 75%-82%), the specificity was 93% (95%CI: 90%-96%), the diagnostic OR was 47.57 (95%CI: 20.08-112.72), the area under the curve was 0.9565. Additionally, the results of accuracy of SFRP2 methylation for detecting colorectal adenomas were as follows: sensitivity was 43% (95%CI: 38%-49%), specificity was 94% (95%CI: 91%-97%), the diagnostic OR was 11.06 (95%CI: 5.77-21.18), and the area under the curve was 0.9563. Stool-based DNA testing may be useful for noninvasively diagnosing colorectal tumours and SFRP2 methylation is a promising marker that has great potential in early CRC diagnosis.

  8. Fusion-neutron-yield, activation measurements at the Z accelerator: design, analysis, and sensitivity.

    PubMed

    Hahn, K D; Cooper, G W; Ruiz, C L; Fehl, D L; Chandler, G A; Knapp, P F; Leeper, R J; Nelson, A J; Smelser, R M; Torres, J A

    2014-04-01

    We present a general methodology to determine the diagnostic sensitivity that is directly applicable to neutron-activation diagnostics fielded on a wide variety of neutron-producing experiments, which include inertial-confinement fusion (ICF), dense plasma focus, and ion beam-driven concepts. This approach includes a combination of several effects: (1) non-isotropic neutron emission; (2) the 1/r² decrease in neutron fluence in the activation material; (3) the spatially distributed neutron scattering, attenuation, and energy losses due to the fielding environment and activation material itself; and (4) temporally varying neutron emission. As an example, we describe the copper-activation diagnostic used to measure secondary deuterium-tritium fusion-neutron yields on ICF experiments conducted on the pulsed-power Z Accelerator at Sandia National Laboratories. Using this methodology along with results from absolute calibrations and Monte Carlo simulations, we find that for the diagnostic configuration on Z, the diagnostic sensitivity is 0.037% ± 17% counts/neutron per cm² and is ∼ 40% less sensitive than it would be in an ideal geometry due to neutron attenuation, scattering, and energy-loss effects.

  9. Using Dynamic Sensitivity Analysis to Assess Testability

    NASA Technical Reports Server (NTRS)

    Voas, Jeffrey; Morell, Larry; Miller, Keith

    1990-01-01

    This paper discusses sensitivity analysis and its relationship to random black box testing. Sensitivity analysis estimates the impact that a programming fault at a particular location would have on the program's input/output behavior. Locations that are relatively "insensitive" to faults can render random black box testing unlikely to uncover programming faults. Therefore, sensitivity analysis gives new insight when interpreting random black box testing results. Although sensitivity analysis is computationally intensive, it requires no oracle and no human intervention.
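
    As a toy illustration of the idea only (not the specific algorithm defined in the paper), the sketch below estimates how often perturbing the program state at a given location changes the program's output over random inputs; a location whose perturbations are frequently masked is "insensitive" in this sense.

```python
import random

def program(x, perturb_at=None):
    a = x * 2
    if perturb_at == "a":
        a += random.choice([-1, 1])   # injected fault at location "a"
    b = a % 4                         # location "b": the modulus partially masks changes
    if perturb_at == "b":
        b += random.choice([-1, 1])
    return 1 if b >= 2 else 0

def sensitivity(location, trials=10_000):
    # Fraction of random inputs for which the perturbation reaches the output.
    changed = sum(
        program(x) != program(x, perturb_at=location)
        for x in (random.randint(0, 100) for _ in range(trials))
    )
    return changed / trials

for loc in ("a", "b"):
    print(f"estimated sensitivity at location {loc}: {sensitivity(loc):.2f}")
```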

  10. Automating calibration, sensitivity and uncertainty analysis of complex models using the R package Flexible Modeling Environment (FME): SWAT as an example

    USGS Publications Warehouse

    Wu, Y.; Liu, S.

    2012-01-01

    Parameter optimization and uncertainty issues are a great challenge for the application of large environmental models like the Soil and Water Assessment Tool (SWAT), which is a physically-based hydrological model for simulating water and nutrient cycles at the watershed scale. In this study, we present a comprehensive modeling environment for SWAT, including automated calibration, and sensitivity and uncertainty analysis capabilities through integration with the R package Flexible Modeling Environment (FME). To address challenges (e.g., calling the model in R and transferring variables between Fortran and R) in developing such a two-language coupling framework, 1) we converted the Fortran-based SWAT model to an R function (R-SWAT) using the RFortran platform, and alternatively 2) we compiled SWAT as a Dynamic Link Library (DLL). We then wrapped SWAT (via R-SWAT) with FME to perform complex applications including parameter identifiability, inverse modeling, and sensitivity and uncertainty analysis in the R environment. The final R-SWAT-FME framework has the following key functionalities: automatic initialization of R, running Fortran-based SWAT and R commands in parallel, transferring parameters and model output between SWAT and R, and inverse modeling with visualization. To examine this framework and demonstrate how it works, a case study simulating streamflow in the Cedar River Basin in Iowa in the United States was used, and we compared it with the built-in auto-calibration tool of SWAT in parameter optimization. Results indicate that both methods performed well and similarly in searching a set of optimal parameters. Nonetheless, the R-SWAT-FME is more attractive due to its instant visualization, and potential to take advantage of other R packages (e.g., inverse modeling and statistical graphics). The methods presented in the paper are readily adaptable to other model applications that require capability for automated calibration, and sensitivity and uncertainty analysis.
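
    The study itself couples SWAT with the R package FME. As a language-neutral sketch of the same wrap-and-calibrate pattern, the snippet below stands a toy decay model in for a SWAT run and performs the inverse-modeling step with scipy, followed by a crude local sensitivity check; the model, parameter names, and data are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def run_model(params, t):
    """Stand-in for an external hydrologic model returning simulated streamflow."""
    k, s0 = params
    return s0 * np.exp(-k * t)

# Hypothetical "observed" streamflow: truth plus noise.
t_obs = np.linspace(0, 30, 31)
obs = run_model([0.15, 80.0], t_obs) + np.random.default_rng(2).normal(scale=2.0, size=t_obs.size)

# Inverse modeling: minimize residuals between simulated and observed flow.
residuals = lambda p: run_model(p, t_obs) - obs
fit = least_squares(residuals, x0=[0.05, 50.0], bounds=([0.0, 0.0], [1.0, 200.0]))
print("calibrated parameters:", fit.x)

# Crude local sensitivity: finite-difference slope of the sum of squared
# residuals with respect to each parameter near the optimum.
for i, name in enumerate(["k", "s0"]):
    dp = np.zeros(2)
    dp[i] = 1e-3 * max(abs(fit.x[i]), 1e-6)
    d_obj = (np.sum(residuals(fit.x + dp) ** 2) - np.sum(residuals(fit.x) ** 2)) / dp[i]
    print(f"d(SSE)/d{name} near optimum: {d_obj:.3f}")
```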

  11. Optimizing sensitivity to γ with B0→D K+π-, D →KS0π+π- double Dalitz plot analysis

    NASA Astrophysics Data System (ADS)

    Craik, D.; Gershon, T.; Poluektov, A.

    2018-03-01

    Two of the most powerful methods currently used to determine the angle γ of the CKM Unitarity Triangle exploit B+→D K+, D →KS0π+π- decays and B0→D K+π-, D →K+K-, π+π- decays. It is possible to combine the strengths of both approaches in a "double Dalitz plot" analysis of B0→D K+π-, D →KS0π+π- decays. The potential sensitivity of such an analysis is investigated in the light of recently published experimental information on the B0→D K+π- decay. The formalism is also expanded, compared to previous discussions in the literature, to allow B0→D K+π- with any subsequent D decay to be included.

  12. Basic research for the geodynamics program

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The mathematical models of space very long base interferometry (VLBI) observables suitable for least squares covariance analysis were derived and estimatability problems inherent in the space VLBI system were explored, including a detailed rank defect analysis and sensitivity analysis. An important aim is to carry out a comparative analysis of the mathematical models of the ground-based VLBI and space VLBI observables in order to describe the background in detail. Computer programs were developed in order to check the relations, assess errors, and analyze sensitivity. In order to investigate the estimatability of different geodetic and geodynamic parameters from the space VLBI observables, the mathematical models for time delay and time delay rate observables of space VLBI were analytically derived along with the partial derivatives with respect to the parameters. Rank defect analysis was carried out both by analytical and numerical testing of linear dependencies between the columns of the normal matrix thus formed. Definite conclusions were formed about the rank defects in the system.
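
    A rank-defect analysis of the kind described above can be carried out numerically by inspecting the singular values of the normal matrix. The sketch below uses a synthetic design matrix with one deliberately dependent column; in the study the partial derivatives come from the space-VLBI delay and delay-rate models.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(100, 6))           # rows: partials of observables w.r.t. 6 parameters
A[:, 5] = 2.0 * A[:, 1] - A[:, 3]       # deliberately dependent column -> rank defect

N = A.T @ A                             # normal matrix
s = np.linalg.svd(N, compute_uv=False)  # singular values
tol = s.max() * N.shape[0] * np.finfo(float).eps
rank = int(np.sum(s > tol))
print(f"normal matrix size {N.shape[0]}, numerical rank {rank}, rank defect {N.shape[0] - rank}")

# The null-space rows identify which parameter combinations are unestimable.
_, _, vt = np.linalg.svd(N)
print("null-space direction(s):\n", vt[rank:])
```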

  13. Analysis of beryllium and depleted uranium: An overview of detection methods in aerosols and soils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camins, I.; Shinn, J.H.

    We conducted a survey of commercially available methods for analysis of beryllium and depleted uranium in aerosols and soils to find a reliable, cost-effective, and sufficiently precise method for researchers involved in environmental testing at the Yuma Proving Ground, Yuma, Arizona. Criteria used for evaluation include cost, method of analysis, specificity, sensitivity, reproducibility, applicability, and commercial availability. We found that atomic absorption spectrometry with graphite furnace meets these criteria for testing samples for beryllium. We found that this method can also be used to test samples for depleted uranium. However, atomic absorption with graphite furnace is not as sensitive a measurement method for depleted uranium as it is for beryllium, so we recommend that quality control of depleted uranium analysis be maintained by testing 10 of every 1000 samples by neutron activation analysis. We also evaluated 45 companies and institutions that provide analyses of beryllium and depleted uranium. 5 refs., 1 tab.

  14. Diagnosis of glenoid labral tears using 3-tesla MRI vs. 3-tesla MRA: a systematic review and meta-analysis.

    PubMed

    Ajuied, Adil; McGarvey, Ciaran P; Harb, Ziad; Smith, Christian C; Houghton, Russell P; Corbett, Steven A

    2018-05-01

    Various protocols exist for magnetic resonance arthrogram (MRA) of the shoulder, including 3D isotropic scanning and positioning in neutral (2D neutral MRA), or abduction-external-rotation (ABER). MRA does not improve diagnostic accuracy for labral tears when compared to magnetic resonance imaging (MRI) performed using 3-Tesla (3T) magnets. Systematic review of the Cochrane, MEDLINE, and PubMed databases according to PRISMA guidelines. Included studies compared 3T MRI or 3T MRA (index tests) to arthroscopic findings (reference test). Methodological appraisal performed using QUADAS-2. Pooled sensitivity and specificity were calculated. Ten studies including 929 patients were included. Index test bias and applicability were a concern in the majority of studies. The use of arthroscopy as the reference test raised concern of verification bias in all studies. For anterior labral lesions, 3T MRI was less sensitive (0.83 vs. 0.87 p = 0.083) than 3T 2D neutral MRA. Compared to 3T 2D neutral MRA, both 3T 3D Isotropic MRA and 3T ABER MRA significantly improved sensitivity (0.87 vs. 0.95 vs. 0.94). For SLAP lesions, 3T 2D neutral MRA was of similar sensitivity to 3T MRI (0.84 vs. 0.83, p = 0.575), but less specific (0.99 vs. 0.92 p < 0.0001). For posterior labral lesions, 3T 2D neutral MRA had greater sensitivity than 3T 3D Isotropic MRA and 3T MRI (0.90 vs. 0.83 vs. 0.83). At 3-T, MRA improved sensitivity for diagnosis of anterior and posterior labral lesions, but reduced specificity in diagnosis of SLAP tears. 3T MRA with ABER positioning further improved sensitivity in diagnosis of anterior labral tears. IV.

  15. Dietary patterns and the insulin resistance phenotype among non-diabetic adults

    USDA-ARS?s Scientific Manuscript database

    Background: Information on the relation between dietary patterns derived by cluster analysis and insulin resistance is scarce. Objective: To compare insulin resistance phenotypes, including waist circumference, body mass index, fasting and 2-hour post-challenge insulin, insulin sensitivity index (I...

  16. Sensitivity analysis for simulating pesticide impacts on honey bee colonies

    EPA Science Inventory

    Background/Question/Methods Regulatory agencies assess risks to honey bees from pesticides through a tiered process that includes predictive modeling with empirical toxicity and chemical data of pesticides as a line of evidence. We evaluate the Varroapop colony model, proposed by...

  17. Sample pooling for real-time PCR detection and virulence determination of the footrot pathogen Dichelobacter nodosus.

    PubMed

    Frosth, Sara; König, Ulrika; Nyman, Ann-Kristin; Aspán, Anna

    2017-09-01

    Dichelobacter nodosus is the principal cause of ovine footrot and strain virulence is an important factor in disease severity. Therefore, detection and virulence determination of D. nodosus is important for proper diagnosis of the disease. Today this is possible by real-time PCR analysis. Analysis of large numbers of samples is costly and laborious; therefore, pooling of individual samples is common in surveillance programs. However, pooling can reduce the sensitivity of the method. The aim of this study was to develop a pooling method for real-time PCR analysis that would allow sensitive detection and simultaneous virulence determination of D. nodosus. A total of 225 sheep from 17 flocks were sampled using ESwabs within the Swedish Footrot Control Program in 2014. Samples were first analysed individually and then in pools of five by real-time PCR assays targeting the 16S rRNA and aprV2/B2 genes of D. nodosus. Each pool consisted of four negative and one positive D. nodosus samples with varying amounts of the bacterium. In the individual analysis, 61 (27.1%) samples were positive in the 16S rRNA and the aprV2/B2 PCR assays and 164 (72.9%) samples were negative. All samples positive in the aprV2/B2 PCR-assay were of aprB2 variant. The pooled analysis showed that all 41 pools were also positive for D. nodosus 16S rRNA and the aprB2 variant. The diagnostic sensitivity for pooled and individual samples was therefore similar. Our method includes concentration of the bacteria before DNA-extraction. This may account for the maintenance of diagnostic sensitivity. Diagnostic sensitivity in the real-time PCR assays of the pooled samples were comparable to the sensitivity obtained for individually analysed samples. Even sub-clinical infections were able to be detected in the pooled PCR samples which is important for control of the disease. This method may therefore be implemented in footrot control programs where it can replace analysis of individual samples.

  18. Saugus River and Tributaries Flood Damage Reduction Study: Lynn, Malden, Revere and Saugus, Massachusetts. Section 1. Feasibility Report.

    DTIC Science & Technology

    1989-12-01

    [Table-of-contents fragment] Tables listed include: Sensitivity Analysis - Point of Pines LPP; Plan Comparison; NED Plan Project Costs; Estimated Operation ... Costs; Selected Plan/Estimated Annual Benefits; Comparative Impacts - NED Regional Floodgate Plan; Economic Analysis. The report includes detailed descriptions, plans and profiles, and design considerations of the selected plan; coastal analysis of the shorefront; detailed project

  19. Computer aided analysis and optimization of mechanical system dynamics

    NASA Technical Reports Server (NTRS)

    Haug, E. J.

    1984-01-01

    The purpose is to outline a computational approach to spatial dynamics of mechanical systems that substantially enlarges the scope of consideration to include flexible bodies, feedback control, hydraulics, and related interdisciplinary effects. Design sensitivity analysis and optimization is the ultimate goal. The approach to computer generation and solution of the system dynamic equations and graphical methods for creating animations as output is outlined.

  20. Navigation and Dispersion Analysis of the First Orion Exploration Mission

    NASA Technical Reports Server (NTRS)

    Zanetti, Renato; D'Souza, Christopher

    2015-01-01

    This paper seeks to present the Orion EM-1 Linear Covariance Analysis for the DRO mission. The delta V statistics for each maneuver are presented. Included in the memo are several sensitivity analyses: variation in the time of OTC-1 (the first outbound correction maneuver), variation in the accuracy of the trans-Lunar injection, and variation in the length of the optical navigation passes.

  1. Determining the Best-Fit FPGA for a Space Mission: An Analysis of Cost, SEU Sensitivity,and Reliability

    NASA Technical Reports Server (NTRS)

    Berg, Melanie; LaBel, Ken

    2007-01-01

    This viewgraph presentation reviews the selection of the optimum Field Programmable Gate Arrays (FPGA) for space missions. Included in this review is a discussion on differentiating amongst various FPGAs, cost analysis of the various options, the investigation of radiation effects, an expansion of the evaluation criteria, and the application of the evaluation criteria to the selection process.

  2. Effects of aircraft noise on the equilibrium of airport residents: Testing and utilization of a new methodology

    NASA Technical Reports Server (NTRS)

    Francois, J.

    1981-01-01

    The focus of the investigation is centered around two main themes: an analysis of the effects of aircraft noise on the psychological and physiological equilibrium of airport residents; and an analysis of the sources of variability of sensitivity to noise. The methodology used is presented. Nine statistical tables are included, along with a set of conclusions.

  3. Comparative Efficacy of Tongxinluo Capsule and Beta-Blockers in Treating Angina Pectoris: Meta-Analysis of Randomized Controlled Trials.

    PubMed

    Jia, Yongliang; Leung, Siu-wai

    2015-11-01

    There have been no systematic reviews, let alone meta-analyses, of randomized controlled trials (RCTs) comparing tongxinluo capsule (TXL) and beta-blockers in treating angina pectoris. This study aimed to evaluate the efficacy of TXL and beta-blockers in treating angina pectoris by a meta-analysis of eligible RCTs. The RCTs comparing TXL with beta-blockers (including metoprolol) in treating angina pectoris were searched and retrieved from databases including PubMed, Chinese National Knowledge Infrastructure, and WanFang Data. Eligible RCTs were selected according to prespecified criteria. Meta-analysis was performed on the odds ratios (OR) of symptomatic and electrocardiographic (ECG) improvements after treatment. Subgroup analysis, sensitivity analysis, meta-regression, and publication bias analysis were conducted to evaluate the robustness of the results. Seventy-three RCTs published between 2000 and 2014 with 7424 participants were eligible. Overall ORs comparing TXL with beta-blockers were 3.40 (95% confidence interval [CI], 2.97-3.89; p<0.0001) for symptomatic improvement and 2.63 (95% CI, 2.29-3.02; p<0.0001) for ECG improvement. Subgroup analysis and sensitivity analysis found no statistically significant dependence of overall ORs on specific study characteristics except efficacy criteria. Meta-regression found no significant moderators except sample size for the data on symptomatic improvement. Publication bias was statistically significant. TXL seems to be more effective than beta-blockers in treating angina pectoris, on the basis of the eligible RCTs. Further RCTs are warranted to reduce publication bias and verify efficacy.

  4. Nanowire size dependence on sensitivity of silicon nanowire field-effect transistor-based pH sensor

    NASA Astrophysics Data System (ADS)

    Lee, Ryoongbin; Kwon, Dae Woong; Kim, Sihyun; Kim, Sangwan; Mo, Hyun-Sun; Kim, Dae Hwan; Park, Byung-Gook

    2017-12-01

    In this study, we investigated the effects of nanowire size on the current sensitivity of silicon nanowire (SiNW) ion-sensitive field-effect transistors (ISFETs). The changes in on-current (I_on) and resistance according to pH were measured in fabricated SiNW ISFETs of various lengths and widths. As a result, it was revealed that the sensitivity expressed as relative I_on change improves as the width decreases. Through technology computer-aided design (TCAD) simulation analysis, the width dependence of the relative I_on change can be explained by the observation that the target molecules located at the edge region along the channel width have a stronger effect on the sensitivity as the SiNW width is reduced. Additionally, the length dependence of the sensitivity can be understood in terms of the resistance ratio of the fixed parasitic resistance, including source/drain resistance, to the varying channel resistance as a function of channel length.

  5. Design and characterization of planar capacitive imaging probe based on the measurement sensitivity distribution

    NASA Astrophysics Data System (ADS)

    Yin, X.; Chen, G.; Li, W.; Huthchins, D. A.

    2013-01-01

    Previous work indicated that the capacitive imaging (CI) technique is a useful NDE tool which can be used on a wide range of materials, including metals, glass/carbon fibre composite materials and concrete. The imaging performance of the CI technique for a given application is determined by design parameters and characteristics of the CI probe. In this paper, a rapid method for calculating the whole probe sensitivity distribution based on the finite element model (FEM) is presented to provide a direct view of the imaging capabilities of the planar CI probe. Sensitivity distributions of CI probes with different geometries were obtained. Influencing factors on sensitivity distribution were studied. Comparisons between CI probes with point-to-point triangular electrode pair and back-to-back triangular electrode pair were made based on the analysis of the corresponding sensitivity distributions. The results indicated that the sensitivity distribution could be useful for optimising the probe design parameters and predicting the imaging performance.

  6. RSV-hRV co-infection is a risk factor for recurrent bronchial obstruction and early sensitization 3 years after bronchiolitis.

    PubMed

    Amat, Flore; Plantard, Chloé; Mulliez, Aurélien; Petit, Isabelle; Rochette, Emmanuelle; Verdan, Matthieu; Henquell, Cécile; Labbé, Guillaume; Heraud, Marie Christine; Evrard, Bertrand; Labbé, André

    2018-05-01

    To assess risk factors of recurrent bronchial obstruction and allergic sensitization 3 years after an episode of acute bronchiolitis, whether after ambulatory care treatment or hospitalization. A monocentric prospective longitudinal study including infants aged under 1 year with acute bronchiolitis was performed, with clinical (severity score), biological (serum Krebs von den Lungen 6 antigen), and viral (detection of 14 viruses by naso-pharyngeal suction) assessments. Follow-up included a quarterly telephone interview and a final clinical examination at 3 years. Biological markers of atopy were also measured in peripheral blood, including specific IgEs towards aero- and food allergens. Complete data were available for 154 children, 46.8% of whom had recurrent wheezing (RW). No difference was found according to initial severity, care at home or in the hospital, respiratory virus involved, or existence of co-infection. A familial history of atopy was identified as a risk factor for recurrent bronchial obstruction (60% for RW infants versus 39%, P = 0.02), as was living in an apartment (35% versus 15%, P = 0.002). 18.6% of the infants were sensitized, with 48.1% of them sensitized to aeroallergens and 81.5% to food allergens. Multivariate analysis confirmed that a familial history of atopy (P = 0.02) and initial co-infection RSV-hRV (P = 0.02) were correlated with the risk of sensitization to aeroallergens at 3 years. Familial history of atopy and RSV-hRV co-infection are risk factors for recurrent bronchial obstruction and sensitization. © 2018 Wiley Periodicals, Inc.

  7. Test validity and performance validity: considerations in providing a framework for development of an ability-focused neuropsychological test battery.

    PubMed

    Larrabee, Glenn J

    2014-11-01

    Literature on test validity and performance validity is reviewed to propose a framework for specification of an ability-focused battery (AFB). Factor analysis supports six domains of ability: verbal symbolic abilities; visuoperceptual and visuospatial judgment and problem solving; sensorimotor skills; attention/working memory; processing speed; and learning and memory (which can be divided into verbal and visual subdomains). The AFB should include at least three measures for each of the six domains, selected based on various criteria for validity including sensitivity to presence of disorder, sensitivity to severity of disorder, correlation with important activities of daily living, and containing embedded/derived measures of performance validity. Criterion groups should include moderate and severe traumatic brain injury, and Alzheimer's disease. Validation groups should also include patients with left and right hemisphere stroke, to determine measures sensitive to lateralized cognitive impairment and so that the moderating effects of auditory comprehension impairment and neglect can be analyzed on AFB measures. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  8. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
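
    The abstract does not reproduce the decision model itself, so the following is only a minimal sketch of the general idea: a two-strategy decision tree with hypothetical eradication rates and costs, in which bootstrap resampling of patient-level outcomes (rather than an assumed parametric distribution) propagates uncertainty through the model. All numbers, including the willingness-to-pay threshold, are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical patient-level eradication outcomes (1 = eradicated) for two regimens.
outcomes_a = rng.binomial(1, 0.85, size=200)   # regimen A
outcomes_b = rng.binomial(1, 0.75, size=200)   # regimen B
cost_a, cost_b = 220.0, 150.0                  # hypothetical drug costs per course
cost_failure = 500.0                           # hypothetical downstream cost of a failed eradication
wtp = 1000.0                                   # hypothetical willingness to pay per extra eradication

def expected_cost(erad_rate, drug_cost):
    """Expected cost per patient in a one-step decision tree."""
    return drug_cost + (1.0 - erad_rate) * cost_failure

n_sims = 5000
inmb = np.empty(n_sims)
for i in range(n_sims):
    # Bootstrap: resample the observed outcomes with replacement in each simulation.
    boot_a = rng.choice(outcomes_a, size=outcomes_a.size, replace=True).mean()
    boot_b = rng.choice(outcomes_b, size=outcomes_b.size, replace=True).mean()
    delta_cost = expected_cost(boot_a, cost_a) - expected_cost(boot_b, cost_b)
    delta_effect = boot_a - boot_b
    inmb[i] = wtp * delta_effect - delta_cost   # incremental net monetary benefit

print("mean incremental net benefit:", round(inmb.mean(), 1))
print("probability regimen A is cost-effective:", round((inmb > 0).mean(), 2))
```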

  9. Validity and consistency assessment of accident analysis methods in the petroleum industry.

    PubMed

    Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza

    2017-11-17

    Accident analysis is the main aspect of accident investigation. It includes the method of connecting different causes in a procedural way. Therefore, it is important to use valid and reliable methods for the investigation of different causal factors of accidents, especially the noteworthy ones. This study aimed to prominently assess the accuracy (sensitivity index [SI]) and consistency of the six most commonly used accident analysis methods in the petroleum industry. In order to evaluate the methods of accident analysis, two real case studies (process safety and personal accident) from the petroleum industry were analyzed by 10 assessors. The accuracy and consistency of these methods were then evaluated. The assessors were trained in the workshop of accident analysis methods. The systematic cause analysis technique and bowtie methods gained the greatest SI scores for both personal and process safety accidents, respectively. The best average results of the consistency in a single method (based on 10 independent assessors) were in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree could enhance the sensitivity and consistency of accident analysis.

  10. Permeability Surface Area Product Using Perfusion Computed Tomography Is a Valuable Prognostic Factor in Glioblastomas Treated with Radiotherapy Plus Concomitant and Adjuvant Temozolomide.

    PubMed

    Saito, Taiichi; Sugiyama, Kazuhiko; Ikawa, Fusao; Yamasaki, Fumiyuki; Ishifuro, Minoru; Takayasu, Takeshi; Nosaka, Ryo; Nishibuchi, Ikuno; Muragaki, Yoshihiro; Kawamata, Takakazu; Kurisu, Kaoru

    2017-01-01

    The current standard treatment protocol for patients with newly diagnosed glioblastoma (GBM) includes surgery, radiotherapy, and concomitant and adjuvant temozolomide (TMZ). We hypothesized that the permeability surface area product (PS) from a perfusion computed tomography (PCT) study is associated with sensitivity to TMZ. The aim of this study was to determine whether PS values were correlated with prognosis of GBM patients who received the standard treatment protocol. This study included 36 patients with GBM that were newly diagnosed between October 2005 and September 2014 and who underwent preoperative PCT study and the standard treatment protocol. We measured the maximum value of relative cerebral blood volume (rCBVmax) and the maximum PS value (PSmax). We statistically examined the relationship between PSmax and prognosis using survival analysis, including other clinicopathologic factors (age, Karnofsky performance status [KPS], extent of resection, O6-methylguanine-DNA methyltransferase [MGMT] status, second-line use of bevacizumab, and rCBVmax). Log-rank tests revealed that age, KPS, MGMT status, and PSmax were significantly correlated with overall survival. Multivariate analysis using the Cox regression model showed that PSmax was the most significant prognostic factor. Receiver operating characteristic curve analysis showed that PSmax had the highest accuracy in differentiating longtime survivors (LTSs) (surviving more than 2 years) from non-LTSs. At a cutoff point of 8.26 mL/100 g/min, sensitivity and specificity were 90% and 70%, respectively. PSmax from PCT study can help predict survival time in patients with GBM receiving the standard treatment protocol. Survival may be related to sensitivity to TMZ. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Who is More Affected by Ozone Pollution? A Systematic Review and Meta-Analysis

    PubMed Central

    Bell, Michelle L.; Zanobetti, Antonella; Dominici, Francesca

    2014-01-01

    Ozone is associated with adverse health; however, less is known about vulnerable/sensitive populations, which we refer to as sensitive populations. We systematically reviewed epidemiologic evidence (1988–2013) regarding sensitivity to mortality or hospital admission from short-term ozone exposure. We performed meta-analysis for overall associations by age and sex; assessed publication bias; and qualitatively assessed sensitivity to socioeconomic indicators, race/ethnicity, and air conditioning. The search identified 2,091 unique papers, with 167 meeting inclusion criteria (73 on mortality and 96 on hospitalizations and emergency department visits, including 2 examining both mortality and hospitalizations). The strongest evidence for ozone sensitivity was for age. Per 10-parts per billion increase in daily 8-hour ozone concentration, mortality risk for younger persons, at 0.60% (95% confidence interval (CI): 0.40, 0.80), was statistically lower than that for older persons, at 1.27% (95% CI: 0.76, 1.78). Findings adjusted for publication bias were similar. Limited/suggestive evidence was found for higher associations among women; mortality risks were 0.39% (95% CI: −0.22, 1.00) higher than those for men. We identified strong evidence for higher associations with unemployment or lower occupational status and weak evidence of sensitivity for racial/ethnic minorities and persons with low education, in poverty, or without central air conditioning. Findings show that some populations, especially the elderly, are particularly sensitive to short-term ozone exposure. PMID:24872350

  12. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses

    PubMed Central

    Soares, Marta O.; Palmer, Stephen; Ades, Anthony E.; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M.

    2015-01-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. PMID:25712447

  13. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    PubMed

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.

  14. Validation of insulin sensitivity and secretion indices derived from the liquid meal tolerance test.

    PubMed

    Maki, Kevin C; Kelley, Kathleen M; Lawless, Andrea L; Hubacher, Rachel L; Schild, Arianne L; Dicklin, Mary R; Rains, Tia M

    2011-06-01

    A liquid meal tolerance test (LMTT) has been proposed as a useful alternative to more labor-intensive methods of assessing insulin sensitivity and secretion. This substudy, conducted at the conclusion of a randomized, double-blind crossover trial, compared insulin sensitivity indices from a LMTT (Matsuda insulin sensitivity index [MISI] and LMTT disposition index [LMTT-DI]) with indices derived from minimal model analysis of results from the insulin-modified intravenous glucose tolerance test (IVGTT) (insulin sensitivity index [S(I)] and disposition index [DI]). Participants included men (n = 16) and women (n = 8) without diabetes but with increased abdominal adiposity (waist circumference ≥102 cm and ≥89 cm, respectively) and mean age of 48.9 years. The correlation between S(I) and the MISI was 0.776 (P < 0.0001). The respective associations between S(I) and MISI with waist circumference (r = -0.445 and -0.554, both P < 0.05) and body mass index were similar (r = -0.500 and -0.539, P < 0.05). The correlation between DI and LMTT-DI was 0.604 (P = 0.002). These results indicate that indices of insulin sensitivity and secretion derived from the LMTT correlate well with those from the insulin-modified IVGTT with minimal model analysis, suggesting that they may be useful for application in clinical and population studies of glucose homeostasis.
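
    As a point of reference for the indices named above, the Matsuda composite insulin sensitivity index is commonly written as 10,000 / sqrt(G0 x I0 x Gmean x Imean), with glucose in mg/dL and insulin in microU/mL; the exact variant used in any given study should be checked against its methods. A small sketch with made-up meal-test samples:

```python
import numpy as np

def matsuda_isi(glucose_mgdl, insulin_uUml):
    """Matsuda-DeFronzo composite insulin sensitivity index from serial samples.

    Commonly written as 10,000 / sqrt(G0 * I0 * Gmean * Imean); glucose in mg/dL,
    insulin in microU/mL. Verify the study's exact definition before use.
    """
    g = np.asarray(glucose_mgdl, dtype=float)
    i = np.asarray(insulin_uUml, dtype=float)
    return 10000.0 / np.sqrt(g[0] * i[0] * g.mean() * i.mean())

# Hypothetical liquid-meal-test samples at 0, 30, 60, 90, 120 minutes
glucose = [92, 150, 165, 140, 118]
insulin = [10, 60, 85, 70, 45]
print(round(matsuda_isi(glucose, insulin), 2))
```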

  15. Sensitivity analysis of observed reflectivity to ice particle surface roughness using MISR satellite observations

    NASA Astrophysics Data System (ADS)

    Bell, A.; Hioki, S.; Wang, Y.; Yang, P.; Di Girolamo, L.

    2016-12-01

    Previous studies found that including ice particle surface roughness in forward light scattering calculations significantly reduces the differences between observed and simulated polarimetric and radiometric observations. While it is suggested that some degree of roughness is desirable, the appropriate degree of surface roughness to be assumed in operational cloud property retrievals and the sensitivity of retrieval products to this assumption remain uncertain. In an effort to resolve this ambiguity, we will present a sensitivity analysis of space-borne multi-angle observations of reflectivity to varying degrees of surface roughness. This process is twofold. First, sampling information and statistics of Multi-angle Imaging SpectroRadiometer (MISR) sensor data aboard the Terra platform will be used to define the most common viewing geometries. Using these defined geometries, reflectivity will be simulated for multiple degrees of roughness using results from adding-doubling radiative transfer simulations. The sensitivity of simulated reflectivity to surface roughness can then be quantified, thus yielding a more robust retrieval system. Secondly, the sensitivity of the inverse problem will be analyzed. Spherical albedo values will be computed by feeding blocks of MISR data comprising cloudy pixels over ocean into the retrieval system, with assumed values of surface roughness. The sensitivity of spherical albedo to the inclusion of surface roughness can then be quantified, and the accuracy of retrieved parameters can be determined.

  16. A new process sensitivity index to identify important system processes under process model and parametric uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Heng; Ye, Ming; Walker, Anthony P.

    Hydrological models are always composed of multiple components that represent processes key to intended model applications. When a process can be simulated by multiple conceptual-mathematical models (process models), model uncertainty in representing the process arises. While global sensitivity analysis methods have been widely used for identifying important processes in hydrologic modeling, the existing methods consider only parametric uncertainty but ignore the model uncertainty for process representation. To address this problem, this study develops a new method to probe multimodel process sensitivity by integrating the model averaging methods into the framework of variance-based global sensitivity analysis, given that the model averaging methods quantify both parametric and model uncertainty. A new process sensitivity index is derived as a metric of relative process importance, and the index includes variance in model outputs caused by uncertainty in both process models and model parameters. For demonstration, the new index is used to evaluate the processes of recharge and geology in a synthetic study of groundwater reactive transport modeling. The recharge process is simulated by two models that convert precipitation to recharge, and the geology process is also simulated by two models with different parameterizations of hydraulic conductivity; each process model has its own random parameters. The new process sensitivity index is mathematically general, and can be applied to a wide range of problems in hydrology and beyond.
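
    A rough way to read the index described above: a realization of a process includes both the choice of process model (weighted by its prior probability) and a draw of that model's parameters, and the first-order process sensitivity index is the fraction of total output variance explained by varying that process realization. The sketch below illustrates this with a toy two-process system and double-loop Monte Carlo; the models, weights, and output function are invented and are not those of the cited study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: recharge R and hydraulic conductivity K are each described by two
# alternative process models with equal prior weights; output y = R / K (illustrative only).
def sample_recharge(n):
    model = rng.choice([0, 1], size=n, p=[0.5, 0.5])          # two recharge models
    return np.where(model == 0,
                    rng.normal(200.0, 20.0, n),               # e.g. a linear precip-to-recharge model
                    rng.lognormal(np.log(180.0), 0.15, n))    # e.g. a nonlinear model

def sample_conductivity(n):
    model = rng.choice([0, 1], size=n, p=[0.5, 0.5])          # two geology parameterizations
    return np.where(model == 0,
                    rng.lognormal(np.log(10.0), 0.3, n),
                    rng.lognormal(np.log(15.0), 0.5, n))

def output(r, k):
    return r / k

# First-order process sensitivity index by double-loop Monte Carlo:
# PS_R = Var_R( E[y | recharge realization] ) / Var(y), where a recharge realization
# fixes both the recharge model choice and its parameter draw.
n_outer, n_inner = 500, 500
cond_means = np.empty(n_outer)
for j in range(n_outer):
    r_fixed = sample_recharge(1)                              # one realization of the recharge process
    cond_means[j] = output(r_fixed, sample_conductivity(n_inner)).mean()

y_all = output(sample_recharge(20000), sample_conductivity(20000))
print("process sensitivity index for recharge ~", round(cond_means.var() / y_all.var(), 2))
```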

  17. Accuracy of combined dynamic contrast-enhanced magnetic resonance imaging and diffusion-weighted imaging for breast cancer detection: a meta-analysis.

    PubMed

    Zhang, Li; Tang, Min; Min, Zhiqian; Lu, Jun; Lei, Xiaoyan; Zhang, Xiaoling

    2016-06-01

    Magnetic resonance imaging (MRI) is increasingly being used to examine patients with suspected breast cancer. To determine the diagnostic performance of combined dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) and diffusion-weighted imaging (DWI) for breast cancer detection. A comprehensive search of the PUBMED, EMBASE, Web of Science, and Cochrane Library databases was performed up to September 2014. Statistical analysis included pooling of sensitivity and specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and diagnostic accuracy using the summary receiver operating characteristic (SROC). All analyses were conducted using STATA (version 12.0), RevMan (version 5.2), and Meta-Disc 1.4 software programs. Fourteen studies were analyzed, which included a total of 1140 patients with 1276 breast lesions. The pooled sensitivity and specificity of combined DCE-MRI and DWI were 91.6% and 85.5%, respectively. The pooled sensitivity and specificity of DWI-MRI were 86.0% and 75.6%, respectively. The pooled sensitivity and specificity of DCE-MRI were 93.2% and 71.1%, respectively. The area under the SROC curve (AUC-SROC) was 0.94 for combined DCE-MRI and DWI and 0.85 for DCE-MRI alone. Deeks testing confirmed no significant publication bias in any of the studies. Combined DCE-MRI and DWI had higher diagnostic accuracy than either DCE-MRI or DWI alone for the diagnosis of breast cancer. © The Foundation Acta Radiologica 2015.

  18. Adult vector control, mosquito ecology and malaria transmission.

    PubMed

    Brady, Oliver J; Godfray, H Charles J; Tatem, Andrew J; Gething, Peter W; Cohen, Justin M; McKenzie, F Ellis; Alex Perkins, T; Reiner, Robert C; Tusting, Lucy S; Scott, Thomas W; Lindsay, Steven W; Hay, Simon I; Smith, David L

    2015-03-01

    Standard advice regarding vector control is to prefer interventions that reduce the lifespan of adult mosquitoes. The basis for this advice is a decades-old sensitivity analysis of 'vectorial capacity', a concept relevant for most malaria transmission models and based solely on adult mosquito population dynamics. Recent advances in micro-simulation models offer an opportunity to expand the theory of vectorial capacity to include both adult and juvenile mosquito stages in the model. In this study we revisit arguments about transmission and its sensitivity to mosquito bionomic parameters using an elasticity analysis of developed formulations of vectorial capacity. We show that reducing adult survival has effects on both adult and juvenile population size, which are significant for transmission and not accounted for in traditional formulations of vectorial capacity. The elasticity of these effects is dependent on various mosquito population parameters, which we explore. Overall, control is most sensitive to methods that affect adult mosquito mortality rates, followed by blood feeding frequency, human blood feeding habit, and lastly, to adult mosquito population density. These results emphasise more strongly than ever the sensitivity of transmission to adult mosquito mortality, but also suggest the high potential of combinations of interventions including larval source management. This must be done with caution, however, as policy requires a more careful consideration of costs, operational difficulties and policy goals in relation to baseline transmission. © The Author 2015. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene.
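
    For orientation, the classical adult-only vectorial capacity discussed above is usually written C = m a^2 p^n / (-ln p), with m the mosquito density per person, a the human biting rate, p the daily survival probability and n the extrinsic incubation period in days; the paper's point is that this formulation omits juvenile-stage feedbacks. A numerical elasticity (d ln C / d ln theta) of the classical form, with purely illustrative parameter values, reproduces the strong sensitivity to adult survival:

```python
import numpy as np

def vectorial_capacity(m, a, p, n):
    """Classical adult-only vectorial capacity C = m * a**2 * p**n / (-ln p)."""
    return m * a**2 * p**n / (-np.log(p))

def elasticity(f, params, name, rel_step=1e-4):
    """Elasticity d ln f / d ln theta by central finite differences."""
    up, down = dict(params), dict(params)
    up[name] *= (1 + rel_step)
    down[name] *= (1 - rel_step)
    return (np.log(f(**up)) - np.log(f(**down))) / (2 * rel_step)

baseline = dict(m=10.0, a=0.3, p=0.9, n=10)   # illustrative parameter values only
for name in baseline:
    print(name, round(elasticity(vectorial_capacity, baseline, name), 2))
```

    With these values the elasticity with respect to daily survival p comes out near n + 1/(-ln p), roughly 19, compared with an elasticity of 1 for mosquito density m, consistent with the abstract's emphasis on adult mortality over population density.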

  19. Alkylation sensitivity screens reveal a conserved cross-species functionome

    PubMed Central

    Svilar, David; Dyavaiah, Madhu; Brown, Ashley R.; Tang, Jiang-bo; Li, Jianfeng; McDonald, Peter R.; Shun, Tong Ying; Braganza, Andrea; Wang, Xiao-hong; Maniar, Salony; St Croix, Claudette M.; Lazo, John S.; Pollack, Ian F.; Begley, Thomas J.; Sobol, Robert W.

    2013-01-01

    To identify genes that contribute to chemotherapy resistance in glioblastoma, we conducted a synthetic lethal screen in a chemotherapy-resistant glioblastoma derived cell line with the clinical alkylator temozolomide (TMZ) and an siRNA library tailored towards “druggable” targets. Select DNA repair genes in the screen were validated independently, confirming the DNA glycosylases UNG and MYH as well as MPG to be involved in the response to high dose TMZ. The involvement of UNG and MYH is likely the result of a TMZ-induced burst of reactive oxygen species. We then compared the human TMZ sensitizing genes identified in our screen with those previously identified from alkylator screens conducted in E. coli and S. cerevisiae. The conserved biological processes across all three species composes an Alkylation Functionome that includes many novel proteins not previously thought to impact alkylator resistance. This high-throughput screen, validation and cross-species analysis was then followed by a mechanistic analysis of two essential nodes: base excision repair (BER) DNA glycosylases (UNG, human and mag1, S. cerevisiae) and protein modification systems, including UBE3B and ICMT in human cells or pby1, lip22, stp22 and aim22 in S. cerevisiae. The conserved processes of BER and protein modification were dual targeted and yielded additive sensitization to alkylators in S. cerevisiae. In contrast, dual targeting of BER and protein modification genes in human cells did not increase sensitivity, suggesting an epistatic relationship. Importantly, these studies provide potential new targets to overcome alkylating agent resistance. PMID:23038810

  20. [Comparison of simple pooling and bivariate model used in meta-analyses of diagnostic test accuracy published in Chinese journals].

    PubMed

    Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan

    2015-06-18

    To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results from these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and data for fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the use of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I2, was explored. A total of 55 systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-disc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001, P=0.031 respectively). Greater differences were found as I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. The Meta-disc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effects model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance, the larger the deviation of simple pooling tends to be. It is necessary to increase the knowledge level of statistical methods and software for meta-analyses of DTA data.
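
    To make the contrast concrete, the sketch below pools sensitivity from a handful of invented 2x2 tables in two ways: simple pooling (collapsing all studies into one table) and a univariate DerSimonian-Laird random-effects pooling of logit-transformed sensitivities. The bivariate model discussed in the paper additionally models specificity jointly with correlated random effects, so this is only a simplified illustration of why the pooling choice matters when between-study variability is large.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: (TP, FN, TN, FP)
studies = np.array([
    [45,  5, 80, 20],
    [30, 15, 60, 10],
    [70, 10, 90, 30],
    [20, 20, 50,  5],
])
tp, fn, tn, fp = studies.T

# (a) Simple pooling: collapse all studies into a single 2x2 table.
simple_sens = tp.sum() / (tp.sum() + fn.sum())

# (b) Univariate DerSimonian-Laird random-effects pooling of logit(sensitivity).
sens = (tp + 0.5) / (tp + fn + 1.0)                  # continuity-corrected sensitivities
y = np.log(sens / (1 - sens))                        # logit transform
v = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)              # approximate within-study variances
w = 1.0 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_star = 1.0 / (v + tau2)
re_sens = 1.0 / (1.0 + np.exp(-np.sum(w_star * y) / w_star.sum()))

print("simple pooled sensitivity        :", round(simple_sens, 3))
print("random-effects pooled sensitivity:", round(re_sens, 3))
```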

  1. Differences in sensitivity to parenting depending on child temperament: A meta-analysis.

    PubMed

    Slagt, Meike; Dubas, Judith Semon; Deković, Maja; van Aken, Marcel A G

    2016-10-01

    Several models of individual differences in environmental sensitivity postulate increased sensitivity of some individuals to either stressful (diathesis-stress), supportive (vantage sensitivity), or both environments (differential susceptibility). In this meta-analysis we examine whether children vary in sensitivity to parenting depending on their temperament, and if so, which model can best be used to describe this sensitivity pattern. We tested whether associations between negative parenting and negative or positive child adjustment as well as between positive parenting and positive or negative child adjustment would be stronger among children higher on putative sensitivity markers (difficult temperament, negative emotionality, surgency, and effortful control). Longitudinal studies with children up to 18 years (k = 105 samples from 84 studies, Nmean = 6,153) that reported on a parenting-by-temperament interaction predicting child adjustment were included. We found 235 independent effect sizes for associations between parenting and child adjustment. Results showed that children with a more difficult temperament (compared with those with a more easy temperament) were more vulnerable to negative parenting, but also profited more from positive parenting, supporting the differential susceptibility model. Differences in susceptibility were expressed in externalizing and internalizing problems and in social and cognitive competence. Support for differential susceptibility for negative emotionality was, however, only present when this trait was assessed during infancy. Surgency and effortful control did not consistently moderate associations between parenting and child adjustment, providing little support for differential susceptibility, diathesis-stress, or vantage sensitivity models. Finally, parenting-by-temperament interactions were more pronounced when parenting was assessed using observations compared to questionnaires. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  2. Small-volume cavity cell using hollow optical fiber for Raman scattering-based gas detection

    NASA Astrophysics Data System (ADS)

    Okita, Y.; Katagiri, T.; Matsuura, Y.

    2011-03-01

    A highly sensitive Raman cell based on a hollow optical fiber, suitable for real-time breath analysis, is reported. A hollow optical fiber with an inner silver coating is used as both the gas cell and the Stokes light collector. The very small cell volume of only 0.4 ml or less enables fast response and real-time measurement of trace gases. To increase the sensitivity, the cell is arranged in a cavity that includes a long-pass filter and a highly reflective mirror. The sensitivity of the cavity cell is more than two times higher than that of the cell without the cavity.

  3. Smart Voyage Planning Model Sensitivity Analysis Using Ocean and Atmospheric Models Including Ensemble Methods

    DTIC Science & Technology

    2012-09-01

    LEO: Low Earth Orbit. Water vapor, infrared, and visible satellite imagery (Geostationary, MODIS, AVHRR, and LEO/GEO) (UCAR 2012).

  4. A lower and more constrained estimate of climate sensitivity using updated observations and detailed radiative forcing time series

    NASA Astrophysics Data System (ADS)

    Skeie, R. B.; Berntsen, T.; Aldrin, M.; Holden, M.; Myhre, G.

    2012-04-01

    A key question in climate science is to quantify the sensitivity of the climate system to perturbation in the radiative forcing (RF). This sensitivity is often represented by the equilibrium climate sensitivity, but this quantity is poorly constrained with significant probabilities for high values. In this work the equilibrium climate sensitivity (ECS) is estimated based on observed near-surface temperature change from the instrumental record, changes in ocean heat content and detailed RF time series. RF time series from pre-industrial times to 2010 for all main anthropogenic and natural forcing mechanisms are estimated and the cloud lifetime effect and the semi-direct effect, which are not RF mechanisms in a strict sense, are included in the analysis. The RF time series are linked to the observations of ocean heat content and temperature change through an energy balance model and a stochastic model, using a Bayesian approach to estimate the ECS from the data. The posterior mean of the ECS is 1.9˚C with 90% credible interval (C.I.) ranging from 1.2 to 2.9˚C, which is tighter than previously published estimates. Observational data up to and including year 2010 are used in this study. This is at least ten additional years compared to the majority of previously published studies that have used the instrumental record in attempts to constrain the ECS. We show that the additional 10 years of data, and especially 10 years of additional ocean heat content data, have significantly narrowed the probability density function of the ECS. If only data up to and including year 2000 are used in the analysis, the 90% C.I. is 1.4 to 10.6˚C with a pronounced heavy tail in line with previous estimates of ECS constrained by observations in the 20th century. Also the transient climate response (TCR) is estimated in this study. Using observational data up to and including year 2010 gives a 90% C.I. of 1.0 to 2.1˚C, while the 90% C.I. is significantly broader ranging from 1.1 to 3.4 ˚C if only data up to and including year 2000 is used.
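
    The forward link between a radiative forcing series and temperature can be illustrated with a zero-dimensional energy balance model, C dT/dt = F(t) - (F_2x/ECS) T. The study's actual model is more elaborate (it also tracks ocean heat content and sits inside a Bayesian estimation framework), so the sketch below, with an invented forcing ramp and heat capacity, is only meant to show how the assumed ECS controls the simulated warming.

```python
import numpy as np

F2X = 3.7       # approximate forcing from a CO2 doubling, W m^-2
C_HEAT = 8.0    # effective heat capacity, W yr m^-2 K^-1 (illustrative value)

def temperature_response(forcing, ecs):
    """Zero-dimensional energy balance model, one-year time step:
    C dT/dt = F(t) - (F2x/ECS) * T."""
    lam = F2X / ecs                      # climate feedback parameter, W m^-2 K^-1
    temp = np.zeros(len(forcing))
    for t in range(1, len(forcing)):
        temp[t] = temp[t - 1] + (forcing[t - 1] - lam * temp[t - 1]) / C_HEAT
    return temp

years = np.arange(1750, 2011)
forcing = 2.5 * (years - 1750) / (2010 - 1750)     # toy linear ramp to 2.5 W m^-2 in 2010
for ecs in (1.5, 1.9, 3.0):
    print(f"ECS {ecs} K -> simulated warming in 2010:",
          round(temperature_response(forcing, ecs)[-1], 2), "K")
```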

  5. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Guoping; Mayes, Melanie; Parker, Jack C

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
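
    As an illustration of the kind of forward solution and curve fitting such a tool wraps, the sketch below fits the pore-water velocity and dispersion coefficient of the one-dimensional equilibrium convection-dispersion equation (an Ogata-Banks-type solution for a continuous injection, retardation factor of 1) to hypothetical breakthrough data using SciPy rather than Excel/VBA; the column length, data, and starting values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

X = 10.0  # column length, cm (hypothetical experiment)

def cde_breakthrough(t, v, d):
    """Equilibrium CDE, continuous injection at x=0, retardation factor 1."""
    a = (X - v * t) / (2.0 * np.sqrt(d * t))
    b = (X + v * t) / (2.0 * np.sqrt(d * t))
    return 0.5 * (erfc(a) + np.exp(v * X / d) * erfc(b))

# Hypothetical observed relative concentrations at the column outlet (time in minutes)
t_obs = np.array([5, 10, 15, 20, 25, 30, 40, 50], dtype=float)
c_obs = np.array([0.01, 0.08, 0.30, 0.55, 0.74, 0.86, 0.95, 0.99])

(v_fit, d_fit), cov = curve_fit(cde_breakthrough, t_obs, c_obs, p0=[0.5, 1.0],
                                bounds=([0.01, 0.01], [5.0, 10.0]))
perr = np.sqrt(np.diag(cov))
print("v =", round(v_fit, 3), "+/-", round(perr[0], 3), "cm/min")
print("D =", round(d_fit, 3), "+/-", round(perr[1], 3), "cm^2/min")
```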

  6. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
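
    The arithmetic behind a $/QALY figure is simple enough to sketch: discount each year's utility gain at 3% per year, sum to obtain QALYs gained, and divide the net present cost by that sum. The numbers below are invented placeholders rather than the study's model inputs; the loop at the end mimics a one-way sensitivity analysis on the utility gain.

```python
def discounted_qalys(annual_utility_gain, years, rate=0.03):
    """Sum of discounted yearly utility gains (QALYs) at a 3% annual discount rate."""
    return sum(annual_utility_gain / (1 + rate) ** t for t in range(1, years + 1))

# Hypothetical inputs for illustration only (not the study's actual model values)
utility_gain_per_year = 0.07   # utility difference, surgery versus observation
horizon_years = 13
surgery_cost = 3500.0          # net present cost of surgery and follow-up

qalys = discounted_qalys(utility_gain_per_year, horizon_years)
print("discounted QALYs gained:", round(qalys, 3))
print("cost per QALY gained   :", round(surgery_cost / qalys))

# One-way sensitivity analysis: vary the utility gain by +/- 25%
for scale in (0.75, 1.0, 1.25):
    q = discounted_qalys(utility_gain_per_year * scale, horizon_years)
    print(f"utility gain x{scale}: ${surgery_cost / q:,.0f}/QALY")
```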

  7. Switch of Sensitivity Dynamics Revealed with DyGloSA Toolbox for Dynamical Global Sensitivity Analysis as an Early Warning for System's Critical Transition

    PubMed Central

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems like climate, economy, ecosystems it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equations models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction of the computational time (up to 12 times). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on 32- and 64-bit Windows and Linux systems. PMID:24367574

  8. Switch of sensitivity dynamics revealed with DyGloSA toolbox for dynamical global sensitivity analysis as an early warning for system's critical transition.

    PubMed

    Baumuratova, Tatiana; Dobre, Simona; Bastogne, Thierry; Sauter, Thomas

    2013-01-01

    Systems with bifurcations may experience abrupt irreversible and often unwanted shifts in their performance, called critical transitions. For many systems like climate, economy, ecosystems it is highly desirable to identify indicators serving as early warnings of such regime shifts. Several statistical measures were recently proposed as early warnings of critical transitions including increased variance, autocorrelation and skewness of experimental or model-generated data. The lack of an automated tool for model-based prediction of critical transitions led to the design of DyGloSA, a MATLAB toolbox for dynamical global parameter sensitivity analysis (GPSA) of ordinary differential equations models. We suggest that the switch in dynamics of parameter sensitivities revealed by our toolbox is an early warning that a system is approaching a critical transition. We illustrate the efficiency of our toolbox by analyzing several models with bifurcations and predicting the time periods when systems can still avoid going to a critical transition by manipulating certain parameter values, which is not detectable with the existing SA techniques. DyGloSA is based on the SBToolbox2 and contains functions that dynamically compute the global sensitivity indices of the system by applying four main GPSA methods: eFAST, Sobol's ANOVA, PRCC and WALS. It includes parallelized versions of the functions, enabling a significant reduction of the computational time (up to 12 times). DyGloSA is freely available as a set of MATLAB scripts at http://bio.uni.lu/systems_biology/software/dyglosa. It requires installation of MATLAB (versions R2008b or later) and the Systems Biology Toolbox2 available at www.sbtoolbox2.org. DyGloSA can be run on 32- and 64-bit Windows and Linux systems.
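
    The variance-based indices that eFAST and Sobol's ANOVA estimate can be sketched without the toolbox using plain Monte Carlo pick-freeze estimators on a standard benchmark (the Ishigami function); DyGloSA's contribution is computing such indices dynamically over time for ODE models, which this static example does not attempt.

```python
import numpy as np

rng = np.random.default_rng(1)

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function, a standard benchmark for variance-based GSA."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n, k = 20000, 3
A = rng.uniform(-np.pi, np.pi, size=(n, k))
B = rng.uniform(-np.pi, np.pi, size=(n, k))
fA, fB = ishigami(A), ishigami(B)
var_y = np.var(np.concatenate([fA, fB]))

for i in range(k):
    ABi = A.copy()
    ABi[:, i] = B[:, i]               # "pick-freeze": column i taken from B, rest from A
    fABi = ishigami(ABi)
    s_first = np.mean(fB * (fABi - fA)) / var_y        # first-order index (Saltelli 2010 estimator)
    s_total = 0.5 * np.mean((fA - fABi)**2) / var_y    # total index (Jansen estimator)
    print(f"x{i + 1}: S1 ~ {s_first:.2f}, ST ~ {s_total:.2f}")
```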

  9. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.

  10. Quantitative performance targets by using balanced scorecard system: application to waste management and public administration.

    PubMed

    Mendes, Paula; Nunes, Luis Miguel; Teixeira, Margarida Ribau

    2014-09-01

    This article demonstrates how decision-makers can be guided in the process of defining performance target values in the balanced scorecard system. We apply a method based on sensitivity analysis with Monte Carlo simulation to the municipal solid waste management system in Loulé Municipality (Portugal). The method includes two steps: sensitivity analysis of performance indicators to identify those performance indicators with the highest impact on the balanced scorecard model outcomes; and sensitivity analysis of the target values for the previously identified performance indicators. Sensitivity analysis shows that four strategic objectives (IPP1: Comply with the national waste strategy; IPP4: Reduce nonrenewable resources and greenhouse gases; IPP5: Optimize the life-cycle of waste; and FP1: Meet and optimize the budget) alone contribute 99.7% of the variability in overall balanced scorecard value. Thus, these strategic objectives had a much stronger impact on the estimated balanced scorecard outcome than did others, with the IPP1 and the IPP4 accounting for over 55% and 22% of the variance in overall balanced scorecard value, respectively. The remaining performance indicators contribute only marginally. In addition, a change in the value of a single indicator's target value made the overall balanced scorecard value change by as much as 18%. This may lead to involuntarily biased decisions by organizations regarding performance target-setting, if not prevented with the help of methods such as that proposed and applied in this study. © The Author(s) 2014.

  11. Value of high-sensitivity C-reactive protein assays in predicting atrial fibrillation recurrence: a systematic review and meta-analysis

    PubMed Central

    Yo, Chia-Hung; Lee, Si-Huei; Chang, Shy-Shin; Lee, Matthew Chien-Hung; Lee, Chien-Chang

    2014-01-01

    Objectives We performed a systematic review and meta-analysis of studies on high-sensitivity C-reactive protein (hs-CRP) assays to see whether these tests are predictive of atrial fibrillation (AF) recurrence after cardioversion. Design Systematic review and meta-analysis. Data sources PubMed, EMBASE and Cochrane databases as well as a hand search of the reference lists in the retrieved articles from inception to December 2013. Study eligibility criteria This review selected observational studies in which the measurements of serum CRP were used to predict AF recurrence. An hs-CRP assay was defined as any CRP test capable of measuring serum CRP to below 0.6 mg/dL. Primary and secondary outcome measures We summarised test performance characteristics with the use of forest plots, hierarchical summary receiver operating characteristic curves and bivariate random effects models. Meta-regression analysis was performed to explore the source of heterogeneity. Results We included nine qualifying studies comprising a total of 347 patients with AF recurrence and 335 controls. A CRP level higher than the optimal cut-off point was an independent predictor of AF recurrence after cardioversion (summary adjusted OR: 3.33; 95% CI 2.10 to 5.28). The estimated pooled sensitivity and specificity for hs-CRP was 71.0% (95% CI 63% to 78%) and 72.0% (61% to 81%), respectively. Most studies used a CRP cut-off point of 1.9 mg/L to predict long-term AF recurrence (77% sensitivity, 65% specificity), and 3 mg/L to predict short-term AF recurrence (73% sensitivity, 71% specificity). Conclusions hs-CRP assays are moderately accurate in predicting AF recurrence after successful cardioversion. PMID:24556243

  12. Diagnostic accuracy of magnetic resonance imaging techniques for treatment response evaluation in patients with high-grade glioma, a systematic review and meta-analysis.

    PubMed

    van Dijken, Bart R J; van Laar, Peter Jan; Holtman, Gea A; van der Hoorn, Anouk

    2017-10-01

    Treatment response assessment in high-grade gliomas uses contrast enhanced T1-weighted MRI, but is unreliable. Novel advanced MRI techniques have been studied, but the accuracy is not well known. Therefore, we performed a systematic meta-analysis to assess the diagnostic accuracy of anatomical and advanced MRI for treatment response in high-grade gliomas. Databases were searched systematically. Study selection and data extraction were done by two authors independently. Meta-analysis was performed using a bivariate random effects model when ≥5 studies were included. Anatomical MRI (five studies, 166 patients) showed a pooled sensitivity and specificity of 68% (95%CI 51-81) and 77% (45-93), respectively. Pooled apparent diffusion coefficients (seven studies, 204 patients) demonstrated a sensitivity of 71% (60-80) and specificity of 87% (77-93). DSC-perfusion (18 studies, 708 patients) sensitivity was 87% (82-91) with a specificity of 86% (77-91). DCE-perfusion (five studies, 207 patients) sensitivity was 92% (73-98) and specificity was 85% (76-92). The sensitivity of spectroscopy (nine studies, 203 patients) was 91% (79-97) and specificity was 95% (65-99). Advanced techniques showed higher diagnostic accuracy than anatomical MRI, the highest for spectroscopy, supporting the use in treatment response assessment in high-grade gliomas. • Treatment response assessment in high-grade gliomas with anatomical MRI is unreliable • Novel advanced MRI techniques have been studied, but diagnostic accuracy is unknown • Meta-analysis demonstrates that advanced MRI showed higher diagnostic accuracy than anatomical MRI • Highest diagnostic accuracy for spectroscopy and perfusion MRI • Supports the incorporation of advanced MRI in high-grade glioma treatment response assessment.

  13. 75 FR 44216 - Cybersecurity, Innovation and the Internet Economy

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-28

    ..., intellectual property, business advocacy and export control. This Notice of Inquiry is one in a series of... proprietary and sensitive business, transactional, and personal data. At the same time that businesses and... area include research and training, threat reporting and analysis, information collection and...

  14. Novel applications of lasers in biology, chemistry, and paleontology

    NASA Astrophysics Data System (ADS)

    Johnston, Roger G.

    1994-06-01

    Los Alamos National Laboratory has a long history of exploring unconventional applications for lasers. Three novel applications currently under investigation include using lasers for the analysis of dinosaur gastroliths, for detecting Salmonella contamination in chicken eggs, and for ultra-sensitive, ultra-stable interferometry.

  15. Spectrometer gun

    DOEpatents

    Waechter, David A.; Wolf, Michael A.; Umbarger, C. John

    1985-01-01

    A hand-holdable, battery-operated, microprocessor-based spectrometer gun includes a low-power matrix display and sufficient memory to permit both real-time observation and extended analysis of detected radiation pulses. Universality of the incorporated signal processing circuitry permits operation with various detectors having differing pulse detection and sensitivity parameters.

  16. Moment-based metrics for global sensitivity analysis of hydrological systems

    NASA Astrophysics Data System (ADS)

    Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto

    2017-12-01

    We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
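
    In the spirit of the metrics described above (the exact definitions in the paper may differ), one can estimate how strongly conditioning on a single parameter shifts the mean and variance of the output by binning Monte Carlo samples of that parameter and averaging the absolute change of the conditional moments relative to the unconditional ones. The toy model below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy model with three uniform parameters and additive noise (illustrative only)
n = 100_000
x = rng.uniform(0.0, 1.0, size=(n, 3))
y = x[:, 0]**2 + 0.5 * x[:, 1] + 0.1 * rng.normal(size=n)

def moment_based_indices(xi, y, n_bins=20):
    """Average absolute change of the mean and variance of y when conditioning on xi,
    normalized by the unconditional moments (moment-based GSA in spirit, not verbatim)."""
    edges = np.quantile(xi, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(xi, edges[1:-1]), 0, n_bins - 1)
    cond_mean = np.array([y[idx == b].mean() for b in range(n_bins)])
    cond_var = np.array([y[idx == b].var() for b in range(n_bins)])
    mean_index = np.mean(np.abs(y.mean() - cond_mean)) / abs(y.mean())
    var_index = np.mean(np.abs(y.var() - cond_var)) / y.var()
    return mean_index, var_index

for i in range(3):
    m_idx, v_idx = moment_based_indices(x[:, i], y)
    print(f"x{i + 1}: mean-based index {m_idx:.3f}, variance-based index {v_idx:.3f}")
```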

  17. Analysis of sensitivity to different parameterization schemes for a subtropical cyclone

    NASA Astrophysics Data System (ADS)

    Quitián-Hernández, L.; Fernández-González, S.; González-Alemán, J. J.; Valero, F.; Martín, M. L.

    2018-05-01

    A sensitivity analysis to diverse WRF model physical parameterization schemes is carried out during the lifecycle of a subtropical cyclone (STC). STCs are low-pressure systems that share tropical and extratropical characteristics, with hybrid thermal structures. In October 2014, an STC made landfall in the Canary Islands, causing widespread damage from strong winds and precipitation there. The system began to develop on October 18 and its effects lasted until October 21. Accurate simulation of this type of cyclone continues to be a major challenge because of its rapid intensification and unique characteristics. In the present study, several numerical simulations were performed using the WRF model to carry out a sensitivity analysis of its various parameterization schemes for the development and intensification of the STC. The combination of parameterization schemes that best simulated this type of phenomenon was thereby determined. In particular, the parameterization combinations that included the Tiedtke cumulus schemes had the most positive effects on model results. Moreover, concerning STC track validation, optimal results were attained when the STC was fully formed and all convective processes had stabilized. Furthermore, to determine the parameterization schemes that optimally categorize the STC structure, a verification using Cyclone Phase Space is performed. Consequently, the combination of parameterizations including the Tiedtke cumulus schemes was again the best at categorizing the cyclone's subtropical structure. For strength validation, related atmospheric variables such as wind speed and precipitable water were analyzed. Finally, the effects of using a deterministic or probabilistic approach in simulating intense convective phenomena were evaluated.

  18. Key Reliability Drivers of Liquid Propulsion Engines and A Reliability Model for Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Zhao-Feng; Fint, Jeffry A.; Kuck, Frederick M.

    2005-01-01

    This paper addresses the in-flight reliability of a liquid propulsion engine system for a launch vehicle. We first establish a comprehensive list of system and sub-system reliability drivers for any liquid propulsion engine system. We then build a reliability model to parametrically analyze the impact of some reliability parameters. We present sensitivity analysis results for a selected subset of the key reliability drivers using the model. Reliability drivers identified include: number of engines for the liquid propulsion stage, single engine total reliability, engine operation duration, engine thrust size, reusability, engine de-rating or up-rating, engine-out design (including engine-out switching reliability, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction), propellant specific hazards, engine start and cutoff transient hazards, engine combustion cycles, vehicle and engine interface and interaction hazards, engine health management system, engine modification, engine ground start hold down with launch commit criteria, engine altitude start (1 in. start), multiple altitude restart (less than 1 restart), component, subsystem and system design, manufacturing/ground operation support/pre- and post-flight checkouts and inspection, and extensiveness of the development program. We present some sensitivity analysis results for the following subset of the drivers: number of engines for the propulsion stage, single engine total reliability, engine operation duration, engine de-rating or up-rating requirements, engine-out design, catastrophic fraction, preventable failure fraction, unnecessary shutdown fraction, and engine health management system implementation (basic redlines and more advanced health management systems).
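
    As an illustration of how such a parametric model can drive a sensitivity study, the sketch below uses an invented binomial engine-out reliability model (the structure, parameter names and values are placeholders, not the paper's model) and perturbs a few drivers one at a time:

      from math import comb

      def stage_reliability(n_engines, r_engine, cat_fraction, p_switch, engine_out=1):
          """Toy mission-success model for a multi-engine liquid stage (illustrative only)."""
          q = 1.0 - r_engine                    # single-engine failure probability
          p_benign = q * (1.0 - cat_fraction)   # benign (contained) failure
          success = 0.0
          for k in range(engine_out + 1):       # tolerate up to engine_out benign failures
              p_k = comb(n_engines, k) * p_benign ** k * r_engine ** (n_engines - k)
              success += p_k * p_switch ** k    # each benign failure needs a good switch-over
          return success

      nominal = dict(n_engines=5, r_engine=0.995, cat_fraction=0.3, p_switch=0.99)
      base = stage_reliability(**nominal)
      for name, delta in [("r_engine", 0.002), ("cat_fraction", 0.1), ("p_switch", 0.005)]:
          perturbed = dict(nominal)
          perturbed[name] += delta
          print(f"{name:>13}: delta stage reliability = {stage_reliability(**perturbed) - base:+.5f}")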

  19. A diagnostic model for the detection of sensitization to wheat allergens was developed and validated in bakery workers.

    PubMed

    Suarthana, Eva; Vergouwe, Yvonne; Moons, Karel G; de Monchy, Jan; Grobbee, Diederick; Heederik, Dick; Meijer, Evert

    2010-09-01

    To develop and validate a prediction model to detect sensitization to wheat allergens in bakery workers. The prediction model was developed in 867 Dutch bakery workers (development set, prevalence of sensitization 13%) and included questionnaire items (candidate predictors). First, principal component analysis was used to reduce the number of candidate predictors. Then, multivariable logistic regression analysis was used to develop the model. Internal validation and the extent of optimism were assessed with bootstrapping. External validation was studied in 390 independent Dutch bakery workers (validation set, prevalence of sensitization 20%). The prediction model contained the predictors nasoconjunctival symptoms, asthma symptoms, shortness of breath and wheeze, work-related upper and lower respiratory symptoms, and traditional bakery. The model showed good discrimination, with an area under the receiver operating characteristic (ROC) curve of 0.76 (0.75 after internal validation). Application of the model in the validation set gave reasonable discrimination (ROC area = 0.69) and good calibration after a small adjustment of the model intercept. A simple model with questionnaire items only can be used to stratify bakers according to their risk of sensitization to wheat allergens. Its use may increase the cost-effectiveness of (subsequent) medical surveillance.
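
    A hedged sketch of the modelling steps named above (multivariable logistic regression followed by a bootstrap estimate of optimism), using synthetic stand-in data rather than the bakery-worker cohort:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score
      from sklearn.utils import resample

      rng = np.random.default_rng(1)
      # Synthetic 0/1 questionnaire items and a sensitization outcome (illustrative only)
      X = rng.integers(0, 2, size=(800, 5)).astype(float)
      logit = -2.5 + X @ np.array([0.9, 0.7, 0.6, 0.8, 0.5])
      y = rng.random(800) < 1.0 / (1.0 + np.exp(-logit))

      model = LogisticRegression().fit(X, y)
      apparent_auc = roc_auc_score(y, model.predict_proba(X)[:, 1])

      # Bootstrap optimism: performance on the bootstrap sample minus performance on the original data
      optimism = []
      for _ in range(200):
          Xb, yb = resample(X, y)
          mb = LogisticRegression().fit(Xb, yb)
          optimism.append(roc_auc_score(yb, mb.predict_proba(Xb)[:, 1])
                          - roc_auc_score(y, mb.predict_proba(X)[:, 1]))

      print(f"apparent AUC {apparent_auc:.3f}, optimism-corrected {apparent_auc - np.mean(optimism):.3f}")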

  20. Diagnostic value of secreted frizzled-related protein 2 gene promoter hypermethylation in stool for colorectal cancer: A meta-analysis.

    PubMed

    Zhou, Zhiran; Zhang, Huitian; Lei, Yunxia

    2016-10-01

    To evaluate the diagnostic value of secreted frizzled-related protein 2 (SFRP2) gene promoter hypermethylation in stool for colorectal cancer (CRC). Openly published diagnostic studies of SFRP2 gene promoter hypermethylation in stool for CRC detection were searched electronically in the PubMed, EMBASE, Cochrane Library, Web of Science, and China National Knowledge Infrastructure databases. Data on true positives, false positives, false negatives, and true negatives identified by stool SFRP2 gene hypermethylation were extracted and pooled for diagnostic sensitivity, specificity, and the summary receiver operating characteristic (SROC) curve. According to the inclusion and exclusion criteria, we finally included nine publications with 792 cases in the meta-analysis. The diagnostic sensitivity was aggregated through a random-effects model. The pooled sensitivity was 0.82 with a corresponding 95% confidence interval (95% CI) of 0.79-0.85; the pooled specificity and its corresponding 95% CI were 0.47 and 0.40-0.53 by the random-effects model; we pooled the SROC curve of sensitivity versus specificity according to the data published in the nine studies. The area under the SROC curve was 0.70 (95% CI: 0.65-0.73). SFRP2 gene promoter hypermethylation in stool is a potential biomarker for CRC diagnosis with relatively high sensitivity.

  1. An Evaluation of Transplacental Carcinogenesis for Human ...

    EPA Pesticide Factsheets

    Risk assessments take into account the sensitivity of the postnatal period to carcinogens through the application of age-dependent adjustment factors (ADAFs) (Barton et al. 2005). The prenatal period is also recognized to be sensitive but is typically not included in risk assessments (NRC, 2009). An analysis by California OEHHA (2008) contrasted prenatal, postnatal and adult sensitivity to 23 different carcinogens across 37 studies. That analysis found a wide range of transplacental sensitivity, with some agents nearly 100-fold more potent in utero than in adults while others had an in utero/adult ratio below 1 (i.e., lower potency than adult-only exposure). Five carcinogens had more modest ratios to adult potency in both pre- and postnatal testing (vinyl chloride, ethylnitroso biuret, 3-methylcholanthrene, urethane, diethylnitrosamine, 3-10 fold). Only one chemical showed a pre- vs postnatal divergence (butylnitrosourea, prenatal below adult). Based upon this limited set of genotoxic carcinogens, it appears that the prenatal period often has a sensitivity that approximates what has been found for the postnatal period, and the maternal system does not offer substantial protection against transplacental carcinogenesis in most cases. This suggests that the system of ADAFs developed for postnatal exposure may be considered for prenatal exposures as well. An alternative approach may be to calculate cancer risk for the period of pregnancy rather than blend this risk into the calculation of lifetime risk. This

  2. Sensitivity analysis for the total nitrogen pollution of the Danjiangkou Reservoir based on a 3-D water quality model

    NASA Astrophysics Data System (ADS)

    Chen, Libin; Yang, Zhifeng; Liu, Haifei

    2017-12-01

    Inter-basin water transfers containing a great deal of nitrogen are great threats to human health, biodiversity, and air and water quality in the recipient area. Danjiangkou Reservoir, the source reservoir for China's South-to-North Water Diversion Middle Route Project, suffers from total nitrogen pollution and threatens the water transfer to a number of metropolises including the capital, Beijing. To locate the main source of nitrogen pollution into the reservoir, especially near the Taocha canal head, where the intake of the water transfer begins, we constructed a 3-D water quality model. We then used an inflow sensitivity analysis method to analyze the significance of inflows from each tributary that may contribute to the total nitrogen pollution and affect water quality. The results indicated that the Han River was the most significant river with a sensitivity index of 0.340, followed by the Dan River with a sensitivity index of 0.089, while the Guanshan River and the Lang River were not significant, with sensitivity indices of 0.002 and 0.001, respectively. This result implies that the concentration and amount of nitrogen inflow outweigh the geographical position of the tributary as sources of total nitrogen pollution to the Taocha canal head of the Danjiangkou Reservoir.

  3. Real-time PCR and NASBA for rapid and sensitive detection of Vibrio cholerae in ballast water.

    PubMed

    Fykse, Else M; Nilsen, Trine; Nielsen, Agnete Dessen; Tryland, Ingun; Delacroix, Stephanie; Blatny, Janet M

    2012-02-01

    Transport of ballast water is one major factor in the transmission of aquatic organisms, including pathogenic bacteria. The IMO guidelines of the Convention for the Control and Management of Ships' Ballast Water and Sediments state that ships are to discharge <1 CFU per 100 ml ballast water of toxigenic Vibrio cholerae, emphasizing the need to establish test methods. To our knowledge, no sufficiently sensitive and rapid methods are available for cholera surveillance of ballast water. In this study, real-time PCR and NASBA methods have been evaluated to specifically detect 1 CFU/100 ml of V. cholerae in ballast water. Ballast water samples spiked with V. cholerae cells were filtered and enriched in alkaline peptone water before PCR or NASBA detection. The entire method, including sample preparation and analysis, was performed within 7 h, and has the potential to be used for analysis of ballast water for inspection and enforcement control. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Sensitivity analysis for best-estimate thermal models of vertical dry cask storage systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeVoe, Remy R.; Robb, Kevin R.; Skutnik, Steven E.

    Loading requirements for dry cask storage of spent nuclear fuel are driven primarily by decay heat capacity limitations, which themselves are determined through recommended limits on peak cladding temperature within the cask. This study examines the relative sensitivity of peak material temperatures within the cask to parameters that influence both the stored fuel residual decay heat as well as heat removal mechanisms. Here, these parameters include the detailed reactor operating history parameters (e.g., soluble boron concentrations and the presence of burnable poisons) as well as factors that influence heat removal, including non-dominant processes (such as conduction from the fuel basket to the canister and radiation within the canister) and ambient environmental conditions. By examining the factors that drive heat removal from the cask alongside well-understood factors that drive decay heat, it is therefore possible to make a contextual analysis of the most important parameters to evaluation of peak material temperatures within the cask.

  5. User Manual for Whisper-1.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.; Rising, Michael Evan; Alwin, Jennifer Louise

    2017-01-26

    Whisper is a statistical analysis package developed in 2014 to support nuclear criticality safety (NCS) validation [1-3]. It uses the sensitivity profile data for an application as computed by MCNP6 [4-6] along with covariance files [7,8] for the nuclear data to determine a baseline upper-subcritical-limit (USL) for the application. Whisper version 1.0 was first developed and used at LANL in 2014 [3]. During 2015-2016, Whisper was updated to version 1.1 and is to be included with the upcoming release of MCNP6.2. This document describes the user input and options for running whisper-1.1, including two Perl utility scripts that simplify ordinary NCS work, whisper_mcnp.pl and whisper_usl.pl. For many detailed references on the theory, applications, nuclear data & covariances, SQA, verification-validation, adjoint-based methods for sensitivity-uncertainty analysis, and more, see the Whisper - NCS Validation section of the MCNP Reference Collection at mcnp.lanl.gov. There are currently over 50 Whisper reference documents available.

  6. Sensitivity analysis for best-estimate thermal models of vertical dry cask storage systems

    DOE PAGES

    DeVoe, Remy R.; Robb, Kevin R.; Skutnik, Steven E.

    2017-07-08

    Loading requirements for dry cask storage of spent nuclear fuel are driven primarily by decay heat capacity limitations, which themselves are determined through recommended limits on peak cladding temperature within the cask. This study examines the relative sensitivity of peak material temperatures within the cask to parameters that influence both the stored fuel residual decay heat as well as heat removal mechanisms. Here, these parameters include the detailed reactor operating history parameters (e.g., soluble boron concentrations and the presence of burnable poisons) as well as factors that influence heat removal, including non-dominant processes (such as conduction from the fuel basket to the canister and radiation within the canister) and ambient environmental conditions. By examining the factors that drive heat removal from the cask alongside well-understood factors that drive decay heat, it is therefore possible to make a contextual analysis of the most important parameters to evaluation of peak material temperatures within the cask.

  7. GIS least-cost analysis approach for siting gas pipeline ROWs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sydelko, P.J.; Wilkey, P.L.

    1994-09-01

    Geographic-information-system applications for the siting and monitoring of gas pipeline rights-of-way (ROWS) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features, such as landcover, elevation, aspect, slope, soils, hydrography, transportation corridors, endangered species habitats, wetlands, and public line surveys. A geographic information system was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas-pipeline ROWS; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and shallow water table; (3) classification of satellite imagery for landuse/landcover that will affect ROWS; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.

  8. Critical factors determining the quantification capability of matrix-assisted laser desorption/ionization–time-of-flight mass spectrometry

    PubMed Central

    Wang, Chia-Chen; Lai, Yin-Hung; Ou, Yu-Meng; Chang, Huan-Tsung; Wang, Yi-Sheng

    2016-01-01

    Quantitative analysis with mass spectrometry (MS) is important but challenging. Matrix-assisted laser desorption/ionization (MALDI) coupled with time-of-flight (TOF) MS offers superior sensitivity, resolution and speed, but such techniques have numerous disadvantages that hinder quantitative analyses. This review summarizes essential obstacles to analyte quantification with MALDI-TOF MS, including the complex ionization mechanism of MALDI, sensitive characteristics of the applied electric fields and the mass-dependent detection efficiency of ion detectors. General quantitative ionization and desorption interpretations of ion production are described. Important instrument parameters and available methods of MALDI-TOF MS used for quantitative analysis are also reviewed. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644968

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gorham, P. W.; Allison, P.; DuVernois, M.

    The Antarctic Impulsive Transient Antenna (ANITA) completed its second Long Duration Balloon flight in January 2009, with 31 days aloft (28.5 live days) over Antarctica. ANITA searches for impulsive coherent radio Cherenkov emission from 200 to 1200 MHz, arising from the Askaryan charge excess in ultrahigh energy neutrino-induced cascades within Antarctic ice. This flight included significant improvements over the first flight in payload sensitivity, efficiency, and flight trajectory. Analysis of in-flight calibration pulses from surface and subsurface locations verifies the expected sensitivity. In a blind analysis, we find 2 surviving events on a background, mostly anthropogenic, of 0.97 ± 0.42 events. We set the strongest limit to date for 10^18-10^21 eV cosmic neutrinos, excluding several current cosmogenic neutrino models.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chinthavali, Madhu Sudhan; Wang, Zhiqiang

    This paper presents a detailed parametric sensitivity analysis for a wireless power transfer (WPT) system in an electric vehicle application. Specifically, several key parameters for sensitivity analysis of a series-parallel (SP) WPT system are derived first based on an analytical modeling approach, which includes the equivalent input impedance, active/reactive power, and DC voltage gain. Based on the derivation, the impact of primary-side compensation capacitance, coupling coefficient, transformer leakage inductance, and different load conditions on the DC voltage gain curve and power curve is studied and analyzed. It is shown that the desired power can be achieved by changing just the frequency or the voltage, depending on the design value of the coupling coefficient. However, in some cases both have to be modified in order to achieve the required power transfer.

  11. Association of high sensitivity C-reactive protein and abdominal aortic aneurysm: a meta-analysis and systematic review.

    PubMed

    Wang, Yunpeng; Shen, Guanghui; Wang, Haiyang; Yao, Ye; Sun, Qingfeng; Jing, Bao; Liu, Gaoyan; Wu, Jia; Yuan, Chao; Liu, Siqi; Liu, Xinyu; Li, Shiyong; Li, Haocheng

    2017-12-01

    To evaluate the association of high sensitivity C-reactive protein (hsCRP) with the presence of abdominal aortic aneurysm (AAA). Medline, Cochrane, Embase, and Google Scholar databases were searched until 22 June 2016 using the keywords predictive factors, biomarkers, abdominal aortic aneurysm, prediction, high sensitivity C-reactive protein, and hsCRP. Prospective studies, retrospective studies, and cohort studies were included. Twelve case-control studies were included in the meta-analysis with a total of 8345 patients (1977 in the AAA group and 6368 in the control group). The pooled results showed that AAA patients had higher hsCRP value than the control group (difference in means = 1.827, 95% CI = 0.010 to 3.645, p = .049). Subgroup analysis found AAA patients with medium or small aortic diameter (<50 mm) had higher hsCRP plasma levels than the control group (difference in means = 1.301, 95% CI = 0.821 to 1.781, p < .001). In patients with large aortic diameter (≥50 mm), no difference was observed in hsCRP levels between the AAA and control groups (difference in means = 1.769, 95% CI = -1.387 to 4.925, p = .272). Multi-regression analysis found the difference in means of hsCRP plasma levels between AAA and control groups decreased as aortic diameter increased (slope = -0.04, p < .001), suggesting that hsCRP levels may be inversely associated with increasing aneurysm size. Our findings suggest that hsCRP levels may possibly be used as a diagnostic biomarker for AAA patients with medium or small aortic diameter but not for AAA patients with large aortic diameter. The correlation between serum hsCRP level and AAA aneurysm is not conclusive due to the small number of included articles and between-study heterogeneity.

  12. Relationship between a GABAA alpha 6 Pro385Ser substitution and benzodiazepine sensitivity.

    PubMed

    Iwata, N; Cowley, D S; Radel, M; Roy-Byrne, P P; Goldman, D

    1999-09-01

    In humans, interindividual variation in sensitivity to benzodiazepine drugs may correlate with behavioral variation, including vulnerability to disease states such as alcoholism. In the rat, variation in alcohol and benzodiazepine sensitivity has been correlated with an inherited variant of the GABAA alpha 6 receptor. The authors detected a Pro385Ser [1236C > T] amino acid substitution in the human GABAA alpha 6 that may influence alcohol sensitivity. In this pilot study, they evaluated the contribution of this polymorphism to benzodiazepine sensitivity. Sensitivity to diazepam was assessed in 51 children of alcoholics by using two eye movement measures: peak saccadic velocity and average smooth pursuit gain. Association analysis was performed with saccadic velocity and smooth pursuit gain as dependent variables and comparing Pro385/Ser385 heterozygotes and Pro385/Pro385 homozygotes. The Pro385Ser genotype was associated with less diazepam-induced impairment of saccadic velocity but not with smooth pursuit gain. The Pro385Ser genotype may play a role in benzodiazepine sensitivity and conditions, such as alcoholism, that may be correlated with this trait.

  13. Sensitivity of lod scores to changes in diagnostic status.

    PubMed Central

    Hodge, S E; Greenberg, D A

    1992-01-01

    This paper investigates effects on lod scores when one individual in a data set changes diagnostic or recombinant status. First we examine the situation in which a single offspring in a nuclear family changes status. The nuclear-family situation, in addition to being of interest in its own right, also has general theoretical importance, since nuclear families are "transparent"; that is, one can track genetic events more precisely in nuclear families than in complex pedigrees. We demonstrate that in nuclear families log10 [(1-theta)/theta] gives an upper limit on the impact that a single offspring's change in status can have on the lod score at that recombination fraction (theta). These limits hold for a fully penetrant dominant condition and fully informative marker, in either phase-known or phase-unknown matings. Moreover, log10 [(1-theta)/theta] (where theta denotes the value of theta at which Zmax occurs) gives an upper limit on the impact of a single offspring's status change on the maximum lod score (Zmax). In extended pedigrees, in contrast to nuclear families, no comparable limit can be set on the impact of a single individual on the lod score. Complex pedigrees are subject to both stabilizing and destabilizing influences, and these are described. Finally, we describe a "sensitivity analysis," in which, after all linkage analysis is completed, every informative individual in the data set is changed, one at a time, to see the effect which each separate change has on the lod scores. The procedure includes identifying "critical individuals," i.e., those who would have the greatest impact on the lod scores, should their diagnostic status in fact change. To illustrate use of the sensitivity analysis, we apply it to the large bipolar pedigree reported by Egeland et al. and Kelsoe et al. We show that the changes in lod scores observed there, on the order of 1.1-1.2 per person, are not unusual. We recommend that investigators include a sensitivity analysis as a standard part of reporting the results of a linkage analysis. PMID:1570835
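
    A quick numerical illustration of the bound log10[(1-theta)/theta] discussed above (the theta values are chosen for illustration only):

      import numpy as np

      for theta in (0.01, 0.05, 0.10, 0.20, 0.30, 0.40):
          bound = np.log10((1 - theta) / theta)   # max lod change from one offspring's status change
          print(f"theta = {theta:.2f}  ->  bound on |delta lod| = {bound:.2f}")

    At theta = 0.1, for example, a single offspring's change of status can move the lod score by at most about 0.95.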

  14. Sensitivity of lod scores to changes in diagnostic status.

    PubMed

    Hodge, S E; Greenberg, D A

    1992-05-01

    This paper investigates effects on lod scores when one individual in a data set changes diagnostic or recombinant status. First we examine the situation in which a single offspring in a nuclear family changes status. The nuclear-family situation, in addition to being of interest in its own right, also has general theoretical importance, since nuclear families are "transparent"; that is, one can track genetic events more precisely in nuclear families than in complex pedigrees. We demonstrate that in nuclear families log10 [(1-theta)/theta] gives an upper limit on the impact that a single offspring's change in status can have on the lod score at that recombination fraction (theta). These limits hold for a fully penetrant dominant condition and fully informative marker, in either phase-known or phase-unknown matings. Moreover, log10 [(1-theta)/theta] (where theta denotes the value of theta at which Zmax occurs) gives an upper limit on the impact of a single offspring's status change on the maximum lod score (Zmax). In extended pedigrees, in contrast to nuclear families, no comparable limit can be set on the impact of a single individual on the lod score. Complex pedigrees are subject to both stabilizing and destabilizing influences, and these are described. Finally, we describe a "sensitivity analysis," in which, after all linkage analysis is completed, every informative individual in the data set is changed, one at a time, to see the effect which each separate change has on the lod scores. The procedure includes identifying "critical individuals," i.e., those who would have the greatest impact on the lod scores, should their diagnostic status in fact change. To illustrate use of the sensitivity analysis, we apply it to the large bipolar pedigree reported by Egeland et al. and Kelsoe et al. We show that the changes in lod scores observed there, on the order of 1.1-1.2 per person, are not unusual. We recommend that investigators include a sensitivity analysis as a standard part of reporting the results of a linkage analysis.

  15. Spectral negentropy based sidebands and demodulation analysis for planet bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Feng, Zhipeng; Ma, Haoqun; Zuo, Ming J.

    2017-12-01

    Planet bearing vibration signals are highly complex due to intricate kinematics (involving both revolution and spinning) and strong multiple modulations (including not only the fault induced amplitude modulation and frequency modulation, but also additional amplitude modulations due to load zone passing, the time-varying vibration transfer path, and the time-varying angle between the gear pair mesh lines of action and the fault impact force vector), leading to difficulty in fault feature extraction. Rolling element bearing fault diagnosis essentially relies on detection of fault induced repetitive impulses carried by resonance vibration, but these impulses are usually contaminated by noise and are therefore hard to detect. This further adds complexity to planet bearing diagnostics. Spectral negentropy is able to reveal the frequency distribution of repetitive transients, thus providing an approach to identify the optimal frequency band of a filter for separating repetitive impulses. In this paper, we find the informative frequency band (including the center frequency and bandwidth) of bearing fault induced repetitive impulses using the spectral negentropy based infogram. In the Fourier spectrum, we identify planet bearing faults according to sideband characteristics around the center frequency. For demodulation analysis, we extract the sensitive component based on the informative frequency band revealed by the infogram. In the amplitude demodulated spectrum (squared envelope spectrum) of the sensitive component, we diagnose planet bearing faults by matching the present peaks with the theoretical fault characteristic frequencies. We further decompose the sensitive component into mono-component intrinsic mode functions (IMFs) to estimate their instantaneous frequencies, and select a sensitive IMF with an instantaneous frequency fluctuating around the center frequency for frequency demodulation analysis. In the frequency demodulated spectrum (Fourier spectrum of the instantaneous frequency) of the selected IMF, we discern planet bearing fault causes according to the present peaks. The proposed spectral negentropy infogram based spectrum and demodulation analysis method is illustrated via a numerically simulated signal analysis. Considering the unique load bearing feature of planet bearings, experimental validations under both no-load and loading conditions are done to verify the derived fault symptoms and the proposed method. The localized faults on the outer race, rolling element and inner race are successfully diagnosed.
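
    The squared envelope spectrum mentioned above can be sketched in a few lines; the synthetic signal, the assumed 2-4 kHz informative band and the fault frequency below are placeholders (in the paper the band is selected by the spectral negentropy infogram):

      import numpy as np
      from scipy.signal import hilbert, butter, filtfilt

      fs = 20000                                   # sampling rate, Hz (assumed)
      n = fs                                       # one second of data
      rng = np.random.default_rng(2)

      # Impulses repeating at an assumed fault frequency, each exciting a 3 kHz resonance, plus noise
      fault_freq = 87.0
      period = round(fs / fault_freq)
      impulses = (np.arange(n) % period == 0).astype(float)
      th = np.arange(0, 0.005, 1 / fs)
      resonance = np.exp(-800 * th) * np.sin(2 * np.pi * 3000 * th)
      x = np.convolve(impulses, resonance, mode="same") + 0.1 * rng.standard_normal(n)

      # Band-pass filter around the informative band (assumed here to be 2-4 kHz)
      b, a = butter(4, [2000 / (fs / 2), 4000 / (fs / 2)], btype="band")
      xf = filtfilt(b, a, x)

      # Squared envelope spectrum: the fault characteristic frequency should appear as a peak
      env2 = np.abs(hilbert(xf)) ** 2
      spec = np.abs(np.fft.rfft(env2 - env2.mean()))
      freqs = np.fft.rfftfreq(n, d=1 / fs)
      print("dominant envelope-spectrum peak near %.1f Hz" % freqs[np.argmax(spec)])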

  16. Updated Chemical Kinetics and Sensitivity Analysis Code

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, Krishnan

    2005-01-01

    An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
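
    A minimal sketch of the kind of sensitivity coefficients described above, computed for an invented two-step mechanism by central finite differences (LSENS itself handles full combustion mechanisms and several flow models, and is not limited to this brute-force approach):

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy mechanism A -> B -> C with rate coefficients k1, k2 (illustrative only)
      def rhs(t, y, k1, k2):
          a, b, c = y
          return [-k1 * a, k1 * a - k2 * b, k2 * b]

      def concentrations(k1, k2, t_end=5.0):
          sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0, 0.0], args=(k1, k2),
                          rtol=1e-8, atol=1e-10)
          return sol.y[:, -1]

      # Normalized sensitivity coefficients d(ln y_i)/d(ln k_j) by central differences
      k = np.array([1.0, 0.5])
      y_nom = concentrations(*k)
      for j, name in enumerate(["k1", "k2"]):
          dk = 1e-4 * k[j]
          kp, km = k.copy(), k.copy()
          kp[j] += dk
          km[j] -= dk
          sens = (concentrations(*kp) - concentrations(*km)) / (2 * dk) * k[j] / y_nom
          print(name, "->", np.round(sens, 3))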

  17. Helping teachers conduct sex education in secondary schools in Thailand: overcoming culturally sensitive barriers to sex education.

    PubMed

    Thammaraksa, Pimrat; Powwattana, Arpaporn; Lagampan, Sunee; Thaingtham, Weena

    2014-06-01

    The purpose of this quasi-experimental study was to evaluate the effects of Culturally Sensitive Sex Education Skill Development, a teacher-led sex education program in secondary schools in Thailand. Two public secondary schools in the suburban areas of Bangkok were randomly selected. One was designated as the experimental school and the other as the comparison school. Ninety grade seven and eight teachers, 45 from each school, were selected to participate in the study. Self-efficacy theory and a culturally appropriate framework were applied to develop the program, which included 4 weeks of intervention and 2 weeks of follow-up. Primary outcomes were attitudes toward sex education, perceived self-efficacy, and sex education skills. Statistical analysis included independent and paired t tests, and repeated one-way analysis of variance. At the end of the intervention and during the follow-up period, the intervention group had significantly higher mean scores for attitudes toward sex education, perceived self-efficacy, and sex education skills than their scores before the intervention (p < .001), and than those of the comparison group (p < .001). The results showed that Culturally Sensitive Sex Education Skill Development could enhance attitudes and sex education self-efficacy to promote the implementation of sex education among teachers. Copyright © 2014. Published by Elsevier B.V.

  18. CAQI Common Air Quality Index--update with PM(2.5) and sensitivity analysis.

    PubMed

    van den Elshout, Sef; Léger, Karine; Heich, Hermann

    2014-08-01

    The CAQI or Common Air Quality Index was proposed to facilitate the comparison of air quality in European cities in real-time. There are many air quality indices in use in the world. All are somewhat different in concept and presentation and comparing air quality presentations of cities on the internet was virtually impossible. The CAQI and the accompanying website www.airqualitynow.eu and app were proposed to overcome this problem in Europe. This paper describes the logic of making an index, in particular the CAQI and its update with a grid for PM2.5. To assure a smooth transition to the new calculation scheme we studied the behaviour of the index before and after the changes. We used 2006 Airbase data from 31 urban background and 27 street stations all across Europe (that were monitoring PM2.5 in 2006). The CAQI characterises a city by a roadside and urban background situation. It also insists on a minimum number of pollutants to be included in the calculation. Both were deemed necessary to improve the basis for comparing one city to another. A sensitivity analysis demonstrates the comparative behaviour of the street and urban background stations and presents the sensitivity of the CAQI outcome to the pollutants included in its calculation. © 2013.
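
    The sub-index logic of an hourly index of this kind can be sketched as piecewise-linear interpolation on a pollutant grid, with the overall index taken as the worst sub-index; the breakpoints below are illustrative placeholders, not the official CAQI grid:

      import numpy as np

      GRID = {  # concentration breakpoints (ug/m3) mapped to index values (assumed numbers)
          "NO2":   ([0, 50, 100, 200, 400], [0, 25, 50, 75, 100]),
          "PM10":  ([0, 25,  50,  90, 180], [0, 25, 50, 75, 100]),
          "O3":    ([0, 60, 120, 180, 240], [0, 25, 50, 75, 100]),
          "PM2.5": ([0, 15,  30,  55, 110], [0, 25, 50, 75, 100]),
      }

      def sub_index(pollutant, concentration):
          conc, idx = GRID[pollutant]
          return float(np.interp(concentration, conc, idx))

      def overall_index(measurements):
          return max(sub_index(p, c) for p, c in measurements.items())

      hour = {"NO2": 80, "PM10": 35, "O3": 95, "PM2.5": 22}
      print({p: round(sub_index(p, c), 1) for p, c in hour.items()})
      print("index =", round(overall_index(hour), 1))

    Which pollutants enter the dictionary is exactly the choice the sensitivity analysis in the paper examines.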

  19. Comprehensive genetic testing for female and male infertility using next-generation sequencing.

    PubMed

    Patel, Bonny; Parets, Sasha; Akana, Matthew; Kellogg, Gregory; Jansen, Michael; Chang, Chihyu; Cai, Ying; Fox, Rebecca; Niknazar, Mohammad; Shraga, Roman; Hunter, Colby; Pollock, Andrew; Wisotzkey, Robert; Jaremko, Malgorzata; Bisignano, Alex; Puig, Oscar

    2018-05-19

    To develop a comprehensive genetic test for female and male infertility in support of medical decisions during assisted reproductive technology (ART) protocols. We developed a next-generation sequencing (NGS) gene panel consisting of 87 genes including promoters, 5' and 3' untranslated regions, exons, and selected introns. In addition, sex chromosome aneuploidies and Y chromosome microdeletions were analyzed concomitantly using the same panel. The NGS panel was analytically validated by retrospective analysis of 118 genomic DNA samples with known variants in loci representative of female and male infertility. Our results showed analytical accuracy of > 99%, with > 98% sensitivity for single-nucleotide variants (SNVs) and > 91% sensitivity for insertions/deletions (indels). Clinical sensitivity was assessed with samples containing variants representative of male and female infertility, and it was 100% for SNVs/indels, CFTR IVS8-5T variants, sex chromosome aneuploidies, and copy number variants (CNVs) and > 93% for Y chromosome microdeletions. Cost analysis shows potential savings when comparing this single NGS assay with the standard approach, which includes multiple assays. A single, comprehensive, NGS panel can simplify the ordering process for healthcare providers, reduce turnaround time, and lower the overall cost of testing for genetic assessment of infertility in females and males, while maintaining accuracy.

  20. Quantitative analysis of iris parameters in keratoconus patients using optical coherence tomography.

    PubMed

    Bonfadini, Gustavo; Arora, Karun; Vianna, Lucas M; Campos, Mauro; Friedman, David; Muñoz, Beatriz; Jun, Albert S

    2015-01-01

    To investigate the relationship between quantitative iris parameters and the presence of keratoconus. Cross-sectional observational study that included 15 affected eyes of 15 patients with keratoconus and 26 eyes of 26 normal age- and sex-matched controls. Iris parameters (area, thickness, and pupil diameter) of affected and unaffected eyes were measured under standardized light and dark conditions using anterior segment optical coherence tomography (AS-OCT). To identify optimal iris thickness cutoff points to maximize the sensitivity and specificity when discriminating keratoconus eyes from normal eyes, the analysis included the use of receiver operating characteristic (ROC) curves. Iris thickness and area were lower in keratoconus eyes than in normal eyes. The mean thickness at the pupillary margin under both light and dark conditions was found to be the best parameter for discriminating normal patients from keratoconus patients. Diagnostic performance was assessed by the area under the ROC curve (AROC), which had a value of 0.8256 with 80.0% sensitivity and 84.6% specificity, using a cutoff of 0.4125 mm. The sensitivity increased to 86.7% when a cutoff of 0.4700 mm was used. In our sample, iris thickness was lower in keratoconus eyes than in normal eyes. These results suggest that tomographic parameters may provide novel adjunct approaches for keratoconus screening.

  1. How Efficacious is Danshen (Salvia miltiorrhiza) Dripping Pill in Treating Angina Pectoris? Evidence Assessment for Meta-Analysis of Randomized Controlled Trials.

    PubMed

    Jia, Yongliang; Leung, Siu-Wai

    2017-09-01

    More than 230 randomized controlled trials (RCTs) of danshen dripping pill (DSP) and isosorbide dinitrate (ISDN) in treating angina pectoris have been published since the first Preferred Reporting Items for Systematic Reviews and Meta-Analyses-compliant comprehensive meta-analysis in 2010. Other meta-analyses had flaws in study selection, statistical meta-analysis, and evidence assessment. This study completed the meta-analysis with an extensive assessment of the evidence. RCTs published from 1994 to 2016 on DSP and ISDN in treating angina pectoris for at least 4 weeks were included. The risk of bias (RoB) of included RCTs was assessed with the Cochrane tool for assessing RoB. Meta-analyses based on a random-effects model were performed on two outcome measures: symptomatic (SYM) and electrocardiography (ECG) improvements. Subgroup analysis, sensitivity analysis, metaregression, and publication bias analysis were also conducted. The evidence strength was evaluated with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) method. Among the included 109 RCTs with 11,973 participants, 49 RCTs and 5042 participants were new (after 2010). The RoB of included RCTs was high in randomization and blinding. Overall effect sizes in odds ratios for DSP over ISDN were 2.94 (95% confidence interval [CI]: 2.53-3.41) on SYM (n = 108) and 2.37 (95% CI: 2.08-2.69) by ECG (n = 81), with significant heterogeneities (I² = 41%, p < 0.0001 on SYM and I² = 44%, p < 0.0001 on ECG). Subgroup, sensitivity, and metaregression analyses showed consistent results without publication bias. However, the evidence strength was low in GRADE. The efficacy of DSP was still better than that of ISDN in treating angina pectoris, but the confidence decreased due to high RoB and heterogeneities.

  2. EZSCAN for undiagnosed type 2 diabetes mellitus: A systematic review and meta-analysis.

    PubMed

    Bernabe-Ortiz, Antonio; Ruiz-Alejos, Andrea; Miranda, J Jaime; Mathur, Rohini; Perel, Pablo; Smeeth, Liam

    2017-01-01

    The EZSCAN is a non-invasive device that, by evaluating sweat gland function, may detect subjects with type 2 diabetes mellitus (T2DM). The aim of the study was to conduct a systematic review and meta-analysis including studies assessing the performance of the EZSCAN for detecting cases of undiagnosed T2DM. We searched for observational studies including diagnostic accuracy and performance results assessing EZSCAN for detecting cases of undiagnosed T2DM. OVID (Medline, Embase, Global Health), CINAHL and SCOPUS databases, plus secondary resources, were searched until March 29, 2017. The following keywords were utilized for the systematic searching: type 2 diabetes mellitus, hyperglycemia, EZSCAN, SUDOSCAN, and sudomotor function. Two investigators extracted the information for meta-analysis and assessed the quality of the data using the Revised Version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) checklist. Pooled estimates were obtained by fitting a logistic-normal random-effects model with random intercepts but no covariates, using the Freeman-Tukey arcsine transformation to stabilize variances. Heterogeneity was also assessed using the I2 measure. Four studies (n = 7,720) were included; three of them used the oral glucose tolerance test as the gold standard. Using a Hierarchical Summary Receiver Operating Characteristic model, summary sensitivity was 72.0% (95% CI: 60.0%-83.0%), whereas specificity was 56.0% (95% CI: 38.0%-74.0%). Studies were very heterogeneous (I2 for sensitivity: 79.2% and for specificity: 99.1%) regarding the inclusion criteria, and bias was present mainly due to participant selection. The sensitivity of EZSCAN for detecting cases of undiagnosed T2DM seems to be acceptable, but evidence of high heterogeneity and participant selection bias was detected in most of the studies included. More studies are needed to evaluate the performance of the EZSCAN for undiagnosed T2DM screening, especially at the population level.

  3. Diagnostic performance of FDG PET or PET/CT in prosthetic infection after arthroplasty: a meta-analysis.

    PubMed

    Jin, H; Yuan, L; Li, C; Kan, Y; Hao, R; Yang, J

    2014-03-01

    The purpose of this study was to systematically review and perform a meta-analysis of published data regarding the diagnostic performance of positron emission tomography (PET) or PET/computed tomography (PET/CT) in prosthetic infection after arthroplasty. A comprehensive computer literature search of studies published through May 31, 2012 regarding PET or PET/CT in patients with suspected prosthetic infection was performed in the PubMed/MEDLINE, Embase and Scopus databases. Pooled sensitivity and specificity of PET or PET/CT in patients with suspected prosthetic infection were calculated on a per-prosthesis basis. The area under the receiver-operating characteristic (ROC) curve was calculated to measure the accuracy of PET or PET/CT in patients with suspected prosthetic infection. Fourteen studies comprising 838 prostheses with suspected prosthetic infection after arthroplasty were included in this meta-analysis. The pooled sensitivity of PET or PET/CT in detecting prosthetic infection was 86% (95% confidence interval [CI] 82-90%) on a per-prosthesis basis. The pooled specificity of PET or PET/CT in detecting prosthetic infection was 86% (95% CI 83-89%) on a per-prosthesis basis. The area under the ROC curve was 0.93 on a per-prosthesis basis. In patients with suspected prosthetic infection, FDG PET or PET/CT demonstrated high sensitivity and specificity. FDG PET and PET/CT are accurate methods in this setting. Nevertheless, possible sources of false positive results and influencing factors should be kept in mind.

  4. A comprehensive probabilistic analysis model of oil pipelines network based on Bayesian network

    NASA Astrophysics Data System (ADS)

    Zhang, C.; Qin, T. X.; Jiang, B.; Huang, C.

    2018-02-01

    The oil pipeline network is one of the most important facilities for energy transportation, but oil pipeline network accidents may result in serious disasters. Analysis models for these accidents have been established mainly based on three methods: event trees, accident simulation and Bayesian networks. Among these methods, the Bayesian network is suitable for probabilistic analysis. However, not all the important influencing factors are considered, and a deployment rule for the factors has not been established. This paper proposes a probabilistic analysis model of the oil pipeline network based on a Bayesian network. Most of the important influencing factors, including the key environmental conditions and emergency response, are considered in this model. Moreover, the paper also introduces a deployment rule for these factors. The model can be used in probabilistic analysis and sensitivity analysis of oil pipeline network accidents.
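
    A hand-rolled sketch of the kind of Bayesian-network calculation described above; the network structure and all probabilities are invented for illustration and are not the paper's model:

      # Parents: severe corrosion and third-party damage; child: leak; disaster depends on
      # the leak and on whether the emergency response works (all numbers hypothetical)
      P_CORROSION = 0.05
      P_THIRD_PARTY = 0.02
      P_LEAK = {(0, 0): 0.001, (0, 1): 0.30, (1, 0): 0.10, (1, 1): 0.60}

      def p_disaster(p_good_response):
          """Marginal P(disaster) by enumerating the parent states."""
          total = 0.0
          for c in (0, 1):
              for tp in (0, 1):
                  p_parents = (P_CORROSION if c else 1 - P_CORROSION) * \
                              (P_THIRD_PARTY if tp else 1 - P_THIRD_PARTY)
                  p_leak = P_LEAK[(c, tp)]
                  # disaster only follows a leak; response quality sets the conditional probability
                  total += p_parents * p_leak * (p_good_response * 0.05 + (1 - p_good_response) * 0.40)
          return total

      base = p_disaster(0.90)
      print(f"P(disaster) = {base:.5f}")
      print(f"change if emergency response improves to 0.99: {p_disaster(0.99) - base:+.5f}")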

  5. High Resolution Melting Analysis for JAK2 Exon 14 and Exon 12 Mutations

    PubMed Central

    Rapado, Inmaculada; Grande, Silvia; Albizua, Enriqueta; Ayala, Rosa; Hernández, José-Angel; Gallardo, Miguel; Gilsanz, Florinda; Martinez-Lopez, Joaquin

    2009-01-01

    JAK2 mutations are important criteria for the diagnosis of Philadelphia chromosome-negative myeloproliferative neoplasms. We aimed to assess JAK2 exon 14 and exon 12 mutations by high-resolution melting (HRM) analysis, which allows variation screening. The exon 14 analysis included 163 patients with polycythemia vera, secondary erythrocytoses, essential thrombocythemia, or secondary thrombocytoses, and 126 healthy subjects. The study of exon 12 included 40 JAK2 V617F-negative patients (nine of whom had polycythemia vera and 31 of whom had splanchnic vein thrombosis) and 30 healthy subjects. HRM analyses of JAK2 exons 14 and 12 gave analytical sensitivities near 1% and both intra- and interday coefficients of variation of less than 1%. For HRM analysis of JAK2 exon 14 in polycythemia vera and essential thrombocythemia, clinical sensitivities were 93.5% and 67.9%, clinical specificities were 98.8% and 97.0%, positive predictive values were 93.5% and 79.2%, and negative predictive values were 98.8% and 94.6%, respectively. Correlations were observed between the results from HRM and three commonly used analytical methods. The JAK2 exon 12 HRM results agreed completely with those from sequencing analysis, and the three mutations in exon 12 were detected by both methods. Hence, HRM analysis of exons 14 and 12 in JAK2 shows better diagnostic values than the three other routinely used methods against which it was compared. In addition, HRM analysis has the advantage of detecting unknown mutations. PMID:19225136

  6. Which Measures of Online Control Are Least Sensitive to Offline Processes?

    PubMed

    de Grosbois, John; Tremblay, Luc

    2018-02-28

    A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.

  7. The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis

    PubMed Central

    Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.

    2016-01-01

    Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580

  8. Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin

    PubMed Central

    2014-01-01

    Background Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm’s robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
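
    A minimal sketch of this style of pipeline (colourspace transform, global threshold, morphological clean-up) using scikit-image; the input path, channel weighting and size thresholds are assumptions, and the published method additionally includes colour normalisation and object-classification rules:

      import numpy as np
      from skimage import io, color, filters, morphology

      img = io.imread("he_stained_skin.png")[:, :, :3]     # hypothetical input image

      lab = color.rgb2lab(img)
      # Combine a chromaticity channel with intensity to highlight stained tissue (assumed weighting)
      feature = lab[:, :, 1] - 0.5 * lab[:, :, 0]

      mask = feature > filters.threshold_otsu(feature)      # global threshold
      mask = morphology.remove_small_objects(mask, min_size=500)
      mask = morphology.binary_closing(mask, morphology.disk(5))
      mask = morphology.remove_small_holes(mask, area_threshold=500)

      io.imsave("epidermis_mask.png", (mask * 255).astype(np.uint8))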

  9. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about the model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one-year, four-year, and seven-year. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in terms of highlighting the same parameters or input as the most influential parameters or input and 2) how the methods are cohered in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show the coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller parameter or initial condition ranges, the more consistency and coherence between the sensitivity analysis methods results.
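
    A compact way to reproduce this kind of Sobol'-versus-FAST comparison is with the SALib package; the three-parameter toy function and the SAC-SMA-like parameter names below are stand-ins for the actual hydrologic model runs:

      import numpy as np
      from SALib.sample import saltelli, fast_sampler
      from SALib.analyze import sobol, fast

      problem = {
          "num_vars": 3,
          "names": ["uztwm", "lztwm", "pctim"],             # illustrative parameter names
          "bounds": [[10, 150], [10, 500], [0.0, 0.1]],
      }

      def toy_model(x):                                      # stand-in for a model evaluation
          return x[:, 0] * 0.01 + np.sqrt(x[:, 1]) + 50 * x[:, 2] ** 2

      xs = saltelli.sample(problem, 1024)
      si_sobol = sobol.analyze(problem, toy_model(xs))

      xf = fast_sampler.sample(problem, 1000)
      si_fast = fast.analyze(problem, toy_model(xf))

      for i, name in enumerate(problem["names"]):
          print(f"{name:>6}: Sobol' S1 = {si_sobol['S1'][i]:.2f}   FAST S1 = {si_fast['S1'][i]:.2f}")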

  10. Diagnostic accuracy of tests to detect Hepatitis C antibody: a meta-analysis and review of the literature.

    PubMed

    Tang, Weiming; Chen, Wen; Amini, Ali; Boeras, Debi; Falconer, Jane; Kelly, Helen; Peeling, Rosanna; Varsaneux, Olivia; Tucker, Joseph D; Easterbrook, Philippa

    2017-11-01

    Although direct-acting antivirals can achieve sustained virological response rates greater than 90% in Hepatitis C Virus (HCV) infected persons, at present the majority of HCV-infected individuals remain undiagnosed and therefore untreated. While there is a wide range of HCV serological tests available, there is a lack of formal assessment of their diagnostic performance. We undertook a systematic review and meta-analysis to evaluate the diagnostic accuracy of available rapid diagnostic tests (RDT) and laboratory-based EIA assays in detecting antibodies to HCV. We used the PRISMA checklist and Cochrane guidance to develop our search protocol. The search strategy was registered in PROSPERO (CRD42015023567). The search focused on hepatitis C, diagnostic tests, and diagnostic accuracy within eight databases (MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials, Science Citation Index Expanded, Conference Proceedings Citation Index-Science, SCOPUS, Literatura Latino-Americana e do Caribe em Ciências da Saúde, and WHO Global Index Medicus). Studies were included if they evaluated an assay to determine the sensitivity and specificity of HCV antibody (HCV Ab) detection in humans. Two reviewers independently extracted data and performed a quality assessment of the studies using the QUADAS tool. We pooled test estimates using the DerSimonian-Laird method, using the software R and RevMan 5.3. A total of 52 studies were identified that included 52,673 unique test measurements. Based on five studies, the pooled sensitivity and specificity of HCV Ab rapid diagnostic tests (RDTs) were 98% (95% CI 98-100%) and 100% (95% CI 100-100%) compared to an enzyme immunoassay (EIA) reference standard. High HCV Ab RDT sensitivity and specificity were observed across screening populations (general population, high-risk populations, and hospital patients) using different reference standards (EIA, nucleic acid testing, immunoblot). There were insufficient studies to undertake subanalyses based on HIV co-infection. Oral HCV Ab RDTs also had excellent sensitivity and specificity compared to blood reference tests, at 94% (95% CI 93-96%) and 100% (95% CI 100-100%), respectively. Among studies that assessed individual oral RDTs, the eight studies revealed that OraQuick ADVANCE® had a slightly higher sensitivity (98%, 95% CI 97-98%) compared to the other oral brands (pooled sensitivity: 88%, 95% CI 84-92%). RDTs, including oral tests, have excellent sensitivity and specificity compared to laboratory-based methods for HCV antibody detection across a wide range of settings. Oral HCV Ab RDTs had good sensitivity and specificity compared to blood reference standards.
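
    The DerSimonian-Laird pooling step mentioned above can be written out directly; the per-study effects and variances below are hypothetical, and in practice the pooling is applied to transformed sensitivities or specificities from the included studies:

      import numpy as np

      def dersimonian_laird(effects, variances):
          """Random-effects pooling of study-level effect sizes (e.g., logit sensitivities)."""
          effects, variances = np.asarray(effects, float), np.asarray(variances, float)
          w = 1.0 / variances                               # fixed-effect weights
          theta_fe = np.sum(w * effects) / np.sum(w)
          q = np.sum(w * (effects - theta_fe) ** 2)         # Cochran's Q
          df = len(effects) - 1
          c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
          tau2 = max(0.0, (q - df) / c)                     # between-study variance
          w_re = 1.0 / (variances + tau2)
          theta = np.sum(w_re * effects) / np.sum(w_re)
          se = np.sqrt(1.0 / np.sum(w_re))
          i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
          return theta, (theta - 1.96 * se, theta + 1.96 * se), tau2, i2

      logit_sens = [3.2, 3.9, 3.5, 4.1, 3.7]                # hypothetical logit(sensitivity) values
      var = [0.20, 0.35, 0.25, 0.40, 0.30]
      pooled, ci, tau2, i2 = dersimonian_laird(logit_sens, var)
      print(f"pooled sensitivity = {1 / (1 + np.exp(-pooled)):.3f}  (tau^2 = {tau2:.3f}, I^2 = {i2:.1f}%)")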

  11. The comprehensive summary of surgical versus non-surgical treatment for obesity: a systematic review and meta-analysis of randomized controlled trials.

    PubMed

    Cheng, Ji; Gao, Jinbo; Shuai, Xiaoming; Wang, Guobin; Tao, Kaixiong

    2016-06-28

    Bariatric surgery has emerged as a competitive strategy for obese patients. However, its comparative efficacy against non-surgical treatments remains ill-defined, especially among non-severely obese patients. Therefore, we conducted a systematic review and meta-analysis as an addition to the current literature. Literature was retrieved from the PubMed, Web of Science, EMBASE and Cochrane Library databases. Randomized trials comparing surgical with non-surgical therapies for obesity were included. A Revised Jadad's Scale and a Risk of Bias Summary were employed for methodological assessment. Subgroup analysis, sensitivity analysis and publication bias assessment were performed to identify sources of heterogeneity, assess outcome stability, and detect potential publication bias. Twenty-five randomized trials were eligible for inclusion, comprising 1194 participants in total. Both groups displayed good comparability concerning baseline parameters (P > 0.05). The pooled results of the primary endpoints (weight loss and diabetic remission) revealed a significant advantage for surgical patients over those receiving non-surgical treatments (P < 0.05). Furthermore, except for certain cardiovascular indicators, bariatric surgery was superior to the conventional arms in terms of metabolic secondary parameters (P < 0.05). Additionally, the pooled outcomes were confirmed to be stable by sensitivity analysis. Although Egger's test (P < 0.01) and Begg's test (P < 0.05) indicated the presence of publication bias among the included studies, the "Trim-and-Fill" method verified that the pooled outcomes remained stable. Bariatric surgery is a better therapeutic option for weight loss, irrespective of follow-up duration, surgical techniques and obesity levels.

  12. Antenatal corticosteroids: analytical decision model and economic analysis in a Brazilian cohort of preterm infants.

    PubMed

    Ogata, Joice Fabiola Meneguel; Fonseca, Marcelo Cunio Machado; de Almeida, Maria Fernanda Branco; Guinsburg, Ruth

    2016-09-01

    To analyze the hospital costs and the effectiveness of antenatal corticosteroid (ACS) therapy in a cohort of Brazilian preterm infants. Infants with gestational age (GA) of 26 to 32 weeks, born between 2006 and 2009 in a tertiary university hospital and who survived hospitalization, were included. A decision tree was built according to GA (26-27, 28-29, 30-31 and 32 weeks), assuming that each patient, exposed or not to ACS, may or may not develop one of the clinical outcomes included in the model. The cost of each outcome was calculated by microcosting. Sensitivity analysis tested the stability of the model and calculated outcomes and costs per 1000 patients. The cost-effectiveness analysis indicated that ACS therapy reduced hospital costs by USD 3413 per patient exposed to ACS. Its use decreased oxygen dependency at 36 weeks by 11%, the need for advanced resuscitation in the delivery room by 24%, severe peri-intraventricular hemorrhage by 12%, patent ductus arteriosus requiring surgery by 3.6% and retinopathy of prematurity by 0.3%, but increased the probability of late-onset sepsis by 2.5%. The sensitivity analysis indicated that ACS was dominant over no ACS therapy for most outcomes. The results indicate that ACS therapy decreases costs and severe neonatal outcomes in preterm infants.
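
    The decision-analytic comparison described above reduces, for each arm of the tree, to an expected cost over mutually exclusive outcomes. A minimal sketch follows; the branch probabilities and per-patient costs are entirely hypothetical and are not the Brazilian cohort's values.

      def expected_cost(branches):
          """branches: list of (probability, cost) pairs for one arm of the decision tree."""
          assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9, "branch probabilities must sum to 1"
          return sum(p * c for p, c in branches)

      # Hypothetical outcome probabilities and per-patient hospital costs (USD)
      acs_arm    = [(0.70, 20000), (0.20, 45000), (0.10, 80000)]
      no_acs_arm = [(0.55, 20000), (0.28, 45000), (0.17, 80000)]

      saving = expected_cost(no_acs_arm) - expected_cost(acs_arm)
      print(f"Expected saving per patient exposed to ACS: ${saving:,.0f}")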

  13. Cost-effectiveness analysis of the most common orthopaedic surgery procedures: knee arthroscopy and knee anterior cruciate ligament reconstruction.

    PubMed

    Lubowitz, James H; Appleby, David

    2011-10-01

    The purpose of this study was to determine the cost-effectiveness of knee arthroscopy and anterior cruciate ligament (ACL) reconstruction. Retrospective analysis of prospectively collected data from a single-surgeon, institutional review board-approved outcomes registry included 2 cohorts: surgically treated knee arthroscopy and ACL reconstruction patients. Our outcome measure is cost-effectiveness (cost of a quality-adjusted life-year [QALY]). The QALY is calculated by multiplying difference in health-related quality of life, before and after treatment, by life expectancy. Health-related quality of life is measured by use of the Quality of Well-Being scale, which has been validated for cost-effectiveness analysis. Costs are facility charges per the facility cost-to-charges ratio plus surgeon fee. Sensitivity analyses are performed to determine the effect of variations in costs or outcomes. There were 93 knee arthroscopy and 35 ACL reconstruction patients included at a mean follow-up of 2.1 years. Cost per QALY was $5,783 for arthroscopy and $10,326 for ACL reconstruction (2009 US dollars). Sensitivity analysis shows that our results are robust (relatively insensitive) to variations in costs or outcomes. Knee arthroscopy and knee ACL reconstruction are very cost-effective. Copyright © 2011 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
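
    The cost-per-QALY arithmetic described above is simple enough to state as code. The sketch below assumes the QALY gain is the change in Quality of Well-Being score multiplied by life expectancy, as in the abstract; the utility scores, life expectancy and cost are hypothetical, not registry values.

      def cost_per_qaly(utility_before, utility_after, life_expectancy_years, total_cost):
          """QALYs gained = (post-treatment utility - pre-treatment utility) * life expectancy."""
          qalys_gained = (utility_after - utility_before) * life_expectancy_years
          return total_cost / qalys_gained

      # Hypothetical Quality of Well-Being scores, remaining life expectancy (years), and cost (USD)
      print(round(cost_per_qaly(0.65, 0.71, 40.0, 14000.0)))  # dollars per QALY gained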

  14. Performance of blend sign in predicting hematoma expansion in intracerebral hemorrhage: A meta-analysis.

    PubMed

    Yu, Zhiyuan; Zheng, Jun; Guo, Rui; Ma, Lu; Li, Mou; Wang, Xiaoze; Lin, Sen; Li, Hao; You, Chao

    2017-12-01

    Hematoma expansion is independently associated with poor outcome in intracerebral hemorrhage (ICH). Blend sign is a simple predictor for hematoma expansion on non-contrast computed tomography. However, its accuracy for predicting hematoma expansion is inconsistent in previous studies. This meta-analysis aimed to systematically assess the performance of blend sign in predicting hematoma expansion in ICH. A systematic literature search was conducted. Original studies about the predictive accuracy of blend sign for hematoma expansion in ICH were included. Pooled sensitivity, specificity, and positive and negative likelihood ratios were calculated. A summary receiver operating characteristic curve was constructed. Publication bias was assessed by Deeks' funnel plot asymmetry test. A total of 5 studies with 2248 patients were included in this meta-analysis. The pooled sensitivity, specificity, positive and negative likelihood ratios of blend sign for predicting hematoma expansion were 0.28, 0.92, 3.4 and 0.78, respectively. The area under the curve (AUC) was 0.85. No significant publication bias was found. This meta-analysis demonstrates that blend sign is a useful predictor with high specificity for hematoma expansion in ICH. Further studies with larger sample sizes are still necessary to verify the accuracy of blend sign for predicting hematoma expansion. Copyright © 2017 Elsevier B.V. All rights reserved.
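
    For reference, the likelihood ratios quoted above follow directly from pooled sensitivity and specificity; the one-line check below uses the rounded pooled values (the abstract's 3.4 comes from the unrounded bivariate estimates).

      sens, spec = 0.28, 0.92
      lr_pos = sens / (1 - spec)        # positive likelihood ratio, about 3.5 with these rounded inputs
      lr_neg = (1 - sens) / spec        # negative likelihood ratio, about 0.78
      print(lr_pos, lr_neg)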

  15. Analysis of world terror networks from the reduced Google matrix of Wikipedia

    NASA Astrophysics Data System (ADS)

    El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2018-01-01

    We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017), composed of 5 416 537 articles which accumulate a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links, we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. World maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application in the analysis of more extensive and detailed databases.
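
    The link-weight sensitivity idea used above can be illustrated on a toy directed network. The sketch below builds a column-stochastic Google matrix, computes PageRank by power iteration, then perturbs one link weight by 1% and reports the resulting fractional change in PageRank; the 4-node graph and the chosen link are arbitrary placeholders, not the Wikipedia reduced Google matrix.

      import numpy as np

      def pagerank(adj, alpha=0.85, tol=1e-10):
          """Power iteration on the Google matrix G = alpha*S + (1 - alpha)/N (columns = outgoing links)."""
          n = adj.shape[0]
          col_sums = adj.sum(axis=0)
          s = np.where(col_sums > 0, adj / np.where(col_sums == 0, 1, col_sums), 1.0 / n)
          g = alpha * s + (1 - alpha) / n
          p = np.full(n, 1.0 / n)
          for _ in range(1000):
              p_new = g @ p
              if np.abs(p_new - p).sum() < tol:
                  break
              p = p_new
          return p_new

      # Toy 4-node network: entry [i, j] is the weight of the link j -> i
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 0, 1],
                    [1, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)

      base = pagerank(A)
      A_pert = A.copy()
      A_pert[2, 1] *= 1.01                                     # strengthen the link 1 -> 2 by 1%
      sensitivity = (pagerank(A_pert) - base) / (0.01 * base)  # fractional PageRank change per 1% weight change
      print(base, sensitivity)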

  16. A sensitivity analysis of volcanic aerosol dispersion in the stratosphere. [Mt. Fuego, Guatemala eruptions

    NASA Technical Reports Server (NTRS)

    Butler, C. F.

    1979-01-01

    A computer sensitivity analysis was performed to determine the uncertainties involved in the calculation of volcanic aerosol dispersion in the stratosphere using a two-dimensional model. The Fuego volcanic event of 1974 was used. Aerosol dispersion processes that were included are: transport, sedimentation, gas phase sulfur chemistry, and aerosol growth. Calculated uncertainties are established from variations in the stratospheric aerosol layer decay times at 37° latitude for each dispersion process. Model profiles are also compared with lidar measurements. Results of the computer study are quite sensitive (factor of 2) to the assumed volcanic aerosol source function and the large variations in the parameterized transport between 15 and 20 km at subtropical latitudes. Sedimentation effects are uncertain by up to a factor of 1.5 because of the lack of aerosol size distribution data. The aerosol chemistry and growth, assuming that the stated mechanisms are correct, are essentially complete within several months after the eruption and cannot explain the differences between measured and modeled results.

  17. Headspace-SPME-GC/MS as a simple cleanup tool for sensitive 2,6-diisopropylphenol analysis from lipid emulsions and adaptable to other matrices.

    PubMed

    Pickl, Karin E; Adamek, Viktor; Gorges, Roland; Sinner, Frank M

    2011-07-15

    Due to increased regulatory requirements, the interaction of active pharmaceutical ingredients with various surfaces and solutions during production and storage is gaining interest in the pharmaceutical research field, in particular with respect to the development of new formulations, new packaging materials and the evaluation of cleaning processes. Experimental adsorption/absorption studies as well as the study of cleaning processes require sophisticated analytical methods with high sensitivity for the drug of interest. In the case of 2,6-diisopropylphenol - a small lipophilic drug which is typically formulated as a lipid emulsion for intravenous injection - a highly sensitive method in the μg/l concentration range, suitable for application to a variety of different sample matrices including lipid emulsions, is needed. We hereby present a headspace-solid phase microextraction (HS-SPME) approach as a simple cleanup procedure for sensitive 2,6-diisopropylphenol quantification from diverse matrices, choosing a lipid emulsion as the most challenging matrix with regard to complexity. By combining the simple and straightforward HS-SPME sample pretreatment with an optimized GC-MS quantification method, a robust and sensitive method for 2,6-diisopropylphenol was developed. This method shows excellent sensitivity in the low μg/l concentration range (5-200 μg/l), good accuracy (94.8-98.8%) and precision (intra-day precision 0.1-9.2%, inter-day precision 2.0-7.7%). The method can be easily adapted to other, less complex, matrices such as water or swab extracts. Hence, the presented method holds the potential to serve as a single and simple analytical procedure for 2,6-diisopropylphenol analysis in various types of samples, such as those required in, e.g., adsorption/absorption studies, which typically deal with a variety of different surfaces (steel, plastic, glass, etc.) and solutions/matrices including lipid emulsions. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Highly sensitive index of sympathetic activity based on time-frequency spectral analysis of electrodermal activity.

    PubMed

    Posada-Quintero, Hugo F; Florian, John P; Orjuela-Cañón, Álvaro D; Chon, Ki H

    2016-09-01

    Time-domain indices of electrodermal activity (EDA) have been used as a marker of sympathetic tone. However, they often show high variation between subjects and low consistency, which has precluded their general use as a marker of sympathetic tone. To examine whether power spectral density analysis of EDA can provide more consistent results, we recently performed a variety of sympathetic tone-evoking experiments (43). We found a significant increase in the spectral power in the frequency range of 0.045 to 0.25 Hz when sympathetic tone-evoking stimuli were induced. The sympathetic tone assessed by the power spectral density of EDA was found to have lower variation and more sensitivity for certain, but not all, stimuli compared with the time-domain analysis of EDA. We surmise that this lack of sensitivity in certain sympathetic tone-inducing conditions with time-invariant spectral analysis of EDA may lie in its inability to characterize time-varying dynamics of the sympathetic tone. To overcome the disadvantages of time-domain and time-invariant power spectral indices of EDA, we developed a highly sensitive index of sympathetic tone based on time-frequency analysis of EDA signals. We hypothesized that a more sensitive measure of sympathetic control can be developed using time-varying spectral analysis. Its efficacy was tested using experiments designed to elicit sympathetic dynamics. Twelve subjects underwent four tests known to elicit sympathetic tone arousal: cold pressor, tilt table, stand test, and the Stroop task. Variable frequency complex demodulation, a recently developed technique for time-frequency analysis, was used to obtain spectral amplitudes associated with EDA. We found that the time-varying spectral frequency band 0.08-0.24 Hz was most responsive to stimulation. Spectral power for frequencies higher than 0.24 Hz was determined to be unrelated to the sympathetic dynamics because it comprised less than 5% of the total power. The mean value of the time-varying spectral amplitudes in the frequency band 0.08-0.24 Hz was used as the index of sympathetic tone, termed TVSymp. TVSymp was found to be overall the most sensitive to the stimuli, as evidenced by a low coefficient of variation (0.54), higher consistency (intra-class correlation, 0.96), higher sensitivity (Youden's index > 0.75), and a larger area under the receiver operating characteristic (ROC) curve (>0.8; accuracy > 0.88) compared with time-domain and time-invariant spectral indices, including heart rate variability. Copyright © 2016 the American Physiological Society.
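
    A hedged sketch of the band-limited, time-varying spectral amplitude idea described above: the study uses variable frequency complex demodulation, whereas this stand-in uses a plain short-time Fourier transform, and the synthetic EDA-like signal is purely illustrative.

      import numpy as np
      from scipy.signal import stft

      rng = np.random.default_rng(1)
      fs = 2.0                                       # EDA is typically sampled at a few hertz
      t = np.arange(0, 600, 1 / fs)                  # ten minutes of synthetic data
      eda = 2.0 + 0.3 * np.sin(2 * np.pi * 0.12 * t) + 0.05 * rng.standard_normal(t.size)

      f, tt, Z = stft(eda, fs=fs, nperseg=256, noverlap=192)
      band = (f >= 0.08) & (f <= 0.24)               # the band reported as most responsive
      tv_amp = np.abs(Z[band, :]).mean(axis=0)       # time-varying mean spectral amplitude in the band
      index = tv_amp.mean()                          # a TVSymp-like scalar summary (illustrative only)
      print(index)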

  19. Similitude design for the vibration problems of plates and shells: A review

    NASA Astrophysics Data System (ADS)

    Zhu, Yunpeng; Wang, You; Luo, Zhong; Han, Qingkai; Wang, Deyou

    2017-06-01

    Similitude design plays a vital role in the analysis of vibration and shock problems encountered in large engineering equipment. Similitude design, including dimensional analysis and governing equation method, is founded on the dynamic similitude theory. This study reviews the application of similitude design methods in engineering practice and summarizes the major achievements of the dynamic similitude theory in structural vibration and shock problems in different fields, including marine structures, civil engineering structures, and large power equipment. This study also reviews the dynamic similitude design methods for thin-walled and composite material plates and shells, including the most recent work published by the authors. Structure sensitivity analysis is used to evaluate the scaling factors to attain accurate distorted scaling laws. Finally, this study discusses the existing problems and the potential of the dynamic similitude theory for the analysis of vibration and shock problems of structures.

  20. Analysis of feline and canine allergen components in patients sensitized to pets.

    PubMed

    Ukleja-Sokołowska, Natalia; Gawrońska-Ukleja, Ewa; Żbikowska-Gotz, Magdalena; Socha, Ewa; Lis, Kinga; Sokołowski, Łukasz; Kuźmiński, Andrzej; Bartuzi, Zbigniew

    2016-01-01

    Component-resolved allergen diagnosis allows for a precise evaluation of the sensitization profiles of patients sensitized to felines and canines. An accurate interpretation of these results allows better insight into the evolution of a given patient's sensitizations, and allows for a more precise evaluation of their prognosis. 70 patients (42 women and 28 men, aged 18-65, with an average age of 35.5) with a positive feline or canine allergy diagnosis were included in the research group. 30 patients with a negative allergy diagnosis were included in the control group. The total IgE levels of all patients with allergies as well as their allergen-specific IgE to feline and canine allergens were measured. Specific IgE levels to canine (Can f 1, Can f 2, Can f 3, Can f 5) and feline (Fel d 1, Fel d 2, Fel d 4) allergen components were also measured with the use of the ImmunoCap method. Monosensitization to only one canine or feline component was found in 30% of patients. As predicted, the main feline allergen was Fel d 1, which sensitized as many as 93.9% of patients sensitized to felines. Among 65 patients sensitized to at least one feline component, for 30 patients (46.2%) the only sensitizing feline component was Fel d 1. Of these, 19 patients (63.3%) were not simultaneously sensitized to dogs, while 11 (36.7%), despite isolated sensitization to feline Fel d 1, displayed concurrent sensitization to one of the canine allergen components. Fel d 4 sensitized 49.2% of the research group. Among patients sensitized to canine components, 64.3% had heightened levels of specific IgE to Can f 1; monosensitization in that group occurred in 32.1% of the patients. Sensitization to Can f 5 was observed in 52.4% of the patients. Concurrent sensitization to several allergen components, not only cross-reactive ones but also those originating from different protein families, is a significant problem for patients sensitized to animals.

  1. Spectrometer gun

    DOEpatents

    Waechter, D.A.; Wolf, M.A.; Umbarger, C.J.

    1981-11-03

    A hand-holdable, battery-operated, microprocessor-based spectrometer gun is described that includes a low-power matrix display and sufficient memory to permit both real-time observation and extended analysis of detected radiation pulses. Universality of the incorporated signal processing circuitry permits operation with various detectors having differing pulse detection and sensitivity parameters.

  2. Autonomous Mars ascent and orbit rendezvous for earth return missions

    NASA Technical Reports Server (NTRS)

    Edwards, H. C.; Balmanno, W. F.; Cruz, Manuel I.; Ilgen, Marc R.

    1991-01-01

    The details of the assessment of autonomous Mars ascent and orbit rendezvous for earth return missions are presented. Analyses addressing navigation system assessments, trajectory planning, targeting approaches, flight control guidance strategies, and performance sensitivities are included. Tradeoffs in the analysis and design process are discussed.

  3. Targeted Quantitation of Proteins by Mass Spectrometry

    PubMed Central

    2013-01-01

    Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement. PMID:23517332

  4. Analysis of Transition-Sensitized Turbulent Transport Equations

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Thacker, William D.; Gatski, Thomas B.; Grosch, Chester E,

    2005-01-01

    The dynamics of an ensemble of linear disturbances in boundary-layer flows at various Reynolds numbers is studied through an analysis of the transport equations for the mean disturbance kinetic energy and energy dissipation rate. Effects of adverse and favorable pressure gradients on the disturbance dynamics are also included in the analysis. Unlike the fully turbulent regime, where nonlinear phase scrambling of the fluctuations affects the flow field even in proximity to the wall, the early-stage transition regime fluctuations studied here are influenced across the boundary layer by the solid boundary. The dominating dynamics in the disturbance kinetic energy and dissipation rate equations are described. These results are then used to formulate transition-sensitized turbulent transport equations, which are solved in a two-step process and applied to zero-pressure-gradient flow over a flat plate. Computed results are in good agreement with experimental data.

  5. Test and Analysis of a Buckling-Critical Large-Scale Sandwich Composite Cylinder

    NASA Technical Reports Server (NTRS)

    Schultz, Marc R.; Sleight, David W.; Gardner, Nathaniel W.; Rudd, Michelle T.; Hilburger, Mark W.; Palm, Tod E.; Oldfield, Nathan J.

    2018-01-01

    Structural stability is an important design consideration for launch-vehicle shell structures and it is well known that the buckling response of such shell structures can be very sensitive to small geometric imperfections. As part of an effort to develop new buckling design guidelines for sandwich composite cylindrical shells, an 8-ft-diameter honeycomb-core sandwich composite cylinder was tested under pure axial compression to failure. The results from this test are compared with finite-element-analysis predictions and overall agreement was very good. In particular, the predicted buckling load was within 1% of the test and the character of the response matched well. However, it was found that the agreement could be improved by including composite material nonlinearity in the analysis, and that the predicted buckling initiation site was sensitive to the addition of small bending loads to the primary axial load in analyses.

  6. Targeted quantitation of proteins by mass spectrometry.

    PubMed

    Liebler, Daniel C; Zimmerman, Lisa J

    2013-06-04

    Quantitative measurement of proteins is one of the most fundamental analytical tasks in a biochemistry laboratory, but widely used immunochemical methods often have limited specificity and high measurement variation. In this review, we discuss applications of multiple-reaction monitoring (MRM) mass spectrometry, which allows sensitive, precise quantitative analyses of peptides and the proteins from which they are derived. Systematic development of MRM assays is permitted by databases of peptide mass spectra and sequences, software tools for analysis design and data analysis, and rapid evolution of tandem mass spectrometer technology. Key advantages of MRM assays are the ability to target specific peptide sequences, including variants and modified forms, and the capacity for multiplexing that allows analysis of dozens to hundreds of peptides. Different quantitative standardization methods provide options that balance precision, sensitivity, and assay cost. Targeted protein quantitation by MRM and related mass spectrometry methods can advance biochemistry by transforming approaches to protein measurement.

  7. Sensitivity Analysis of earth and environmental models: a systematic review to guide scientific advancement

    NASA Astrophysics Data System (ADS)

    Wagener, Thorsten; Pianosi, Francesca

    2016-04-01

    Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.

  8. Documentation for a Structural Optimization Procedure Developed Using the Engineering Analysis Language (EAL)

    NASA Technical Reports Server (NTRS)

    Martin, Carl J., Jr.

    1996-01-01

    This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors written in FORTRAN generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design, but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine in a sequential linear programming procedure.

  9. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
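
    The Monte Carlo half of the reliability analysis described above amounts to sampling the random variables, evaluating a limit-state function, and converting the failure probability into a reliability index. The sketch below is a minimal stand-in: the lognormal strength, normal load and limit state g = strength - load are invented placeholders, not the report's response-surface model.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      n = 1_000_000

      # Hypothetical lognormal buckling strength and normal applied load (consistent load units assumed)
      strength = rng.lognormal(mean=np.log(120.0), sigma=0.08, size=n)
      load = rng.normal(loc=80.0, scale=12.0, size=n)

      g = strength - load                 # limit state: failure when g < 0
      pf = np.mean(g < 0.0)               # Monte Carlo estimate of the probability of failure
      beta = -norm.ppf(pf)                # corresponding reliability index
      print(pf, beta)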

  10. A theoretical-experimental methodology for assessing the sensitivity of biomedical spectral imaging platforms, assays, and analysis methods.

    PubMed

    Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C

    2018-01-01

    Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Study of Multimission Modular Spacecraft (MMS) propulsion requirements

    NASA Technical Reports Server (NTRS)

    Fischer, N. H.; Tischer, A. E.

    1977-01-01

    The cost effectiveness of various propulsion technologies for shuttle-launched multimission modular spacecraft (MMS) missions was determined with special attention to the potential role of ion propulsion. The primary criterion chosen for comparison for the different types of propulsion technologies was the total propulsion related cost, including the Shuttle charges, propulsion module costs, upper stage costs, and propulsion module development. In addition to the cost comparison, other criteria such as reliability, risk, and STS compatibility are examined. Topics covered include MMS mission models, propulsion technology definition, trajectory/performance analysis, cost assessment, program evaluation, sensitivity analysis, and conclusions and recommendations.

  12. SPS market analysis. [small solar thermal power systems

    NASA Technical Reports Server (NTRS)

    Goff, H. C.

    1980-01-01

    A market analysis task included personal interviews by GE personnel and supplemental mail surveys to acquire statistical data and to identify and measure attitudes, reactions and intentions of prospective small solar thermal power systems (SPS) users. Over 500 firms were contacted, including three ownership classes of electric utilities, industrial firms in the top SIC codes for energy consumption, and design engineering firms. A market demand model was developed which utilizes the data base developed by personal interviews and surveys, and projected energy price and consumption data to perform sensitivity analyses and estimate potential markets for SPS.

  13. A new image encryption algorithm based on the fractional-order hyperchaotic Lorenz system

    NASA Astrophysics Data System (ADS)

    Wang, Zhen; Huang, Xia; Li, Yu-Xia; Song, Xiao-Na

    2013-01-01

    We propose a new image encryption algorithm on the basis of the fractional-order hyperchaotic Lorenz system. While in the process of generating a key stream, the system parameters and the derivative order are embedded in the proposed algorithm to enhance the security. Such an algorithm is detailed in terms of security analyses, including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. The experimental results demonstrate that the proposed image encryption scheme has the advantages of large key space and high security for practical image encryption.

  14. Gas stream analysis using voltage-current time differential operation of electrochemical sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woo, Leta Yar-Li; Glass, Robert Scott; Fitzpatrick, Joseph Jay

    A method for analysis of a gas stream. The method includes identifying an affected region of an affected waveform signal corresponding to at least one characteristic of the gas stream. The method also includes calculating a voltage-current time differential between the affected region of the affected waveform signal and a corresponding region of an original waveform signal. The affected region and the corresponding region of the waveform signals have a sensitivity specific to the at least one characteristic of the gas stream. The method also includes generating a value for the at least one characteristic of the gas stream based on the calculated voltage-current time differential.

  15. A 3-Year Study of Predictive Factors for Positive and Negative Appendicectomies.

    PubMed

    Chang, Dwayne T S; Maluda, Melissa; Lee, Lisa; Premaratne, Chandrasiri; Khamhing, Srisongham

    2018-03-06

    Early and accurate identification or exclusion of acute appendicitis is the key to avoiding the morbidity of delayed treatment for true appendicitis or unnecessary appendicectomy, respectively. We aim (i) to identify potential predictive factors for positive and negative appendicectomies; and (ii) to analyse the use of ultrasound scans (US) and computed tomography (CT) scans for acute appendicitis. All appendicectomies that took place at our hospital from the 1st of January 2013 to the 31st of December 2015 were retrospectively recorded. Test results of potential predictive factors of acute appendicitis were recorded. Statistical analysis was performed using the Fisher exact test, logistic regression analysis, and calculation of sensitivity, specificity, and positive and negative predictive values. A total of 208 patients were included in this study; 184 had histologically proven acute appendicitis, and the other 24 had either non-appendicitis pathology or a normal appendix. Logistic regression analysis showed statistically significant associations between appendicitis and white cell count, neutrophil count, C-reactive protein, and bilirubin. Neutrophil count was the test with the highest sensitivity and negative predictive value, whereas bilirubin was the test with the highest specificity and positive predictive value (PPV). US and CT scans had high sensitivity and PPV for diagnosing appendicitis. No single test was sufficient to diagnose or exclude acute appendicitis by itself. Combining tests with high sensitivity (abnormal neutrophil count, and US and CT scans) and high specificity (raised bilirubin) may predict acute appendicitis more accurately.
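
    The diagnostic summary statistics quoted above all derive from a 2x2 table of test result against final histology. A small sketch with hypothetical counts (not the study's data):

      def diagnostics(tp, fp, fn, tn):
          """Sensitivity, specificity, and predictive values from a 2x2 contingency table."""
          return {
              "sensitivity": tp / (tp + fn),
              "specificity": tn / (tn + fp),
              "ppv": tp / (tp + fp),
              "npv": tn / (tn + fn),
          }

      # Hypothetical counts for one predictor (e.g., a raised neutrophil count)
      print(diagnostics(tp=160, fp=10, fn=24, tn=14))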

  16. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080

  17. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, SOBOL's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.

  18. Sensitivity analysis of a wing aeroelastic response

    NASA Technical Reports Server (NTRS)

    Kapania, Rakesh K.; Eldred, Lloyd B.; Barthelemy, Jean-Francois M.

    1991-01-01

    A variation of Sobieski's Global Sensitivity Equations (GSE) approach is implemented to obtain the sensitivity of the static aeroelastic response of a three-dimensional wing model. The formulation is quite general and accepts any aerodynamics and structural analysis capability. An interface code is written to convert the output of one analysis to the input of the other, and vice versa. Local sensitivity derivatives are calculated by either analytic methods or finite difference techniques. A program to combine the local sensitivities, such as the sensitivity of the stiffness matrix or the aerodynamic kernel matrix, into global sensitivity derivatives is developed. The aerodynamic analysis package FAST, which uses a lifting surface theory, and the structural package ELAPS, which implements Giles' equivalent plate model, are used.

  19. Diagnostic accuracy of contrast-enhanced ultrasound in assessing the therapeutic response to radio frequency ablation for liver tumors: systematic review and meta-analysis.

    PubMed

    Xuan, Min; Zhou, Fengsheng; Ding, Yan; Zhu, Qiaoying; Dong, Ji; Zhou, Hao; Cheng, Jun; Jiang, Xiao; Wu, Pengxi

    2018-04-01

    To review the diagnostic accuracy of contrast-enhanced ultrasound (CEUS) used to detect residual or recurrent liver tumors after radiofrequency ablation (RFA). The assessment uses contrast-enhanced computed tomography and/or contrast-enhanced magnetic resonance imaging as the gold standard of investigation. MEDLINE, EMBASE, and COCHRANE were systematically searched for all potentially eligible studies comparing CEUS with the reference standard following RFA. Risk of bias and applicability concerns were addressed by adopting the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Pooled point estimates for sensitivity, specificity, positive and negative likelihood ratios, and diagnostic odds ratios (DOR) with 95% CI were computed before plotting the sROC (summary receiver operating characteristic) curve. Meta-regression and subgroup analysis were used to identify the source of the heterogeneity that was detected. Publication bias was evaluated using Deeks' funnel plot asymmetry test. Ten eligible studies, published between 2001 and 2016 and covering 1162 lesions, were included in the final analysis. The quality of the included studies assessed by the QUADAS-2 tool was considered reasonable. The pooled sensitivity and specificity of CEUS in detecting residual or recurrent liver tumors were 0.90 (95% CI 0.85-0.94) and 1.00 (95% CI 0.99-1.00), respectively. The overall DOR was 420.10 (95% CI 142.30-1240.20). The sources of heterogeneity could not be precisely identified by meta-regression or subgroup analysis. No evidence of publication bias was found. This study confirmed that CEUS exhibits high sensitivity and specificity in assessing therapeutic responses to RFA for liver tumors.

  20. Cost-effectiveness analysis of left atrial appendage occlusion compared with pharmacological strategies for stroke prevention in atrial fibrillation.

    PubMed

    Lee, Vivian Wing-Yan; Tsai, Ronald Bing-Ching; Chow, Ines Hang-Iao; Yan, Bryan Ping-Yen; Kaya, Mehmet Gungor; Park, Jai-Wun; Lam, Yat-Yin

    2016-08-31

    Transcatheter left atrial appendage occlusion (LAAO) is a promising therapy for stroke prophylaxis in non-valvular atrial fibrillation (NVAF), but its cost-effectiveness remains understudied. This study evaluated the cost-effectiveness of LAAO for stroke prophylaxis in NVAF. A Markov decision analytic model was used to compare the cost-effectiveness of LAAO with 7 pharmacological strategies: aspirin alone, clopidogrel plus aspirin, warfarin, dabigatran 110 mg, dabigatran 150 mg, apixaban, and rivaroxaban. Outcome measures included quality-adjusted life years (QALYs), lifetime costs and incremental cost-effectiveness ratios (ICERs). Base-case data were derived from the ACTIVE, RE-LY, ARISTOTLE, ROCKET-AF, PROTECT-AF and PREVAIL trials. One-way sensitivity analysis varied CHADS2 score, HAS-BLED score, time horizons, and LAAO costs; probabilistic sensitivity analysis using 10,000 Monte Carlo simulations was conducted to assess parameter uncertainty. LAAO was considered cost-effective compared with aspirin, clopidogrel plus aspirin, and warfarin, with ICERs of US$5,115, $2,447, and $6,298 per QALY gained, respectively. LAAO was dominant (i.e. less costly but more effective) compared to the other strategies. Sensitivity analysis demonstrated favorable ICERs for LAAO against the other strategies across varied CHADS2 scores, HAS-BLED scores, time horizons (5 to 15 years) and LAAO costs. LAAO was cost-effective in 86.24% of 10,000 simulations using a threshold of US$50,000/QALY. Transcatheter LAAO is cost-effective for prevention of stroke in NVAF compared with 7 pharmacological strategies. Transcatheter LAAO is thus considered cost-effective against the 7 standard oral pharmacological strategies, including acetylsalicylic acid (ASA) alone, clopidogrel plus ASA, warfarin, dabigatran 110 mg, dabigatran 150 mg, apixaban, and rivaroxaban, for stroke prophylaxis in non-valvular atrial fibrillation management.
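
    The ICER comparison underlying the Markov analysis above is the incremental cost divided by the incremental QALYs, with dominance declared when a strategy is both cheaper and more effective. The sketch below uses placeholder costs and QALYs, not the trial-derived model inputs.

      def icer(cost_new, qaly_new, cost_ref, qaly_ref):
          """Incremental cost-effectiveness ratio of a new strategy versus a reference strategy."""
          d_cost, d_qaly = cost_new - cost_ref, qaly_new - qaly_ref
          if d_cost <= 0 and d_qaly >= 0:
              return "dominant (cheaper and more effective)"
          return d_cost / d_qaly

      # Hypothetical lifetime costs (USD) and QALYs per strategy
      print(icer(cost_new=28000, qaly_new=8.1, cost_ref=24500, qaly_ref=7.4))   # about $5,000 per QALY gained
      print(icer(cost_new=28000, qaly_new=8.1, cost_ref=31000, qaly_ref=7.9))   # dominant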

  1. Bronchial and non-bronchial systemic arteries: value of multidetector CT angiography in diagnosis and angiographic embolisation feasibility analysis.

    PubMed

    Lin, Yuning; Chen, Ziqian; Yang, Xizhang; Zhong, Qun; Zhang, Hongwen; Yang, Li; Xu, Shangwen; Li, Hui

    2013-12-01

    The aim of this study is to evaluate the diagnostic performance of multidetector CT angiography (CTA) in depicting bronchial and non-bronchial systemic arteries in patients with haemoptysis and to assess whether this modality helps determine the feasibility of angiographic embolisation. Fifty-two patients with haemoptysis between January 2010 and July 2011 underwent both preoperative multidetector CTA and digital subtraction angiography (DSA) imaging. Diagnostic performance of CTA in depicting arteries causing haemoptysis was assessed on a per-patient and a per-artery basis. The feasibility of the endovascular treatment evaluated by CTA was analysed. Sensitivity, specificity, and positive and negative predictive values for those analyses were determined. Fifty patients were included in the artery-presence-number analysis. In the per-patient analysis, neither CTA (P = 0.25) nor DSA (P = 1.00) showed statistical difference in the detection of arteries causing haemoptysis. The sensitivity, specificity, and positive and negative predictive values were 94%, 100%, 100%, and 40%, respectively, for the presence of pathologic arteries evaluated by CTA, and 98%, 100%, 100%, and 67%, respectively, for DSA. On the per-artery basis, CTA correctly identified 97% (107/110). Fifty-two patients were included in the feasibility analysis. The performance of CTA in predicting the feasibility of angiographic embolisation was not statistically different from the treatment performed (P = 1.00). The sensitivity, specificity, and positive and negative predictive values were 96%, 80%, 98% and 67%, respectively, for CTA. Multidetector CTA is an accurate imaging method in depicting the presence and number of arteries causing haemoptysis. This modality is also useful for determining the feasibility of angiographic embolisation for haemoptysis. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.

  2. Analysis of Bisphenol A, Alkylphenols, and Alkylphenol Ethoxylates in NIST SRM 2585 and Indoor House Dust by Gas Chromatography-Tandem Mass Spectrometry (GC/MS/MS).

    PubMed

    Fan, Xinghua; Kubwabo, Cariton; Wu, Fang; Rasmussen, Pat E

    2018-06-26

    Background: Ingestion of house dust has been demonstrated to be an important exposure pathway to several contaminants in young children. These compounds include bisphenol A (BPA), alkylphenols (APs), and alkylphenol ethoxylates (APEOs). Analysis of these compounds in house dust is challenging because of the complex composition of the sample matrix. Objective: The objective was to develop a simple and sensitive method to measure BPA, APs, and APEOs in indoor house dust. Methods: An integrated method that involved solvent extraction using sonication, sample cleanup by solid-phase extraction, derivatization by 2,2,2-trifluoro-N-methyl-N-(trimethylsilyl)acetamide, and analysis by GC coupled with tandem MS was developed for the simultaneous determination of BPA, APs, and APEOs in NIST Standard Reference Material (SRM) 2585 (Organic contaminants in house dust) and in settled house dust samples. Results: Target analytes included BPA, 4-tert-octylphenol (OP), OP monoethoxylate, OP diethoxylate, 4-n-nonylphenol (4nNP), 4nNP monoethoxylate (4nNP1EO), branched nonylphenol (NP), NP monoethoxylate, NP diethoxylate, NP triethoxylate, and NP tetraethoxylate. The method was sensitive, with method detection limits ranging from 0.05 to 5.1 μg/g, and average recoveries between 82 and 115%. All target analytes were detected in SRM 2585 and house dust except 4nNP and 4nNP1EO. Conclusions: The method is simple and fast, with high sensitivity and good reproducibility. It is applicable to the analysis of target analytes in similar matrixes, such as sediments, soil, and biosolids. Highlights: Values measured in SRM 2585 will be useful for future research in method development and method comparison.

  3. Accuracy of Presurgical Functional MR Imaging for Language Mapping of Brain Tumors: A Systematic Review and Meta-Analysis.

    PubMed

    Weng, Hsu-Huei; Noll, Kyle R; Johnson, Jason M; Prabhu, Sujit S; Tsai, Yuan-Hsiung; Chang, Sheng-Wei; Huang, Yen-Chu; Lee, Jiann-Der; Yang, Jen-Tsung; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh; Hazle, John D; Schomer, Donald F; Liu, Ho-Ling

    2018-02-01

    Purpose To compare functional magnetic resonance (MR) imaging for language mapping (hereafter, language functional MR imaging) with direct cortical stimulation (DCS) in patients with brain tumors and to assess factors associated with its accuracy. Materials and Methods PubMed/MEDLINE and related databases were searched for research articles published between January 2000 and September 2016. Findings were pooled by using bivariate random-effects and hierarchic summary receiver operating characteristic curve models. Meta-regression and subgroup analyses were performed to evaluate whether publication year, functional MR imaging paradigm, magnetic field strength, statistical threshold, and analysis software affected classification accuracy. Results Ten articles with a total of 214 patients were included in the analysis. On a per-patient basis, the pooled sensitivity and specificity of functional MR imaging was 44% (95% confidence interval [CI]: 14%, 78%) and 80% (95% CI: 54%, 93%), respectively. On a per-tag basis (ie, each DCS stimulation site or "tag" was considered a separate data point across all patients), the pooled sensitivity and specificity were 67% (95% CI: 51%, 80%) and 55% (95% CI: 25%, 82%), respectively. The per-tag analysis showed significantly higher sensitivity for studies with shorter functional MR imaging session times (P = .03) and relaxed statistical threshold (P = .05). Significantly higher specificity was found when expressive language task (P = .02), longer functional MR imaging session times (P < .01), visual presentation of stimuli (P = .04), and stringent statistical threshold (P = .01) were used. Conclusion Results of this study showed moderate accuracy of language functional MR imaging when compared with intraoperative DCS, and the included studies displayed significant methodologic heterogeneity. © RSNA, 2017 Online supplemental material is available for this article.

  4. Comparison of next-generation sequencing mutation profiling with BRAF and IDH1 mutation-specific immunohistochemistry.

    PubMed

    Jabbar, Kausar J; Luthra, Rajalakshmi; Patel, Keyur P; Singh, Rajesh R; Goswami, Rashmi; Aldape, Ken D; Medeiros, L Jeffrey; Routbort, Mark J

    2015-04-01

    Mutation-specific antibodies for BRAF V600E and IDH1 R132H offer convenient immunohistochemical (IHC) assays to detect these mutations in tumors. Previous studies using these antibodies have shown high sensitivity and specificity, but use in routine diagnosis with qualitative assessment has not been well studied. In this retrospective study, we reviewed BRAF and IDH1 mutation-specific IHC results compared with separately obtained clinical next-generation sequencing results. For 67 tumors with combined IDH1 IHC and mutation data, IHC was unequivocally reported as positive or negative in all cases. Sensitivity of IHC for IDH1 R132H was 98% and specificity was 100% compared with mutation status. Four IHC-negative samples showed non-R132H IDH1 mutations including R132C, R132G, and P127T. For 128 tumors with combined BRAF IHC and mutation data, IHC was positive in 33, negative in 82, and equivocal in 13 tumors. The sensitivity of IHC was 97% and specificity was 99% when including only unequivocally positive or negative results. If equivocal IHC cases were included in the analysis as negative, sensitivity fell to 81%. If equivocal cases were classified as positive, specificity dropped to 91%. Eight IHC-negative samples showed non-V600E BRAF mutations including V600K, N581I, V600M, and K601E. We conclude that IHC for BRAF V600E and IDH1 R132H is relatively sensitive and specific, but there is a discordance rate that is not trivial. In addition, a significant proportion of patients harbor BRAF non-V600E or IDH1 non-R132H mutations not detectable by IHC, potentially limiting utility of IHC screening for BRAF and IDH1 mutations.

  5. Salmonella testing of pooled pre-enrichment broth cultures for screening multiple food samples.

    PubMed

    Price, W R; Olsen, R A; Hunter, J E

    1972-04-01

    A method has been described for testing multiple food samples for Salmonella without loss in sensitivity. The method pools multiple pre-enrichment broth cultures into single enrichment broths. The subsequent stages of the Salmonella analysis are not altered. The method was found applicable to several dry food materials including nonfat dry milk, dried egg albumin, cocoa, cottonseed flour, wheat flour, and shredded coconut. As many as 25 pre-enrichment broth cultures were pooled without apparent loss in the sensitivity of Salmonella detection as compared to individual sample analysis. The procedure offers a simple, yet effective, way to increase sample capacity in the Salmonella testing of foods, particularly where a large proportion of samples ordinarily is negative. It also permits small portions of pre-enrichment broth cultures to be retained for subsequent individual analysis if positive tests are found. Salmonella testing of pooled pre-enrichment broths provides increased consumer protection for a given amount of analytical effort as compared to individual sample analysis.

  6. A comprehensive evaluation of various sensitivity analysis methods: A case study with a hydrological model

    DOE PAGES

    Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...

    2014-01-01

    Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400-600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, a minimum of 1050 samples is needed to compute the first-order and total sensitivity indices correctly. These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.

  7. UTI diagnosis and antibiogram using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Kastanos, Evdokia; Kyriakides, Alexandros; Hadjigeorgiou, Katerina; Pitris, Constantinos

    2009-07-01

    Urinary tract infection diagnosis and antibiogram require a 48 hour waiting period using conventional methods. This results in ineffective treatments, increased costs and most importantly in increased resistance to antibiotics. In this work, a novel method for classifying bacteria and determining their sensitivity to an antibiotic using Raman spectroscopy is described. Raman spectra of three species of gram negative Enterobacteria, most commonly responsible for urinary tract infections, were collected. The study included 25 samples each of E.coli, Klebsiella p. and Proteus spp. A novel algorithm based on spectral ratios followed by discriminant analysis resulted in classification with over 94% accuracy. Sensitivity and specificity for the three types of bacteria ranged from 88-100%. For the development of an antibiogram, bacterial samples were treated with the antibiotic ciprofloxacin to which they were all sensitive. Sensitivity to the antibiotic was evident after analysis of the Raman signatures of bacteria treated or not treated with this antibiotic as early as two hours after exposure. This technique can lead to the development of new technology for urinary tract infection diagnosis and antibiogram with same day results, bypassing urine cultures and avoiding all undesirable consequences of current practice.

  8. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361

  9. Urinary tract infection diagnosis and response to antibiotics using Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Kastanos, Evdokia; Kyriakides, Alexandros; Hadjigeorgiou, Katerina; Pitris, Constantinos

    2009-02-01

    Urinary tract infection diagnosis and antibiogram require a 48 hour waiting period using conventional methods. This results in ineffective treatments, increased costs and most importantly in increased resistance to antibiotics. In this work, a novel method for classifying bacteria and determining their sensitivity to an antibiotic using Raman spectroscopy is described. Raman spectra of three species of gram negative Enterobacteria, most commonly responsible for urinary tract infections, were collected. The study included 25 samples each of E.coli, Klebsiella p. and Proteus spp. A novel algorithm based on spectral ratios followed by discriminant analysis resulted in classification with over 94% accuracy. Sensitivity and specificity for the three types of bacteria ranged from 88-100%. For the development of an antibiogram, bacterial samples were treated with the antibiotic ciprofloxacin to which they were all sensitive. Sensitivity to the antibiotic was evident after analysis of the Raman signatures of bacteria treated or not treated with this antibiotic as early as two hours after exposure. This technique can lead to the development of new technology for urinary tract infection diagnosis and antibiogram with same day results, bypassing urine cultures and avoiding all undesirable consequences of current practice.

  10. Assessing direct analysis in real-time-mass spectrometry (DART-MS) for the rapid identification of additives in food packaging.

    PubMed

    Ackerman, L K; Noonan, G O; Begley, T H

    2009-12-01

    The ambient ionization technique direct analysis in real time (DART) was characterized and evaluated for the screening of food packaging for the presence of packaging additives using a benchtop mass spectrometer (MS). Approximate optimum conditions were determined for 13 common food-packaging additives, including plasticizers, anti-oxidants, colorants, grease-proofers, and ultraviolet light stabilizers. Method sensitivity and linearity were evaluated using solutions and characterized polymer samples. Additionally, the response of a model additive (di-ethyl-hexyl-phthalate) was examined across a range of sample positions, DART, and MS conditions (temperature, voltage and helium flow). Under optimal conditions, the molecular ion (M+H+) was the major ion for most additives. Additive responses were highly sensitive to sample and DART source orientation, as well as to DART flow rates, temperatures, and MS inlet voltages. DART-MS response was neither consistently linear nor quantitative in this setting, and sensitivity varied by additive. All additives studied were rapidly identified in multiple food-packaging materials by DART-MS/MS, suggesting this technique can be used to screen food packaging rapidly. However, method sensitivity and quantitation require further study and improvement.

  11. Partial pressure analysis in space testing

    NASA Technical Reports Server (NTRS)

    Tilford, Charles R.

    1994-01-01

    For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or partial pressure analyzers (PPAs) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibration procedures are described that can detect and/or correct other errors.
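
    A minimal sketch of the correction implied by the abstract's warning about relative gas sensitivities: a measured ion current is converted to a partial pressure by dividing by the analyzer's base (N2) sensitivity times the gas-specific relative sensitivity. All numerical values below are assumed placeholders; calibrated, instrument-specific factors must be used in practice.

```python
# Minimal sketch of the usual partial-pressure correction: divide the measured
# ion current by the analyzer's N2 sensitivity times the gas's relative
# sensitivity. All numbers below are assumed, illustrative values only;
# calibrated, instrument-specific factors should be used in practice.
S_N2 = 1.0e-4            # A/Pa, base sensitivity for N2 (assumed calibration)
relative_sensitivity = { # relative to N2; handbook-style placeholder values
    "N2": 1.0,
    "H2": 0.44,
    "H2O": 1.0,
    "Ar": 1.2,
}

def partial_pressure(ion_current_A, gas):
    """Partial pressure (Pa) from a measured ion current for a given gas."""
    return ion_current_A / (S_N2 * relative_sensitivity[gas])

print(f"Ar at 2.4e-8 A -> {partial_pressure(2.4e-8, 'Ar'):.2e} Pa")
```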

  12. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method (ESQM v5.2)

    NASA Astrophysics Data System (ADS)

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-12-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
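
    For readers unfamiliar with Sobol' indices, the sketch below estimates first-order and total indices by plain Monte Carlo with SALib's Saltelli sampler. This is a different, more expensive estimator than the Effective Quadratures approach used in the paper, and the parameter names, ranges, and surrogate response are placeholders rather than the study's values.

```python
# Monte Carlo illustration of first-order and total Sobol' indices using
# SALib's Saltelli sampler. This is a different (more expensive) estimator
# than the Effective Quadratures approach in the paper; the vegetation
# parameter ranges below are placeholders, not the study's values.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["stem_density", "plant_height", "stem_diameter"],
    "bounds": [[100.0, 1000.0], [0.1, 1.0], [0.002, 0.01]],
}

X = saltelli.sample(problem, 1024)  # (N*(2D+2), D) parameter sets

def surrogate(x):
    # Stand-in for a COAWST run: wave dissipation grows with density and
    # height, weakly with diameter (illustrative functional form only).
    d, h, s = x
    return d * h**1.5 + 50.0 * s + 0.01 * d * s

Y = np.apply_along_axis(surrogate, 1, X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:13s} S1={s1:.2f} ST={st:.2f}")
```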

  13. Pyrotechnic hazards classification and evaluation program. Electrostatic vulnerability of the E8 and XM15/XM165 clusters, phase 2

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An investigation into the electrostatic phenomena associated with the manufacturing and handling of explosives is discussed. The testing includes measurement of the severity of the primary charge generation mechanism, triboelectric effects between dissimilar surfaces; refinement of equivalent circuits of the XM15/XM165 and E8 fuse trains; evaluation of the electrostatic spark discharge characteristics predicted by an equivalent circuit analysis; and determination of the spark ignition sensitivity of materials, components, junctions, and subassemblies which compose the XM15/XM165 and E8 units. Special studies were also performed. These special tests included ignition sensitivity of the complete XM15 fuse train when subjected to discharges through its entire length, measurement of electrostatic potentials which occur during the E8 foaming operation during fabrication, and investigation of the inadvertent functioning of an XM15 cluster during manufacturing. The test results are discussed and related to the effectiveness of suggested modification to reduce the electrostatic ignition sensitivity.

  14. Verification bias: an under-recognized source of error in assessing the efficacy of MRI of the menisci.

    PubMed

    Richardson, Michael L; Petscavage, Jonelle M

    2011-11-01

    The sensitivity and specificity of magnetic resonance imaging (MRI) for diagnosis of meniscal tears have been studied extensively, with tears usually verified by surgery. However, surgically unverified cases are often not considered in these studies, leading to verification bias, which can falsely increase the sensitivity and decrease the specificity estimates. Our study suggests that such bias may be very common in the meniscal MRI literature, and illustrates techniques to detect and correct for such bias. PubMed was searched for articles estimating sensitivity and specificity of MRI for meniscal tears. These were assessed for verification bias, deemed potentially present if a study included any patients whose MRI findings were not surgically verified. Retrospective global sensitivity analysis (GSA) was performed when possible. Thirty-nine of the 314 studies retrieved from PubMed specifically dealt with meniscal tears. All 39 included unverified patients, and hence, potential verification bias. Only seven articles included sufficient information to perform GSA. Of these, one showed definite verification bias, two showed no bias, and four others showed bias within certain ranges of disease prevalence. Only 9 of 39 acknowledged the possibility of verification bias. Verification bias is underrecognized and potentially common in published estimates of the sensitivity and specificity of MRI for the diagnosis of meniscal tears. When possible, it should be avoided by proper study design. If unavoidable, it should be acknowledged. Investigators should tabulate unverified as well as verified data. Finally, verification bias should be estimated; if present, corrected estimates of sensitivity and specificity should be used. Our online web-based calculator makes this process relatively easy. Copyright © 2011 AUR. Published by Elsevier Inc. All rights reserved.
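
    One standard correction for verification bias is that of Begg and Greenes (1983), which re-weights disease rates observed among verified patients by the overall counts of test-positive and test-negative patients. The article's retrospective global sensitivity analysis is a different technique, so the sketch below only illustrates the correction idea, with invented counts.

```python
# Begg-Greenes-style correction for verification bias (an illustration of the
# general correction idea; the article's global sensitivity analysis is a
# different technique). Counts below are invented for the example.
def corrected_accuracy(n_pos, n_neg, ver_pos_d, ver_pos_nd, ver_neg_d, ver_neg_nd):
    """
    n_pos, n_neg:   all patients testing MRI-positive / MRI-negative
    ver_pos_d/nd:   verified MRI-positive patients with / without a tear
    ver_neg_d/nd:   verified MRI-negative patients with / without a tear
    """
    p_d_pos = ver_pos_d / (ver_pos_d + ver_pos_nd)   # P(tear | MRI+), verified
    p_d_neg = ver_neg_d / (ver_neg_d + ver_neg_nd)   # P(tear | MRI-), verified
    se = n_pos * p_d_pos / (n_pos * p_d_pos + n_neg * p_d_neg)
    sp = n_neg * (1 - p_d_neg) / (n_neg * (1 - p_d_neg) + n_pos * (1 - p_d_pos))
    return se, sp

# Example: 120 MRI-positive and 200 MRI-negative knees, but mostly the
# MRI-positive ones are sent to surgery for verification.
se, sp = corrected_accuracy(n_pos=120, n_neg=200,
                            ver_pos_d=95, ver_pos_nd=15,
                            ver_neg_d=4, ver_neg_nd=26)
print(f"corrected sensitivity={se:.2f}, specificity={sp:.2f}")
```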

  15. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    NASA Astrophysics Data System (ADS)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, little guidance is available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
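
    A simplified sketch of the bootstrap-based convergence check described above: resample the model input/output sample, recompute a sensitivity measure, and track confidence-interval width as the sample size grows. The absolute Spearman correlation is used here only as a cheap stand-in for a proper sensitivity index, and the toy model is an assumption.

```python
# Simplified sketch of a bootstrap convergence check for sensitivity indices:
# resample the (X, Y) sample, recompute a cheap sensitivity measure (here the
# absolute Spearman correlation, as a stand-in for a proper index), and track
# confidence-interval width versus sample size.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

def sensitivity(X, Y):
    return np.array([abs(spearmanr(X[:, i], Y)[0]) for i in range(X.shape[1])])

def bootstrap_ci_width(X, Y, n_boot=200):
    n = len(Y)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        est.append(sensitivity(X[idx], Y[idx]))
    est = np.array(est)
    lo, hi = np.percentile(est, [2.5, 97.5], axis=0)
    return hi - lo

# Toy model with three inputs of decreasing influence.
for n in (100, 500, 2000):
    X = rng.uniform(size=(n, 3))
    Y = 3 * X[:, 0] + 1 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0, 0.2, n)
    widths = bootstrap_ci_width(X, Y)
    print(f"n={n:5d}  95% CI widths:", np.round(widths, 3))
```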

  16. Ecosystem Services and Climate Change Considerations for ...

    EPA Pesticide Factsheets

    Freshwater habitats provide fishable, swimmable and drinkable resources and are a nexus of geophysical and biological processes. These processes in turn influence the persistence and sustainability of populations, communities and ecosystems. Climate change and land-use change encompass numerous stressors of potential exposure, including the introduction of toxic contaminants, invasive species, and disease in addition to physical drivers such as temperature and hydrologic regime. A systems approach that includes the scientific and technologic basis of assessing the health of ecosystems is needed to effectively protect human health and the environment. The Integrated Environmental Modeling Framework “iemWatersheds” has been developed as a consistent and coherent means of forecasting the cumulative impact of co-occurring stressors. The Framework consists of three facilitating technologies: Data for Environmental Modeling (D4EM) that automates the collection and standardization of input data; the Framework for Risk Assessment of Multimedia Environmental Systems (FRAMES) that manages the flow of information between linked models; and the Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE) that provides post-processing and analysis of model outputs, including uncertainty and sensitivity analysis. Five models are linked within the Framework to provide multimedia simulation capabilities for hydrology and water quality processes: the Soil Water

  17. The performance of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury: a meta-analysis.

    PubMed

    Wang, Z X; Chen, S L; Wang, Q Q; Liu, B; Zhu, J; Shen, J

    2015-06-01

    The aim of this study was to evaluate the accuracy of magnetic resonance imaging in the detection of triangular fibrocartilage complex injury through a meta-analysis. A comprehensive literature search was conducted before 1 April 2014. All studies comparing magnetic resonance imaging results with arthroscopy or open surgery findings were reviewed, and 25 studies that satisfied the eligibility criteria were included. Data were pooled to yield pooled sensitivity and specificity, which were respectively 0.83 and 0.82. In detection of central and peripheral tears, magnetic resonance imaging had respectively a pooled sensitivity of 0.90 and 0.88 and a pooled specificity of 0.97 and 0.97. Six high-quality studies using Ringler's recommended magnetic resonance imaging parameters were selected for analysis to determine whether optimal imaging protocols yielded better results. The pooled sensitivity and specificity of these six studies were 0.92 and 0.82, respectively. The overall accuracy of magnetic resonance imaging was acceptable. For peripheral tears, the pooled data showed a relatively high accuracy. Magnetic resonance imaging with appropriate parameters is an ideal method for diagnosing different types of triangular fibrocartilage complex tears. © The Author(s) 2015.

  18. Damage sensitivity investigations of EMI technique on different materials through coupled field analysis

    NASA Astrophysics Data System (ADS)

    Joshi, Bhrigu; Adhikari, Sailesh; Bhalla, Suresh

    2016-04-01

    This paper presents a comparative study through the piezoelectric coupled field analysis mode of finite element method (FEM) on detection of damages of varying magnitude, encompassing three different types of structural materials, using piezo impedance transducers. An aluminum block, a concrete block and a steel block of dimensions 48×48×10 mm were modelled in finite element software ANSYS. A PZT patch of 10×10×0.3 mm was also included in the model as surface bonded on the block. Coupled field analysis (CFA) was performed to obtain the admittance signatures of the piezo sensor in the frequency range of 0-250 kHz. The root mean square deviation (RMSD) index was employed to quantify the degree of variation of the signatures. It was found that concrete exhibited deviation in the signatures only with the change of damping values. However, the other two materials showed variation in the signatures even with changes in density and elasticity values in a small portion of the specimen. The comparative study shows that the PZT patches are more sensitive to damage detection in materials with low damping and the sensitivity typically decreases with increase in the damping.
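
    The RMSD index referred to above is, in the EMI literature, usually the root-mean-square deviation of the post-damage conductance signature from the baseline, expressed as a percentage; the paper does not spell out its exact variant, so the standard form is assumed below, with made-up signatures.

```python
# RMSD damage index commonly used with EMI admittance signatures (standard
# form assumed; the paper does not spell out its exact variant):
# RMSD (%) = 100 * sqrt( sum((G1 - G0)^2) / sum(G0^2) ),
# where G0 and G1 are baseline and post-damage conductance over the sweep.
import numpy as np

def rmsd_index(g_baseline, g_damaged):
    g0 = np.asarray(g_baseline, dtype=float)
    g1 = np.asarray(g_damaged, dtype=float)
    return 100.0 * np.sqrt(np.sum((g1 - g0) ** 2) / np.sum(g0 ** 2))

# Illustrative signatures over a 0-250 kHz sweep (made-up data).
f = np.linspace(0, 250e3, 1000)
g0 = 1e-3 * (1 + 0.2 * np.sin(f / 2e4))
g1 = g0 * (1 + 0.03 * np.sin(f / 1.5e4 + 0.5))   # small shift from "damage"
print(f"RMSD = {rmsd_index(g0, g1):.2f} %")
```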

  19. The clinical role of microRNA-21 as a promising biomarker in the diagnosis and prognosis of colorectal cancer: a systematic review and meta-analysis.

    PubMed

    Peng, Qiliang; Zhang, Xueli; Min, Ming; Zou, Li; Shen, Peipei; Zhu, Yaqun

    2017-07-04

    This systematic analysis aimed to investigate the value of microRNA-21 (miR-21) in colorectal cancer for multiple purposes, including diagnosis and prognosis, as well as its predictive power in combination biomarkers. Fifty-seven eligible studies were included in our meta-analysis, including 25 studies for diagnostic meta-analysis and 32 for prognostic meta-analysis. For the diagnostic meta-analysis of miR-21 alone, the overall pooled results for sensitivity, specificity, and area under the curve (AUC) were 0.64 (95% CI: 0.53-0.74), 0.85 (0.79-0.90), and 0.85 (0.81-0.87), respectively. Circulating samples presented corresponding values of 0.72 (0.63-0.79), 0.84 (0.78-0.89), and 0.86 (0.83-0.89), respectively. For the diagnostic meta-analysis of miR-21-related combination biomarkers, the above three parameters were 0.79 (0.69-0.86), 0.79 (0.68-0.87), and 0.86 (0.83-0.89), respectively. Notably, subgroup analysis suggested that miRNA combination markers in circulation exhibited high predictive power, with sensitivity of 0.85 (0.70-0.93), specificity of 0.86 (0.77-0.92), and AUC of 0.92 (0.89-0.94). For the prognostic meta-analysis, patients with higher expression of miR-21 had significant shorter disease-free survival [DFS; pooled hazard ratio (HR): 1.60; 95% CI: 1.20-2.15] and overall survival (OS; 1.54; 1.27-1.86). The combined HR in tissues for DFS and OS were 1.76 (1.31-2.36) and 1.58 (1.30-1.93), respectively. Our comprehensive systematic review revealed that circulating miR-21 may be suitable as a diagnostic biomarker, while tissue miR-21 could be a prognostic marker for colorectal cancer. In addition, miRNA combination biomarkers may provide a new approach for clinical application.

  20. Sensitivity and Specificity of CT and Its Signs for Diagnosis of Strangulation in Patients with Acute Small Bowel Obstruction.

    PubMed

    Jha, Ashwini Kumar; Tang, Wen Hao; Bai, Zhi Bin; Xiao, Jia Quan

    2014-01-01

    To perform a meta-analysis to review the sensitivity and specificity of computed tomography (CT) and different known CT signs for the diagnosis of strangulation in patients with acute small bowel obstruction. A comprehensive PubMed search was performed for all reports that evaluated the use of CT and discussed different CT criteria for the diagnosis of acute SBO. Articles published in the English language from January 1978 to June 2008 were included. Review articles, case reports, pictorial essays and articles without original data were excluded. The bivariate random effects model was used to obtain pooled sensitivity and pooled specificity. The summary receiver operating characteristic curve was calculated using Meta-DiSc. The software OpenBUGS 3.0.3 was used to summarize the data. A total of 12 studies fulfilled the inclusion criteria. The pooled sensitivity and specificity of CT in the diagnosis of strangulation were 0.720 (95% CI 0.674 to 0.763) and 0.866 (95% CI 0.837 to 0.892), respectively. Among the different CT signs, mesenteric edema had the highest pooled sensitivity (0.741) and lack of bowel wall enhancement had the highest pooled specificity (0.991). This review demonstrates that CT is highly sensitive as well as specific in the preoperative diagnosis of strangulated SBO, in accordance with published studies. Our analysis also shows that "presence of mesenteric fluid" is the most sensitive and "lack of bowel wall enhancement" is the most specific CT sign of strangulation, and it justifies the need for large-scale prospective studies to validate these results and to determine a clinical protocol.
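
    As a small worked example of what the pooled estimates above imply, the positive and negative likelihood ratios (not reported in the abstract) follow directly from pooled sensitivity and specificity; the arithmetic below is purely illustrative.

```python
# Worked example using the pooled estimates reported above: likelihood ratios
# derived from pooled sensitivity and specificity (the abstract itself does
# not report these; the arithmetic is purely illustrative).
se, sp = 0.720, 0.866
lr_pos = se / (1 - sp)          # how much a positive CT raises the odds
lr_neg = (1 - se) / sp          # how much a negative CT lowers the odds
print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")   # ~5.4 and ~0.32
```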

  1. A sensitivity analysis for a thermomechanical model of the Antarctic ice sheet and ice shelves

    NASA Astrophysics Data System (ADS)

    Baratelli, F.; Castellani, G.; Vassena, C.; Giudici, M.

    2012-04-01

    The outcomes of an ice sheet model depend on a number of parameters and physical quantities which are often estimated with large uncertainty, because of the lack of sufficient experimental measurements in such remote environments. Therefore, the efforts to improve the accuracy of the predictions of ice sheet models by including more physical processes and interactions with atmosphere, hydrosphere and lithosphere can be affected by the inaccuracy of the fundamental input data. A sensitivity analysis can help to understand which input data most affect the different predictions of the model. In this context, a finite difference thermomechanical ice sheet model based on the Shallow-Ice Approximation (SIA) and on the Shallow-Shelf Approximation (SSA) has been developed and applied for the simulation of the evolution of the Antarctic ice sheet and ice shelves for the last 200 000 years. The sensitivity analysis of the model outcomes (e.g., the volume of the ice sheet and of the ice shelves, the basal melt rate of the ice sheet, the mean velocity of the Ross and Ronne-Filchner ice shelves, the wet area at the base of the ice sheet) with respect to the model parameters (e.g., the basal sliding coefficient, the geothermal heat flux, the present-day surface accumulation and temperature, the mean ice shelves viscosity, the melt rate at the base of the ice shelves) has been performed by computing three synthetic numerical indices: two local sensitivity indices and a global sensitivity index. Local sensitivity indices imply a linearization of the model and neglect both non-linear and joint effects of the parameters. The global variance-based sensitivity index, instead, takes into account the complete variability of the input parameters but is usually computed with a Monte Carlo approach, which is computationally very demanding for non-linear complex models. Therefore, the global sensitivity index has been computed using a second-order expansion of the model outputs in a neighborhood of the reference parameter values. The comparison of the three sensitivity indices proved that the approximation of the non-linear model with a second-order expansion is sufficient to show some differences between the local and the global indices. As a general result, the sensitivity analysis showed that most of the model outcomes are mainly sensitive to the present-day surface temperature and accumulation, which, in principle, can be measured more easily (e.g., with remote sensing techniques) than the other input parameters considered. On the other hand, the parameters to which the model is least sensitive are the basal sliding coefficient and the mean ice shelves viscosity.
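
    The local indices mentioned above amount to a linearization of the model around the reference parameter values. A generic central-difference sketch of a normalized local sensitivity index is given below; the stand-in model and reference values are assumptions, not the ice-sheet model.

```python
# Generic sketch of a normalized local sensitivity index by central
# differences, i.e. the linearization described in the abstract:
# S_i = (p_i / Y(p)) * dY/dp_i. The "model" and reference values below are
# placeholders, not the ice-sheet model itself.
import numpy as np

def local_sensitivity(model, p_ref, rel_step=0.01):
    p_ref = np.asarray(p_ref, dtype=float)
    y0 = model(p_ref)
    S = np.zeros_like(p_ref)
    for i, p in enumerate(p_ref):
        h = rel_step * abs(p) if p != 0 else rel_step
        up, dn = p_ref.copy(), p_ref.copy()
        up[i] += h
        dn[i] -= h
        dy_dp = (model(up) - model(dn)) / (2 * h)
        S[i] = p * dy_dp / y0          # dimensionless, relative sensitivity
    return S

# Placeholder stand-in for "ice volume as a function of parameters".
demo = lambda p: p[0] ** 2 * np.exp(-p[1]) + 10 * p[2]
print(local_sensitivity(demo, [2.0, 0.5, 1.0]))
```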

  2. 77 FR 16225 - Ramey Motors, Inc.; Analysis of Proposed Consent Order to Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-20

    ..., like anyone's Social Security number, date of birth, driver's license number or other state... include competitively sensitive information such as costs, sales statistics, inventories, formulas..., the rate be stated as an ``annual percentage rate'' using that term or the abbreviation ``APR.'' In...

  3. New Method for Analysis of Multiple Anthelmintic Residues in Animal Tissue

    USDA-ARS?s Scientific Manuscript database

    For the first time, 39 of the major anthelmintics can be detected in one rapid and sensitive LC-MS/MS method, including the flukicides, which have been generally overlooked in surveillance programs. Utilizing the QuEChERS approach, residues were extracted from liver and milk using acetonitrile, sod...

  4. High Sensitivity NMR and Mixture Analysis for Nematode Behavioral Metabolomics

    USDA-ARS?s Scientific Manuscript database

    Nematodes are the most abundant animal on earth, and they parasitize virtually all plants and animals. Caenorhabditis elegans is a free-living nematode that lives in soil and composting material. We have shown that C. elegans releases at least 40 small molecules into its environment including many...

  5. Iron Status in Toddlerhood Predicts Sensitivity to Psychostimulants in Children

    ERIC Educational Resources Information Center

    Turner, Catharyn A.; Xie, Diqiong; Zimmerman, Bridget M.; Calarge, Chadi A.

    2012-01-01

    Objective: Iron deficiency is associated with impaired dopaminergic signaling and externalizing behavior. The authors examine whether iron stores in toddlerhood influence later response to psychostimulants. Method: Youth participating in a study monitoring the long-term safety of risperidone were included in this analysis if they had received…

  6. Low cost charged-coupled device (CCD) based detectors for Shiga toxins activity analysis

    USDA-ARS?s Scientific Manuscript database

    To improve food safety there is a need to develop simple, low-cost sensitive devices for detection of foodborne pathogens and their toxins. We describe a simple and relatively low-cost webcam-based detector which can be used for various optical detection modalities, including fluorescence, chemilumi...

  7. HIGHLY SENSITIVE DIOXIN IMMUNOASSAY AND ITS APPLICATIONS TO SOIL AND BIOTA SAMPLES. (R825433)

    EPA Science Inventory

    Tetrachlorodibenzo-p-dioxin (TCDD) is a well-known highly toxic compound that is present in nearly all components of the global ecosystem, including air, soil, sediment, fish and humans. Dioxin analysis is equipment intensive and expensive requiring low ppt or even ppq ...

  8. 77 FR 13326 - Carpenter Technology Corporation and Latrobe Specialty Metals, Inc.; Analysis of Proposed...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-06

    ... established sales and marketing network in the United States that will allow it to be immediately competitive... making sure that your comment does not include any sensitive personal information, like anyone's Social... information such as costs, sales statistics, inventories, formulas, patterns, devices, manufacturing processes...

  9. 78 FR 7431 - Cbr Systems, Inc.; Analysis of Proposed Consent Order To Aid Public Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-01

    ... include any sensitive personal information, like anyone's Social Security number, date of birth, driver's... limited to, the following: name, address, email address, telephone number, date of birth, Social Security... collects personal information, such as fathers' Social Security numbers, and the company collects...

  10. Development of Water Quality Index for the United States: A Sensitivity Analysis

    EPA Science Inventory

    Background: Water quality is quantified using several measures, available from various data sources, which can be combined to create a single index of overall water quality. It is necessary to identify appropriate variables to include in an index which could be used for health re...

  11. Prevalence of and risk factors for latex sensitization in patients with spina bifida.

    PubMed

    Bernardini, R; Novembre, E; Lombardi, E; Mezzetti, P; Cianferoni, A; Danti, A D; Mercurella, A; Vierucci, A

    1998-11-01

    We determined the prevalence of and risk factors for latex sensitization in patients with spina bifida. A total of 59 consecutive subjects 2 to 40 years old with spina bifida answered a questionnaire, and underwent a latex skin prick test and determination of serum IgE specific for latex by RAST CAP radioimmunoassay. We also noted the relationships of total serum IgE and skin prick tests to common air and food allergens. In addition, skin prick plus prick tests were also done with fresh foods, including kiwi, pear, orange, almond, pineapple, apple, tomato and banana. Latex sensitization was present in 15 patients (25%) according to the presence of IgE specific to latex, as detected by a skin prick test in 9 and/or RAST CAP in 13. Five latex-sensitized patients (33.3%) had clinical manifestations, such as urticaria, conjunctivitis, angioedema, rhinitis and bronchial asthma, while using a latex glove and inflating a latex balloon. Atopy was present in 21 patients (35.6%). In 14 patients (23%) 1 or more skin tests were positive for fresh foods using a prick plus prick technique. Tomato, kiwi, and pear were the most common skin test positive foods. Univariate analysis revealed that a history of 5 or more operations, atopy and positive prick plus prick test results for pear and kiwi were significantly associated with latex sensitization. Multivariate analysis demonstrated that only atopy and a history of 5 or more operations were significantly and independently associated with latex sensitization. A fourth of the patients with spina bifida were sensitized to latex. Atopy and an elevated number of operations were significant and independent predictors of latex sensitization in these cases.

  12. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cacuci, Dan G.; Favorite, Jeffrey A.

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.

  13. Second-Order Sensitivity Analysis of Uncollided Particle Contributions to Radiation Detector Responses

    DOE PAGES

    Cacuci, Dan G.; Favorite, Jeffrey A.

    2018-04-06

    This work presents an application of Cacuci’s Second-Order Adjoint Sensitivity Analysis Methodology (2nd-ASAM) to the simplified Boltzmann equation that models the transport of uncollided particles through a medium to compute efficiently and exactly all of the first- and second-order derivatives (sensitivities) of a detector’s response with respect to the system’s isotopic number densities, microscopic cross sections, source emission rates, and detector response function. The off-the-shelf PARTISN multigroup discrete ordinates code is employed to solve the equations underlying the 2nd-ASAM. The accuracy of the results produced using PARTISN is verified by using the results of three test configurations: (1) a homogeneous sphere, for which the response is the exactly known total uncollided leakage, (2) a multiregion two-dimensional (r-z) cylinder, and (3) a two-region sphere for which the response is a reaction rate. For the homogeneous sphere, results for the total leakage as well as for the respective first- and second-order sensitivities are in excellent agreement with the exact benchmark values. For the nonanalytic problems, the results obtained by applying the 2nd-ASAM to compute sensitivities are in excellent agreement with central-difference estimates. The efficiency of the 2nd-ASAM is underscored by the fact that, for the cylinder, only 12 adjoint PARTISN computations were required by the 2nd-ASAM to compute all of the benchmark’s 18 first-order sensitivities and 224 second-order sensitivities, in contrast to the 877 PARTISN calculations needed to compute the respective sensitivities using central finite differences, and this number does not include the additional calculations that were required to find appropriate values of the perturbations to use for the central differences.
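
    The central-difference baseline mentioned in the abstract requires two model runs per first derivative and four per mixed second derivative, which is why 877 forward PARTISN calculations were needed against 12 adjoint solves. A generic sketch of those stencils follows; the toy response function is an assumption.

```python
# Generic central-difference stencils of the kind used as the verification
# baseline: 2 runs per first derivative, 4 runs per mixed second derivative.
import numpy as np

def central_first(model, x, i, h):
    e = np.zeros_like(x); e[i] = h
    return (model(x + e) - model(x - e)) / (2 * h)

def central_second_mixed(model, x, i, j, hi, hj):
    ei = np.zeros_like(x); ei[i] = hi
    ej = np.zeros_like(x); ej[j] = hj
    return (model(x + ei + ej) - model(x + ei - ej)
            - model(x - ei + ej) + model(x - ei - ej)) / (4 * hi * hj)

# Check on a toy response with known derivatives (d2f/dx0dx1 = 1).
f = lambda x: x[0] * x[1] + x[0] ** 2
x0 = np.array([1.0, 2.0])
print(central_first(f, x0, 0, 1e-4))                   # ~ x1 + 2*x0 = 4
print(central_second_mixed(f, x0, 0, 1, 1e-4, 1e-4))   # ~ 1
```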

  14. Prospective trial evaluating the sensitivity and specificity of 3,4-dihydroxy-6-[18F]-fluoro-L-phenylalanine (18F-DOPA) PET and MRI in patients with recurrent gliomas.

    PubMed

    Youland, Ryan S; Pafundi, Deanna H; Brinkmann, Debra H; Lowe, Val J; Morris, Jonathan M; Kemp, Bradley J; Hunt, Christopher H; Giannini, Caterina; Parney, Ian F; Laack, Nadia N

    2018-05-01

    Treatment-related changes can be difficult to differentiate from progressive glioma using MRI with contrast (CE). The purpose of this study is to compare the sensitivity and specificity of 18F-DOPA-PET and MRI in patients with recurrent glioma. Thirteen patients with MRI findings suspicious for recurrent glioma were prospectively enrolled and underwent 18F-DOPA-PET and MRI for neurosurgical planning. Stereotactic biopsies were obtained from regions of concordant and discordant PET and MRI CE, all within regions of T2/FLAIR signal hyperintensity. The sensitivity and specificity of 18F-DOPA-PET and CE were calculated based on histopathologic analysis. Receiver operating characteristic curve analysis revealed optimal tumor to normal (T/N) and SUVmax thresholds. In the 37 specimens obtained, 51% exhibited MRI contrast enhancement (M+) and 78% demonstrated 18F-DOPA-PET avidity (P+). Imaging characteristics included M-P- in 16%, M-P+ in 32%, M+P+ in 46% and M+P- in 5%. Histopathologic review of biopsies revealed grade II components in 16%, grade III in 43%, grade IV in 30% and no tumor in 11%. MRI CE sensitivity for recurrent tumor was 52% and specificity was 50%. PET sensitivity for tumor was 82% and specificity was 50%. A T/N threshold > 2.0 altered sensitivity to 76% and specificity to 100% and SUVmax > 1.36 improved sensitivity and specificity to 94 and 75%, respectively. 18F-DOPA-PET can provide increased sensitivity and specificity compared with MRI CE for visualizing the spatial distribution of recurrent gliomas. Future studies will incorporate 18F-DOPA-PET into re-irradiation target volume delineation for RT planning.
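
    The ROC-based threshold search described above is commonly done by maximizing Youden's J (sensitivity + specificity - 1); the abstract does not state which criterion was used, so Youden's J and the invented uptake data below are assumptions.

```python
# Sketch of ROC-based threshold selection by maximizing Youden's J
# (sensitivity + specificity - 1). The abstract does not state which
# criterion was used, so Youden's J is an assumption; data are invented,
# with 33 tumor and 4 no-tumor biopsies mirroring the reported proportions.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(3)
tn_ratio = np.concatenate([rng.normal(2.6, 0.7, 33), rng.normal(1.4, 0.4, 4)])
is_tumor = np.concatenate([np.ones(33, dtype=int), np.zeros(4, dtype=int)])

fpr, tpr, thresholds = roc_curve(is_tumor, tn_ratio)
j = tpr - fpr
best = np.argmax(j)
print(f"best T/N threshold ~ {thresholds[best]:.2f} "
      f"(sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f})")
```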

  15. Optical coherence tomography in the diagnosis of dysplasia and adenocarcinoma in Barrett's esophagus

    NASA Astrophysics Data System (ADS)

    Gladkova, N. D.; Zagaynova, E. V.; Zuccaro, G.; Kareta, M. V.; Feldchtein, F. I.; Balalaeva, I. V.; Balandina, E. B.

    2007-02-01

    Statistical analysis of endoscopic optical coherence tomography (EOCT) surveillance of 78 patients with Barrett's esophagus (BE) is presented in this study. The sensitivity of the OCT device in retrospective open detection of early malignancy (including high-grade dysplasia and intramucosal adenocarcinoma (IMAC)) was 75%, with specificity 82%, diagnostic accuracy 80%, positive predictive value 60%, and negative predictive value 87%. In the open recognition of IMAC, sensitivity was 81% and specificity was 85%. Results of a blind recognition with the same material were similar: sensitivity 77%, specificity 85%, diagnostic accuracy 82%, positive predictive value 70%, negative predictive value 87%. As the endoscopic detection of early malignancy is problematic, OCT holds great promise in enhancing the diagnostic capability of clinical GI endoscopy.

  16. Cost-effectiveness analysis of a randomized trial comparing care models for chronic kidney disease.

    PubMed

    Hopkins, Robert B; Garg, Amit X; Levin, Adeera; Molzahn, Anita; Rigatto, Claudio; Singer, Joel; Soltys, George; Soroka, Steven; Parfrey, Patrick S; Barrett, Brendan J; Goeree, Ron

    2011-06-01

    Potential cost and effectiveness of a nephrologist/nurse-based multifaceted intervention for stage 3 to 4 chronic kidney disease are not known. This study examines the cost-effectiveness of a chronic disease management model for chronic kidney disease. Cost and cost-effectiveness were prospectively gathered alongside a multicenter trial. The Canadian Prevention of Renal and Cardiovascular Endpoints Trial (CanPREVENT) randomized 236 patients to receive usual care (controls) and another 238 patients to multifaceted nurse/nephrologist-supported care that targeted factors associated with development of kidney and cardiovascular disease (intervention). Cost and outcomes over 2 years were examined to determine the incremental cost-effectiveness of the intervention. Base-case analysis included disease-related costs, and sensitivity analysis included all costs. Consideration of all costs produced statistically significant differences. A lower number of days in hospital explained most of the cost difference. For both base-case and sensitivity analyses with all costs included, the intervention group required fewer resources and had higher quality of life. The direction of the results was unchanged by the inclusion of various types of costs, consideration of payer or societal perspective, changes to the discount rate, and levels of GFR. The nephrologist/nurse-based multifaceted intervention represents good value for money because it reduces costs without reducing quality of life for patients with chronic kidney disease.

  17. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Introducing uncertainty analysis of nucleation and crystal growth models in Process Analytical Technology (PAT) system design of crystallization processes.

    PubMed

    Samad, Noor Asma Fazli Abdul; Sin, Gürkan; Gernaey, Krist V; Gani, Rafiqul

    2013-11-01

    This paper presents the application of uncertainty and sensitivity analysis as part of a systematic model-based process monitoring and control (PAT) system design framework for crystallization processes. For the uncertainty analysis, the Monte Carlo procedure is used to propagate input uncertainty, while for sensitivity analysis, global methods including the standardized regression coefficients (SRC) and Morris screening are used to identify the most significant parameters. The potassium dihydrogen phosphate (KDP) crystallization process is used as a case study, both in open-loop and closed-loop operation. In the uncertainty analysis, the impact on the predicted output of uncertain parameters related to the nucleation and the crystal growth model has been investigated for both a one- and two-dimensional crystal size distribution (CSD). The open-loop results show that the input uncertainties lead to significant uncertainties on the CSD, with appearance of a secondary peak due to secondary nucleation for both cases. The sensitivity analysis indicated that the most important parameters affecting the CSDs are nucleation order and growth order constants. In the proposed PAT system design (closed-loop), the target CSD variability was successfully reduced compared to the open-loop case, also when considering uncertainty in nucleation and crystal growth model parameters. The latter forms a strong indication of the robustness of the proposed PAT system design in achieving the target CSD and encourages its transfer to full-scale implementation. Copyright © 2013 Elsevier B.V. All rights reserved.
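
    Of the two global methods named above, standardized regression coefficients (SRC) are the simpler to illustrate: regress the Monte Carlo output on the sampled inputs and scale each coefficient by std(X_i)/std(Y). The parameter names, ranges, and surrogate response below are placeholders, not the KDP crystallization kinetics; Morris screening is not shown.

```python
# Sketch of standardized regression coefficients (SRC), one of the two global
# methods named above (Morris screening is not shown): regress the Monte Carlo
# output on the inputs and scale each coefficient by std(X_i)/std(Y).
# The "model" and parameter ranges are placeholders, not the KDP kinetics.
import numpy as np

rng = np.random.default_rng(7)
n = 2000
# Hypothetical uncertain kinetic parameters: nucleation order, growth order, rate constant.
X = np.column_stack([
    rng.uniform(1.0, 3.0, n),     # nucleation order
    rng.uniform(1.0, 2.0, n),     # growth order
    rng.uniform(0.5, 1.5, n),     # growth rate constant
])
Y = 2.0 * X[:, 0] + 0.8 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.3, n)

A = np.column_stack([np.ones(n), X])
beta = np.linalg.lstsq(A, Y, rcond=None)[0][1:]        # drop the intercept
src = beta * X.std(axis=0) / Y.std()
for name, s in zip(["nucleation order", "growth order", "rate constant"], src):
    print(f"{name:17s} SRC = {s:+.2f}")
```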

  19. Diagnostic Accuracy of Central Venous Catheter Confirmation by Bedside Ultrasound Versus Chest Radiography in Critically Ill Patients: A Systematic Review and Meta-Analysis.

    PubMed

    Ablordeppey, Enyo A; Drewry, Anne M; Beyer, Alexander B; Theodoro, Daniel L; Fowler, Susan A; Fuller, Brian M; Carpenter, Christopher R

    2017-04-01

    We performed a systematic review and meta-analysis to examine the accuracy of bedside ultrasound for confirmation of central venous catheter position and exclusion of pneumothorax compared with chest radiography. PubMed, Embase, Cochrane Central Register of Controlled Trials, reference lists, conference proceedings and ClinicalTrials.gov. Articles and abstracts describing the diagnostic accuracy of bedside ultrasound compared with chest radiography for confirmation of central venous catheters in sufficient detail to reconstruct 2 × 2 contingency tables were reviewed. Primary outcomes included the accuracy of confirming catheter positioning and detecting a pneumothorax. Secondary outcomes included feasibility, interrater reliability, and efficiency to complete bedside ultrasound confirmation of central venous catheter position. Investigators abstracted study details including research design and sonographic imaging technique to detect catheter malposition and procedure-related pneumothorax. Diagnostic accuracy measures included pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio. Fifteen studies with 1,553 central venous catheter placements were identified with a pooled sensitivity and specificity of catheter malposition by ultrasound of 0.82 (0.77-0.86) and 0.98 (0.97-0.99), respectively. The pooled positive and negative likelihood ratios of catheter malposition by ultrasound were 31.12 (14.72-65.78) and 0.25 (0.13-0.47). The sensitivity and specificity of ultrasound for pneumothorax detection was nearly 100% in the participating studies. Bedside ultrasound reduced mean central venous catheter confirmation time by 58.3 minutes. Risk of bias and clinical heterogeneity in the studies were high. Bedside ultrasound is faster than radiography at identifying pneumothorax after central venous catheter insertion. When a central venous catheter malposition exists, bedside ultrasound will identify four out of every five earlier than chest radiography.

  20. Single-tube analysis of DNA methylation with silica superparamagnetic beads.

    PubMed

    Bailey, Vasudev J; Zhang, Yi; Keeley, Brian P; Yin, Chao; Pelosky, Kristen L; Brock, Malcolm; Baylin, Stephen B; Herman, James G; Wang, Tza-Huei

    2010-06-01

    DNA promoter methylation is a signature for the silencing of tumor suppressor genes. Most widely used methods to detect DNA methylation involve 3 separate, independent processes: DNA extraction, bisulfite conversion, and methylation detection via a PCR method, such as methylation-specific PCR (MSP). This method includes many disconnected steps with associated losses of material, potentially reducing the analytical sensitivity required for analysis of challenging clinical samples. Methylation on beads (MOB) is a new technique that integrates DNA extraction, bisulfite conversion, and PCR in a single tube via the use of silica superparamagnetic beads (SSBs) as a common DNA carrier for facilitating cell debris removal and buffer exchange throughout the entire process. In addition, PCR buffer is used to directly elute bisulfite-treated DNA from SSBs for subsequent target amplifications. The diagnostic sensitivity of MOB was evaluated by methylation analysis of the CDKN2A [cyclin-dependent kinase inhibitor 2A (melanoma, p16, inhibits CDK4); also known as p16(INK4a)] promoter in serum DNA of lung cancer patients and compared with that of conventional methods. Methylation analysis consisting of DNA extraction followed by bisulfite conversion and MSP was successfully carried out within 9 h in a single tube. The median pre-PCR DNA yield was 6.61-fold higher with the MOB technique than with conventional techniques. Furthermore, MOB increased the diagnostic sensitivity in our analysis of the CDKN2A promoter in patient serum by successfully detecting methylation in 74% of cancer patients, vs the 45% detection rate obtained with conventional techniques. The MOB technique successfully combined 3 processes into a single tube, thereby allowing ease in handling and an increased detection throughput. The increased pre-PCR yield in MOB allowed efficient, diagnostically sensitive methylation detection.
