Sample records for performing sensitivity analyses

  1. SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements

    NASA Technical Reports Server (NTRS)

    Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.

    1977-01-01

    A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.

  2. Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases.

    PubMed

    Seabroke, Suzie; Candore, Gianmario; Juhlin, Kristina; Quarcoo, Naashika; Wisniewski, Antoni; Arani, Ramin; Painter, Jeffery; Tregunno, Philip; Norén, G Niklas; Slattery, Jim

    2016-04-01

    Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial. The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics. Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source. Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases. Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.
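
    As a concrete illustration of the distinction this study draws, the sketch below (Python) computes one disproportionality estimate per subgroup and a single Mantel-Haenszel-pooled estimate for the stratified approach, assuming the standard 2x2 drug-event table definitions; the report counts are invented, not data from any of the five databases:

      def ror(a, b, c, d):
          """Reporting odds ratio from a 2x2 drug-event table."""
          return (a * d) / (b * c)

      # Hypothetical report counts per age stratum: a = drug & event,
      # b = drug & other events, c = other drugs & event, d = other drugs & other events
      strata = {"<65": (20, 980, 100, 49000), ">=65": (40, 760, 90, 29000)}

      # Subgroup analysis: one disproportionality estimate per stratum
      for name, (a, b, c, d) in strata.items():
          print(name, "ROR =", round(ror(a, b, c, d), 2))

      # Stratified analysis: a single Mantel-Haenszel estimate pooled across strata
      mh_num = sum(a * d / (a + b + c + d) for a, b, c, d in strata.values())
      mh_den = sum(b * c / (a + b + c + d) for a, b, c, d in strata.values())
      print("MH-pooled ROR =", round(mh_num / mh_den, 2))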

  3. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    PubMed

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  4. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance

    PubMed Central

    2015-01-01

    Introduction Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. Methods To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Results Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. Conclusion The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation. PMID:26517553

  5. Sensitivity analyses of stopping distance for connected vehicles at active highway-rail grade crossings.

    PubMed

    Hsu, Chung-Jen; Jones, Elizabeth G

    2017-02-01

    This paper performs sensitivity analyses of stopping distance for connected vehicles (CVs) at active highway-rail grade crossings (HRGCs). Stopping distance is the major safety factor at active HRGCs. A sensitivity analysis is performed for each variable in the stopping distance function. The formulation of stopping distance treats each variable as a probability density function for implementing Monte Carlo simulations. The results of the sensitivity analysis show that initial speed is the factor to which the stopping distances of CVs and non-CVs are most sensitive. The safety of CVs can be further improved by the early provision of onboard train information and warnings to reduce initial speeds.
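
    A minimal sketch of the kind of Monte Carlo sensitivity analysis the abstract describes, assuming the textbook decomposition of stopping distance into reaction distance plus braking distance; the distributions and parameter values below are illustrative stand-ins, not the paper's:

      import numpy as np

      rng = np.random.default_rng(0)
      N = 100_000

      # Illustrative input distributions (hypothetical, not the paper's values)
      v   = rng.normal(25.0, 3.0, N)      # initial speed, m/s
      t_r = rng.lognormal(0.0, 0.3, N)    # driver/system reaction time, s
      mu  = rng.uniform(0.5, 0.9, N)      # road friction coefficient
      g   = 9.81

      # Stopping distance: reaction distance + braking distance
      d = v * t_r + v**2 / (2 * mu * g)

      # Rank each input by its correlation with the output
      for name, x in [("initial speed", v), ("reaction time", t_r), ("friction", mu)]:
          r = np.corrcoef(x, d)[0, 1]
          print(f"{name:14s} correlation with stopping distance: {r:+.2f}")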

  6. Application of global sensitivity analysis methods to Takagi-Sugeno-Kang rainfall-runoff fuzzy models

    NASA Astrophysics Data System (ADS)

    Jacquin, A. P.; Shamseldin, A. Y.

    2009-04-01

    This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified into two types: the first type is intended to account for the effect of changes in catchment wetness, and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, as is the case for the rainfall-runoff fuzzy models analysed in this study. Data from six catchments of different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit that assess the model performance from different points of view: the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effect (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
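
    For readers unfamiliar with SVD-style indices, the sketch below estimates first- and total-order Sobol indices for a toy three-parameter model using the SALib package; SALib is not the authors' tool, and the model, parameter names, and bounds are hypothetical:

      import numpy as np
      from SALib.sample import saltelli   # pip install SALib
      from SALib.analyze import sobol

      # Toy stand-in for a rainfall-runoff model: three parameters on [0, 1]
      problem = {
          "num_vars": 3,
          "names": ["k1", "k2", "k3"],
          "bounds": [[0.0, 1.0]] * 3,
      }

      X = saltelli.sample(problem, 1024)          # N * (2D + 2) parameter sets
      Y = X[:, 0] ** 2 + X[:, 0] * X[:, 1] + np.sin(2 * np.pi * X[:, 2])

      Si = sobol.analyze(problem, Y)
      print("First-order indices S1:", Si["S1"])   # individual-parameter effects
      print("Total-order indices ST:", Si["ST"])   # includes interaction effects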

  7. Probabilistic performance-assessment modeling of the mixed waste landfill at Sandia National Laboratories.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne

    2007-01-01

    A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.

  8. Sensitivity analysis of FeCrAl cladding and U3Si2 fuel under accident conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamble, Kyle Allan Lawrence; Hales, Jason Dean

    2016-08-01

    The purpose of this milestone report is to highlight the results of sensitivity analyses performed on two accident tolerant fuel concepts: U3Si2 fuel and FeCrAl cladding. The BISON fuel performance code under development at Idaho National Laboratory was coupled to Sandia National Laboratories' DAKOTA software to perform the sensitivity analyses. Both loss of coolant (LOCA) and station blackout (SBO) scenarios were analyzed using main effects studies. The results indicate that for FeCrAl cladding the input parameters with the greatest influence on the output metrics of interest (fuel centerline temperature and cladding hoop strain) during the LOCA were the isotropic swelling and fuel enrichment. For U3Si2 the important inputs were found to be the intergranular diffusion coefficient, specific heat, and fuel thermal conductivity. For the SBO scenario, Young's modulus was found to be influential in FeCrAl in addition to the isotropic swelling and fuel enrichment. In contrast to the LOCA case, the specific heat of U3Si2 was found to have no effect during the SBO. The intergranular diffusion coefficient and fuel thermal conductivity were still found to be of importance. The results of the sensitivity analyses have identified areas where further research is required, including fission gas behavior in U3Si2 and irradiation swelling in FeCrAl. Moreover, the results highlight the need to perform the sensitivity analyses on full-length fuel rods for SBO scenarios.

  9. Univariate and bivariate likelihood-based meta-analysis methods performed comparably when marginal sensitivity and specificity were the targets of inference.

    PubMed

    Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H

    2017-03-01

    To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters.
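
    The normal-approximation pitfall the authors quantify can be seen in miniature: the sketch below pools per-study sensitivity and specificity on the logit scale with inverse-variance weights, including the 0.5 continuity correction that the binomial likelihood makes unnecessary; the study counts are hypothetical:

      import numpy as np

      # Per-study 2x2 counts: (true positives, false negatives, true negatives, false positives)
      studies = [(45, 5, 90, 10), (30, 10, 60, 5), (12, 3, 40, 8)]  # hypothetical data

      def pooled_logit(events, totals):
          """Inverse-variance pooling on the logit scale (normal approximation)."""
          # 0.5 continuity correction, the step the binomial likelihood avoids
          p = (np.array(events) + 0.5) / (np.array(totals) + 1.0)
          logit = np.log(p / (1 - p))
          var = 1 / (np.array(totals) * p * (1 - p))
          w = 1 / var
          pooled = (w * logit).sum() / w.sum()
          return 1 / (1 + np.exp(-pooled))

      tp, fn, tn, fp = map(np.array, zip(*studies))
      print("pooled sensitivity:", pooled_logit(tp, tp + fn))
      print("pooled specificity:", pooled_logit(tn, tn + fp))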

  10. Analytical performance, agreement and user-friendliness of six point-of-care testing urine analysers for urinary tract infection in general practice

    PubMed Central

    Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M

    2015-01-01

    Objective Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. Setting All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Primary and secondary outcome measures Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, with the laboratory reference standard, was the primary outcome measure, and analysed by calculating sensitivity, specificity, positive and negative predictive value, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. Results The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied between good and very good, and for the erythrocyte test between fair and good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice. Conclusions The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice. PMID:25986635
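
    The primary outcome measures named above are all functions of a 2x2 analyser-versus-laboratory table; a short sketch (with hypothetical counts) showing how each is computed:

      import numpy as np

      def diagnostic_metrics(tp, fp, fn, tn):
          """Sensitivity, specificity, predictive values and Cohen's kappa from a 2x2 table."""
          n = tp + fp + fn + tn
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          ppv  = tp / (tp + fp)
          npv  = tn / (tn + fn)
          po = (tp + tn) / n                                            # observed agreement
          pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # chance agreement
          kappa = (po - pe) / (1 - pe)
          return sens, spec, ppv, npv, kappa

      # Hypothetical POCT-vs-laboratory counts for the nitrite test
      print(diagnostic_metrics(tp=40, fp=3, fn=5, tn=152))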

  11. Preliminary performance assessment for the Waste Isolation Pilot Plant, December 1992. Volume 5, Uncertainty and sensitivity analyses of gas and brine migration for undisturbed performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1993-08-01

    Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.

  12. Instrumentation and Performance Analysis Plans for the HIFiRE Flight 2 Experiment

    NASA Technical Reports Server (NTRS)

    Gruber, Mark; Barhorst, Todd; Jackson, Kevin; Eklund, Dean; Hass, Neal; Storch, Andrea M.; Liu, Jiwen

    2009-01-01

    Supersonic combustion performance of a bi-component gaseous hydrocarbon fuel mixture is one of the primary aspects under investigation in the HIFiRE Flight 2 experiment. In-flight instrumentation and post-test analyses will be two key elements used to determine the combustion performance. Pre-flight computational fluid dynamics (CFD) analyses provide valuable information that can be used to optimize the placement of a constrained set of wall pressure instrumentation in the experiment. The simulations also allow pre-flight assessments of performance sensitivities leading to estimates of overall uncertainty in the determination of combustion efficiency. Based on the pre-flight CFD results, 128 wall pressure sensors have been located throughout the isolator/combustor flowpath to minimize the error in determining the wall pressure force at Mach 8 flight conditions. Also, sensitivity analyses show that mass capture and combustor exit stream thrust are the two primary contributors to uncertainty in combustion efficiency.

  13. Pre-study feasibility and identifying sensitivity analyses for protocol pre-specification in comparative effectiveness research.

    PubMed

    Girman, Cynthia J; Faries, Douglas; Ryan, Patrick; Rotelli, Matt; Belger, Mark; Binkowitz, Bruce; O'Neill, Robert

    2014-05-01

    The use of healthcare databases for comparative effectiveness research (CER) is increasing exponentially despite its challenges. Researchers must understand their data source and whether outcomes, exposures and confounding factors are captured sufficiently to address the research question. They must also assess whether bias and confounding can be adequately minimized. Many study design characteristics may affect the results; however, few if any sensitivity analyses are typically conducted, and those performed are post hoc. We propose pre-study steps for CER feasibility assessment and for identifying the sensitivity analyses that might be most important to pre-specify, to help ensure that CER produces valid, interpretable results.

  14. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool

    PubMed Central

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-01-01

    Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
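
    Of the global methods listed, partial rank correlation is the easiest to reproduce outside SBML-SAT; a small sketch of PRCC (rank-transform, regress out the other parameters, correlate the residuals) on a toy model, with invented parameters:

      import numpy as np
      from scipy.stats import rankdata

      def prcc(X, y):
          """Partial rank correlation coefficient of each column of X with y."""
          R = np.column_stack([rankdata(c) for c in X.T])
          ry = rankdata(y)
          out = []
          for j in range(R.shape[1]):
              others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
              # Residuals after removing the (linear) effect of the other parameters
              res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
              res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
              out.append(np.corrcoef(res_x, res_y)[0, 1])
          return out

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(500, 3))                 # three model parameters
      y = 2 * X[:, 0] - X[:, 1] ** 3 + 0.1 * rng.normal(size=500)
      print(prcc(X, y))   # the third parameter's PRCC should be near zero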

  15. SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool.

    PubMed

    Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda

    2008-08-15

    It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes.

  16. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan : part 2 - evaluation of rehabilitation fixes (part 2).

    DOT National Transportation Integrated Search

    2013-08-01

    The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...

  17. Analytical performance, agreement and user-friendliness of six point-of-care testing urine analysers for urinary tract infection in general practice.

    PubMed

    Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M

    2015-05-18

    Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, with the laboratory reference standard, was the primary outcome measure, and analysed by calculating sensitivity, specificity, positive and negative predictive value, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied between good and very good, and for the erythrocyte test between fair and good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice. The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice.

  18. Optimum sensitivity derivatives of objective functions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.

    1983-01-01

    The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.
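
    The result being exploited is the envelope theorem for nonlinear programs: for min_x f(x, p) subject to g(x, p) <= 0, with optimal point x*(p) and multipliers lambda*(p), the first-order optimum sensitivity derivative requires no second derivatives of the problem functions. In standard notation (the paper's own notation is not reproduced here):

      \frac{dF^*}{dp} = \frac{\partial f}{\partial p}\bigg|_{x^*} + \sum_j \lambda_j^* \,\frac{\partial g_j}{\partial p}\bigg|_{x^*}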

  19. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

    Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with an LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108

  20. Identification of mission sensitivities for high-power electric propulsion systems

    NASA Technical Reports Server (NTRS)

    Frisbee, Robert H.; Moeller, Robert C.

    2005-01-01

    This paper presents the results of mission analyses that expose various mission performance sensitivities and system advantages of the ALFA technology for a small but representative subset of nuclear electric propulsion (NEP) missions considered under NASA's Project Prometheus.

  21. Diagnostic Performance of CT for Diagnosis of Fat-Poor Angiomyolipoma in Patients With Renal Masses: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Cho, Jeong Yeon; Kim, Sang Youn; Kim, Seung Hyup

    2017-11-01

    The purpose of this article is to systematically review and perform a meta-analysis of the diagnostic performance of CT for diagnosis of fat-poor angiomyolipoma (AML) in patients with renal masses. MEDLINE and EMBASE were systematically searched up to February 2, 2017. We included diagnostic accuracy studies that used CT for diagnosis of fat-poor AML in patients with renal masses, using pathologic examination as the reference standard. Two independent reviewers assessed the methodologic quality using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. Sensitivity and specificity of included studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Sensitivity analyses using several clinically relevant covariates were performed to explore heterogeneity. Fifteen studies (2258 patients) were included. Pooled sensitivity and specificity were 0.67 (95% CI, 0.48-0.81) and 0.97 (95% CI, 0.89-0.99), respectively. Substantial and considerable heterogeneity was present with regard to sensitivity and specificity (I² = 91.21% and 78.53%, respectively). In the sensitivity analyses, the specificity estimates were comparable and consistently high across all subgroups (0.93-1.00), but sensitivity estimates showed significant variation (0.14-0.82). Studies using pixel distribution analysis (n = 3) showed substantially lower sensitivity estimates (0.14; 95% CI, 0.04-0.40) compared with the remaining 12 studies (0.81; 95% CI, 0.76-0.85). CT shows moderate sensitivity and excellent specificity for diagnosis of fat-poor AML in patients with renal masses. When methods other than pixel distribution analysis are used, better sensitivity can be achieved.

  22. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    PubMed Central

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-01-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated. PMID:18304361
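
    A Python analogue of SaSAT's sampling step (SaSAT itself is Matlab-based): Latin hypercube sampling of a parameter space with scipy, followed by the simple input-output correlations that feed a tornado plot; the model, parameter names, and bounds are invented for illustration:

      import numpy as np
      from scipy.stats import qmc

      # Latin hypercube sample of a 3-parameter space, then scaled to parameter bounds
      sampler = qmc.LatinHypercube(d=3, seed=42)
      unit = sampler.random(n=200)                          # points in [0, 1)^3
      X = qmc.scale(unit, l_bounds=[0.1, 1.0, 0.0], u_bounds=[0.5, 5.0, 2.0])

      # Run the model at each sample and rank inputs for a tornado-style plot
      y = X[:, 0] * X[:, 1] + np.sqrt(X[:, 2])              # toy model
      for j, name in enumerate(["beta", "gamma", "delta"]):
          print(name, np.corrcoef(X[:, j], y)[0, 1])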

  23. Sediment delivery modeling in practice: Comparing the effects of watershed characteristics and data resolution across hydroclimatic regions.

    PubMed

    Hamel, Perrine; Falinski, Kim; Sharp, Richard; Auerbach, Daniel A; Sánchez-Canales, María; Dennedy-Frank, P James

    2017-02-15

    Geospatial models are commonly used to quantify sediment contributions at the watershed scale. However, the sensitivity of these models to variation in hydrological and geomorphological features, in particular to land use and topography data, remains uncertain. Here, we assessed the performance of one such model, the InVEST sediment delivery model, for six sites comprising a total of 28 watersheds varying in area (6-13,500 km²), climate (tropical, subtropical, mediterranean), topography, and land use/land cover. For each site, we compared uncalibrated and calibrated model predictions with observations and alternative models. We then performed correlation analyses between model outputs and watershed characteristics, followed by sensitivity analyses on the digital elevation model (DEM) resolution. Model performance varied across sites (overall r² = 0.47), but estimates of the magnitude of specific sediment export were as or more accurate than global models. We found significant correlations between metrics of sediment delivery and watershed characteristics, including erosivity, suggesting that empirical relationships may ultimately be developed for ungauged watersheds. Model sensitivity to DEM resolution varied across and within sites, but did not correlate with other observed watershed variables. These results were corroborated by sensitivity analyses performed on synthetic watersheds ranging in mean slope and DEM resolution. Our study provides modelers using InVEST or similar geospatial sediment models with practical insights into model behavior and structural uncertainty: first, comparison of model predictions across regions is possible when environmental conditions differ significantly; second, local knowledge on the sediment budget is needed for calibration; and third, model outputs often show significant sensitivity to DEM resolution.

  24. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    EPA Science Inventory

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop + Pesticide model. Simulations are performed of hive population trajectories with and without pesti...

  25. Transport calculations and sensitivity analyses for air-over-ground and air-over-seawater weapons environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pace, J.V. III; Bartine, D.E.; Mynatt, F.R.

    1976-01-01

    Two-dimensional neutron and secondary gamma-ray transport calculations and cross-section sensitivity analyses have been performed to determine the effects of varying source heights and cross sections on calculated doses. The air-over-ground calculations demonstrate the existence of an optimal height of burst for a specific ground range and indicate under what conditions they are conservative with respect to infinite air calculations. The air-over-seawater calculations showed the importance of hydrogen and chlorine in gamma production. Additional sensitivity analyses indicated the importance of water in the ground, the amount of reduction in ground thickness for calculational purposes, and the effect of the degree of Legendre angular expansion of the scattering cross sections (P_l) on the calculated dose.

  26. Effectiveness of Light Sources on In-Office Dental Bleaching: A Systematic Review and Meta-Analyses.

    PubMed

    Souto Maior, J R; de Moraes, S L D; Lemos, C A A; Vasconcelos, B C do E; Montes, M A J R; Pellizzer, E P

    2018-06-12

    A systematic review and meta-analyses were performed to evaluate the efficacy of tooth color change and the sensitivity of teeth following in-office bleaching with and without light activation of the gel in adult patients. This review was registered at PROSPERO (CRD 42017060574) and is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Electronic systematic searches of PubMed/MEDLINE, Web of Science, and the Cochrane Library were conducted for published articles. Only randomized clinical trials among adults that compared in-office bleaching with and without light activation with the same bleaching gel concentrations were selected. The outcomes were tooth color change and tooth sensitivity prevalence and intensity. Of 1054 records identified, 39 studies remained after title and abstract screening, and 16 of these were further excluded; 23 studies met the eligibility criteria and remained for qualitative analyses, with 20 included in meta-analyses of primary and secondary outcomes. No significant differences in tooth color change or tooth sensitivity incidence were found between the compared groups; however, tooth sensitivity intensity decreased when light sources were applied. The use of light sources for in-office bleaching is not imperative to achieve esthetic clinical results.

  27. Sensitivity analyses for simulating pesticide impacts on honey bee colonies

    USDA-ARS?s Scientific Manuscript database

    We employ Monte Carlo simulation and sensitivity analysis techniques to describe the population dynamics of pesticide exposure to a honey bee colony using the VarroaPop+Pesticide model. Simulations are performed of hive population trajectories with and without pesticide exposure to determine the eff...

  28. Sensitive and selective determination of methylenedioxylated amphetamines by high-performance liquid chromatography with fluorimetric detection.

    PubMed

    Sadeghipour, F; Veuthey, J L

    1997-11-07

    A rapid, sensitive and selective liquid chromatographic method with fluorimetric detection was developed for the separation and quantification of four methylenedioxylated amphetamines without interference of other drugs of abuse and common substances found in illicit tablets. The method was validated by examining linearity, precision and accuracy as well as detection and quantification limits. Methylenedioxylated amphetamines were quantified in eight tablets from illicit drug seizures and results were quantitatively compared to HPLC-UV analyses. To demonstrate the better sensitivity of the fluorimetric detection, methylenedioxylated amphetamines were analyzed in serum after a liquid-liquid extraction procedure and results were also compared to HPLC-UV analyses.

  29. iTOUGH2 v7.1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL

    2016-09-15

    iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
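
    The objective function described above is, in its most common form, a weighted sum of squared residuals; a generic statement (iTOUGH2 supports other estimators as well, so this is a sketch rather than the code's exact definition):

      S(\mathbf{p}) = \sum_i \left( \frac{z_i^{\mathrm{obs}} - z_i(\mathbf{p})}{\sigma_i} \right)^2

    where z_i(p) are the simulated observables, z_i^obs the measurements, and sigma_i the assigned standard deviations.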

  30. Tactile sensitivity of gloved hands in the cold operation.

    PubMed

    Geng, Q; Kuklane, K; Holmér, I

    1997-11-01

    In this study, the tactile sensitivity of gloved hands during cold operation was investigated. The relations among the physical properties of protective gloves, hand tactile sensitivity and cold protection were also analysed, both objectively and subjectively. Subjects wearing various gloves participated in the experimental study during cold exposure at ambient temperatures of -12 degrees C and -25 degrees C. Tactual performance was measured as the percentage of misjudgments in an identification task with objects of various sizes. Forearm, hand and finger skin temperatures were also recorded throughout. The experimental data were analysed using an analysis of variance (ANOVA) model and Tukey's multiple range test. The results indicated that tactual performance was affected both by the gloves and by hand/finger cooling. The effect of object size on tactile discrimination was significant, and misjudgments increased when objects of similar sizes had to be identified, especially at -25 degrees C.

  31. A spectral power analysis of driving behavior changes during the transition from nondistraction to distraction.

    PubMed

    Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R

    2017-11-17

    This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. The fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (pre-start of a visual-manual task) and distracted (post-start of a visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Mixed model analyses in both the time and frequency domains showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis on yaw rate in both low and high frequency bandwidths showed consistent results: higher variation values were observed during distracted driving than during nondistracted driving. This study suggests that driver state detection needs to consider behavior changes during the pre-starting period, instead of focusing only on periods with the physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal controls. In addition, frequency domain analyses proved to be a more robust and consistent method for assessing driving performance than time domain analyses.
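
    A sketch of the frequency-domain measure used above: relative spectral power of a vehicle-control signal in the 0-0.5 Hz band, here via Welch's method; the 10 Hz sampling rate and the synthetic yaw-rate signal are assumptions for illustration:

      import numpy as np
      from scipy.signal import welch

      def relative_low_freq_power(x, fs, band=(0.0, 0.5)):
          """Fraction of total spectral power in a low-frequency band (e.g. 0-0.5 Hz)."""
          f, pxx = welch(x, fs=fs, nperseg=min(len(x), 256))
          in_band = (f >= band[0]) & (f <= band[1])
          return pxx[in_band].sum() / pxx.sum()

      fs = 10.0                                   # assumed 10 Hz sampling of the CAN signal
      t = np.arange(0, 60, 1 / fs)
      yaw_rate = np.sin(2 * np.pi * 0.2 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)
      print(relative_low_freq_power(yaw_rate, fs))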

  32. Orbit Transfer Vehicle (OTV) engine phase A study

    NASA Technical Reports Server (NTRS)

    Mellish, J. A.

    1978-01-01

    Requirements for the orbit transfer vehicle engine were examined. Engine performance/weight sensitivities, the effect of a service life of 300 start/shutdown cycles between overhauls on the maximum engine operating pressure, and the sensitivity of the engine design point (i.e., thrust chamber pressure and nozzle area ratio) to the specified performance requirements are among the factors studied. Preliminary engine systems analyses were conducted on the staged combustion, expander, and gas generator engine cycles. Hydrogen and oxygen pump discharge pressure requirements are shown for the various engine cycles, and the performance of the engine cycles is compared.

  33. Head-To-Head Comparison Between High- and Standard-b-Value DWI for Detecting Prostate Cancer: A Systematic Review and Meta-Analysis.

    PubMed

    Woo, Sungmin; Suh, Chong Hyun; Kim, Sang Youn; Cho, Jeong Yeon; Kim, Seung Hyup

    2018-01-01

    The purpose of this study was to perform a head-to-head comparison between high-b-value (>1000 s/mm²) and standard-b-value (800-1000 s/mm²) DWI regarding diagnostic performance in the detection of prostate cancer. The MEDLINE and EMBASE databases were searched up to April 1, 2017. The analysis included diagnostic accuracy studies in which high- and standard-b-value DWI were used for prostate cancer detection with histopathologic examination as the reference standard. Methodologic quality was assessed with the revised Quality Assessment of Diagnostic Accuracy Studies tool. Sensitivity and specificity of all studies were calculated and were pooled and plotted in a hierarchic summary ROC plot. Meta-regression and multiple-subgroup analyses were performed to compare the diagnostic performances of high- and standard-b-value DWI. Eleven studies (789 patients) were included. High-b-value DWI had greater pooled sensitivity (0.80 [95% CI, 0.70-0.87]; p = 0.03) and specificity (0.92 [95% CI, 0.87-0.95]; p = 0.01) than standard-b-value DWI (sensitivity, 0.78 [95% CI, 0.66-0.86]; specificity, 0.87 [95% CI, 0.77-0.93]). Multiple-subgroup analyses showed that specificity was consistently higher for high- than for standard-b-value DWI (p ≤ 0.05). Sensitivity was significantly higher for high- than for standard-b-value DWI only in the following subgroups: peripheral zone only, transition zone only, multiparametric protocol (DWI and T2-weighted imaging), visual assessment of DW images, and per-lesion analysis (p ≤ 0.04). In a head-to-head comparison, high-b-value DWI had significantly better sensitivity and specificity for detection of prostate cancer than did standard-b-value DWI. Multiple-subgroup analyses showed that specificity was consistently superior for high-b-value DWI.

  34. Sunflower seeds as eliciting agents of Compositae dermatitis.

    PubMed

    Paulsen, Evy; El-Houri, Rime B; Andersen, Klaus E; Christensen, Lars P

    2015-03-01

    Sunflowers may cause dermatitis because of allergenic sesquiterpene lactones (SLs). Contact sensitization to sunflower seeds has also been reported, but the allergens are unknown. The aims were to analyse sunflower seeds for the presence of SLs and to assess the prevalence of sunflower sensitization in Compositae-allergic individuals. Sunflower-sensitive patients were identified by aimed patch testing. A dichloromethane extract of whole sunflower seeds was analysed by liquid chromatography-mass spectrometry and high-performance liquid chromatography. The prevalence of sensitivity to sunflower in Compositae-allergic individuals was 56%. A solvent wash of whole sunflower seeds yielded an extract containing SLs, the principal component being tentatively identified as argophyllin A or B, with other SLs present in minute amounts. The concentration of SLs on the sunflower seeds is considered high enough to elicit dermatitis in sensitive persons, and it seems appropriate to warn Compositae-allergic subjects against handling sunflower seeds.

  35. Systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for osteoporosis or low bone density

    PubMed Central

    Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.

    2015-01-01

    Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90% at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89% (95% CI 82-96%) and 41% (95% CI 23-59%), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88% (95% CI 79-97%) and 55% (95% CI 42-68%), respectively. Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90% but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90% for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
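
    For reference, the OST index evaluated in many of these studies is, per the screening-tool literature (not stated in this review's abstract), simply 0.2 x (body weight in kg - age in years); a minimal sketch under that assumption:

      def ost_index(weight_kg: float, age_years: float) -> float:
          """Osteoporosis Self-Assessment Tool index: 0.2 * (weight - age), per the literature."""
          return 0.2 * (weight_kg - age_years)

      # Thresholds discussed in the abstract: < 1 flags US postmenopausal women,
      # < 3 flags US men, for referral to DXA
      print(ost_index(weight_kg=58, age_years=72))   # -2.8, below both thresholds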

  36. Status Report on Scoping Reactor Physics and Sensitivity/Uncertainty Analysis of LR-0 Reactor Molten Salt Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nicholas R.; Mueller, Donald E.; Patton, Bruce W.

    2016-08-31

    Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2 ⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform the design of FLiBe-based experiments using the salt from MSRE.

  37. Evaluation of a Method for Remote Detection of Fuel Relocation Outside the Original Core Volumes of Fukushima Reactor Units 1-3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Douglas W. Akers; Edwin A. Harvego

    2012-08-01

    This paper presents the results of a study to evaluate the feasibility of remotely detecting and quantifying fuel relocation from the core to the lower head, and to regions outside the reactor vessel primary containment, of the Fukushima Units 1-3 reactors. The goals of this study were to determine measurement conditions and requirements, and to perform initial radiation transport sensitivity analyses for several potential measurement locations inside the reactor building. The radiation transport sensitivity analyses were performed based on reactor design information for boiling water reactors (BWRs) similar to the Fukushima reactors, ORIGEN2 analyses of 3-cycle BWR fuel inventories, and data on previously molten fuel characteristics from TMI-2. A 100 kg mass of previously molten fuel material located on the lower head of the reactor vessel was chosen as a fuel interrogation sensitivity target. Two measurement locations were chosen for the transport analyses, one inside the drywell and one outside the concrete biological shield surrounding the drywell. Results of these initial radiation transport analyses indicate that the 100 kg of previously molten fuel material may be detectable at the measurement location inside the drywell, but that it is highly unlikely that any amount of fuel material inside the RPV will be detectable from a location outside the concrete biological shield surrounding the drywell. Three additional fuel relocation scenarios were also analyzed to assess detection sensitivity for varying amounts of relocated material in the lower head of the reactor vessel, in the control rods perpendicular to the detector system, and on the lower head of the drywell. Results of these analyses, along with an assessment of background radiation effects and a discussion of measurement issues such as the detector/collimator design, are included in the paper.

  38. Time to angiographic reperfusion in acute ischemic stroke: decision analysis.

    PubMed

    Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H

    2014-12-01

    Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion, based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and a comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity analyses for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, the endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment over the intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when the probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model, with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions.
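
    A stripped-down sketch of the one-way sensitivity analysis described above. The base-case utilities come from the abstract (5.42 vs. 6.38 QALYs); the base-case reperfusion time and the linear utility decay are hypothetical stand-ins, calibrated only so that the two strategies break even at the reported 347 minutes:

      # Hypothetical illustration; not the published IMS III decision model
      QALY_IV_ONLY = 5.42                     # intravenous r-tPA alone (base case, from abstract)
      QALY_ENDO_AT_BASE = 6.38                # endovascular, at base-case reperfusion time
      BASE_TIME, CROSSOVER = 240.0, 347.0     # minutes; BASE_TIME is an assumption

      # Assume endovascular utility falls linearly with time to reperfusion,
      # calibrated so the strategies break even at the reported 347 minutes
      slope = (QALY_ENDO_AT_BASE - QALY_IV_ONLY) / (CROSSOVER - BASE_TIME)

      def endovascular_utility(minutes_to_reperfusion):
          return QALY_ENDO_AT_BASE - slope * (minutes_to_reperfusion - BASE_TIME)

      for t in (240, 300, 347, 400):
          better = "endovascular" if endovascular_utility(t) > QALY_IV_ONLY else "IV r-tPA only"
          print(f"{t:3.0f} min -> preferred: {better}")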

  19. Performance comparison of Islamic and commercial banks in Malaysia

    NASA Astrophysics Data System (ADS)

    Azizud-din, Azimah; Hussin, Siti Aida Sheikh; Zahid, Zalina

    2016-10-01

    The steady growth in the size and the increase in the number of Islamic banks show that the Islamic banking system is considered an alternative to the conventional banking system. Because of this, comparisons of performance measurements and evaluations of financial health for both types of banks are essential. The main purpose of this study is to analyse the differences between Islamic and commercial bank performance. Five years of secondary data were collected from the annual report of each bank. The Return on Asset ratio was chosen as the dependent variable, while capital adequacy, asset quality, management quality, earnings, liquidity and sensitivity to market risk (CAMELS) were the independent variables. Descriptive analyses were done to understand the data. The independent t-test and Mann-Whitney test show the differences between Islamic and commercial banks based on the financial variables. Stepwise and hierarchical multiple regressions were used to determine the factors that affect the profitability performance of banks. Results show that Islamic banks are better in terms of profitability performance, earning power performance, liquidity performance and sensitivity to market risk. The factors that affect profitability performance are the capital adequacy, earning power and liquidity variables.
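
    The two-group comparison described above can be sketched with both the parametric and non-parametric tests the study mentions; the Return on Asset figures below are invented for illustration.

    ```python
    """Sketch of the two-group comparison described above: Return on Asset
    (ROA) ratios for Islamic vs commercial banks, compared with a
    parametric and a non-parametric test. All figures are illustrative."""
    from scipy import stats

    roa_islamic = [1.21, 1.35, 0.98, 1.42, 1.10, 1.28]      # hypothetical ROA (%)
    roa_commercial = [1.02, 0.88, 1.15, 0.95, 1.07, 0.91]   # hypothetical ROA (%)

    t_stat, t_p = stats.ttest_ind(roa_islamic, roa_commercial, equal_var=False)
    u_stat, u_p = stats.mannwhitneyu(roa_islamic, roa_commercial,
                                     alternative="two-sided")
    print(f"Welch t-test:   t = {t_stat:.2f}, p = {t_p:.3f}")
    print(f"Mann-Whitney U: U = {u_stat:.1f}, p = {u_p:.3f}")
    ```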

  20. Development of a standardized battery of performance tests for the assessment of noise stress effects

    NASA Technical Reports Server (NTRS)

    Theologus, G. C.; Wheaton, G. R.; Mirabella, A.; Brahlek, R. E.

    1973-01-01

    A set of 36 relatively independent categories of human performance was identified. These categories encompass human performance in the cognitive, perceptual, and psychomotor areas, and include diagnostic measures and sensitive performance metrics. Then a prototype standardized test battery was constructed, and research was conducted to obtain information on the sensitivity of the tests to stress, the sensitivity of selected categories of performance degradation, the time course of stress effects on each of the selected tests, and the learning curves associated with each test. A research project utilizing a three-factor, partially repeated analysis-of-covariance design was conducted in which 60 male subjects were exposed to variations in noise level and quality during performance testing. Effects of randomly intermittent noise on performance of the reaction time tests were observed, but most of the other performance tests showed consistent stability. The results of 14 analyses of covariance of the data taken from the performance of the 60 subjects on the prototype standardized test battery provided information which will enable the final development and testing of a standardized test battery and the associated development of differential sensitivity metrics and a diagnostic classificatory system.

  1. Vibration isolation technology: Sensitivity of selected classes of space experiments to residual accelerations

    NASA Technical Reports Server (NTRS)

    Alexander, J. Iwan D.; Zhang, Y. Q.; Adebiyi, Adebimpe

    1989-01-01

    Progress on each task is described. Order-of-magnitude analyses related to liquid zone sensitivity and thermo-capillary flow sensitivity are covered. Progress with numerical models of the sensitivity of isothermal liquid zones is described. Progress towards a numerical model of coupled buoyancy-driven and thermo-capillary convection experiments is also described. Interaction with NASA personnel is covered. Results to date are summarized and discussed in terms of the predicted space station acceleration environment. Work planned for the second year is also discussed.

  2. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    PubMed

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.

  3. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
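
    The "overall finite difference" approach described above (repeat the full analysis for a perturbed design and difference the responses) can be illustrated on a generic single-degree-of-freedom transient problem. The oscillator and its parameters below are stand-ins, not the thesis's finite element models.

    ```python
    """Overall-finite-difference sensitivity sketch: peak transient
    displacement of a damped SDOF oscillator, differentiated with respect
    to stiffness k by repeating the analysis at perturbed designs."""
    import numpy as np
    from scipy.integrate import solve_ivp

    M, C, F0 = 1.0, 0.4, 1.0   # mass, damping, step-load amplitude (assumed)

    def peak_displacement(k: float) -> float:
        """Integrate m*u'' + c*u' + k*u = F0 (step load); return max |u|."""
        def rhs(t, y):
            u, v = y
            return [v, (F0 - C * v - k * u) / M]
        sol = solve_ivp(rhs, (0.0, 20.0), [0.0, 0.0], max_step=0.01)
        return np.max(np.abs(sol.y[0]))

    k0, h = 25.0, 0.01 * 25.0       # nominal stiffness, 1% perturbation
    dpeak_dk = (peak_displacement(k0 + h) - peak_displacement(k0 - h)) / (2 * h)
    print(f"peak displacement at k0: {peak_displacement(k0):.4f}")
    print(f"d(peak)/dk ~ {dpeak_dk:.5f}")
    ```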

  4. Techno-economic evaluation of biodiesel production from waste cooking oil--a case study of Hong Kong.

    PubMed

    Karmee, Sanjib Kumar; Patria, Raffel Dharma; Lin, Carol Sze Ki

    2015-02-18

    Fossil fuel shortage is a major challenge worldwide. Therefore, research is currently underway to investigate potential renewable energy sources. Biodiesel is one of the major renewable energy sources that can be obtained from oils and fats by transesterification. However, biodiesel obtained from vegetable oils as feedstock is expensive. Thus, an alternative and inexpensive feedstock such as waste cooking oil (WCO) can be used for biodiesel production. In this project, techno-economic analyses were performed on biodiesel production in Hong Kong using WCO as a feedstock. Three different catalysts (acid, base, and lipase) were evaluated for biodiesel production from WCO. These economic analyses were then compared to determine the most cost-effective method for biodiesel production. Internal rate of return (IRR) sensitivity analyses on variations in the WCO and biodiesel prices were performed. Acid was found to be the most cost-effective catalyst for biodiesel production, whereas lipase was the most expensive. In the IRR sensitivity analyses, the acid-catalysed process maintained an acceptable IRR despite variations in the WCO and biodiesel prices.
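
    The IRR sensitivity idea can be sketched by computing the internal rate of return of a hypothetical WCO biodiesel plant's cash flows while sweeping the feedstock price. All plant figures below are invented for illustration.

    ```python
    """IRR sensitivity sketch for a hypothetical WCO biodiesel plant:
    capital outlay followed by annual margins that shrink as the
    feedstock price rises. All figures are illustrative."""
    from scipy.optimize import brentq

    def irr(cash_flows):
        """Internal rate of return: the discount rate at which NPV is zero."""
        npv = lambda r: sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows))
        return brentq(npv, -0.99, 10.0)

    CAPEX = 12_000_000.0        # assumed plant investment (USD)
    OUTPUT = 8_000.0            # assumed annual biodiesel output (tonnes)
    PRICE = 900.0               # assumed biodiesel selling price (USD/tonne)
    FIXED = 2_400_000.0         # assumed other annual operating costs (USD)

    for wco_price in (200.0, 300.0, 400.0):   # feedstock price (USD/tonne)
        # Simplification: one tonne of WCO yields one tonne of biodiesel.
        annual = OUTPUT * (PRICE - wco_price) - FIXED
        flows = [-CAPEX] + [annual] * 10      # assumed 10-year project life
        print(f"WCO at ${wco_price:.0f}/t -> IRR = {irr(flows):.1%}")
    ```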

  5. Techno-Economic Evaluation of Biodiesel Production from Waste Cooking Oil—A Case Study of Hong Kong

    PubMed Central

    Karmee, Sanjib Kumar; Patria, Raffel Dharma; Lin, Carol Sze Ki

    2015-01-01

    Fossil fuel shortage is a major challenge worldwide. Therefore, research is currently underway to investigate potential renewable energy sources. Biodiesel is one of the major renewable energy sources that can be obtained from oils and fats by transesterification. However, biodiesel obtained from vegetable oils as feedstock is expensive. Thus, an alternative and inexpensive feedstock such as waste cooking oil (WCO) can be used for biodiesel production. In this project, techno-economic analyses were performed on biodiesel production in Hong Kong using WCO as a feedstock. Three different catalysts (acid, base, and lipase) were evaluated for biodiesel production from WCO. These economic analyses were then compared to determine the most cost-effective method for biodiesel production. Internal rate of return (IRR) sensitivity analyses on variations in the WCO and biodiesel prices were performed. Acid was found to be the most cost-effective catalyst for biodiesel production, whereas lipase was the most expensive. In the IRR sensitivity analyses, the acid-catalysed process maintained an acceptable IRR despite variations in the WCO and biodiesel prices. PMID:25809602

  6. Personal Costs and Benefits of Employee Intrapreneurship: Disentangling the Employee Intrapreneurship, Well-Being, and Job Performance Relationship.

    PubMed

    Gawke, Jason C; Gorgievski, Marjan J; Bakker, Arnold B

    2017-12-28

    Ample studies have confirmed the benefits of intrapreneurship (i.e., employee behaviors that contribute to new venture creation and strategic renewal activities) for firm performance, but research on the personal costs and benefits of engaging in intrapreneurial activities for employees is lacking. Building on job demands-resources and reinforcement sensitivity theories, we examined how employees' reinforcement sensitivity qualified the relationship among their intrapreneurial behavior, subjective well-being, and other-rated job performance. Using a sample of 241 employee dyads, the results of moderated mediation analyses confirmed that employee intrapreneurship related positively to work engagement for employees high (vs. low) in sensitivity to rewards (behavioral approach system), which subsequently related positively to innovativeness and in-role performance and negatively to work avoidance. In contrast, employee intrapreneurship related positively to exhaustion for employees high (vs. low) in sensitivity to punishments (behavioral inhibition system), which subsequently related positively to work avoidance and negatively to in-role performance (but not to innovativeness). Theoretical and practical implications are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. The march from early life food sensitization to allergic disease: a systematic review and meta-analyses of birth cohort studies.

    PubMed

    Alduraywish, S A; Lodge, C J; Campbell, B; Allen, K J; Erbas, B; Lowe, A J; Dharmage, S C

    2016-01-01

    There is growing evidence for an increase in food allergies. The question of whether early life food sensitization, a primary step in food allergies, leads to other allergic disease is a controversial but important issue. Birth cohorts are an ideal design to answer this question. We aimed to systematically investigate and meta-analyse the evidence for associations between early food sensitization and allergic disease in birth cohorts. MEDLINE and SCOPUS databases were searched for birth cohorts that have investigated the association between food sensitization in the first 2 years and subsequent wheeze/asthma, eczema and/or allergic rhinitis. We performed meta-analyses using random-effects models to obtain pooled estimates, stratified by age group. The search yielded fifteen original articles representing thirteen cohorts. Early life food sensitization was associated with an increased risk of infantile eczema, childhood wheeze/asthma, eczema and allergic rhinitis and young adult asthma. Meta-analyses demonstrated that early life food sensitization is related to an increased risk of wheeze/asthma (pooled OR 2.9; 95% CI 2.0-4.0), eczema (pooled OR 2.7; 95% CI 1.7-4.4) and allergic rhinitis (pooled OR 3.1; 95% CI 1.9-4.9) from 4 to 8 years. Food sensitization in the first 2 years of life can identify children at high risk of subsequent allergic disease who may benefit from early life preventive strategies. However, due to potential residual confounding in the majority of studies combined with lack of follow-up into adolescence and adulthood, further research is needed. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Economic evaluation of home-based telebehavioural health care compared to in-person treatment delivery for depression.

    PubMed

    Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N

    2018-02-01

    Introduction: Home-based telebehavioural healthcare improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods: We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis where all patients possessed video-conferencing technology that was approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results: In the base case analysis the total direct cost of home-based telebehavioural health care was higher than in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly compared to in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base case results. Discussion: Home-based telebehavioural health care is dependent on the cost of supplying video-conferencing technology to patients but offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these technologies are able to be successfully deployed on patients' existing technology.
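
    A minimal sketch of the cost comparison and probabilistic sensitivity analysis described above. The two point costs are taken from the scenario analysis in the abstract; the QALY values, spread parameters, and willingness-to-pay threshold are illustrative assumptions.

    ```python
    """Cost-utility sketch for home telebehavioural care vs in-person care.
    Point costs come from the abstract's scenario analysis; the QALY
    figures and distributions are illustrative."""
    import numpy as np

    rng = np.random.default_rng(0)

    cost_tele, cost_inperson = 19_177.0, 20_322.0   # scenario costs (abstract)
    qaly_tele, qaly_inperson = 0.185, 0.180         # hypothetical 3-month QALYs

    d_cost = cost_tele - cost_inperson              # negative: telehealth cheaper
    d_qaly = qaly_tele - qaly_inperson
    print(f"incremental cost {d_cost:+,.0f} USD, incremental QALYs {d_qaly:+.3f}")

    # Probabilistic sensitivity analysis: resample both differences.
    sims = 10_000
    dc = rng.normal(d_cost, 800.0, sims)    # assumed SD of the cost difference
    dq = rng.normal(d_qaly, 0.004, sims)    # assumed SD of the QALY difference
    wtp = 50_000.0                          # assumed willingness to pay per QALY
    prob_ce = np.mean(dq * wtp - dc > 0)    # net monetary benefit > 0
    print(f"P(telehealth cost-effective at ${wtp:,.0f}/QALY) = {prob_ce:.2f}")
    ```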

  9. Comparison between two methodologies for urban drainage decision aid.

    PubMed

    Moura, P M; Baptista, M B; Barraud, S

    2006-01-01

    The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.

  10. Search Strategy to Identify Dental Survival Analysis Articles Indexed in MEDLINE.

    PubMed

    Layton, Danielle M; Clarke, Michael

    2016-01-01

    Articles reporting survival outcomes (time-to-event outcomes) in patients over time are challenging to identify in the literature. Research shows the words authors use to describe their dental survival analyses vary, and that allocation of medical subject headings by MEDLINE indexers is inconsistent. Together, this undermines accurate article identification. The present study aims to develop and validate a search strategy to identify dental survival analyses indexed in MEDLINE (Ovid). A gold standard cohort of articles was identified to derive the search terms, and an independent gold standard cohort of articles was identified to test and validate the proposed search strategies. The first cohort included all 6,955 articles published in the 50 dental journals with the highest impact factors in 2008, of which 95 articles were dental survival articles. The second cohort included all 6,514 articles published in the 50 dental journals with the highest impact factors for 2012, of which 148 were dental survival articles. Each cohort was identified by a systematic hand search. Performance parameters of sensitivity, precision, and number needed to read (NNR) for the search strategies were calculated. Sensitive, precise, and optimized search strategies were developed and validated. The performances of the search strategy maximizing sensitivity were 92% sensitivity, 14% precision, and 7.11 NNR; the performances of the strategy maximizing precision were 93% precision, 10% sensitivity, and 1.07 NNR; and the performances of the strategy optimizing the balance between sensitivity and precision were 83% sensitivity, 24% precision, and 4.13 NNR. The methods used to identify search terms were objective, not subjective. The search strategies were validated in an independent group of articles that included different journals and different publication years. Across the three search strategies, dental survival articles can be identified with sensitivity up to 92%, precision up to 93%, and NNR of less than two articles to identify relevant records. This research has highlighted the impact that variation in reporting and indexing has on article identification and has improved researchers' ability to identify dental survival articles.
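
    The three performance parameters are simple ratios, as sketched below. The cohort sizes come from the abstract's 2012 validation set; the retrieval counts are hypothetical, chosen to land near the reported sensitivity-maximising strategy (92% sensitivity, 14% precision, NNR 7.11).

    ```python
    """Search-strategy performance sketch: sensitivity, precision, and
    number needed to read (NNR), computed against a hand-searched gold
    standard. Retrieval counts are illustrative."""
    total_articles = 6514       # articles in the 50 journals (2012, abstract)
    relevant = 148              # dental survival articles found by hand search
    retrieved = 973             # hypothetical: records returned by the strategy
    true_positives = 136        # hypothetical: relevant records among retrieved

    sensitivity = true_positives / relevant
    precision = true_positives / retrieved
    nnr = 1.0 / precision       # articles read per relevant article found
    print(f"sensitivity = {sensitivity:.0%}, precision = {precision:.0%}, "
          f"NNR = {nnr:.2f}")
    ```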

  11. Should cell-free DNA testing be used to target antenatal rhesus immune globulin administration?

    PubMed

    Ma, Kimberly K; Rodriguez, Maria I; Cheng, Yvonne W; Norton, Mary E; Caughey, Aaron B

    2016-01-01

    To compare the rates of alloimmunization with the use of cell-free DNA (cfDNA) screening to target antenatal rhesus immune globulin (RhIG) prenatally, versus routine administration of RhIG, in rhesus D (RhD)-negative pregnant women in a theoretical cohort using a decision-analytic model. A decision-analytic model compared cfDNA testing to routine antenatal RhIG administration. The primary outcome was maternal sensitization to RhD antigen. Sensitivity and specificity of cfDNA testing were assumed to be 99.8% and 95.3%, respectively. Univariate and bivariate sensitivity analyses, Monte Carlo simulation, and threshold analyses were performed. In a cohort of 10,000 RhD-negative women, 22.6 sensitizations would occur with utilization of cfDNA, while 20 sensitizations would occur with routine RhIG. Only when the sensitivity of the cfDNA test reached 100% was the rate of sensitization equal for cfDNA and routine RhIG. Otherwise, routine RhIG minimized the rate of sensitization, especially given that RhIG is readily available in the United States. Adoption of cfDNA testing would result in a 13.0% increase in sensitization among RhD-negative women in a theoretical cohort taking into account the ethnic diversity of the United States population.
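
    A back-of-envelope sketch of the cohort arithmetic behind this comparison. The cohort size, cfDNA sensitivity, and routine-RhIG sensitization count come from the abstract; the fetal RhD-positive fraction and the sensitization risk without prophylaxis are illustrative assumptions, so the output only approximates the reported 22.6 cases.

    ```python
    """Expected sensitizations under cfDNA-targeted vs routine RhIG.
    Cohort size, cfDNA sensitivity, and the 20 routine-RhIG cases come
    from the abstract; other inputs are assumptions."""
    cohort = 10_000
    frac_rhd_pos = 0.60          # assumed fraction carrying an RhD+ fetus
    sens_cfdna = 0.998           # cfDNA sensitivity (abstract)

    at_risk = cohort * frac_rhd_pos
    risk_with_rhig = 20 / at_risk        # calibrated to 20 routine-RhIG cases
    risk_without_rhig = 0.16             # assumed risk with no prophylaxis

    missed = at_risk * (1 - sens_cfdna)  # false negatives receive no RhIG
    cfdna_cases = (at_risk - missed) * risk_with_rhig + missed * risk_without_rhig
    print(f"routine RhIG:   {at_risk * risk_with_rhig:.1f} sensitizations")
    print(f"cfDNA-targeted: {cfdna_cases:.1f} sensitizations")
    ```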

  12. Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands

    PubMed Central

    Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven

    2015-01-01

    Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive to variation in the response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
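
    A minimal Markov cohort sketch of the model structure described above (monthly cycles, 1-year horizon, a "normal bowel function" response state). The transition probabilities, costs, and utilities are illustrative, not the published model inputs.

    ```python
    """Minimal Markov cohort sketch: monthly cycles over a 1-year horizon
    with responder/non-responder states. All inputs are illustrative."""
    import numpy as np

    # States: 0 = responder (normal bowel function), 1 = non-responder
    P = np.array([[0.85, 0.15],      # responders may relapse (assumed)
                  [0.30, 0.70]])     # non-responders may respond (assumed)
    monthly_cost = np.array([60.0, 25.0])          # per-state costs (assumed, EUR)
    monthly_qaly = np.array([0.95, 0.78]) / 12.0   # state utilities (assumed)

    state = np.array([0.0, 1.0])     # everyone starts as a non-responder
    total_cost = total_qaly = 0.0
    for month in range(12):          # 1-year horizon, as in the model
        state = state @ P            # advance the cohort one cycle
        total_cost += state @ monthly_cost
        total_qaly += state @ monthly_qaly
    print(f"expected cost {total_cost:.0f} EUR, QALYs {total_qaly:.3f}")
    ```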

  13. Hypoglycemia alarm enhancement using data fusion.

    PubMed

    Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W

    2010-01-01

    The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model developed used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. Positive predictive values on the test set improved from 61 to 66% with negative predictive values improving from 96 to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.
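
    The alarm performance figures above are confusion-matrix ratios; the sketch below computes them from hypothetical night counts chosen to land near the reported fused-system results (97% sensitivity, 85% specificity).

    ```python
    """Alarm-performance sketch: sensitivity, specificity, PPV and NPV
    for a nocturnal hypoglycemia alarm. Counts are hypothetical."""
    tp, fn = 33, 1       # hypoglycemic nights: alarmed vs missed (assumed)
    tn, fp = 113, 20     # normal nights: quiet vs false alarms (assumed)

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
          f"PPV {ppv:.0%}, NPV {npv:.0%}")
    ```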

  14. VALOR joint oscillation analysis using multiple LAr-TPCs in the Booster Neutrino Beam at Fermilab

    NASA Astrophysics Data System (ADS)

    Andreopoulos, C.; Barry, C.; Bench, F.; Chappell, A.; Dealtry, T.; Dennis, S.; Escudero, L.; Jones, R.; Grant, N.; Roda, M.; Sgalaberna, D.; Shah, R.

    2017-09-01

    Anomalies observed by different experiments, the most significant ones being the ∼3.8σ νe appearance in a ∼50 MeV νμ beam from muon decay at rest observed by the LSND experiment and the ∼3.8σ νe and ν̄e appearance in a ∼1 GeV neutrino beam from pion decay in flight observed by MiniBooNE, suggest the existence of sterile neutrinos. The Short Baseline Neutrino (SBN) program at Fermilab aims to perform a sensitive search for sterile neutrinos through analyses of νe appearance and νμ disappearance employing three Liquid Argon Time Projection Chambers (LAr-TPCs) at different baselines. The VALOR neutrino fitting group was established within the T2K experiment, has led numerous flagship T2K oscillation analyses, and has provided sensitivity and detector optimisation studies for DUNE and Hyper-K. The neutrino oscillation framework developed by this group is able to perform fits of several samples and systematic parameters within different neutrino models and experiments. Thus, VALOR is an ideal environment for neutrino oscillation fits using multiple LAr-TPC detectors, with the proper treatment of correlated systematic uncertainties necessary for the SBN analyses.

  15. Validity of self-reported stroke in elderly African Americans, Caribbean Hispanics, and Whites.

    PubMed

    Reitz, Christiane; Schupf, Nicole; Luchsinger, José A; Brickman, Adam M; Manly, Jennifer J; Andrews, Howard; Tang, Ming X; DeCarli, Charles; Brown, Truman R; Mayeux, Richard

    2009-07-01

    The validity of a self-reported stroke remains inconclusive. To validate the diagnosis of self-reported stroke using stroke identified by magnetic resonance imaging (MRI) as the standard. Community-based cohort study of nondemented, ethnically diverse elderly persons in northern Manhattan. High-resolution quantitative MRIs were acquired for 717 participants without dementia. Sensitivity and specificity of stroke by self-report were examined using cross-sectional analyses and the χ2 test. Putative relationships between factors potentially influencing the reporting of stroke, including memory performance, cognitive function, and vascular risk factors, were assessed using logistic regression models. Subsequently, all analyses were repeated, stratified by age, sex, ethnic group, and level of education. In analyses of the whole sample, sensitivity of stroke self-report for a diagnosis of stroke on MRI was 32.4%, and specificity was 78.9%. In analyses stratified by median age (80.1 years), the validity between reported stroke and detection of stroke on MRI was significantly better in the younger than the older age group (for all vascular territories: sensitivity and specificity, 36.7% and 81.3% vs 27.6% and 26.2%; P = .02). Impaired memory, cognitive skills, or language ability and the presence of hypertension or myocardial infarction were associated with higher rates of false-negative results. Using brain MRI as the standard, specificity and sensitivity of stroke self-report are low. Accuracy of self-report is influenced by age, presence of vascular disease, and cognitive function. In stroke research, sensitive neuroimaging techniques rather than stroke self-report should be used to determine stroke history.

  16. Sensitivity, Specificity, and Posttest Probability of Parotid Fine-Needle Aspiration: A Systematic Review and Meta-analysis.

    PubMed

    Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J

    2016-01-01

    (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.
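
    The posttest probabilities implied by the pooled estimates can be read off a Bayesian nomogram or computed directly, as sketched below. The sensitivity and specificity come from the prospective-subgroup results in the abstract; the 20% pretest probability of malignancy is an assumed prevalence.

    ```python
    """Bayesian nomogram arithmetic sketch: posttest probability of
    parotid malignancy from FNA results via pooled likelihood ratios.
    The pretest probability is an assumption."""
    sens, spec = 0.882, 0.995             # prospective subgroup (abstract)
    lr_pos = sens / (1 - spec)            # positive likelihood ratio
    lr_neg = (1 - sens) / spec            # negative likelihood ratio

    pretest = 0.20                        # assumed pretest probability
    odds = pretest / (1 - pretest)
    post_pos = (odds * lr_pos) / (1 + odds * lr_pos)
    post_neg = (odds * lr_neg) / (1 + odds * lr_neg)
    print(f"LR+ = {lr_pos:.0f}, LR- = {lr_neg:.3f}")
    print(f"posttest P(malignant | FNA+) = {post_pos:.1%}")
    print(f"posttest P(malignant | FNA-) = {post_neg:.1%}")
    ```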

  17. Sensitivity, Specificity, and Posttest Probability of Parotid Fine-Needle Aspiration: A Systematic Review and Meta-analysis

    PubMed Central

    Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.

    2016-01-01

    Objectives: (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources: Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods: Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results: The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion: FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476

  18. Space Transfer Concepts and Analyses for Exploration Missions. Technical Directive 12: Beamed Power Systems Study

    NASA Technical Reports Server (NTRS)

    Eder, D.

    1992-01-01

    Parametric models were constructed for Earth-based laser-powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems, including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made, indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine the best parameter values for the systems. Orbit transfer flight simulations were conducted to explore the effects of parameters not practical to model with a spreadsheet. The simulations considered view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle, and included sensitivity analyses for the number of ground stations, Isp (including dual-Isp transfers), and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.

  19. How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?

    PubMed

    Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J

    2004-01-01

    There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.

  20. System cost performance analysis (study 2.3). Volume 1: Executive summary. [unmanned automated payload programs and program planning

    NASA Technical Reports Server (NTRS)

    Campbell, B. H.

    1974-01-01

    A study is described which was initiated to identify and quantify the interrelationships between and within the performance, safety, cost, and schedule parameters for unmanned, automated payload programs. The result of the investigation was a systems cost/performance model which was implemented as a digital computer program and could be used to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses for mission model and advanced payload studies. Program objectives and results are described briefly.

  1. Individual differences in metacontrast masking regarding sensitivity and response bias.

    PubMed

    Albrecht, Thorsten; Mattler, Uwe

    2012-09-01

    In metacontrast masking, target visibility is modulated by the time until a masking stimulus appears. The effect of this temporal delay differs across participants in such a way that individual human observers' performance shows distinguishable types of masking functions, which remain largely unchanged for months. Here we examined whether individual differences in masking functions depend on different response criteria in addition to differences in discrimination sensitivity. To this end we reanalyzed previously published data and conducted a new experiment for further data analyses. Our analyses demonstrate that a distinction of masking functions based on the type of masking stimulus is superior to a distinction based on target-mask congruency. Individually different masking functions are based on individual differences in discrimination sensitivities and in response criteria. The results suggest that individual differences in metacontrast masking result from individually different criterion contents. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Sensitivity and specificity of the American College of Rheumatology 1987 criteria for the diagnosis of rheumatoid arthritis according to disease duration: a systematic literature review and meta-analysis.

    PubMed

    Banal, F; Dougados, M; Combescure, C; Gossec, L

    2009-07-01

    To evaluate the ability of the widely used ACR set of criteria (both list and tree format) to diagnose RA compared with expert opinion according to disease duration. A systematic literature review was conducted in PubMed and Embase databases. All articles reporting the prevalence of RA according to ACR criteria and expert opinion in cohorts of early (<1 year duration) or established (>1 year) arthritis were analysed to calculate the sensitivity and specificity of ACR 1987 criteria against the "gold standard" (expert opinion). A meta-analysis using a summary receiver operating characteristic (SROC) curve was performed and pooled sensitivity and specificity were calculated with confidence intervals. Of 138 publications initially identified, 19 were analysable (total 7438 patients, 3883 RA). In early arthritis, pooled sensitivity and specificity of the ACR set of criteria were 77% (68% to 84%) and 77% (68% to 84%) in the list format versus 80% (72% to 88%) and 33% (24% to 43%) in the tree format. In established arthritis, sensitivity and specificity were respectively 79% (71% to 85%) and 90% (84% to 94%) versus 80% (71% to 85%) and 93% (86% to 97%). The SROC meta-analysis confirmed the statistically significant differences, suggesting that diagnostic performances of ACR list criteria are better in established arthritis. The specificity of ACR 1987 criteria in early RA is low, and these criteria should not be used as diagnostic tools. Sensitivity and specificity in established RA are higher, which reflects their use as classification criteria gold standard.

  3. Performance of two strategies for urgent ANCA and anti-GBM analysis in vasculitis.

    PubMed

    de Joode, Anoek A E; Roozendaal, Caroline; van der Leij, Marcel J; Bungener, Laura B; Sanders, Jan Stephan F; Stegeman, Coen A

    2014-02-01

    In anti-neutrophil cytoplasmic antibodies (ANCA) associated small vessel vasculitis (AAV), rapid testing for ANCA and anti-glomerular basement membrane (GBM) antibodies may be beneficial for therapeutic purpose. We analysed the diagnostic performance of two rapid ANCA and anti-GBM test methods in 260 patients with suspected AAV. Between January 2004 and November 2010, we analysed 260 samples by qualitative Dotblot (Biomedical Diagnostics); retrospective analysis followed with directly coated highly sensitive automated Phadia ELiA and ELiA anti-GBM. Results were related to the final clinical diagnosis and compared with routine capture ELISA. Seventy-four patients had a final diagnosis of AAV (n=62) or anti-GBM disease (n=12). Both Dotblot and ELiA detected all 12 cases of anti-GBM disease; 2 false positive results were found. Dotblot detected ANCA in 56 of 62 AAV patients (sensitivity 90%, NPV 97%), and showed 5 false positives (specificity 97%, PPV 90%). The Phadia ELiA anti-PR3(s) or anti-MPO(s) was positive in 57 of 62 AAV patients (sensitivity 92%, NPV 97%), and had 5 false positives (specificity 97%, PPV 88%). Routine capture ELISA was equally accurate (sensitivity 94%, specificity 97%, PPV 88%, NPV 98%). The Dotblot and Phadia ELiA on anti-GBM, anti-PR3(s) and anti-MPO(s) performed excellently; results were almost identical to routine ELISA. When suspicion of AAV or anti-GBM disease is high and diagnosis is urgently needed, both tests are very powerful for rapid serological diagnosis. Further studies have to confirm the test performances in samples routinely presented for ANCA testing and in follow-up of positive patients. Copyright © 2013 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  4. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.

    PubMed

    Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J

    2016-03-01

    The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
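
    A sketch of the triangulation rule described above: accept cases where the high-sensitivity and high-specificity algorithms agree, and send only discordant cases to medical record review. The record layout and toy classifiers below are hypothetical.

    ```python
    """Sketch of 'triangulation': two claims-based recurrence algorithms,
    with medical-record (MR) review reserved for discordant cases.
    Record fields and classifiers are hypothetical."""

    def triangulate(records, high_sens_alg, high_spec_alg, mr_review):
        """Classify recurrence per record; review only discrepancies."""
        results, n_reviewed = {}, 0
        for rec in records:
            a, b = high_sens_alg(rec), high_spec_alg(rec)
            if a == b:                    # algorithms agree: accept the call
                results[rec["id"]] = a
            else:                         # discordant: resolve by chart review
                results[rec["id"]] = mr_review(rec)
                n_reviewed += 1
        print(f"MR review needed for {n_reviewed / len(records):.1%} of records")
        return results

    # Toy example keyed on hypothetical claim-code fields:
    records = [{"id": i, "rx_codes": i % 3, "path_report": i % 2} for i in range(10)]
    high_sens = lambda r: r["rx_codes"] > 0          # flags liberally
    high_spec = lambda r: r["path_report"] == 1      # flags conservatively
    review = lambda r: r["path_report"] == 1         # gold-standard stand-in
    out = triangulate(records, high_sens, high_spec, review)
    ```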

  5. Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data

    PubMed Central

    Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J.

    2016-01-01

    Background: The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. Methods: We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women’s Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms—one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV—using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this “triangulation.” Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. Results: The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Conclusions: Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. PMID:26582243

  6. Meta-analysis of diagnostic accuracy studies in mental health

    PubMed Central

    Takwoingi, Yemisi; Riley, Richard D; Deeks, Jonathan J

    2015-01-01

    Objectives: To explain methods for data synthesis of evidence from diagnostic test accuracy (DTA) studies, and to illustrate different types of analyses that may be performed in a DTA systematic review. Methods: We described properties of meta-analytic methods for quantitative synthesis of evidence. We used a DTA review comparing the accuracy of three screening questionnaires for bipolar disorder to illustrate application of the methods for each type of analysis. Results: The discriminatory ability of a test is commonly expressed in terms of sensitivity (proportion of those with the condition who test positive) and specificity (proportion of those without the condition who test negative). There is a trade-off between sensitivity and specificity, as an increasing threshold for defining test positivity will decrease sensitivity and increase specificity. Methods recommended for meta-analysis of DTA studies, such as the bivariate or hierarchical summary receiver operating characteristic (HSROC) model, jointly summarise sensitivity and specificity while taking into account this threshold effect, as well as allowing for between-study differences in test performance beyond what would be expected by chance. The bivariate model focuses on estimation of a summary sensitivity and specificity at a common threshold, while the HSROC model focuses on the estimation of a summary curve from studies that have used different thresholds. Conclusions: Meta-analyses of diagnostic accuracy studies can provide answers to important clinical questions. We hope this article will provide clinicians with sufficient understanding of the terminology and methods to aid interpretation of systematic reviews and facilitate better patient care. PMID:26446042
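
    The full bivariate or HSROC model requires a mixed-effects fitter, but the core pooling arithmetic can be sketched: transform each study's sensitivity to the logit scale, pool with inverse-variance weights, and back-transform. This simplified fixed-effect, univariate pooling ignores the threshold effect and the between-study correlation with specificity that the recommended models capture; the study counts are invented.

    ```python
    """Simplified pooling sketch for DTA meta-analysis: inverse-variance
    pooling of logit sensitivities. A real analysis would use the
    bivariate or HSROC model. Study data are hypothetical."""
    import math

    # (true positives, total with condition) per study -- hypothetical
    studies = [(45, 50), (30, 38), (60, 70), (18, 25)]

    weights_sum = pooled_sum = 0.0
    for tp, n in studies:
        tp_c, fn_c = tp + 0.5, (n - tp) + 0.5        # continuity correction
        logit = math.log(tp_c / fn_c)
        var = 1.0 / tp_c + 1.0 / fn_c                # variance of the logit
        weights_sum += 1.0 / var
        pooled_sum += logit / var

    pooled_logit = pooled_sum / weights_sum
    pooled_sens = 1.0 / (1.0 + math.exp(-pooled_logit))
    se = math.sqrt(1.0 / weights_sum)
    lo = 1.0 / (1.0 + math.exp(-(pooled_logit - 1.96 * se)))
    hi = 1.0 / (1.0 + math.exp(-(pooled_logit + 1.96 * se)))
    print(f"pooled sensitivity {pooled_sens:.3f} (95% CI {lo:.3f}-{hi:.3f})")
    ```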

  7. Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.

    NASA Astrophysics Data System (ADS)

    Maheras, Steven James

    Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was generally lognormally distributed with a two-order-of-magnitude range. Plutonium-239 results, however, were not lognormally distributed and exhibited less than one order of magnitude of range. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and the underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited less than one order of magnitude of range. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
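
    Constructing a complementary cumulative distribution function from Monte Carlo output is straightforward, as sketched below; the lognormal dose samples stand in for the repository model's output and are not based on the study's data.

    ```python
    """CCDF sketch: given Monte Carlo samples of offsite dose, compute
    P(dose > d). The lognormal samples are an assumed stand-in."""
    import numpy as np

    rng = np.random.default_rng(42)
    doses = rng.lognormal(mean=-2.0, sigma=1.5, size=10_000)  # assumed (mSv)

    # Pairs (sorted_doses, ccdf) trace the CCDF curve P(dose > d).
    sorted_doses = np.sort(doses)
    ccdf = 1.0 - np.arange(1, doses.size + 1) / doses.size

    for d in (0.1, 1.0, 10.0):          # example dose levels (mSv)
        print(f"P(dose > {d:>4} mSv) = {np.mean(doses > d):.3f}")
    ```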

  8. Beta experiment

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A focused laser doppler velocimeter (LDV) system was developed for the measurement of atmospheric backscatter (beta) from aerosols at infrared wavelengths. A Doppler signal generator was used in mapping the coherent sensitive focal volume of the focused LDV system. System calibration data were analyzed during the flight test activity scheduled for the Beta system. These analyses were performed to determine the acceptability of the Beta measurement system's performance.

  9. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  10. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics

    PubMed Central

    McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817

  11. Heat Shielding Characteristics and Thermostructural Performance of a Superalloy Honeycomb Sandwich Thermal Protection System (TPS)

    NASA Technical Reports Server (NTRS)

    Ko, William L.

    2004-01-01

    Heat-transfer, thermal bending, and mechanical buckling analyses have been performed on a superalloy "honeycomb" thermal protection system (TPS) for future hypersonic flight vehicles. The studies focus on the effect of honeycomb cell geometry on the TPS heat-shielding performance, honeycomb cell wall buckling characteristics, and the effect of boundary conditions on the TPS thermal bending behavior. The results of the study show that the heat-shielding performance of a TPS panel is very sensitive to change in honeycomb core depth, but insensitive to change in honeycomb cell cross-sectional shape. The thermal deformations and thermal stresses in the TPS panel are found to be very sensitive to the edge support conditions. Slight corrugation of the honeycomb cell walls can greatly increase their buckling strength.

  12. A Methodological Review of US Budget-Impact Models for New Drugs.

    PubMed

    Mauskopf, Josephine; Earnshaw, Stephanie

    2016-11-01

    A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments were not included in several analyses for chronic conditions. In addition, not all drug-related costs were captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.

  13. A sensitivity analysis of regional and small watershed hydrologic models

    NASA Technical Reports Server (NTRS)

    Ambaruch, R.; Salomonson, V. V.; Simmons, J. W.

    1975-01-01

    Continuous simulation models of the hydrologic behavior of watersheds are important tools in several practical applications such as hydroelectric power planning, navigation, and flood control. Several recent studies have addressed the feasibility of using remote earth observations as sources of input data for hydrologic models. The objective of the study reported here was to determine how accurate remotely sensed measurements must be to provide inputs to hydrologic models of watersheds, within the tolerances needed for acceptably accurate synthesis of streamflow by the models. The study objective was achieved by performing a series of sensitivity analyses using continuous simulation models of three watersheds. The sensitivity analysis showed quantitatively how variations in each of 46 model inputs and parameters affect simulation accuracy with respect to five different performance indices.

  14. Sensitive Electroanalysis Using Solid Electrodes.

    ERIC Educational Resources Information Center

    Wang, Joseph

    1982-01-01

    A hydrodynamic modulation voltammetry (HMV) experiment involving the use of simple hydrodynamic modulation procedures is described. Compatible with the time/equipment restrictions of most teaching laboratories (stopped-stirring and stopped-flow voltammetry), students perform both batch and flow analyses and are introduced to analytical flow systems and the…

  15. Maximizing sensitivity of the psychomotor vigilance test (PVT) to sleep loss.

    PubMed

    Basner, Mathias; Dinges, David F

    2011-05-01

    The psychomotor vigilance test (PVT) is among the most widely used measures of behavioral alertness, but there is large variation among published studies in PVT performance outcomes and test durations. To promote standardization of the PVT and increase its sensitivity and specificity to sleep loss, we determined PVT metrics and task durations that optimally discriminated sleep deprived subjects from alert subjects. Repeated-measures experiments involving 10-min PVT assessments every 2 h across both acute total sleep deprivation (TSD) and 5 days of chronic partial sleep deprivation (PSD). Controlled laboratory environment. 74 healthy subjects (34 female), aged 22-45 years. TSD experiment involving 33 h awake (N = 31 subjects) and a PSD experiment involving 5 nights of 4 h time in bed (N = 43 subjects). In a paired t-test paradigm and for both TSD and PSD, effect sizes of 10 different PVT performance outcomes were calculated. Effect sizes were high for both TSD (1.59-1.94) and PSD (0.88-1.21) for PVT metrics related to lapses and to measures of psychomotor speed, i.e., mean 1/RT (response time) and mean slowest 10% 1/RT. In contrast, PVT mean and median RT outcomes scored low to moderate effect sizes influenced by extreme values. Analyses facilitating only portions of the full 10-min PVT indicated that for some outcomes, high effect sizes could be achieved with PVT durations considerably shorter than 10 min, although metrics involving lapses seemed to profit from longer test durations in TSD. Due to their superior conceptual and statistical properties and high sensitivity to sleep deprivation, metrics involving response speed and lapses should be considered primary outcomes for the 10-min PVT. In contrast, PVT mean and median metrics, which are among the most widely used outcomes, should be avoided as primary measures of alertness. Our analyses also suggest that some shorter-duration PVT versions may be sensitive to sleep loss, depending on the outcome variable selected, although this will need to be confirmed in comparative analyses of separate duration versions of the PVT. Using both sensitive PVT metrics and optimal test durations maximizes the sensitivity of the PVT to sleep loss and therefore potentially decreases the sample size needed to detect the same neurobehavioral deficit. We propose criteria to better standardize the 10-min PVT and facilitate between-study comparisons and meta-analyses.
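
    The effect sizes above are for a paired design: each subject's sleep-deprived scores are compared with that same subject's rested baseline, so Cohen's d is the mean difference divided by the standard deviation of the differences. A minimal Python sketch of this convention; the lapse counts below are hypothetical, not study data.

        import numpy as np

        def paired_effect_size(baseline, deprived):
            """Cohen's d for paired data: mean difference over SD of differences."""
            diff = np.asarray(deprived, dtype=float) - np.asarray(baseline, dtype=float)
            return diff.mean() / diff.std(ddof=1)

        # Hypothetical PVT lapse counts for the same subjects, rested vs. deprived.
        rested = [2, 1, 3, 0, 2, 4, 1, 2]
        deprived = [7, 5, 9, 4, 6, 11, 5, 8]
        print(paired_effect_size(rested, deprived))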

  16. The lymphocyte transformation test for the diagnosis of drug allergy: sensitivity and specificity.

    PubMed

    Nyfeler, B; Pichler, W J

    1997-02-01

    The diagnosis of a drug allergy is mainly based upon a very detailed history and the clinical findings. In addition, several in vitro or in vivo tests can be performed to demonstrate a sensitization to a certain drug. One of the in vitro tests is the lymphocyte transformation test (LTT), which can reveal a sensitization of T-cells by an enhanced proliferative response of peripheral blood mononuclear cells to a certain drug. To evaluate the sensitivity and specificity of the LTT, 923 case histories of patients with suspected drug allergy in whom an LTT was performed were retrospectively analysed. Based on the history and provocation tests, the probability (P) of a drug allergy was estimated to be > 0.9, 0.5-0.9, 0.1-0.5 or < 0.1, and was put in relation to a positive or negative LTT. Seventy-eight of 100 patients with a very likely drug allergy (P > 0.9) had a positive LTT, which indicates a sensitivity of 78%. If allergies to betalactam antibiotics were analysed separately, the sensitivity was 74.4%. Fifteen of 102 patients in whom a classical drug allergy could be excluded (P < 0.1) nevertheless had a positive LTT (a specificity of 85%). The majority of these cases were classified as so-called pseudo-allergic reactions to NSAIDs. Patients with a clear history and clinical findings of a cotrimoxazole-related allergy all had a positive LTT (6/6), and in patients who reacted to drugs containing proteins, sensitization could be demonstrated as well (e.g. hen's egg lysozyme, 7/7). In 632 of the 923 cases, skin tests were also performed (scratch and/or epicutaneous), for which we found a lower sensitivity than for the LTT (64%), while the specificity was the same (85%). Although our data are somewhat biased by the high number of penicillin allergies and cannot be generalized to drug allergies caused by other compounds, we conclude that the LTT is a useful diagnostic test in drug allergy, able to support the diagnosis of a drug allergy and to pinpoint the relevant drug.

  17. Sex-specific thermal sensitivities of performance and activity in the asian house gecko, Hemidactylus frenatus.

    PubMed

    Cameron, Skye F; Wheatley, Rebecca; Wilson, Robbie S

    2018-07-01

    Studies of sexual selection primarily focus on morphological traits such as body size and secondary trait dimorphism, with less attention given to functional differences between the sexes, and still less to their thermal performance capacities. Each sex may benefit from possessing different thermal performance capacities that would allow them to maximise their fitness relative to their different reproductive roles, especially when performance is closely related to reproductive success. Here, we examine sexual divergence in thermal sensitivities of performance across three populations of the Asian house gecko (Hemidactylus frenatus) over an extensive latitudinal cline. Using analyses of the thermal sensitivity of routine activity, bite force and sprint speed, we explored whether: (i) males and females differed in their optimal temperatures for performance, (ii) the sexes differed in their thermal sensitivities of performance, and (iii) the degree of sexual divergence in thermal sensitivity varied among the populations. Because male H. frenatus are highly aggressive and frequently engage in combat to gain territories and mating opportunities, we expected males would be active over a wider range of temperatures than females and that this would favour broad thermal sensitivity curves for males. In addition, we expected greater divergence between the sexes in thermal sensitivities for the temperate populations that experience greater daily and seasonal thermal variation. We found that males were more active, and had greater bite forces and faster sprint speeds than females, independent of body size. In addition, we found differences between the sexes in thermal sensitivities for the tropical population; female H. frenatus were less active and possessed lower sprint speeds at higher temperatures than males. Although H. frenatus from the most variable thermal environments also displayed the broadest thermal performance range, it was the more stable tropical population that exhibited the greatest divergence between the sexes in thermal sensitivity of performance. The divergence in thermal physiology that we detected between the sexes of H. frenatus is consistent with the idea that males derive mating and territorial advantages from maintaining function over a broader range of temperatures.

  18. Computational Modelling and Optimal Control of Ebola Virus Disease with non-Linear Incidence Rate

    NASA Astrophysics Data System (ADS)

    Takaidza, I.; Makinde, O. D.; Okosun, O. K.

    2017-03-01

    The 2014 Ebola outbreak in West Africa exposed the need to connect modellers with those holding relevant data, as pivotal to a better understanding of how the disease spreads and to quantifying the effects of possible interventions. In this paper, we model and analyse Ebola virus disease with a non-linear incidence rate. The epidemic model created is used to describe how the Ebola virus could potentially evolve in a population. We perform an uncertainty analysis of the basic reproductive number R0 to quantify its sensitivity to other disease-related parameters. We also analyse the sensitivity of the final epidemic size to the time-dependent control interventions (education, vaccination, quarantine and safe handling) and identify the most cost-effective combination of the interventions.

  19. Probabilistic sensitivity analysis incorporating the bootstrap: an example comparing treatments for the eradication of Helicobacter pylori.

    PubMed

    Pasta, D J; Taylor, J L; Henning, J M

    1999-01-01

    Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
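
    As a concrete illustration of the approach described above, the Python sketch below resamples hypothetical patient-level cost and eradication data with replacement (the bootstrap) and pushes each replicate through a toy decision model; the spread of the replicates is the probabilistic sensitivity result. The model, sample sizes and all numbers are invented for illustration and are not taken from the study.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical per-patient data for two eradication strategies:
        # cost in dollars, effectiveness as a 0/1 eradication indicator.
        costs_a = rng.normal(900, 150, size=200)
        cure_a = rng.binomial(1, 0.85, size=200)
        costs_b = rng.normal(600, 120, size=200)
        cure_b = rng.binomial(1, 0.75, size=200)

        def icer(ca, ea, cb, eb):
            """Incremental cost-effectiveness ratio of strategy A versus B."""
            return (ca.mean() - cb.mean()) / (ea.mean() - eb.mean())

        # Monte Carlo loop: resample patients with replacement and re-evaluate
        # the decision model on each bootstrap replicate.
        n_rep = 5000
        icers = np.empty(n_rep)
        for i in range(n_rep):
            ia = rng.integers(0, len(costs_a), len(costs_a))
            ib = rng.integers(0, len(costs_b), len(costs_b))
            icers[i] = icer(costs_a[ia], cure_a[ia], costs_b[ib], cure_b[ib])

        # The percentiles of the replicates summarise the uncertainty.
        print(np.percentile(icers, [2.5, 50, 97.5]))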

  20. Power and sensitivity of alternative fit indices in tests of measurement invariance.

    PubMed

    Meade, Adam W; Johnson, Emily C; Braddy, Phillip W

    2008-05-01

    Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known to not be invariant. The results indicate that AFIs are much less sensitive to sample size and are more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general change-in-CFI value (.002) seemed to perform well in the analyses, condition-specific change values for McDonald's NCI exhibited better performance than a single change value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.

  1. Sensitivity and specificity of subacute computerized neurocognitive testing and symptom evaluation in predicting outcomes after sports-related concussion.

    PubMed

    Lau, Brian C; Collins, Michael W; Lovell, Mark R

    2011-06-01

    Concussions affect an estimated 136 000 high school athletes yearly. Computerized neurocognitive testing has been shown to be appropriately sensitive and specific in diagnosing concussions, but no studies have assessed its utility to predict length of recovery. Determining prognosis during subacute recovery after sports concussion will help clinicians more confidently address return-to-play and academic decisions. To quantify the prognostic ability of computerized neurocognitive testing in combination with symptoms during the subacute recovery phase from sports-related concussion. Cohort study (prognosis); Level of evidence, 2. In total, 108 male high school football athletes completed a computer-based neurocognitive test battery within 2.23 days of injury and were followed until they returned to play as set by international guidelines. Athletes were grouped into protracted recovery (>14 days; n = 50) or short recovery (≤14 days; n = 58). Separate discriminant function analyses were performed using the total symptom score on the Post-Concussion Symptom Scale, symptom clusters (migraine, cognitive, sleep, neuropsychiatric), and Immediate Postconcussion Assessment and Cognitive Testing neurocognitive scores (verbal memory, visual memory, reaction time, processing speed). Multiple discriminant function analyses revealed that the combination of 4 symptom clusters and 4 neurocognitive composite scores had the highest sensitivity (65.22%), specificity (80.36%), positive predictive value (73.17%), and negative predictive value (73.80%) in predicting protracted recovery. Discriminant function analyses of total symptoms on the Post-Concussion Symptom Scale alone had a sensitivity of 40.81%; specificity, 79.31%; positive predictive value, 62.50%; and negative predictive value, 61.33%. The 4 symptom clusters alone had a sensitivity of 46.94%; specificity, 77.20%; positive predictive value, 63.90%; and negative predictive value, 62.86%. Discriminant function analyses of the 4 computerized neurocognitive scores alone had a sensitivity of 53.20%; specificity, 75.44%; positive predictive value, 64.10%; and negative predictive value, 66.15%. The use of computerized neurocognitive testing in conjunction with symptom clusters improves the sensitivity, specificity, positive predictive value, and negative predictive value of predicting protracted recovery compared with each used alone. There is also a net increase in sensitivity of 24.41% when using neurocognitive testing and symptom clusters together compared with using total symptoms on the Post-Concussion Symptom Scale alone.
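
    For reference, the four performance measures quoted throughout this abstract follow directly from the 2x2 confusion matrix of predicted versus actual recovery class. A minimal sketch; the counts are hypothetical, not those of the study.

        def diagnostic_metrics(tp, fp, tn, fn):
            """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
            return {
                "sensitivity": tp / (tp + fn),  # true positive rate
                "specificity": tn / (tn + fp),  # true negative rate
                "ppv": tp / (tp + fp),          # positive predictive value
                "npv": tn / (tn + fn),          # negative predictive value
            }

        # Hypothetical counts for protracted- vs short-recovery classification.
        print(diagnostic_metrics(tp=30, fp=11, tn=45, fn=16))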

  2. Difficult Decisions Made Easier

    NASA Technical Reports Server (NTRS)

    2006-01-01

    NASA missions are extremely complex and prone to sudden, catastrophic failure if equipment falters or if an unforeseen event occurs. For these reasons, NASA trains to expect the unexpected. It tests its equipment and systems in extreme conditions, and it develops risk-analysis tests to foresee any possible problems. The Space Agency recently worked with an industry partner to develop reliability analysis software capable of modeling complex, highly dynamic systems, taking into account variations in input parameters and the evolution of the system over the course of a mission. The goal of this research was multifold. It included performance and risk analyses of complex, multiphase missions, like the insertion of the Mars Reconnaissance Orbiter; reliability analyses of systems with redundant and/or repairable components; optimization analyses of system configurations with respect to cost and reliability; and sensitivity analyses to identify optimal areas for uncertainty reduction or performance enhancement.

  3. Maximizing Sensitivity of the Psychomotor Vigilance Test (PVT) to Sleep Loss

    PubMed Central

    Basner, Mathias; Dinges, David F.

    2011-01-01

    Study Objectives: The psychomotor vigilance test (PVT) is among the most widely used measures of behavioral alertness, but there is large variation among published studies in PVT performance outcomes and test durations. To promote standardization of the PVT and increase its sensitivity and specificity to sleep loss, we determined PVT metrics and task durations that optimally discriminated sleep deprived subjects from alert subjects. Design: Repeated-measures experiments involving 10-min PVT assessments every 2 h across both acute total sleep deprivation (TSD) and 5 days of chronic partial sleep deprivation (PSD). Setting: Controlled laboratory environment. Participants: 74 healthy subjects (34 female), aged 22–45 years. Interventions: TSD experiment involving 33 h awake (N = 31 subjects) and a PSD experiment involving 5 nights of 4 h time in bed (N = 43 subjects). Measurements and Results: In a paired t-test paradigm and for both TSD and PSD, effect sizes of 10 different PVT performance outcomes were calculated. Effect sizes were high for both TSD (1.59–1.94) and PSD (0.88–1.21) for PVT metrics related to lapses and to measures of psychomotor speed, i.e., mean 1/RT (response time) and mean slowest 10% 1/RT. In contrast, PVT mean and median RT outcomes scored low to moderate effect sizes influenced by extreme values. Analyses facilitating only portions of the full 10-min PVT indicated that for some outcomes, high effect sizes could be achieved with PVT durations considerably shorter than 10 min, although metrics involving lapses seemed to profit from longer test durations in TSD. Conclusions: Due to their superior conceptual and statistical properties and high sensitivity to sleep deprivation, metrics involving response speed and lapses should be considered primary outcomes for the 10-min PVT. In contrast, PVT mean and median metrics, which are among the most widely used outcomes, should be avoided as primary measures of alertness. Our analyses also suggest that some shorter-duration PVT versions may be sensitive to sleep loss, depending on the outcome variable selected, although this will need to be confirmed in comparative analyses of separate duration versions of the PVT. Using both sensitive PVT metrics and optimal test durations maximizes the sensitivity of the PVT to sleep loss and therefore potentially decreases the sample size needed to detect the same neurobehavioral deficit. We propose criteria to better standardize the 10-min PVT and facilitate between-study comparisons and meta-analyses. Citation: Basner M; Dinges DF. Maximizing sensitivity of the psychomotor vigilance test (PVT) to sleep loss. SLEEP 2011;34(5):581-591. PMID:21532951

  4. MATERNAL SENSITIVITY AND PERFORMANCE AND VERBAL INTELLIGENCE IN LATE CHILDHOOD AND ADOLESCENCE.

    PubMed

    Dunkel, Curtis S; Woodley Of Menie, Michael A

    2018-02-15

    The aim of this study was to investigate the association between maternal sensitivity and offspring intelligence in late childhood and adolescence. Secondary data (N=117) from the Block and Block (2006a, b) 30-year longitudinal study of Californian children, which began in the late 1960s, were used to test the hypothesis that maternal sensitivity in childhood would be predictive of late childhood and adolescent intelligence. Correlational analyses revealed that maternal sensitivity, as judged by raters viewing mothers' interactions with their children in a set of four joint structured cognitive tasks when the child was 5 years of age, was associated with verbal and performance IQ test scores when the children were ages 11 and 18. Using hierarchical regression to control for child sex, socioeconomic status, child temperament, child baseline IQ (as measured at age 4), mother's level of education and mother's emotional nurturance, it was found that the maternal sensitivity and child and adolescent IQ association held for verbal, but not performance IQ. Furthermore, a pattern emerged in which the association between maternal sensitivity and verbal IQ was stronger for adolescents with a lower baseline IQ. The results suggest that maternal sensitivity is associated with offspring verbal intelligence and that this association holds when numerous variables are accounted for. Additionally, this association may be stronger for children with lower IQs.

  5. Evaluation of microarray data normalization procedures using spike-in experiments

    PubMed Central

    Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders

    2006-01-01

    Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulation was observed. Moreover, the bias depended on the underlying mRNA concentration; low concentration resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679

  6. Robust artificial neural network for reliability and sensitivity analyses of complex non-linear systems.

    PubMed

    Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo

    2017-12-01

    Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data representatives of the input/output relationship of the underlying model of interest. However, differently performing ANNs might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best performing ANN. On the other hand, using cross-validation to select the ANN with the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL), treated in this study as a black box employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
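
    The combination step can be illustrated with an ensemble of identically configured networks trained from different random initialisations, whose predictions are averaged under validation-based weights. The Python sketch below is a loose stand-in for the paper's method: the data are synthetic, the architecture is arbitrary, and the exponential weighting is a crude substitute for the Bayesian model-averaging weights.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        # Toy data standing in for the expensive simulation model.
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(400, 3))
        y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=400)
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

        # Identical architectures, different random weight initialisations.
        nets = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=seed).fit(X_tr, y_tr)
                for seed in range(10)]

        # Weight each network by its validation fit (a stand-in for the
        # Bayesian model-averaging weights used in the paper).
        mse = np.array([np.mean((n.predict(X_val) - y_val) ** 2) for n in nets])
        w = np.exp(-mse / mse.min())
        w /= w.sum()

        preds = np.stack([n.predict(X_val) for n in nets])
        mean_pred = w @ preds       # robust (averaged) prediction
        spread = preds.std(axis=0)  # ensemble spread as an uncertainty band
        print(np.mean((mean_pred - y_val) ** 2), spread.mean())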

  7. Breast cancer screening effect across breast density strata: A case-control study.

    PubMed

    van der Waal, Daniëlle; Ripping, Theodora M; Verbeek, André L M; Broeders, Mireille J M

    2017-01-01

    Breast cancer screening is known to reduce breast cancer mortality. A high breast density may affect this reduction. We assessed the effect of screening on breast cancer mortality in women with dense and fatty breasts separately. Analyses were performed within the Nijmegen (Dutch) screening programme (1975-2008), which invites women (aged 50-74 years) biennially. Performance measures were determined. Furthermore, a case-control study was performed for women having dense and women having fatty breasts. Breast density was assessed visually with a dichotomized Wolfe scale. Breast density data were available for cases. The prevalence of dense breasts among controls was estimated with age-specific rates from the general population. Sensitivity analyses were performed on these estimates. Screening performance was better in the fatty than in the dense group (sensitivity 75.7% vs 57.8%). The mortality reduction appeared to be smaller for women with dense breasts, with an odds ratio (OR) of 0.87 (95% CI 0.52-1.45) in the dense and 0.59 (95% CI 0.44-0.79) in the fatty group. We can conclude that high density results in lower screening performance and appears to be associated with a smaller mortality reduction. Breast density is thus a likely candidate for risk-stratified screening. More research is needed on the association between density and screening harms. © 2016 UICC.
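
    The odds ratios quoted above are the standard case-control measure, computed from a 2x2 table of screening exposure among cases and controls. A short sketch using Woolf's log-scale confidence interval; the counts are hypothetical.

        import numpy as np

        def odds_ratio(a, b, c, d):
            """OR with 95% CI (Woolf's method): a/b = exposed/unexposed cases,
            c/d = exposed/unexposed controls."""
            or_ = (a * d) / (b * c)
            se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
            lo, hi = np.exp(np.log(or_) + np.array([-1.96, 1.96]) * se)
            return or_, (lo, hi)

        # Hypothetical screened/unscreened counts among cases and controls.
        print(odds_ratio(a=40, b=60, c=55, d=45))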

  8. Space Station Freedom electric power system availability study

    NASA Technical Reports Server (NTRS)

    Turnquist, Scott R.

    1990-01-01

    The results of follow-on availability analyses performed on the Space Station Freedom electric power system (EPS) are detailed. The scope includes analyses of several EPS design variations: the 4-photovoltaic (PV) module baseline EPS design, a 6-PV module EPS design, and a 3-solar dynamic module EPS design that included a 10 kW PV module. The analyses performed included: determining the discrete power levels at which the EPS will operate upon various component failures and the availability of each of these operating states; ranking EPS components by the relative contribution each component type makes to the power availability of the EPS; determining the availability impacts of including structural and long-life EPS components in the availability models used in the analyses; determining optimum sparing strategies, for storing spare EPS components on-orbit, to maintain high average power capability with low lift-mass requirements; and analyses to determine the sensitivity of EPS availability to uncertainties in the component reliability and maintainability data used.

  9. Autonomous Mars ascent and orbit rendezvous for earth return missions

    NASA Technical Reports Server (NTRS)

    Edwards, H. C.; Balmanno, W. F.; Cruz, Manuel I.; Ilgen, Marc R.

    1991-01-01

    The details of the assessment of autonomous Mars ascent and orbit rendezvous for Earth return missions are presented. Analyses addressing navigation system assessments, trajectory planning, targeting approaches, flight control guidance strategies, and performance sensitivities are included. Tradeoffs in the analysis and design process are discussed.

  10. Statistical Performances of Resistive Active Power Splitter

    NASA Astrophysics Data System (ADS)

    Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul

    2016-03-01

    In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) are proposed. The PWS is based on an active cell composed of a field-effect transistor in cascade with shunted resistors at the input and the output (a resistive amplifier topology). The uncertainty of the PWS with respect to resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology the device gain can easily be controlled by varying a resistance. This provides a useful tool for analysing the statistical sensitivity of the system in an uncertain environment.

  11. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time-consuming, CPU-intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity, calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is itself a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are consequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the fully grid- and timestep-resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
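
    The two strategies compared above, derivative-based mean value propagation and derivative-free Latin hypercube sampling, can be sketched on a cheap stand-in response. Everything below (the surrogate model, the input means and standard deviations, the sample size) is assumed purely for illustration.

        import numpy as np
        from scipy.stats import norm, qmc

        def front_velocity(x):
            """Cheap stand-in for the foam decomposition model (really an FEA code)."""
            return 0.3 * x[0] + 0.1 * x[1] ** 2 + 0.05 * x[0] * x[1]

        mu = np.array([1.0, 2.0])     # hypothetical input means
        sigma = np.array([0.1, 0.3])  # hypothetical input standard deviations

        # Mean value (MV) method: first-order variance propagation from numerical
        # derivatives at the mean point (the step sensitive to numerical noise).
        eps = 1e-6
        grad = np.array([(front_velocity(mu + eps * np.eye(2)[i])
                          - front_velocity(mu)) / eps for i in range(2)])
        mv_std = np.sqrt(np.sum((grad * sigma) ** 2))

        # Latin hypercube sampling (LHS): derivative-free, hence relatively
        # insensitive to noise; here applied to the model directly (DLHS).
        u = qmc.LatinHypercube(d=2, seed=0).random(n=1000)
        x = norm.ppf(u, loc=mu, scale=sigma)
        y = np.apply_along_axis(front_velocity, 1, x)
        print(mv_std, y.std(ddof=1))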

  12. Comparison of real-time PCR methods for the detection of Naegleria fowleri in surface water and sediment.

    PubMed

    Streby, Ashleigh; Mull, Bonnie J; Levy, Karen; Hill, Vincent R

    2015-05-01

    Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices.

  13. Comparison of real-time PCR methods for the detection of Naegleria fowleri in surface water and sediment

    PubMed Central

    Streby, Ashleigh; Mull, Bonnie J.; Levy, Karen

    2015-01-01

    Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices. PMID:25855343

  14. Low-order modelling of shallow water equations for sensitivity analysis using proper orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Zokagoa, Jean-Marie; Soulaïmani, Azzeddine

    2012-06-01

    This article presents a reduced-order model (ROM) of the shallow water equations (SWEs) for use in sensitivity analyses and Monte-Carlo type applications. Since, in the real world, some of the physical parameters and initial conditions embedded in free-surface flow problems are difficult to calibrate accurately in practice, the results from numerical hydraulic models are almost always corrupted with uncertainties. The main objective of this work is to derive a ROM that ensures appreciable accuracy and a considerable acceleration in the calculations so that it can be used as a surrogate model for stochastic and sensitivity analyses in real free-surface flow problems. The ROM is derived using the proper orthogonal decomposition (POD) method coupled with Galerkin projections of the SWEs, which are discretised through a finite-volume method. The main difficulty of deriving an efficient ROM is the treatment of the nonlinearities involved in SWEs. Suitable approximations that provide rapid online computations of the nonlinear terms are proposed. The proposed ROM is applied to the simulation of hypothetical flood flows in the Bordeaux breakwater, a portion of the 'Rivière des Prairies' located near Laval (a suburb of Montreal, Quebec). A series of sensitivity analyses are performed by varying the Manning roughness coefficient and the inflow discharge. The results are satisfactorily compared to those obtained by the full-order finite volume model.
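
    The POD construction itself is compact: collect full-order solutions as columns of a snapshot matrix, take its singular value decomposition, truncate to the modes carrying most of the energy, and evolve the governing equations in the reduced basis. A schematic Python sketch with hypothetical dimensions; the Galerkin-projected SWE dynamics and the paper's nonlinear-term approximations are omitted.

        import numpy as np

        # Snapshot matrix: each column is a full-order solution (e.g. water depths
        # on all finite-volume cells) at one time instant or parameter sample.
        rng = np.random.default_rng(1)
        snapshots = rng.random((5000, 60))  # hypothetical sizes

        # POD basis from the thin SVD; keep modes carrying ~99.9% of the energy.
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(energy, 0.999)) + 1
        basis = U[:, :r]

        # Galerkin idea: a full-order state u is represented by r coefficients
        # a = basis.T @ u, and the ODEs are integrated in that small subspace.
        u = snapshots[:, 0]
        a = basis.T @ u
        print(r, np.linalg.norm(u - basis @ a) / np.linalg.norm(u))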

  15. High-performance liquid chromatographic determination of ambroxol in human plasma.

    PubMed

    Nobilis, M; Pastera, J; Svoboda, D; Kvêtina, J; Macek, K

    1992-10-23

    Ambroxol has been determined in biological fluids using a rapid and sensitive high-performance liquid chromatographic method. The samples prepared from plasma by liquid-liquid extraction were analysed on reversed-phase silica gel by competing-ion chromatography with ultraviolet detection. The method was applied to the determination of ambroxol levels in twelve healthy volunteers after oral administration of 90 mg of ambroxol in tablets of Mucosolvan and Ambrosan.

  16. Sensitivity analysis to assess the influence of the inertial properties of railway vehicle bodies on the vehicle's dynamic behaviour

    NASA Astrophysics Data System (ADS)

    Suarez, Berta; Felez, Jesus; Maroto, Joaquin; Rodriguez, Pablo

    2013-02-01

    A sensitivity analysis has been performed to assess the influence of the inertial properties of railway vehicles on their dynamic behaviour. To do this, 216 dynamic simulations were performed modifying, one at a time, the masses, moments of inertia and heights of the centre of gravity of the carbody, the bogie and the wheelset. Three values were assigned to each parameter, corresponding to the percentiles 10, 50 and 90 of a data set stored in a database of railway vehicles. After processing the results of these simulations, the analysed parameters were sorted by increasing influence. The analysis also identified which of these parameters could be estimated less accurately in future simulations without appreciably affecting the results. In general terms, it was concluded that the most sensitive inertial properties are the mass and the vertical moment of inertia, and the least sensitive ones the longitudinal and lateral moments of inertia.
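
    The experimental design described is a classic one-at-a-time sweep: each parameter in turn takes its 10th, 50th and 90th percentile value while every other parameter stays at its median. A minimal sketch; the parameter names, units and values are hypothetical, and the simulator is a placeholder.

        # Hypothetical percentile values (P10, P50, P90) for some inertial properties.
        params = {
            "carbody_mass_kg": (28e3, 32e3, 36e3),
            "carbody_yaw_inertia_kgm2": (1.4e6, 1.6e6, 1.8e6),
            "bogie_cog_height_m": (0.45, 0.50, 0.55),
        }

        baseline = {name: levels[1] for name, levels in params.items()}  # all medians

        def run_simulation(cfg):
            """Placeholder for the vehicle dynamics run; returns a response metric."""
            return sum(cfg.values())

        # One-at-a-time sweep: vary one parameter, hold the rest at their medians.
        for name, levels in params.items():
            for level in levels:
                cfg = dict(baseline, **{name: level})
                print(name, level, run_simulation(cfg))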

  17. Simulator study of conventional general aviation instrument displays in path-following tasks with emphasis on pilot-induced oscillations

    NASA Technical Reports Server (NTRS)

    Adams, J. J.

    1980-01-01

    A study of the use of conventional general aviation instruments by general aviation pilots in a six-degree-of-freedom, fixed-base simulator was conducted. The tasks performed were tracking a VOR radial and making an ILS approach to landing. A special feature of the tests was that the sensitivity of the displacement indicating instruments (the RMI, CDI, and HSI) was kept constant at values corresponding to 5 n. mi. and 1.25 n. mi. from the station. Both statistical and pilot model analyses of the data were made. The results show that performance in path following improved with increases in display sensitivity up to the highest sensitivity tested. At this maximum test sensitivity, which corresponds to the sensitivity existing at 1.25 n. mi. for the ILS glide slope transmitter, tracking accuracy was no better than it was at 5 n. mi. from the station and the pilot-aircraft system exhibited a marked reduction in damping. In some cases, a pilot-induced, long-period unstable oscillation occurred.

  18. Transportation systems analyses. Volume 2: Technical/programmatics

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This report documents the three principal transportation systems analyses (TSA) efforts during the period 7 November 92 - 6 May 93. The analyses are as follows: Mixed-Fleet (STS/ELV) strategies for SSF resupply; Transportation Systems Data Book - overview; and Operations Cost Model - overview/introduction.

  19. Reprint of: Relationship between cataract severity and socioeconomic status.

    PubMed

    Wesolosky, Jason D; Rudnisky, Christopher J

    2015-06-01

    To determine the relationship between cataract severity and socioeconomic status (SES). Retrospective, observational case series. A total of 1350 eyes underwent phacoemulsification cataract extraction by a single surgeon using an Alcon Infiniti system. Cataract severity was measured using phaco time in seconds. SES was measured using area-level aggregate census data: median income, education, proportion of common-law couples, and employment rate. Preoperative best corrected visual acuity was obtained and converted to logarithm of the minimum angle of resolution values. For patients undergoing bilateral surgery, the generalized estimating equation was used to account for the correlation between eyes. Univariate analyses were performed using simple regression, and multivariate analyses were performed to account for variables with significant relationships (p < 0.05) on univariate testing. Sensitivity analyses were performed to assess the effect of including patient age in the controlled analyses. Multivariate analyses demonstrated that cataracts were more severe when the median income was lower (p = 0.001) and the proportion of common-law couples living in a patient's community (p = 0.012) and the unemployment rate (p = 0.002) were higher. These associations persisted even when controlling for patient age. Patients of lower SES have more severe cataracts. Copyright © 2015. Published by Elsevier Inc.

  20. Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide

    DOE PAGES

    Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...

    2017-03-01

    The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
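
    As an illustration of the constrained-sensitivity idea, the sketch below combines unconstrained relative sensitivities of composition fractions under the assumption that a perturbation of one fraction is compensated proportionally by the others, so the fractions still sum to one. This proportional-renormalization convention is an assumption made here for illustration; the paper follows the formulas of Perkó et al., which should be consulted for the exact treatment.

        import numpy as np

        def constrained_sensitivities(frac, s_rel):
            """Constrained relative sensitivities of composition fractions, assuming
            a perturbation of fraction i is absorbed proportionally by the others
            (one common convention, not necessarily the paper's)."""
            frac = np.asarray(frac, dtype=float)
            s_rel = np.asarray(s_rel, dtype=float)
            return s_rel - frac / (1.0 - frac) * (s_rel.sum() - s_rel)

        # Hypothetical isotope fractions and unconstrained k-eff sensitivities.
        print(constrained_sensitivities([0.05, 0.20, 0.75], [0.30, 0.10, -0.05]))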

  1. The Impact of Array Detectors on Raman Spectroscopy

    ERIC Educational Resources Information Center

    Denson, Stephen C.; Pommier, Carolyn J. S.; Denton, M. Bonner

    2007-01-01

    The impact of array detectors in the field of Raman spectroscopy and all low-light-level spectroscopic techniques is examined. The high sensitivity of array detectors has allowed Raman spectroscopy to be used to detect compounds at part per million concentrations and to perform Raman analyses at advantageous wavelengths.

  2. A DNA microarray-based methylation-sensitive (MS)-AFLP hybridization method for genetic and epigenetic analyses.

    PubMed

    Yamamoto, F; Yamamoto, M

    2004-07-01

    We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.

  3. Application of Chemometric Methods to Devolve Co-Eluting Peaks in GC-MS of Fuels to Improve Compound Identification: Final Report

    DTIC Science & Technology

    2018-02-12

    Fuel stability and performance problems are often due to the presence of trace levels of contaminants or other minor changes in composition. Detailed compositional analyses of suspect fuels are often critical to the determination of the cause(s) of the problem(s) at hand. Sensitive methods to compare fuel compositions via GC-MS are available, but the detailed compositional analyses of ...

  4. A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.

    PubMed

    Gupta, Omesh P; Brown, Gary C; Brown, Melissa M

    2008-05-01

    To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in the better-seeing eye and ERM surgery in the worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, varying the utility values moved the result between $6,245 and $3,746/QALY gained, varying the medical costs moved it between $3,510 and $5,850/QALY gained, and increasing the ERM recurrence rate raised it to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146, with a range of $20,183 to $12,110 based on sensitivity analyses. Varying the utility values moved the result between $21,520 and $12,916/QALY, and increasing the ERM recurrence rate raised it to $16,846/QALY. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
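
    The reported sensitivity ranges are consistent with one-way variation of each input by roughly +/-25% around the base case of $4,680 per QALY at a gain of 0.755 QALYs. A small sketch reproducing that arithmetic; the incremental cost is back-calculated from the reported figures and is therefore an assumption.

        def cost_per_qaly(cost, qalys):
            """Dollars spent per quality-adjusted life-year gained."""
            return cost / qalys

        base_qaly = 0.755
        base_cost = 4680 * base_qaly  # back-calculated incremental cost (assumed)

        # One-way sensitivity: vary each input while holding the other fixed.
        for scale in (0.75, 1.0, 1.25):
            print(f"cost x{scale}: {cost_per_qaly(base_cost * scale, base_qaly):,.0f} $/QALY")
            print(f"QALY x{scale}: {cost_per_qaly(base_cost, base_qaly * scale):,.0f} $/QALY")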

  5. Experimental study on cross-sensitivity of temperature and vibration of embedded fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Chen, Tao; Ye, Meng-li; Liu, Shu-liang; Deng, Yan

    2018-03-01

    In view of the mechanism by which cross-sensitivity arises, a series of calibration experiments is carried out to solve the cross-sensitivity problem of embedded fiber Bragg gratings (FBGs) using the reference grating method. Moreover, an ultrasonic-vibration-assisted grinding (UVAG) model is established, and finite element analysis (FEA) is carried out under the monitoring environment of the embedded temperature measurement system. In addition, the related temperature acquisition tests are set up in accordance with the requirements of the reference grating method. Finally, comparative analyses of the simulation and experimental results are performed, from which it may be concluded that the reference grating method can effectively resolve the cross-sensitivity of embedded FBGs.

  6. Systematic review with meta-analysis: the effects of rifaximin in hepatic encephalopathy.

    PubMed

    Kimer, N; Krag, A; Møller, S; Bendtsen, F; Gluud, L L

    2014-07-01

    Rifaximin is recommended for prevention of hepatic encephalopathy (HE). The effects of rifaximin on overt and minimal HE are debated. To perform a systematic review and meta-analysis of randomised controlled trials (RCTs) on rifaximin for HE. We performed electronic and manual searches, gathered information from the U.S. Food and Drug Administration Home Page, and obtained unpublished information on trial design and outcome measures from authors and pharmaceutical companies. Meta-analyses were performed and results presented as risk ratios (RR) with 95% confidence intervals (CI) and the number needed to treat. Subgroup, sensitivity, regression and sequential analyses were performed to evaluate the risk of bias and sources of heterogeneity. We included 19 RCTs with 1370 patients. Outcomes were recalculated based on unpublished information of 11 trials. Overall, rifaximin had a beneficial effect on secondary prevention of HE (RR: 1.32; 95% CI 1.06-1.65), but not in a sensitivity analysis on rifaximin after TIPSS (RR: 1.27; 95% CI 1.00-1.53). Rifaximin increased the proportion of patients who recovered from HE (RR: 0.59; 95% CI: 0.46-0.76) and reduced mortality (RR: 0.68, 95% CI 0.48-0.97). The results were robust to adjustments for bias control. No small study effects were identified. The sequential analyses only confirmed the results of the analysis on HE recovery. Rifaximin has a beneficial effect on hepatic encephalopathy and may reduce mortality. The combined evidence suggests that rifaximin may be considered in the evidence-based management of hepatic encephalopathy. © 2014 John Wiley & Sons Ltd.
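
    The pooling behind the reported risk ratios is typically done on the log scale with inverse-variance weights before exponentiating back. The sketch below shows the fixed-effect version of that calculation; the review additionally uses subgroup, sensitivity, regression and sequential analyses not reproduced here, and the trial counts are hypothetical.

        import numpy as np

        def pooled_risk_ratio(events_t, n_t, events_c, n_c):
            """Fixed-effect inverse-variance pooling of risk ratios (log scale)."""
            log_rr = np.log((events_t / n_t) / (events_c / n_c))
            var = 1 / events_t - 1 / n_t + 1 / events_c - 1 / n_c
            w = 1 / var
            pooled = np.sum(w * log_rr) / np.sum(w)
            se = np.sqrt(1 / np.sum(w))
            return np.exp(pooled), np.exp([pooled - 1.96 * se, pooled + 1.96 * se])

        # Hypothetical event/total counts for treatment and control arms.
        print(pooled_risk_ratio(
            events_t=np.array([12, 30, 8]), n_t=np.array([60, 150, 40]),
            events_c=np.array([20, 45, 15]), n_c=np.array([60, 150, 40]),
        ))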

  7. Responding to Nonwords in the Lexical Decision Task: Insights from the English Lexicon Project

    PubMed Central

    Yap, Melvin J.; Sibley, Daragh E.; Balota, David A.; Ratcliff, Roger; Rueckl, Jay

    2014-01-01

    Researchers have extensively documented how various statistical properties of words (e.g., word-frequency) influence lexical processing. However, the impact of lexical variables on nonword decision-making performance is less clear. This gap is surprising, since a better specification of the mechanisms driving nonword responses may provide valuable insights into early lexical processes. In the present study, item-level and participant-level analyses were conducted on the trial-level lexical decision data for almost 37,000 nonwords in the English Lexicon Project in order to identify the influence of different psycholinguistic variables on nonword lexical decision performance, and to explore individual differences in how participants respond to nonwords. Item-level regression analyses reveal that nonword response time was positively correlated with number of letters, number of orthographic neighbors, number of affixes, and baseword number of syllables, and negatively correlated with Levenshtein orthographic distance and baseword frequency. Participant-level analyses also point to within- and between-session stability in nonword responses across distinct sets of items, and intriguingly reveal that higher vocabulary knowledge is associated with less sensitivity to some dimensions (e.g., number of letters) but more sensitivity to others (e.g., baseword frequency). The present findings provide well-specified and interesting new constraints for informing models of word recognition and lexical decision. PMID:25329078

  8. Impact of production strategies and animal performance on economic values of dairy sheep traits.

    PubMed

    Krupová, Z; Wolfová, M; Krupa, E; Oravcová, M; Daňo, J; Huba, J; Polák, P

    2012-03-01

    The objective of this study was to carry out a sensitivity analysis on the impact of various production strategies and performance levels on the relative economic values (REVs) of traits in dairy sheep. A bio-economic model implemented in the program package ECOWEIGHT was used to simulate the profit function for a semi-extensive production system with the Slovak multi-purpose breed Improved Valachian and to calculate the REV of 14 production and functional traits. The following production strategies were analysed: differing proportions of milk processed to cheese, customary weaning and early weaning of lambs with immediate sale or sale after artificial rearing, seasonal lambing in winter and aseasonal lambing in autumn. Results of the sensitivity analysis are presented in detail for the four economically most important traits: 150 days milk yield, conception rate of ewes, litter size and ewe productive lifetime. Impacts of the differences in the mean value of each of these four traits on REVs of all other traits were also examined. Simulated changes in the production circumstances had a higher impact on the REV for milk yield than on REVs of the other traits investigated. The proportion of milk processed to cheese, weaning management strategy for lambs and level of milk yield were the main factors influencing the REV of milk yield. The REVs for conception rate of ewes were highly sensitive to the current mean level of the trait. The REV of ewe productive lifetime was most sensitive to variation in ewe conception rate, and the REV of litter size was most affected by weaning strategy for lambs. On the basis of the results of sensitivity analyses, it is recommended that economic values of traits for the overall breeding objective for dairy sheep be calculated as the weighted average of the economic values obtained for the most common production strategies of Slovak dairy sheep farms and that economic values be adjusted after substantial changes in performance levels of the traits.

  9. Understanding the science-learning environment: A genetically sensitive approach.

    PubMed

    Haworth, Claire M A; Davis, Oliver S P; Hanscombe, Ken B; Kovas, Yulia; Dale, Philip S; Plomin, Robert

    2013-02-01

    Previous studies have shown that environmental influences on school science performance increase in importance from primary to secondary school. Here we assess for the first time the relationship between the science-learning environment and science performance using a genetically sensitive approach to investigate the aetiology of this link. 3000 pairs of 14-year-old twins from the UK Twins Early Development Study reported on their experiences of the science-learning environment and were assessed for their performance in science using a web-based test of scientific enquiry. Multivariate twin analyses were used to investigate the genetic and environmental links between environment and outcome. The most surprising result was that the science-learning environment was almost as heritable (43%) as performance on the science test (50%), and showed negligible shared environmental influence (3%). Genetic links explained most (56%) of the association between learning environment and science outcome, indicating gene-environment correlation.

  10. Subcortical volumetric changes across the adult lifespan: subregional thalamic atrophy accounts for age-related sensorimotor performance declines.

    PubMed

    Serbruyns, Leen; Leunissen, Inge; Huysmans, Toon; Cuypers, Koen; Meesen, Raf L; van Ruitenbeek, Peter; Sijbers, Jan; Swinnen, Stephan P

    2015-04-01

    Even though declines in sensorimotor performance during healthy aging have been documented extensively, their underlying neural mechanisms remain unclear. Here, we explored whether age-related subcortical atrophy plays a role in sensorimotor performance declines, particularly during bimanual manipulative performance (Purdue Pegboard Test). The thalamus, putamen, caudate and pallidum of 91 participants across the adult lifespan (ages 20-79 years) were automatically segmented. In addition to studying age-related changes in the global volume of each subcortical structure, local deformations within these structures, indicative of subregional volume changes, were assessed by means of recently developed shape analyses. Results showed widespread age-related global and subregional atrophy, as well as some notable subregional expansion. Even though global atrophy failed to explain the observed performance declines with aging, shape analyses indicated that atrophy in left and right thalamic subregions, specifically those subserving connectivity with the premotor, primary motor and somatosensory cortical areas, mediated the relation between aging and performance decline. It is concluded that subregional volume assessment by means of shape analyses offers a sensitive tool with high anatomical resolution in the search for specific age-related associations between brain structure and behavior. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Multianalyte biosensor based on pH-sensitive ZnO electrolyte–insulator–semiconductor structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haur Kao, Chyuan; Chun Liu, Che; Ueng, Herng-Yih

    2014-05-14

    Multianalyte electrolyte–insulator–semiconductor (EIS) sensors with a ZnO sensing membrane annealed on a silicon substrate for use in pH sensing were fabricated. Material analyses were conducted using X-ray diffraction and atomic force microscopy to identify optimal treatment conditions. Sensing performance for Na⁺, K⁺, urea, and glucose was also tested. Results indicate that an EIS sensor with a ZnO membrane annealed at 600 °C exhibited good performance, with high sensitivity and a low drift rate compared with all other reported ZnO-based pH sensors. Furthermore, based on the well-established pH sensing properties, pH-ion-sensitive field-effect transistor sensors have also been developed for use in detecting urea and glucose. ZnO-based EIS sensors show promise for future industrial biosensing applications.

  12. Differences in taste sensitivity between obese and non-obese children and adolescents.

    PubMed

    Overberg, Johanna; Hummel, Thomas; Krude, Heiko; Wiegand, Susanna

    2012-12-01

    Taste sensitivity varies between individuals. Several studies describe differences between obese and non-obese subjects concerning their taste perception. However, data are partly contradictory and insufficient. Therefore, in this study the taste sensitivity of obese and non-obese children/adolescents was analysed. In a cross-sectional study, the gustatory sensitivity of n=99 obese subjects (body mass index (BMI) >97th percentile) and n=94 normal weight subjects (BMI <90th percentile), 6-18 years of age, was compared. Sensitivity for the taste qualities sweet, sour, salty, umami and bitter was analysed by means of impregnated 'taste strips' in different concentrations. A total score was determined for all taste qualities combined as well as for each separately. Furthermore, the possible influence of sex, age and ethnicity on taste perception was analysed. An intensity rating for sweet was performed on a 5-point rating scale. Obese subjects showed, compared to the control group, a significantly lower ability to identify the correct taste qualities regarding the total score (p<0.001). Regarding individual taste qualities, obese subjects had a significantly lower detection rate for salty, umami and bitter. Furthermore, the determinants age and sex had a significant influence on taste perception: older age and female sex were associated with a better ability to identify taste qualities. Concerning the sweet intensity rating, obese children gave significantly lower intensity ratings to three of the four concentrations. Obese and non-obese children and adolescents differ in their taste perception. Obese subjects could identify taste qualities less precisely than children and adolescents of normal weight.

  13. Search for B_s^0 oscillations using inclusive lepton events

    NASA Astrophysics Data System (ADS)

    ALEPH Collaboration; Barate, R.; et al.

    1999-03-01

    A search for B_s^0 oscillations is performed using a sample of semileptonic b-hadron decays collected by the ALEPH experiment during 1991-95. Compared to previous inclusive lepton analyses, the proper time resolution and b-flavour mistag rate are significantly improved. Additional sensitivity to B_s^0 mixing is obtained by identifying subsamples of events having a B_s^0 purity which is higher than the average for the whole data sample. Unbinned maximum likelihood amplitude fits are performed to derive a lower limit of Δm_s > 9.5 ps⁻¹ at the 95% confidence level (95% CL). Combining with the ALEPH D_s-based analyses yields Δm_s > 9.6 ps⁻¹ at 95% CL.
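
    The amplitude method referenced here is the standard approach in oscillation searches; as a sketch of the general formalism (standard textbook form, not reproduced from the ALEPH paper itself), a free amplitude is fitted at each test frequency and a limit is set where the fitted amplitude remains compatible with zero:

    ```latex
    % Amplitude-method sketch (standard formalism, hedged): the unmixed/mixed
    % proper-time densities are fitted with a free amplitude A at each test
    % value of \Delta m_s.
    P_{\mathrm{unmix/mix}}(t) \;\propto\; \frac{\Gamma_s\, e^{-\Gamma_s t}}{2}
      \left[\, 1 \pm \mathcal{A}\cos(\Delta m_s\, t) \,\right],
    \qquad
    \Delta m_s \ \text{excluded at 95\% CL if}\ \mathcal{A} + 1.645\,\sigma_{\mathcal{A}} < 1 .
    ```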

  14. Determination of alpha-hydroxy acids in cosmetic products by high-performance liquid chromatography with a narrow-bore column.

    PubMed

    Nicoletti, I; Corradini, C; Cogliandro, E; Cavazza, A

    1999-08-01

    This paper reports the results of a study carried out to develop a simple, rapid and sensitive method for the separation, identification and quantitative measurement of alpha-hydroxy acids in commercial cosmetics using high-performance liquid chromatography (HPLC). The method is successfully applied to the simultaneous identification and quantitative determination of glycolic, lactic, malic, tartaric and citric acids employing a reversed-phase narrow-bore column under isocratic conditions with UV detection. The method is validated by determining the precision of replicate analyses and the accuracy by analyzing samples with and without known amounts of the alpha-hydroxy acids added. The procedure is suitable for routine analyses of commercial cosmetics.

  15. Performance of the Assessment of Spondyloarthritis International Society criteria for the classification of spondyloarthritis in early spondyloarthritis clinics participating in the ESPERANZA programme.

    PubMed

    Tomero, Eva; Mulero, Juan; de Miguel, Eugenio; Fernández-Espartero, Cruz; Gobbo, Milena; Descalzo, Miguel A; Collantes-Estévez, Eduardo; Zarco, Pedro; Muñoz-Fernández, Santiago; Carmona, Loreto

    2014-02-01

    The objective of this study was to analyse the performance of the Assessment of SpondyloArthritis International Society (ASAS) criteria for the classification of SpA in early SpA clinics. We used a cross-sectional study of patients referred to early SpA units within the ESPERANZA programme (a Spanish nationwide health management programme designed to provide excellence in diagnosis and care for early SpA). Patients were eligible if they were <45 years of age and had any of the following: (i) a 2-year history of inflammatory back pain; (ii) back or joint pain with psoriasis, anterior uveitis, radiographic sacroiliitis, family history of SpA or positive HLA-B27; or (iii) asymmetric arthritis. We excluded patients for whom imaging (X-rays/MRI) or HLA-B27 results were not available. We analysed the performance (sensitivity and specificity) of different classification criteria sets, taking the rheumatologist's opinion as the gold standard. The analysis included 775 patients [mean age 33 (s.d. 7) years; 55% men; mean duration of symptoms 11 (s.d. 6) months]; SpA was diagnosed in 538 patients (69.5%). A total of 274 (67.9%) patients with chronic back pain met the ASAS axial criteria, 76 (56.3%) patients with arthritis but not chronic back pain fulfilled the ASAS criteria for peripheral SpA and 350 (65.1%) fulfilled the ASAS criteria overall. The sensitivity and specificity of the ASAS criteria set were 65% and 93%, respectively (axial criteria: sensitivity 68%, specificity 95%). The sensitivity and specificity were 58% and 90% for the ESSG criteria and 59% and 86% for the Amor criteria, respectively. Despite performing better than the Amor or ESSG criteria, the ASAS criteria may be limited in detecting early forms, particularly in populations in which MRI is not extensively available or in which the prevalence of HLA-B27 is low.
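
    As a small worked example of how such performance figures arise, the sketch below recovers the reported sensitivity and specificity from a two-by-two table against the gold standard (the rheumatologist's diagnosis). The raw counts are illustrative reconstructions from the percentages in the abstract, not the actual ESPERANZA tabulation:

    ```python
    # Minimal sketch: sensitivity and specificity of a classification criteria
    # set against a gold standard (here, the rheumatologist's diagnosis).
    # Counts below are reconstructed from the reported percentages.

    def sens_spec(tp, fn, fp, tn):
        sensitivity = tp / (tp + fn)   # criteria-positive among diseased
        specificity = tn / (tn + fp)   # criteria-negative among non-diseased
        return sensitivity, specificity

    # 538 diagnosed with SpA, 237 without; 350 criteria-positive cases,
    # and roughly 7% false positives among the non-SpA patients.
    tp, fn = 350, 538 - 350
    fp = round(0.07 * 237)
    tn = 237 - fp
    print(sens_spec(tp, fn, fp, tn))  # ~(0.65, 0.93), matching the reported values
    ```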

  16. Use of the Analysis of the Volatile Faecal Metabolome in Screening for Colorectal Cancer

    PubMed Central

    2015-01-01

    Colorectal cancer is diagnosed by colonoscopy, an invasive and expensive procedure which is usually carried out after a positive screening test. Unfortunately, existing screening tests lack specificity and sensitivity, hence many unnecessary colonoscopies are performed. Here we report on a potential new screening test for colorectal cancer based on the analysis of volatile organic compounds (VOCs) in the headspace of faecal samples. Faecal samples were obtained from subjects who had a positive faecal occult blood test (FOBT). Subjects subsequently had colonoscopies performed to classify them into low risk (non-cancer) and high risk (colorectal cancer) groups. Volatile organic compounds were analysed by selected ion flow tube mass spectrometry (SIFT-MS) and the data were then analysed using both univariate and multivariate statistical methods. Ions most likely arising from hydrogen sulphide, dimethyl sulphide and dimethyl disulphide were statistically significantly higher in samples from high risk rather than low risk subjects. Results using multivariate methods show that the test gives a correct classification of 75%, with 78% specificity and 72% sensitivity, on FOBT-positive samples, offering a potentially effective alternative to the FOBT. PMID:26086914

  17. Performance optimization of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.

    1991-01-01

    As part of a center-wide activity at NASA Langley Research Center to develop multidisciplinary design procedures by accounting for discipline interactions, a performance design optimization procedure is developed. The procedure optimizes the aerodynamic performance of rotor blades by selecting the point of taper initiation, root chord, taper ratio, and maximum twist which minimize hover horsepower while not degrading forward flight performance. The procedure uses HOVT (a strip theory momentum analysis) to compute the horsepower required for hover and the comprehensive helicopter analysis program CAMRAD to compute the horsepower required for forward flight and maneuver. The optimization algorithm consists of the general purpose optimization program CONMIN and approximate analyses. Sensitivity analyses, consisting of derivatives of the objective function and constraints, are carried out by forward finite differences. The procedure is applied to a test problem which is an analytical model of a wind tunnel model of a utility rotor blade.

  18. Insensitive parenting may accelerate the development of the amygdala-medial prefrontal cortex circuit.

    PubMed

    Thijssen, Sandra; Muetzel, Ryan L; Bakermans-Kranenburg, Marian J; Jaddoe, Vincent W V; Tiemeier, Henning; Verhulst, Frank C; White, Tonya; Van Ijzendoorn, Marinus H

    2017-05-01

    This study examined whether the association between age and amygdala-medial prefrontal cortex (mPFC) connectivity in typically developing 6- to 10-year-old children is correlated with parental care. Resting-state functional magnetic resonance imaging scans were acquired from 124 children of the Generation R Study who at 4 years old had been observed interacting with their parents to assess maternal and paternal sensitivity. Amygdala functional connectivity was assessed using a general linear model with the amygdalae time series as explanatory variables. Higher-level analyses assessing Sensitivity × Age as well as exploratory Sensitivity × Age × Gender interaction effects were performed, restricted to voxels in the mPFC. We found significant Sensitivity × Age interaction effects on amygdala-mPFC connectivity. Age was related to stronger amygdala-mPFC connectivity in children with a lower combined parental sensitivity score (b = 0.11, p = .004; b = 0.06, p = .06; right and left amygdala, respectively), but not in children with a higher parental sensitivity score (b = -0.07, p = .12; b = -0.06, p = .12; right and left amygdala, respectively). A similar effect was found for maternal sensitivity, with stronger amygdala-mPFC connectivity in children with less sensitive mothers. Exploratory (parental, maternal, paternal) Sensitivity × Age × Gender interaction analyses suggested that this effect was especially pronounced in girls. Amygdala-mPFC resting-state functional connectivity has been shown to increase from age 10.5 years onward, implying that the positive association between age and amygdala-mPFC connectivity in 6- to 10-year-old children of less sensitive parents represents accelerated development of the amygdala-mPFC circuit.

  19. Extending the Administration Time of the Letter Fluency Test Increases Sensitivity to Cognitive Status in Aging

    PubMed Central

    Holtzer, R.; Goldin, Y.; Donovick, P.J.

    2010-01-01

    We examined whether extending the administration time of letter fluency from one minute per letter trial (standard administration) to two minutes increased the sensitivity of this test to cognitive status in aging. Participants (mean age = 84.6) were assigned to cognitive impairment (n=20) and control (n=40) groups. Pearson correlations and scatter plot analyses showed that associations between the Dementia Rating Scale scores and letter fluency were higher and less variable when performance on the latter was extended to two minutes. ANOVA showed that the cognitive impairment group generated fewer words in the second minute of the letter fluency task compared to the control group. Finally, discriminant function analyses revealed that extending the letter fluency trials to two minutes increased discrimination between the control and cognitive impairment groups. PMID:19449244

  20. A Systematic Review and Meta-Analyses of the Association Between Anti-Hypertensive Classes and the Risk of Falls Among Older Adults.

    PubMed

    Ang, Hui Ting; Lim, Ka Keat; Kwan, Yu Heng; Tan, Pui San; Yap, Kai Zhen; Banu, Zafirah; Tan, Chuen Seng; Fong, Warren; Thumboo, Julian; Ostbye, Truls; Low, Lian Leng

    2018-06-23

    Falls in individuals aged ≥ 60 years may result in injury, hospitalisation or death. The role of anti-hypertensive medications in falls among older adults is unclear. The objective of this study was to assess the association of six anti-hypertensive medication classes, namely α-blockers (AB), angiotensin converting enzyme inhibitors (ACEi), angiotensin receptor blockers (ARB), β-blockers (BB), calcium channel blockers (CCB) and diuretics, with the risk of falls, injurious falls or recurrent falls in individuals aged ≥ 60 years compared with non-users. We performed systematic searches in PubMed, EMBASE and CINAHL and included cohort, case-control and cross-sectional studies that investigated the associations between the use of anti-hypertensive medication classes and the risk of falls, injurious falls or recurrent falls in older adults (≥ 60 years) reported in English. We assessed study quality using the Newcastle-Ottawa Scale (NOS). Unadjusted and adjusted odds ratios (ORs) were pooled using a random-effects model. We performed meta-analyses for each anti-hypertensive medication class and each fall outcome. We also performed sensitivity analyses by pooling studies of high quality, and subgroup analyses among studies with an average age of ≥ 80 years. Seventy-eight articles (of which 74, 34, 27, 18, 13 and 11 examined diuretics, BB, CCB, ACEi, AB and ARB, respectively) met our inclusion and exclusion criteria; we pooled estimates from 60 articles. Use of ACEi [OR 0.85, 95% confidence interval (CI) 0.81-0.89], BB (OR 0.84, 95% CI 0.76-0.93) and CCB (OR 0.81, 95% CI 0.74-0.90) was associated with a lower risk of injurious falls than non-use. Results in sensitivity and subgroup analyses were largely consistent. The use of ACEi, BB or CCB among older adults may be associated with a lower risk of injurious falls than non-use.
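
    For readers unfamiliar with the pooling step, the sketch below implements inverse-variance random-effects pooling of odds ratios with the DerSimonian-Laird estimator, a common choice for meta-analyses of this type (the abstract does not specify the exact estimator used, and the ORs below are invented for illustration):

    ```python
    import math

    # Sketch of DerSimonian-Laird random-effects pooling of odds ratios.
    # Study-level ORs and 95% CIs below are hypothetical.

    def dl_pool(ors, ci_los, ci_his):
        y = [math.log(o) for o in ors]                          # log-ORs
        se = [(math.log(hi) - math.log(lo)) / (2 * 1.96)        # SE from 95% CI
              for lo, hi in zip(ci_los, ci_his)]
        w = [1 / s**2 for s in se]                              # fixed-effect weights
        ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))    # Cochran's Q
        df = len(y) - 1
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                           # between-study variance
        w_re = [1 / (s**2 + tau2) for s in se]                  # random-effects weights
        mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
        se_mu = math.sqrt(1 / sum(w_re))
        return (math.exp(mu),
                math.exp(mu - 1.96 * se_mu),
                math.exp(mu + 1.96 * se_mu))

    print(dl_pool([0.85, 0.80, 0.90], [0.75, 0.65, 0.78], [0.96, 0.98, 1.04]))
    ```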

  1. Causes and Consequences of Missing Health-Related Quality of Life Assessments in Patients Who Undergo Mechanical Circulatory Support Implantation: Insights From INTERMACS (Interagency Registry for Mechanically Assisted Circulatory Support).

    PubMed

    Grady, Kathleen L; Jones, Philip G; Cristian-Andrei, Adin; Naftel, David C; Myers, Susan; Dew, Mary Amanda; Idrissi, Katharine; Weidner, Gerdi; Wissman, Sherri A; Kirklin, James K; Spertus, John A

    2017-12-01

    Missing health-related quality of life (HRQOL) data in longitudinal studies can reduce precision and power and can bias results. Using INTERMACS (Interagency Registry for Mechanically Assisted Circulatory Support), we sought to identify factors associated with missing HRQOL data, examine the impact of these factors on estimated HRQOL under a missing-at-random (MAR) assumption, and perform sensitivity analyses to examine missingness that is not at random (MNAR) because of illness severity. INTERMACS patients (n=3248) with a preimplantation profile of 1 (critical cardiogenic shock) or 2 (progressive decline) were assessed with the EQ-5D-3L visual analog scale and Kansas City Cardiomyopathy Questionnaire-12 summary scores pre-implantation and 3 months postoperatively. Mean and median observed and MAR-imputed HRQOL scores were calculated, followed by sensitivity analyses. Independent factors associated with HRQOL scores and with missing HRQOL assessments were determined using multivariable regression; these analyses revealed few correlates of either HRQOL or missing assessments (R² range 4.7%-11.9%). For patients with INTERMACS profiles 1 and 2 and for INTERMACS profile 1 alone, MAR-imputed mean and median HRQOL scores were similar to observed scores before and 3 months after implantation, whereas MNAR-imputed mean scores were lower (≥5 points) at baseline but not at 3 months. We recommend the use of sensitivity analyses using an MNAR imputation strategy for longitudinal studies when missingness is attributable to illness severity. Conduct of MNAR sensitivity analyses may be less critical after mechanical circulatory support implant, when there are likely fewer MNAR data. © 2017 American Heart Association, Inc.
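
    A common way to implement the recommended MNAR sensitivity analysis is delta adjustment: impute under MAR, then shift the imputed scores downward to encode the assumption that sicker patients are more likely to have missing assessments. The sketch below is a minimal illustration with synthetic data; it is not the INTERMACS imputation model:

    ```python
    import numpy as np

    # Sketch of an MNAR sensitivity analysis via delta adjustment: impute
    # under MAR, then shift imputed HRQOL scores downward to reflect the
    # assumption that missingness is driven by illness severity.
    # All data and delta values are hypothetical.

    rng = np.random.default_rng(0)
    observed = rng.normal(60, 15, size=800)          # observed HRQOL scores
    n_missing = 200
    mar_imputed = rng.normal(observed.mean(), observed.std(), size=n_missing)

    for delta in (0, -5, -10):                        # 0 = plain MAR
        mnar_imputed = mar_imputed + delta            # MNAR: worse than MAR predicts
        full = np.concatenate([observed, mnar_imputed])
        print(f"delta={delta:>4}: overall mean = {full.mean():.1f}")
    ```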

  2. Assessment of self taken swabs versus clinician taken swab cultures for diagnosing gonorrhoea in women: single centre, diagnostic accuracy study.

    PubMed

    Stewart, Catherine M W; Schoeman, Sarah A; Booth, Russell A; Smith, Susan D; Wilcox, Mark H; Wilson, Janet D

    2012-12-12

    To compare gonorrhoea detection by self-taken vulvovaginal swabs (tested with nucleic acid amplification tests) with the culture of urethral and endocervical samples taken by clinicians. This was a prospective study of diagnostic accuracy conducted at a single sexual health clinic in an urban setting (Leeds Centre for Sexual Health, United Kingdom) between March 2009 and January 2010. Participants were women aged 16 years or older, attending the clinic for sexually transmitted infection (STI) testing and consenting to perform a vulvovaginal swab themselves before routine examination. During examination, clinicians took urethral and endocervical samples for culture and an endocervical swab for nucleic acid amplification testing. Urethra and endocervix samples were analysed by gonococcal culture. Vulvovaginal swabs and endocervical swabs were analysed by the Aptima Combo 2 (AC2) assay; positive results from this assay were confirmed with a second nucleic acid amplification test. The main outcome measure was positive confirmation of gonorrhoea. Of 3859 women with complete data and test results, 96 (2.5%) were infected with gonorrhoea (overall test sensitivities: culture 81%, endocervical swabs with AC2 96%, vulvovaginal swabs with AC2 99%). The AC2 assays were more sensitive than culture (P<0.001), but the endocervical and vulvovaginal assays did not differ significantly (P=0.375). Specificity of all AC2 tests was 100%. Of 1625 women who had symptoms suggestive of a bacterial STI, 56 (3.4%) had gonorrhoea (culture 84%, endocervical AC2 100%, vulvovaginal AC2 100%). The AC2 assays were more sensitive than culture (P=0.004), and the endocervical and vulvovaginal assays were equivalent to each other. Of 2234 women who did not have symptoms suggesting a bacterial STI, 40 (1.8%) had gonorrhoea (culture 78%, endocervical AC2 90%, vulvovaginal AC2 98%). The vulvovaginal swab was more sensitive than culture (P=0.008), but there was no difference between the endocervical and vulvovaginal AC2 assays (P=0.375) or between the endocervical AC2 assay and culture (P=0.125). The endocervical swab assay performed less well in women without symptoms of a bacterial STI than in those with symptoms (90% v 100%, P=0.028), whereas the vulvovaginal swab assay performed similarly (98% v 100%, P=0.42). Self-taken vulvovaginal swabs analysed by nucleic acid amplification tests are significantly more sensitive at detecting gonorrhoea than culture of clinician-taken urethral and endocervical samples, and are equivalent to endocervical swabs analysed by nucleic acid amplification tests. Self-taken vulvovaginal swabs are the sample of choice in women without symptoms and have the advantage of being non-invasive. In women who need a clinical examination, either a clinician-taken or self-taken vulvovaginal swab is recommended.

  3. Sensitivity analysis of add-on price estimate for select silicon wafering technologies

    NASA Technical Reports Server (NTRS)

    Mokashi, A. R.

    1982-01-01

    The cost of producing wafers from silicon ingots is a major component of the add-on price of silicon sheet. Economic analyses of the add-on price estimates, and of their sensitivity, for internal-diameter (ID) sawing, multiblade slurry (MBS) sawing and the fixed-abrasive slicing technique (FAST) are presented. Interim price estimation guidelines (IPEG) are used for estimating a process add-on price. Sensitivity analysis of price is performed with respect to cost parameters such as equipment, space, direct labor, materials (blade life) and utilities, and production parameters such as slicing rate, slices per centimeter and process yield, using a computer program specifically developed to do sensitivity analysis with IPEG. The results aid in identifying the important cost parameters and assist in deciding the direction of technology development efforts.
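
    The IPEG formula itself is not reproduced in this abstract, so the sketch below uses a generic linearized cost model purely to illustrate the one-way sensitivity procedure described: perturb each cost or production parameter in turn and record the change in the add-on price. All coefficients are hypothetical stand-ins:

    ```python
    # Generic one-way sensitivity sketch for a wafering add-on price.
    # The cost model and coefficients are hypothetical stand-ins for IPEG.

    base = dict(equipment=100_000, labor=30_000, materials=20_000,
                utilities=5_000, wafers_per_year=250_000)

    def addon_price(p):
        """Hypothetical annualized cost per wafer."""
        annual = 0.3 * p["equipment"] + p["labor"] + p["materials"] + p["utilities"]
        return annual / p["wafers_per_year"]

    ref = addon_price(base)
    for param in base:
        for factor in (0.8, 1.2):                  # vary each parameter by +/-20%
            varied = dict(base, **{param: base[param] * factor})
            change = 100 * (addon_price(varied) / ref - 1)
            print(f"{param:>15} x{factor}: {change:+.1f}% price change")
    ```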

  4. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    PubMed

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80%, and showed a similar diagnostic performance compared to the LLC in receiver operating characteristic curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to the LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to the traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to the Lake Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.

  5. Nanoplasmonic biochips for rapid label-free detection of imidacloprid pesticides with a smartphone.

    PubMed

    Lee, Kuang-Li; You, Meng-Lin; Tsai, Chia-Hsin; Lin, En-Hung; Hsieh, Shu-Yi; Ho, Ming-Hsun; Hsu, Ju-Chun; Wei, Pei-Kuen

    2016-01-15

    The widespread and intensive use of neonicotinoid insecticides induces negative cascading effects on ecosystems. It is desirable to develop a portable, sensitive sensing platform for on-site screening of high-risk pesticides. We combined an indirect competitive immunoassay, a highly sensitive surface plasmon resonance (SPR) biochip and a simple portable imaging setup for label-free detection of imidacloprid pesticides. The SPR biochip consists of several capped nanoslit arrays with different periods which form a spectral image on the chip. Qualitative and semiquantitative analyses of pesticides can be directly observed from the spot shift on the chip. Precise semiquantitative analyses can be further completed by using image processing in a smartphone. We demonstrate simultaneous detection of four different concentrations of imidacloprid pesticides. The visual detection limit is about 1 ppb, which is well below the maximum residue concentration permitted by law (20 ppb). Compared to the one-step strip assay, the proposed chip is capable of performing semiquantitative analyses and multiplexed detection. Compared to the enzyme-linked immunosorbent assay, our method is label-free and requires only simple washing steps and a short reaction time. In addition, the label-free chip has a comparable sensitivity but a wider working range than those labeling techniques. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. A framework for improving a seasonal hydrological forecasting system using sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Arnal, Louise; Pappenberger, Florian; Smith, Paul; Cloke, Hannah

    2017-04-01

    Seasonal streamflow forecasts are of great value for the socio-economic sector, for applications such as navigation, flood and drought mitigation, and reservoir management for hydropower generation and for water allocation to agriculture and drinking water. However, at present, the performance of dynamical seasonal hydrological forecasting systems (systems based on running seasonal meteorological forecasts through a hydrological model to produce seasonal hydrological forecasts) is still limited in space and time. In this context, ESP (Ensemble Streamflow Prediction) remains an attractive method for seasonal streamflow forecasting as it relies on forcing a hydrological model (starting from the latest observed or simulated initial hydrological conditions) with historical meteorological observations. This makes it cheaper to run than a standard dynamical seasonal hydrological forecasting system, for which the seasonal meteorological forecasts must first be produced, while still producing skilful forecasts. There is thus a need to focus resources and time on improvements in dynamical seasonal hydrological forecasting systems which will eventually lead to significant improvements in the skill of the streamflow forecasts generated. Sensitivity analyses are a powerful tool that can be used to disentangle the relative contributions of the two main sources of errors in seasonal streamflow forecasts, namely the initial hydrological conditions (IHC; e.g., soil moisture, snow cover, initial streamflow, among others) and the meteorological forcing (MF; i.e., seasonal meteorological forecasts of precipitation and temperature, input to the hydrological model). Sensitivity analyses are, however, most useful if they inform and change current operational practices. To this end, we propose a method to improve the design of a seasonal hydrological forecasting system. This method is based on sensitivity analyses which inform the forecasters as to which element of the forecasting chain (i.e., IHC or MF) could potentially lead to the highest increase in seasonal hydrological forecasting performance after each forecast update.
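
    The logic of such a sensitivity analysis can be illustrated with a toy experiment: run a simple hydrological model with a "perfect" version of one input (IHC or MF) and an erroneous version of the other, and compare the resulting forecast errors. The linear-reservoir sketch below is entirely synthetic and only mimics the spirit of the proposed method:

    ```python
    import numpy as np

    # Toy illustration of disentangling error from initial hydrological
    # conditions (IHC) vs meteorological forcing (MF) with a linear reservoir.
    # Everything here is synthetic; real systems use full hydrological models.

    rng = np.random.default_rng(1)

    def simulate(storage0, forcing, k=0.1):
        """Linear reservoir: streamflow q = k * storage."""
        s, flows = storage0, []
        for p in forcing:
            s += p - k * s
            flows.append(k * s)
        return np.array(flows)

    true_s0 = 100.0
    true_mf = rng.gamma(2.0, 1.5, size=90)                   # "true" forcing
    truth = simulate(true_s0, true_mf)

    bad_s0 = true_s0 * 1.4                                   # erroneous IHC
    bad_mf = rng.gamma(2.0, 1.5, size=90)                    # erroneous MF

    for label, s0, mf in [("perfect MF, bad IHC", bad_s0, true_mf),
                          ("perfect IHC, bad MF", true_s0, bad_mf)]:
        rmse = np.sqrt(np.mean((simulate(s0, mf) - truth) ** 2))
        print(f"{label}: RMSE = {rmse:.2f}")
    ```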

  7. Bronchial and non-bronchial systemic arteries: value of multidetector CT angiography in diagnosis and angiographic embolisation feasibility analysis.

    PubMed

    Lin, Yuning; Chen, Ziqian; Yang, Xizhang; Zhong, Qun; Zhang, Hongwen; Yang, Li; Xu, Shangwen; Li, Hui

    2013-12-01

    The aim of this study is to evaluate the diagnostic performance of multidetector CT angiography (CTA) in depicting bronchial and non-bronchial systemic arteries in patients with haemoptysis and to assess whether this modality helps determine the feasibility of angiographic embolisation. Fifty-two patients with haemoptysis between January 2010 and July 2011 underwent both preoperative multidetector CTA and digital subtraction angiography (DSA) imaging. Diagnostic performance of CTA in depicting arteries causing haemoptysis was assessed on a per-patient and a per-artery basis. The feasibility of the endovascular treatment evaluated by CTA was analysed. Sensitivity, specificity, and positive and negative predictive values for those analyses were determined. Fifty patients were included in the artery-presence-number analysis. In the per-patient analysis, neither CTA (P = 0.25) nor DSA (P = 1.00) showed a statistically significant difference in the detection of arteries causing haemoptysis. The sensitivity, specificity, and positive and negative predictive values were 94%, 100%, 100%, and 40%, respectively, for the presence of pathologic arteries evaluated by CTA, and 98%, 100%, 100%, and 67%, respectively, for DSA. On the per-artery basis, CTA correctly identified 97% (107/110). Fifty-two patients were included in the feasibility analysis. The performance of CTA in predicting the feasibility of angiographic embolisation was not statistically different from the treatment performed (P = 1.00). The sensitivity, specificity, and positive and negative predictive values were 96%, 80%, 98% and 67%, respectively, for CTA. Multidetector CTA is an accurate imaging method for depicting the presence and number of arteries causing haemoptysis. This modality is also useful for determining the feasibility of angiographic embolisation for haemoptysis. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.

  8. Using the STOPBANG questionnaire and other pre-test probability tools to predict OSA in younger, thinner patients referred to a sleep medicine clinic.

    PubMed

    McMahon, Michael J; Sheikh, Karen L; Andrada, Teotimo F; Holley, Aaron B

    2017-12-01

    The STOPBANG questionnaire is used to predict the presence of obstructive sleep apnea (OSA). We sought to assess the performance of the STOPBANG questionnaire in younger, thinner patients referred to a sleep medicine clinic. We applied the STOPBANG questionnaire to patients referred for level I polysomnography (PSG) at our sleep center. We calculated likelihood ratios and the area under the receiver operator characteristic (AUROC) curve and performed sensitivity analyses. We performed our analysis on 338 patients referred for PSG. Only 17.2% (n = 58) were above age 50 years, and 30.5 and 6.8% had a BMI above 30 and 35 kg/m², respectively. The mean apnea-hypopnea index (AHI) was 12.9 ± 16.4 and 63.9% had an AHI ≥5. The STOPBANG (threshold ≥3) identified 83.1% of patients as high risk for an AHI ≥5, and sensitivity, specificity, positive (PPV), and negative predictive values (NPV) were 83.8, 18.0, 64.4, and 38.0%, respectively. Positive and negative likelihood ratios were poor at 1.02-1.11 and 0.55-0.90, respectively, across AHI thresholds (AHI ≥5, AHI ≥15 and AHI ≥30), and AUROCs were 0.52 (AHI ≥5) and 0.56 (AHI ≥15). Sensitivity analyses adjusting for insomnia, combat deployment, traumatic brain injury, post-traumatic stress disorder, clinically significant OSA (ESS >10 and/or co-morbid disease), and obesity did not significantly alter STOPBANG performance. In a younger, thinner population with predominantly mild-to-moderate OSA, the STOPBANG score does not accurately predict the presence of obstructive sleep apnea.
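
    The reported likelihood ratios and predictive values follow directly from sensitivity, specificity and prevalence; the sketch below reproduces the AHI ≥5 figures from the abstract (small discrepancies are rounding):

    ```python
    # Sketch: likelihood ratios and predictive values from sensitivity,
    # specificity and prevalence (numbers taken from the reported AHI >= 5
    # results: sens 83.8%, spec 18.0%, prevalence 63.9%).

    sens, spec, prev = 0.838, 0.180, 0.639

    lr_pos = sens / (1 - spec)                    # ~1.02: barely informative
    lr_neg = (1 - sens) / spec                    # ~0.90
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    print(f"LR+={lr_pos:.2f} LR-={lr_neg:.2f} PPV={ppv:.1%} NPV={npv:.1%}")
    ```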

  9. Systematic Review of Health Economic Evaluations of Diagnostic Tests in Brazil: How accurate are the results?

    PubMed

    Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho

    2017-08-01

    The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.

  10. Emergency electroencephalogram: Usefulness in the diagnosis of nonconvulsive status epilepticus by the on-call neurologist.

    PubMed

    Máñez Miró, J U; Díaz de Terán, F J; Alonso Singer, P; Aguilar-Amat Prior, M J

    2018-03-01

    We aim to describe the use of emergency electroencephalogram (EmEEG) by the on-call neurologist when nonconvulsive status epilepticus (NCSE) is suspected, and in other indications, in a tertiary hospital. This was an observational retrospective cohort study of EmEEG recordings obtained with 8-channel systems, performed and analysed by the on-call neurologist in the emergency department and in-hospital wards between July 2013 and May 2015. Variables recorded were sex, age, symptoms, first diagnosis, previous seizure and cause, previous stroke, cancer, brain computed tomography, diagnosis after EEG, treatment, patient progress, routine control EEG (rEEG), and final diagnosis. We analysed frequency data, sensitivity, and specificity in the diagnosis of NCSE. The study included 135 EEG recordings performed in 129 patients; 51.4% were men and their median age was 69 years. In 112 cases (83%), EmEEG was requested to rule out suspected NCSE, prompted by altered level of consciousness in 42 (37.5%), behavioural abnormalities in 38 (33.9%), and aphasia in 32 (28.5%). The EmEEG diagnosis was NCSE in 37 patients (33%), and this was confirmed in 35 (94.6%) as the final diagnosis. In 3 other cases, NCSE was the diagnosis on discharge as confirmed by rEEG, although the EmEEG missed this condition at first. EmEEG performed to rule out NCSE showed 92.1% sensitivity, 97.2% specificity, a positive predictive value of 94.6%, and a negative predictive value of 96%. Our experience finds that, in an appropriate clinical context, EmEEG performed by the on-call neurologist is a sensitive and specific tool for diagnosing NCSE. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.

  11. Local birefringence of the anterior segment of the human eye in a single capture with a full range polarisation-sensitive optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Li, Qingyun; Karnowski, Karol; Villiger, Martin; Sampson, David D.

    2017-04-01

    A fibre-based full-range polarisation-sensitive optical coherence tomography system is developed to enable complete capture of the structural and birefringence properties of the anterior segment of the human eye in a single acquisition. The system uses a wavelength swept source centered at 1.3 μm, passively depth-encoded, orthogonal polarisation states in the illumination path and polarisation-diversity detection. Off-pivot galvanometer scanning is used to extend the imaging range and compensate for sensitivity drop-off. A Mueller matrix-based method is used to analyse data. We demonstrate the performance of the system and discuss issues relating to its optimisation.

  12. Revisiting the cost-effectiveness of universal cervical length screening: importance of progesterone efficacy.

    PubMed

    Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John

    2016-07-01

    Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with a shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. The objective was to evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with a shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the cost-effectiveness ratio of the two strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a one-way sensitivity analysis, screening results in cost savings if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at a willingness-to-pay (WTP) threshold of $60,000/QALY in three clinical scenarios, progesterone therapy has to reduce PTB by 60%, 34% and 93%, respectively. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective. Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than has been suggested by available data for US women. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
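
    The core quantity in such an analysis is the incremental cost-effectiveness ratio (ICER): the difference in cost between strategies divided by the difference in QALYs. The sketch below runs a one-way sensitivity analysis over progesterone efficacy with a deliberately contrived cost function whose only purpose is to echo the reported base case ($158,000/QALY at 11% efficacy, break-even near 36%); it is not the published decision model:

    ```python
    # One-way sensitivity sketch over progesterone efficacy. The incremental
    # cost and QALY functions are contrived so that the base case matches the
    # reported $158,000/QALY at 11% efficacy and breaks even near 36%.

    def one_way(efficacy):
        """Hypothetical incremental cost/QALY as a function of efficacy."""
        d_cost = 910 - 2528 * efficacy           # $ per woman screened
                                                 # (zero, i.e. break-even, near 36%)
        d_qaly = 0.004 * efficacy / 0.11         # QALYs gained, scaled to base case
        return d_cost / d_qaly                   # ICER = delta cost / delta QALY

    for eff in (0.11, 0.25, 0.36):
        print(f"efficacy={eff:.0%}: ICER = ${one_way(eff):,.0f}/QALY")
    ```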

  13. Impact of design-parameters on the optical performance of a high-power adaptive mirror

    NASA Astrophysics Data System (ADS)

    Koek, Wouter D.; Nijkerk, David; Smeltink, Jeroen A.; van den Dool, Teun C.; van Zwet, Erwin J.; van Baars, Gregor E.

    2017-02-01

    TNO is developing a High Power Adaptive Mirror (HPAM) to be used in the CO2 laser beam path of an Extreme Ultra-Violet (EUV) light source for next-generation lithography. In this paper we report on a developed methodology, and the necessary simulation tools, to assess the performance and associated sensitivities of this deformable mirror. Our analyses show that, given the current limited insight concerning the process window of EUV generation, the HPAM module should have an actuator pitch of ≤ 4 mm. Furthermore, we have modelled the sensitivity of performance with respect to dimpling and actuator noise. For example, for a deformable mirror with an actuator pitch of 4 mm, if the associated performance impact is to be limited to less than 5%, the actuator noise should be smaller than 45 nm (rms). Our tools assist in the detailed design process by assessing the performance impact of various design choices, including for example those that affect the shape and spectral content of the influence function.

  14. Development of EMab-51, a Sensitive and Specific Anti-Epidermal Growth Factor Receptor Monoclonal Antibody in Flow Cytometry, Western Blot, and Immunohistochemistry.

    PubMed

    Itai, Shunsuke; Kaneko, Mika K; Fujii, Yuki; Yamada, Shinji; Nakamura, Takuro; Yanaka, Miyuki; Saidoh, Noriko; Handa, Saori; Chang, Yao-Wen; Suzuki, Hiroyoshi; Harada, Hiroyuki; Kato, Yukinari

    2017-10-01

    The epidermal growth factor receptor (EGFR) is a member of the human epidermal growth factor receptor (HER) family of receptor tyrosine kinases and is involved in cell growth and differentiation. EGFR homodimers, or heterodimers with other HER members such as HER2 and HER3, activate downstream signaling cascades in many cancers. In this study, we developed novel anti-EGFR monoclonal antibodies (mAbs) and characterized their efficacy in flow cytometry, Western blot, and immunohistochemical analyses. First, we expressed full-length EGFR or its ectodomain in LN229 glioblastoma cells, immunized mice with LN229/EGFR cells or the EGFR ectodomain, and performed the first screening using enzyme-linked immunosorbent assays. Subsequently, we selected mAbs according to their efficacy in flow cytometry (second screening), Western blot (third screening), and immunohistochemical (fourth screening) analyses. Among 100 mAbs, only one clone, EMab-51 (IgG₁, kappa), reacted with EGFR in Western blot analysis. Finally, immunohistochemical analyses with EMab-51 showed sensitive and specific reactions against oral cancer cells, warranting the use of EMab-51 to detect EGFR in pathological analyses of EGFR-expressing cancers.

  15. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    PubMed

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity, the two outcomes of interest in meta-analyses of diagnostic accuracy studies, utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three meta-analysis sizes (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, instead shows convergence problems. The random-effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification, together with convergence robustness, should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
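
    A sketch of how one such meta-analytic scenario can be generated, assuming (as is standard for these models, though the exact simulation settings are not given in the abstract) bivariate-normal random effects on the logit scale followed by binomial sampling within each study:

    ```python
    import numpy as np

    # Sketch of generating one meta-analytic scenario: study-level
    # logit(sensitivity)/logit(specificity) drawn from a bivariate normal,
    # then binomial sampling within each study. Parameters are hypothetical.

    rng = np.random.default_rng(42)
    logit = lambda p: np.log(p / (1 - p))
    expit = lambda x: 1 / (1 + np.exp(-x))

    n_studies = 30
    mu = np.array([logit(0.90), logit(0.93)])                # true sens/spec
    cov = np.array([[0.3, -0.1], [-0.1, 0.3]])               # random-effects cov.
    re = rng.multivariate_normal(mu, cov, size=n_studies)

    n_dis = rng.integers(30, 200, n_studies)                 # diseased per study
    n_non = rng.integers(30, 200, n_studies)                 # non-diseased
    tp = rng.binomial(n_dis, expit(re[:, 0]))
    tn = rng.binomial(n_non, expit(re[:, 1]))

    # Naive pooled estimates, for comparison with bivariate model fits:
    print("pooled sens:", tp.sum() / n_dis.sum(),
          "pooled spec:", tn.sum() / n_non.sum())
    ```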

  16. Relationship between cataract severity and socioeconomic status.

    PubMed

    Wesolosky, Jason D; Rudnisky, Christopher J

    2013-12-01

    To determine the relationship between cataract severity and socioeconomic status (SES). Retrospective, observational case series. A total of 1350 eyes underwent phacoemulsification cataract extraction by a single surgeon using an Alcon Infiniti system. Cataract severity was measured using phaco time in seconds. SES was measured using area-level aggregate census data: median income, education, proportion of common-law couples, and employment rate. Preoperative best corrected visual acuity was obtained and converted to logarithm of the minimum angle of resolution values. For patients undergoing bilateral surgery, the generalized estimating equation was used to account for the correlation between eyes. Univariate analyses were performed using simple regression, and multivariate analyses were performed to account for variables with significant relationships (p < 0.05) on univariate testing. Sensitivity analyses were performed to assess the effect of including patient age in the controlled analyses. Multivariate analyses demonstrated that cataracts were more severe when the median income was lower (p = 0.001) and the proportion of common-law couples living in a patient's community (p = 0.012) and the unemployment rate (p = 0.002) were higher. These associations persisted even when controlling for patient age. Patients of lower SES have more severe cataracts. Copyright © 2013 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  17. Human papillomavirus vaccine and demyelinating diseases-A systematic review and meta-analysis.

    PubMed

    Mouchet, Julie; Salvo, Francesco; Raschi, Emanuel; Poluzzi, Elisabetta; Antonazzo, Ippazio Cosimo; De Ponti, Fabrizio; Bégaud, Bernard

    2018-06-01

    Approved in 2006, human papillomavirus (HPV) vaccines were initially targeted at girls aged 9-14 years. Although the safety of these vaccines has been monitored through post-licensure surveillance programmes, cases of neurological events have been reported worldwide. The present study aimed to assess the risk of developing demyelination after HPV immunization by meta-analysing risk estimates from pharmacoepidemiologic studies. A systematic review was conducted in Medline, Embase, ISI Web of Science and the Cochrane Library from inception to 10 May 2017, without language restriction. Only observational studies including a control group were retained. Study selection was performed by two independent reviewers, with disagreements resolved through discussion. The meta-analysis was performed using a generic inverse variance random-effect model. Outcomes of interest included a broad category of central demyelination, multiple sclerosis (MS), optic neuritis (ON), and Guillain-Barré syndrome (GBS), each being considered independently. Heterogeneity was investigated; sensitivity and subgroup analyses were performed when necessary. In parallel, post-licensure safety studies were considered for a qualitative review. This study followed the PRISMA statement and the MOOSE reporting guideline. Of the 2,863 references identified, 11 articles were selected for meta-analysis. No significant association emerged between HPV vaccination and central demyelination, the pooled odds ratio being 0.96 [95% CI 0.77-1.20], with moderate but non-significant heterogeneity (I² = 29%). Similar results were found for MS and ON. Sensitivity analyses did not alter our conclusions. Findings from the qualitative review of 14 safety studies also indicated the absence of a relevant signal. Owing to limited data on GBS, no meta-analysis was performed for this outcome. This study strongly supports the absence of an association between HPV vaccines and central demyelination. Copyright © 2018 Elsevier Ltd. All rights reserved.
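
    The heterogeneity figure quoted (I² = 29%) is derived from Cochran's Q; a minimal sketch with invented study-level log-odds-ratios:

    ```python
    import math

    # Sketch: Cochran's Q and the I^2 heterogeneity statistic for a set of
    # study-level log-ORs. Effect sizes below are invented for illustration.

    y = [math.log(o) for o in (0.90, 1.10, 0.85, 1.05, 0.95)]
    se = [0.15, 0.20, 0.18, 0.25, 0.12]

    w = [1 / s**2 for s in se]                               # inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))     # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0      # I^2 in percent
    print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
    ```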

  18. Accurate clinical detection of exon copy number variants in a targeted NGS panel using DECoN.

    PubMed

    Fowler, Anna; Mahamdallie, Shazia; Ruark, Elise; Seal, Sheila; Ramsay, Emma; Clarke, Matthew; Uddin, Imran; Wylie, Harriet; Strydom, Ann; Lunter, Gerton; Rahman, Nazneen

    2016-11-25

    Background: Targeted next-generation sequencing (NGS) panels are increasingly being used in clinical genomics to increase the capacity, throughput and affordability of gene testing. Identifying whole exon deletions or duplications (termed exon copy number variants, 'exon CNVs') in exon-targeted NGS panels has proved challenging, particularly for single exon CNVs. Methods: We developed a tool for the Detection of Exon Copy Number variants (DECoN), which is optimised for analysis of exon-targeted NGS panels in the clinical setting. We evaluated DECoN performance using 96 samples with independently validated exon CNV data. We performed simulations to evaluate DECoN detection performance for single exon CNVs and to evaluate performance using different coverage levels and sample numbers. Finally, we implemented DECoN in a clinical laboratory that tests BRCA1 and BRCA2 with the TruSight Cancer Panel (TSCP). We used DECoN to analyse 1,919 samples, validating exon CNV detections by multiplex ligation-dependent probe amplification (MLPA). Results: In the evaluation set, DECoN achieved 100% sensitivity and 99% specificity for BRCA exon CNVs, including identification of 8 single exon CNVs. DECoN also identified 14/15 exon CNVs in 8 other genes. Simulations of all possible BRCA single exon CNVs gave a mean sensitivity of 98% for deletions and 95% for duplications. DECoN performance remained excellent with different levels of coverage and sample numbers; sensitivity and specificity were >98% with typical NGS run parameters. In the clinical pipeline, DECoN automatically analyses pools of 48 samples at a time, taking 24 minutes per pool, on average. DECoN detected 24 BRCA exon CNVs, of which 23 were confirmed by MLPA, giving a false discovery rate of 4%. Specificity was 99.7%. Conclusions: DECoN is a fast, accurate exon CNV detection tool readily implementable in research and clinical NGS pipelines. It has high sensitivity and specificity and an acceptable false discovery rate. DECoN is freely available at www.icr.ac.uk/decon.
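
    Read-depth exon CNV callers of this general type compare each exon's normalized coverage in a test sample against a reference panel. The sketch below is a deliberately simplified z-score illustration of that idea; it is not DECoN's actual statistical model:

    ```python
    import numpy as np

    # Simplified sketch of read-depth exon CNV calling: compare each exon's
    # normalized coverage in a test sample against a reference panel of
    # samples. DECoN's real model is considerably more sophisticated.

    rng = np.random.default_rng(3)
    panel = rng.poisson(200, size=(48, 20)).astype(float)    # 48 samples x 20 exons
    panel /= panel.sum(axis=1, keepdims=True)                # per-sample normalization

    test = rng.poisson(200, size=20).astype(float)
    test[7] *= 0.5                                           # simulate one exon deletion
    test /= test.sum()

    z = (test - panel.mean(axis=0)) / panel.std(axis=0, ddof=1)
    for i in np.where(np.abs(z) > 3)[0]:
        call = "deletion" if z[i] < 0 else "duplication"
        print(f"exon {i}: z={z[i]:.1f} -> candidate {call}")
    ```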

  19. Comparison of the sensitivity of the UKCAT and A Levels to sociodemographic characteristics: a national study

    PubMed Central

    2014-01-01

    Background: The UK Clinical Aptitude Test (UKCAT) was introduced to facilitate widening participation in medical and dental education in the UK by providing universities with a continuous variable to aid selection; one that might be less sensitive to the sociodemographic background of candidates compared to traditional measures of educational attainment. Initial research suggested that males, candidates from more advantaged socioeconomic backgrounds and those who attended independent or grammar schools performed better on the test. The introduction of the A* grade at A level permits more detailed analysis of the relationship between UKCAT scores, secondary educational attainment and sociodemographic variables. Thus, our aim was to further assess whether the UKCAT is likely to add incremental value over A level (predicted or actual) attainment in the selection process. Methods: Data relating to UKCAT and A level performance from 8,180 candidates applying to medicine in 2009 who had complete information relating to six key sociodemographic variables were analysed. A series of regression analyses were conducted in order to evaluate the ability of sociodemographic status to predict performance on two outcome measures: A level 'best of three' tariff score; and the UKCAT scores. Results: In this sample A level attainment was independently and positively predicted by four sociodemographic variables (independent/grammar schooling, White ethnicity, age and professional social class background). These variables also independently and positively predicted UKCAT scores. There was a suggestion that UKCAT scores were less sensitive to educational background compared to A level attainment. In contrast to A level attainment, UKCAT score was independently and positively predicted by having English as a first language and male sex. Conclusions: Our findings are consistent with a previous report; most of the sociodemographic factors that predict A level attainment also predict UKCAT performance. However, compared to A levels, males and those speaking English as a first language perform better on UKCAT. Our findings suggest that UKCAT scores may be more influenced by sex and less sensitive to school type compared to A levels. These factors must be considered by institutions utilising the UKCAT as a component of the medical and dental school selection process. PMID:24400861

  20. Search for resonant $t\bar{t}$ production in proton-proton collisions at √s = 8 TeV

    DOE PAGES

    Khachatryan, Vardan

    2016-01-08

    A search is performed for the production of heavy resonances decaying into top-antitop quark pairs in proton-proton collisions at √s = 8 TeV. Data used for the analyses were collected with the CMS detector and correspond to an integrated luminosity of 19.7 fb⁻¹. The search is performed using events with three different final states, defined by the number of leptons (electrons and muons) from the $t\bar{t} \rightarrow WbWb$ decay. The analyses are optimized for reconstruction of top quarks with high Lorentz boosts, where jet substructure techniques are used to enhance the sensitivity. Results are presented for all channels and a combination is performed. No significant excess of events relative to the expected yield from standard model processes is observed.

  1. Microfluidics, Chromatography, and Atomic-Force Microscopy

    NASA Technical Reports Server (NTRS)

    Anderson, Mark

    2008-01-01

    A Raman-and-atomic-force microscope (RAFM) has been shown to be capable of performing several liquid-transfer and sensory functions essential for the operation of a microfluidic laboratory on a chip that would be used to perform rapid, sensitive chromatographic and spectrochemical analyses of unprecedentedly small quantities of liquids. The most novel aspect of this development lies in the exploitation of capillary and shear effects at the atomic-force-microscope (AFM) tip to produce shear-driven flow of liquids along open microchannels of a microfluidic device. The RAFM can also be used to perform such functions as imaging liquids in microchannels; removing liquid samples from channels for very sensitive, tip-localized spectrochemical analyses; measuring a quantity of liquid adhering to the tip; and dip-pen deposition from a chromatographic device. A commercial Raman-spectroscopy system and a commercial AFM were integrated to make the RAFM so as to be able to perform simultaneous topographical AFM imaging and surface-enhanced Raman spectroscopy (SERS) at the AFM tip. The Raman-spectroscopy system includes a Raman microprobe attached to an optical microscope, the translation stage of which is modified to accommodate the AFM head. The Raman laser excitation beam, which is aimed at the AFM tip, has a wavelength of 785 nm and a diameter of about 5 µm, and its power is adjustable up to 10 mW. The AFM tip is coated with gold to enable tip-localized SERS.

  2. Systems cost/performance analysis (study 2.3). Volume 3: Programmer's manual and user's guide. [for unmanned spacecraft

    NASA Technical Reports Server (NTRS)

    Janz, R. F.

    1974-01-01

    The systems cost/performance model was implemented as a digital computer program to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses. The computer is described along with the operating environment in which the program was written and checked, the program specifications such as discussions of logic and computational flow, the different subsystem models involved in the design of the spacecraft, and routines involved in the nondesign area such as costing and scheduling of the design. Preliminary results for the DSCS-II design are also included.

  3. Pharmacoeconomic evaluations of pharmacogenetic and genomic screening programmes: a systematic review on content and adherence to guidelines.

    PubMed

    Vegter, Stefan; Boersma, Cornelis; Rozenbaum, Mark; Wilffert, Bob; Navis, Gerjan; Postma, Maarten J

    2008-01-01

    The fields of pharmacogenetics and pharmacogenomics have become important practical tools to progress goals in medical and pharmaceutical research and development. As more screening tests are being developed, with some already used in clinical practice, consideration of cost-effectiveness implications is important. A systematic review was performed on the content of and adherence to pharmacoeconomic guidelines of recent pharmacoeconomic analyses performed in the field of pharmacogenetics and pharmacogenomics. Economic analyses of screening strategies for genetic variations, which were evidence-based and assumed to be associated with drug efficacy or safety, were included in the review. The 20 papers included cover a variety of healthcare issues, including screening tests on several cytochrome P450 (CYP) enzyme genes, thiopurine S-methyltransferase (TPMT) and angiotensin-converting enzyme insertion/deletion (ACE I/D) polymorphisms. Most economic analyses reported that genetic screening was cost effective and often even clearly dominated existing non-screening strategies. However, we found a lack of standardization regarding aspects such as the perspective of the analysis, factors included in the sensitivity analysis and the applied discount rates. In particular, an important limitation of several studies related to the failure to provide a sufficient evidence-based rationale for an association between genotype and phenotype. Future economic analyses should be conducted utilizing correct methods, with adherence to guidelines and including extensive sensitivity analyses. Most importantly, genetic screening strategies should be based on good evidence-based rationales. For these goals, we provide a list of recommendations for good pharmacoeconomic practice deemed useful in the fields of pharmacogenetics and pharmacogenomics, regardless of country and origin of the economic analysis.
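
    A worked toy example of the core arithmetic such analyses audit, discounting and an incremental cost-effectiveness ratio (ICER); the 3% discount rate and all figures below are placeholders, not values from the reviewed studies:

        def present_value(yearly, rate):
            # Discount a stream of yearly values back to year 0.
            return sum(v / (1 + rate) ** t for t, v in enumerate(yearly))

        # Hypothetical screening vs. no screening over five years: an upfront
        # test cost, partly offset by avoided adverse-event costs, buys small
        # yearly QALY gains. Costs and effects are discounted at the same rate.
        rate = 0.03
        delta_cost = present_value([200, -40, -40, -40, -40], rate)
        delta_qaly = present_value([0.0, 0.005, 0.005, 0.005, 0.005], rate)
        print("ICER (cost per QALY):", round(delta_cost / delta_qaly))  # ~2760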

  4. The BILAG-2004 index is sensitive to change for assessment of SLE disease activity.

    PubMed

    Yee, Chee-Seng; Farewell, Vernon; Isenberg, David A; Griffiths, Bridget; Teh, Lee-Suan; Bruce, Ian N; Ahmad, Yasmeen; Rahman, Anisur; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Edwards, Christopher; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Gordon, Caroline

    2009-06-01

    To determine if the BILAG-2004 index is sensitive to change for assessment of SLE disease activity. This was a prospective multi-centre longitudinal study of SLE patients. At every assessment, data were collected on disease activity (BILAG-2004 index) and treatment. Analyses were performed using the overall BILAG-2004 index score (as determined by the highest score achieved by any of the individual systems) and all the system scores. Sensitivity to change was assessed by determining the relationship between change in disease activity and change in therapy between two consecutive visits. Statistical analyses were performed using multinomial logistic regression. There were 1761 assessments from 347 SLE patients that contributed 1414 observations for analysis. An increase in therapy between visits occurred in 22.7% of observations, while 37.3% had a decrease in therapy and in 40.0% therapy was unchanged. Increase in overall BILAG-2004 index score was associated with increase in therapy and inversely associated with decrease in therapy. Decrease in overall BILAG-2004 index score was associated with decrease in therapy and was inversely associated with increase in therapy. Changes in overall BILAG-2004 index score were differentially related to change in therapy, with greater change in score having greater predictive power. Increase in the scores of most systems was independently associated with an increase in treatment, and there was no significant association between decreases in the score of any system and an increase in therapy. The BILAG-2004 index is sensitive to change and is suitable for use in longitudinal studies of SLE.
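
    A minimal sketch of the multinomial logistic regression described above, on simulated data since the study data are not available; the outcome coding and effect sizes are assumptions:

        import numpy as np
        import statsmodels.api as sm

        # Hypothetical reconstruction: delta is the change in overall
        # BILAG-2004 score between consecutive visits; therapy is coded
        # 0 = decreased, 1 = unchanged, 2 = increased (0 is the reference).
        rng = np.random.default_rng(0)
        n = 1000
        delta = rng.normal(0.0, 2.0, n)
        eta = np.column_stack([np.zeros(n), 0.3 * delta, 0.8 * delta])
        p = np.exp(eta) / np.exp(eta).sum(axis=1, keepdims=True)
        therapy = np.array([rng.choice(3, p=row) for row in p])

        fit = sm.MNLogit(therapy, sm.add_constant(delta)).fit(disp=False)
        print(fit.params)   # one intercept/slope pair per non-reference outcome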

  5. MicroRNA-203 Modulates the Radiation Sensitivity of Human Malignant Glioma Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Ji Hyun; Hwang, Yeo Hyun; Lee, David J.

    Purpose: We investigated whether miR-203 could modulate the radiation sensitivity of glioblastoma (GBM) cells and which target gene(s) could be involved. Methods and Materials: Three human malignant glioma (MG) cell lines and normal human astrocytes were transfected with control microRNA, pre-miR-203, or antisense miR-203. Real-time PCR (RT-PCR), clonogenic assays, immunofluorescence, and invasion/migration assays were performed. To predict the target(s), bioinformatics analyses using microRNA target databases were performed. Results: Overexpression of miR-203 increased the radiation sensitivity of all 3 human MG cell lines and prolonged radiation-induced γ-H2AX foci formation. Bioinformatics analyses suggested that miR-203 could be involved in post-transcriptional control of DNA repair, PI3K/AKT, SRC, and JAK/STAT3 and the vascular signaling pathway. Western blot analysis validated the fact that miR-203 downregulated ATM, RAD51, SRC, PLD2, PI3K-AKT, JAK-STAT3, VEGF, HIF-1α, and MMP2. Overexpression of miR-203 inhibited invasion and migration potentials, downregulated SLUG and Vimentin, and upregulated Claudin-1 and ZO1. Conclusions: These data demonstrate that miR-203 potentially controls DNA damage repair via the PI3K/AKT and JAK/STAT3 pathways and may collectively contribute to the modulation of radiation sensitivity in MG cells by inhibiting DNA damage repair, prosurvival signaling, and epithelium-mesenchyme transition. Taken together, these findings demonstrate that miR-203 could be a target for overcoming the radiation resistance of GBM.

  6. The BILAG-2004 index is sensitive to change for assessment of SLE disease activity

    PubMed Central

    Farewell, Vernon; Isenberg, David A.; Griffiths, Bridget; Teh, Lee-Suan; Bruce, Ian N.; Ahmad, Yasmeen; Rahman, Anisur; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; Edwards, Christopher; D’Cruz, David; Khamashta, Munther A.; Maddison, Peter; Gordon, Caroline

    2009-01-01

    Objective. To determine if the BILAG-2004 index is sensitive to change for assessment of SLE disease activity. Methods. This was a prospective multi-centre longitudinal study of SLE patients. At every assessment, data were collected on disease activity (BILAG-2004 index) and treatment. Analyses were performed using the overall BILAG-2004 index score (as determined by the highest score achieved by any of the individual systems) and all the system scores. Sensitivity to change was assessed by determining the relationship between change in disease activity and change in therapy between two consecutive visits. Statistical analyses were performed using multinomial logistic regression. Results. There were 1761 assessments from 347 SLE patients that contributed 1414 observations for analysis. An increase in therapy between visits occurred in 22.7% of observations, while 37.3% had a decrease in therapy and in 40.0% therapy was unchanged. Increase in overall BILAG-2004 index score was associated with increase in therapy and inversely associated with decrease in therapy. Decrease in overall BILAG-2004 index score was associated with decrease in therapy and was inversely associated with increase in therapy. Changes in overall BILAG-2004 index score were differentially related to change in therapy, with greater change in score having greater predictive power. Increase in the scores of most systems was independently associated with an increase in treatment, and there was no significant association between decreases in the score of any system and an increase in therapy. Conclusions. The BILAG-2004 index is sensitive to change and is suitable for use in longitudinal studies of SLE. PMID:19395542

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hadgu, Teklu; Appel, Gordon John

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  8. Neptune Aerocapture Systems Analysis

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae

    2004-01-01

    A Neptune Aerocapture Systems Analysis is completed to determine the feasibility, benefit and risk of an aeroshell aerocapture system for Neptune and to identify technology gaps and technology performance goals. The high fidelity systems analysis is completed by a five center NASA team and includes the following disciplines and analyses: science; mission design; aeroshell configuration screening and definition; interplanetary navigation analyses; atmosphere modeling; computational fluid dynamics for aerodynamic performance and database definition; initial stability analyses; guidance development; atmospheric flight simulation; computational fluid dynamics and radiation analyses for aeroheating environment definition; thermal protection system design, concepts and sizing; mass properties; structures; spacecraft design and packaging; and mass sensitivities. Results show that aerocapture can deliver 1.4 times more mass to Neptune orbit than an all-propulsive system for the same launch vehicle. In addition aerocapture results in a 3-4 year reduction in trip time compared to all-propulsive systems. Aerocapture is feasible and performance is adequate for the Neptune aerocapture mission. Monte Carlo simulation results show 100% successful capture for all cases including conservative assumptions on atmosphere and navigation. Enabling technologies for this mission include TPS manufacturing; and aerothermodynamic methods and validation for determining coupled 3-D convection, radiation and ablation aeroheating rates and loads, and the effects on surface recession.

  9. Optimizing a multi-product closed-loop supply chain using NSGA-II, MOSA, and MOPSO meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar

    2018-07-01

    This study discusses the solution methodology for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is presented as representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC that involves three objective functions: maximizing the profit and minimizing the total risk and the shortages of products. Since three objective functions are considered, a multi-objective solution methodology can be advantageous. Therefore, several approaches have been studied: an NSGA-II algorithm is first utilized, and then the results are validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, the number of Pareto solutions, and CPU time. In order to enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.
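
    Two of the four comparison criteria are simple to compute once a Pareto front is in hand. A minimal sketch, assuming all objectives are expressed as minimisation and using one common variant of each metric (the paper's exact definitions may differ):

        import numpy as np

        def mean_ideal_distance(front):
            # Mean Euclidean distance of the non-dominated points from the
            # ideal point (best value of each objective over the front);
            # smaller is better.
            ideal = front.min(axis=0)
            return np.linalg.norm(front - ideal, axis=1).mean()

        def spread(front):
            # Diagonal of the front's bounding box, one common variant of
            # the spread-of-non-dominated-solutions metric; larger means
            # the front covers more of objective space.
            return np.linalg.norm(front.max(axis=0) - front.min(axis=0))

        pareto = np.array([[0.2, 5.0], [0.5, 3.1], [0.9, 1.2]])
        print(mean_ideal_distance(pareto), spread(pareto))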

  10. Optimizing a multi-product closed-loop supply chain using NSGA-II, MOSA, and MOPSO meta-heuristic algorithms

    NASA Astrophysics Data System (ADS)

    Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar

    2017-07-01

    This study discusses the solution methodology for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is presented as representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC that involves three objective functions: maximizing the profit and minimizing the total risk and the shortages of products. Since three objective functions are considered, a multi-objective solution methodology can be advantageous. Therefore, several approaches have been studied: an NSGA-II algorithm is first utilized, and then the results are validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, the number of Pareto solutions, and CPU time. In order to enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed and the computational results are presented based on the sensitivity analyses in parameter tuning.

  11. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    NASA Technical Reports Server (NTRS)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
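
    The overall finite difference approach described above amounts to repeating the transient analysis at a perturbed design and differencing the responses. A minimal sketch on a stand-in single-DOF problem (the study uses reduced-basis finite element models; the oscillator, parameters, and step size here are illustrative):

        def peak_displacement(k):
            # Stand-in transient analysis: peak response of a damped 1-DOF
            # oscillator (mass 1, damping 0.1, step load 1) with stiffness k,
            # integrated by semi-implicit Euler over 5 seconds.
            m, c, f, dt, steps = 1.0, 0.1, 1.0, 1.0e-3, 5000
            u = v = peak = 0.0
            for _ in range(steps):
                a = (f - c * v - k * u) / m
                v += a * dt
                u += v * dt
                peak = max(peak, abs(u))
            return peak

        def fd_sensitivity(k, rel_step=1e-4):
            # "Overall finite difference": rerun the whole analysis at a
            # perturbed design and difference the responses.
            h = rel_step * k
            return (peak_displacement(k + h) - peak_displacement(k)) / h

        print(fd_sensitivity(100.0))   # d(peak displacement)/d(stiffness)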

  12. Stability of Reference Genes for Real-Time PCR Analyses in Channel Catfish (Ictalurus punctatus) Tissues Under Varying Physiological Conditions

    USDA-ARS?s Scientific Manuscript database

    Real-time PCR is a highly sensitive, relatively easy to perform assay for quantifying mRNA abundance. However, there are several complexities built into the assay that can affect data interpretation. Most notably, the selection of an appropriate internal control for normalization is essential for ...

  13. Users guide for EASI graphics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sasser, D.W.

    1978-03-01

    EASI (Estimate of Adversary Sequence Interruption) is an analytical technique for measuring the effectiveness of physical protection systems. EASI Graphics is a computer graphics extension of EASI which provides a capability for performing sensitivity and trade-off analyses of the parameters of a physical protection system. This document reports on the implementation of EASI Graphics and illustrates its application with some examples.

  14. Using SPEEDES to simulate the blue gene interconnect network

    NASA Technical Reports Server (NTRS)

    Springer, P.; Upchurch, E.

    2003-01-01

    JPL and the Center for Advanced Computer Architecture (CACR) are conducting application and simulation analyses of BG/L in order to establish a range of effectiveness for the Blue Gene/L MPP architecture in performing important classes of computations and to determine the design sensitivity of the global interconnect network in support of real-world ASCI application execution.

  15. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: An example from a vertigo phase III study with longitudinal count data as primary endpoint

    PubMed Central

    2012-01-01

    Background A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. Methods We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). Results The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. Conclusions The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint. PMID:22962944
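
    A minimal sketch of the leave-one-out mean logarithmic score used above for model comparison, applied to two simple count models rather than the paper's INLA-fitted GLMMs; the simulated data and method-of-moments fits are illustrative only:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        y = stats.nbinom.rvs(2, 0.2, size=200, random_state=rng)  # overdispersed counts

        def loo_mean_log_score(y, pmf_from_train):
            # Negative log predictive density of each held-out point under a
            # model fitted to the rest; the mean ranks models (smaller is
            # better with this sign convention).
            scores = [-np.log(pmf_from_train(np.delete(y, i), y[i]))
                      for i in range(len(y))]
            return np.mean(scores)

        def poisson(train, yi):
            return stats.poisson.pmf(yi, train.mean())

        def negbin(train, yi):
            m, v = train.mean(), train.var(ddof=1)
            r = m ** 2 / max(v - m, 1e-9)       # method-of-moments fit
            return stats.nbinom.pmf(yi, r, r / (r + m))

        # The negative binomial should score lower on overdispersed counts,
        # mirroring the paper's point about naive Poisson random effects models.
        print(loo_mean_log_score(y, poisson), loo_mean_log_score(y, negbin))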

  16. Bayesian model selection techniques as decision support for shaping a statistical analysis plan of a clinical trial: an example from a vertigo phase III study with longitudinal count data as primary endpoint.

    PubMed

    Adrion, Christine; Mansmann, Ulrich

    2012-09-10

    A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.

  17. Performance Tested Method multiple laboratory validation study of ELISA-based assays for the detection of peanuts in food.

    PubMed

    Park, Douglas L; Coates, Scott; Brewer, Vickery A; Garber, Eric A E; Abouzied, Mohamed; Johnson, Kurt; Ritter, Bruce; McKenzie, Deborah

    2005-01-01

    Performance Tested Method multiple laboratory validations for the detection of peanut protein in 4 different food matrixes were conducted under the auspices of the AOAC Research Institute. In this blind study, 3 commercially available ELISA test kits were validated: Neogen Veratox for Peanut, R-Biopharm RIDASCREEN FAST Peanut, and Tepnel BioKits for Peanut Assay. The food matrixes used were breakfast cereal, cookies, ice cream, and milk chocolate spiked at 0 and 5 ppm peanut. Analyses of the samples were conducted by laboratories representing industry and international and U.S. governmental agencies. All 3 commercial test kits successfully identified spiked and peanut-free samples. The validation study required 60 analyses on test samples at the target level of 5 µg peanut/g food and 60 analyses at a peanut-free level, which was designed to ensure that the lower 95% confidence limit for the sensitivity and specificity would not be <90%. The probability that a test sample contains an allergen given a prevalence rate of 5% and a positive test result using a single test kit analysis with 95% sensitivity and 95% specificity, which was demonstrated for these test kits, would be 50%. When 2 test kits are run simultaneously on all samples, the probability becomes 95%. It is therefore recommended that all field samples be analyzed with at least 2 of the validated kits.
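
    The 50% and 95% probabilities quoted above follow directly from Bayes' rule; a short check in Python:

        def ppv(prevalence, sensitivity, specificity, n_kits=1):
            # Probability the sample truly contains the allergen given that
            # all n_kits kits test positive (kits assumed independent, an
            # idealisation).
            true_pos = prevalence * sensitivity ** n_kits
            false_pos = (1 - prevalence) * (1 - specificity) ** n_kits
            return true_pos / (true_pos + false_pos)

        print(ppv(0.05, 0.95, 0.95))            # 0.50 with a single kit
        print(ppv(0.05, 0.95, 0.95, n_kits=2))  # 0.95 when two kits agree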

  18. Responding to nonwords in the lexical decision task: Insights from the English Lexicon Project.

    PubMed

    Yap, Melvin J; Sibley, Daragh E; Balota, David A; Ratcliff, Roger; Rueckl, Jay

    2015-05-01

    Researchers have extensively documented how various statistical properties of words (e.g., word frequency) influence lexical processing. However, the impact of lexical variables on nonword decision-making performance is less clear. This gap is surprising, because a better specification of the mechanisms driving nonword responses may provide valuable insights into early lexical processes. In the present study, item-level and participant-level analyses were conducted on the trial-level lexical decision data for almost 37,000 nonwords in the English Lexicon Project in order to identify the influence of different psycholinguistic variables on nonword lexical decision performance and to explore individual differences in how participants respond to nonwords. Item-level regression analyses reveal that nonword response time was positively correlated with number of letters, number of orthographic neighbors, number of affixes, and base-word number of syllables, and negatively correlated with Levenshtein orthographic distance and base-word frequency. Participant-level analyses also point to within- and between-session stability in nonword responses across distinct sets of items, and intriguingly reveal that higher vocabulary knowledge is associated with less sensitivity to some dimensions (e.g., number of letters) but more sensitivity to others (e.g., base-word frequency). The present findings provide well-specified and interesting new constraints for informing models of word recognition and lexical decision. (c) 2015 APA, all rights reserved.

  19. Establishment of CMab-43, a Sensitive and Specific Anti-CD133 Monoclonal Antibody, for Immunohistochemistry.

    PubMed

    Itai, Shunsuke; Fujii, Yuki; Nakamura, Takuro; Chang, Yao-Wen; Yanaka, Miyuki; Saidoh, Noriko; Handa, Saori; Suzuki, Hiroyoshi; Harada, Hiroyuki; Yamada, Shinji; Kaneko, Mika K; Kato, Yukinari

    2017-10-01

    CD133, also known as prominin-1, was first described as a cell surface marker on early progenitor and hematopoietic stem cells. It is a five-domain transmembrane protein composed of an N-terminal extracellular tail, two small cytoplasmic loops, two large extracellular loops containing seven potential glycosylation sites, and a short C-terminal intracellular tail. CD133 has been used as a marker to identify cancer stem cells derived from primary solid tumors and as a prognostic marker of gliomas. Herein, we developed novel anti-CD133 monoclonal antibodies (mAbs) and characterized their efficacy in flow cytometry, Western blot, and immunohistochemical analyses. We expressed the full length of CD133 in LN229 glioblastoma cells, immunized mice with LN229/CD133 cells, and performed the first screening using flow cytometry. After limiting dilution, we established 100 anti-CD133 mAbs, reacting with LN229/CD133 cells but not with LN229 cells. Subsequently, we performed the second and third screening with Western blot and immunohistochemical analyses, respectively. Among 100 mAbs, 11 strongly reacted with CD133 in Western blot analysis. One of 11 clones, CMab-43 (IgG2a, kappa), showed a sensitive and specific reaction against colon cancer cells, warranting the use of CMab-43 in detecting CD133 in pathological analyses of CD133-expressing cancers.

  20. Performance of vegetation indices from Landsat time series in deforestation monitoring

    NASA Astrophysics Data System (ADS)

    Schultz, Michael; Clevers, Jan G. P. W.; Carter, Sarah; Verbesselt, Jan; Avitabile, Valerio; Quang, Hien Vu; Herold, Martin

    2016-10-01

    The performance of Landsat time series (LTS) of eight vegetation indices (VIs) was assessed for monitoring deforestation across the tropics. Three sites were selected based on differing remote sensing observation frequencies, deforestation drivers and environmental factors. The LTS of each VI was analysed using the Breaks For Additive Season and Trend (BFAST) Monitor method to identify deforestation. A robust reference database was used to evaluate the performance regarding spatial accuracy, sensitivity to observation frequency and combined use of multiple VIs. The canopy cover sensitive Normalized Difference Fraction Index (NDFI) was the most accurate. Among those tested, wetness related VIs (Normalized Difference Moisture Index (NDMI) and the Tasselled Cap wetness (TCw)) were spatially more accurate than greenness related VIs (Normalized Difference Vegetation Index (NDVI) and Tasselled Cap greenness (TCg)). When VIs were fused on feature level, spatial accuracy was improved and overestimation of change reduced. NDVI and NDFI produced the most robust results when observation frequency varies.

  1. Amplitude analysis of four-body decays using a massively-parallel fitting framework

    NASA Astrophysics Data System (ADS)

    Hasse, C.; Albrecht, J.; Alves, A. A., Jr.; d'Argent, P.; Evans, T. D.; Rademacker, J.; Sokoloff, M. D.

    2017-10-01

    The GooFit Framework is designed to perform maximum-likelihood fits for arbitrary functions on various parallel back ends, for example a GPU. We present an extension to GooFit which adds the functionality to perform time-dependent amplitude analyses of pseudoscalar mesons decaying into four pseudoscalar final states. Benchmarks of this functionality show a significant performance increase when utilizing a GPU compared to a CPU. Furthermore, this extension is employed to study the sensitivity to the $D^0$-$\bar{D}^0$ mixing parameters x and y in a time-dependent amplitude analysis of the decay $D^0 \to K^+\pi^-\pi^+\pi^-$. Studying a sample of 50,000 events and setting the central values to the world average of x = (0.49 ± 0.15)% and y = (0.61 ± 0.08)%, the statistical sensitivities of x and y are determined to be σ(x) = 0.019% and σ(y) = 0.019%.

  2. Choking under social pressure: social monitoring among the lonely.

    PubMed

    Knowles, Megan L; Lucas, Gale M; Baumeister, Roy F; Gardner, Wendi L

    2015-06-01

    Lonely individuals may decode social cues well but have difficulty putting such skills to use precisely when they need them--in social situations. In four studies, we examined whether lonely people choke under social pressure by asking participants to complete social sensitivity tasks framed as diagnostic of social skills or nonsocial skills. Across studies, lonely participants performed worse than nonlonely participants on social sensitivity tasks framed as tests of social aptitude, but they performed just as well or better than the nonlonely when the same tasks were framed as tests of academic aptitude. Mediational analyses in Study 3 and misattribution effects in Study 4 indicate that anxiety plays an important role in this choking effect. This research suggests that lonely individuals may not need to acquire social skills to escape loneliness; instead, they must learn to cope with performance anxiety in interpersonal interactions. © 2015 by the Society for Personality and Social Psychology, Inc.

  3. Analyses of microstructural and elastic properties of porous SOFC cathodes based on focused ion beam tomography

    NASA Astrophysics Data System (ADS)

    Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan

    2015-01-01

    Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of the raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated, and analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying effects on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges resulted in different degrees of sensitivity for a specific parameter. The estimated porosity and tortuosity were more sensitive than the surface-area-to-volume ratio. Pore and neck sizes were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.

  4. Which Measures of Online Control Are Least Sensitive to Offline Processes?

    PubMed

    de Grosbois, John; Tremblay, Luc

    2018-02-28

    A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.
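
    One plausible reading of the Z² measure (an assumption; the study's exact computation may differ) squares the Fisher-transformed trial-to-trial correlation between limb position at 75% of movement time and at movement end:

        import numpy as np

        def z_squared(pos_75, pos_end):
            # High values suggest endpoints were largely determined early
            # in the movement (offline planning); low values suggest later
            # online corrections decoupled mid-flight and final positions.
            r = np.corrcoef(pos_75, pos_end)[0, 1]
            return np.arctanh(r) ** 2   # Fisher r-to-z, then squared

        rng = np.random.default_rng(0)
        early = rng.normal(10.0, 0.5, 40)                    # cm, 40 reaches
        print(z_squared(early, early + rng.normal(0, 0.3, 40)))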

  5. The diagnostic performance of shear-wave elastography for liver fibrosis in children and adolescents: A systematic review and diagnostic meta-analysis.

    PubMed

    Kim, Jeong Rye; Suh, Chong Hyun; Yoon, Hee Mang; Lee, Jin Seong; Cho, Young Ah; Jung, Ah Young

    2018-03-01

    To assess the diagnostic performance of shear-wave elastography for determining the severity of liver fibrosis in children and adolescents. An electronic literature search of PubMed and EMBASE was conducted. Bivariate modelling and hierarchical summary receiver-operating-characteristic modelling were performed to evaluate the diagnostic performance of shear-wave elastography. Meta-regression and subgroup analyses according to the modality of shear-wave imaging and the degree of liver fibrosis were also performed. Twelve eligible studies with 550 patients were included. Shear-wave elastography showed a summary sensitivity of 81 % (95 % CI: 71-88) and a specificity of 91 % (95 % CI: 83-96) for the prediction of significant liver fibrosis. The number of measurements of shear-wave elastography performed was a significant factor influencing study heterogeneity. Subgroup analysis revealed shear-wave elastography to have an excellent diagnostic performance according to each degree of liver fibrosis. Supersonic shear imaging (SSI) had a higher sensitivity (p<.01) and specificity (p<.01) than acoustic radiation force impulse imaging (ARFI). Shear-wave elastography is an excellent modality for the evaluation of the severity of liver fibrosis in children and adolescents. Compared with ARFI, SSI showed better diagnostic performance for prediction of significant liver fibrosis. • Shear-wave elastography is beneficial for determining liver fibrosis severity in children. • Shear-wave elastography showed summary sensitivity of 81 %, specificity of 91 %. • SSI showed better diagnostic performance than ARFI for significant liver fibrosis.

  6. NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flach, G.

    2009-02-28

    The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.

  7. Analysis of Lidar Remote Sensing Concepts

    NASA Technical Reports Server (NTRS)

    Spiers, Gary D.

    1999-01-01

    Line-of-sight velocity and measurement position sensitivity analyses for an orbiting coherent Doppler lidar are developed and applied to two lidars: one with a nadir angle of 30 deg. in a 300 km altitude, 58 deg. inclination orbit, and the second with a 45 deg. nadir angle in an 833 km altitude, 89 deg. inclination orbit. The influence of orbit-related effects on the backscatter sensitivity of a coherent Doppler lidar is also discussed. Draft performance estimates, error budgets, and payload accommodation requirements for the SPARCLE (Space Readiness Coherent Lidar Experiment) instrument were also developed and documented.

  8. Progress in multidisciplinary design optimization at NASA Langley

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.

    1993-01-01

    Multidisciplinary Design Optimization refers to some combination of disciplinary analyses, sensitivity analysis, and optimization techniques used to design complex engineering systems. The ultimate objective of this research at NASA Langley Research Center is to help the US industry reduce the costs associated with development, manufacturing, and maintenance of aerospace vehicles while improving system performance. This report reviews progress towards this objective and highlights topics for future research. Aerospace design problems selected from the author's research illustrate strengths and weaknesses in existing multidisciplinary optimization techniques. The techniques discussed include multiobjective optimization, global sensitivity equations and sequential linear programming.

  9. Methods for comparative evaluation of propulsion system designs for supersonic aircraft

    NASA Technical Reports Server (NTRS)

    Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.

    1976-01-01

    The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.

  10. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
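
    The paper's specific technique is not reproduced here, but a generic perturbation-based variant conveys the idea: train a network, nudge one input at a time, and measure the change in predicted survival probability. The data, covariates, and effect sizes below are simulated assumptions:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(2)
        X = rng.normal(size=(1000, 4))                   # hypothetical trauma covariates
        logit = 2 * X[:, 0] - 1.5 * X[:, 1] + 0.2 * X[:, 2]
        y = rng.random(1000) < 1 / (1 + np.exp(-logit))  # survival outcome

        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                            random_state=0).fit(X, y)

        # Mean absolute change in predicted survival probability when one
        # input is shifted by a tenth of its standard deviation: a crude
        # importance ranking of the inputs.
        base = clf.predict_proba(X)[:, 1]
        for j in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, j] += X[:, j].std() * 0.1
            print(j, np.abs(clf.predict_proba(Xp)[:, 1] - base).mean())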

  11. Using global sensitivity analysis to understand higher order interactions in complex models: an application of GSA on the Revised Universal Soil Loss Equation (RUSLE) to quantify model sensitivity and implications for ecosystem services management in Costa Rica

    NASA Astrophysics Data System (ADS)

    Fremier, A. K.; Estrada Carmona, N.; Harper, E.; DeClerck, F.

    2011-12-01

    Appropriate application of complex models to estimate system behavior requires understanding the influence of model structure and parameter estimates on model output. To date, most researchers perform local sensitivity analyses, rather than global, because of computational time and the quantity of data produced. Local sensitivity analyses are limited in quantifying the higher order interactions among parameters, which could lead to incomplete analysis of model behavior. To address this concern, we performed a GSA on a commonly applied equation for soil loss: the Revised Universal Soil Loss Equation (RUSLE). USLE is an empirical model built on plot-scale data from the USA, and the Revised version (RUSLE) includes improved equations for wider conditions, with 25 parameters grouped into six factors to estimate long-term plot- and watershed-scale soil loss. Despite RUSLE's widespread application, a complete sensitivity analysis has yet to be performed. In this research, we applied a GSA to plot- and watershed-scale data from the US and Costa Rica to parameterize the RUSLE in an effort to understand the relative importance of model factors and parameters across a wide environmental space. We analyzed the GSA results using Random Forest, a statistical approach to evaluate parameter importance accounting for higher order interactions, and used Classification and Regression Trees to show the dominant trends in complex interactions. In all GSA calculations the management of cover crops (C factor) ranks highest among factors (compared to rain-runoff erosivity, topography, support practices, and soil erodibility). This is counter to previous sensitivity analyses, where the topographic factor was determined to be the most important. The GSA finding is consistent across multiple model runs, including data from the US, Costa Rica, and a synthetic dataset of the widest theoretical space. The three most important parameters were: mass density of live and dead roots found in the upper inch of soil (C factor), slope angle (L and S factors), and percentage of land area covered by surface cover (C factor). Our findings give further support to the importance of vegetation as a provider of a vital ecosystem service: soil loss reduction. Concurrently, progress has already been made in Costa Rica, where dam managers are moving forward on a Payment for Ecosystem Services scheme to help keep private lands forested and to improve crop management through targeted investments. Use of complex watershed models such as RUSLE can help managers quantify the effect of specific land use changes. Moreover, effective land management of vegetation has other important benefits, such as bundled ecosystem services (e.g. pollination, habitat connectivity) and improvements to communities' livelihoods.
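
    A minimal GSA sketch in the spirit of the study, using the SALib package to compute Sobol indices for the factor-level multiplicative form of RUSLE, A = R·K·LS·C·P; the parameter bounds are illustrative, not the study's calibrated ranges:

        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {"num_vars": 5,
                   "names": ["R", "K", "LS", "C", "P"],
                   "bounds": [[500, 10000],   # rain-runoff erosivity
                              [0.05, 0.6],    # soil erodibility
                              [0.1, 10.0],    # topographic factor
                              [0.001, 0.5],   # cover management
                              [0.2, 1.0]]}    # support practices

        X = saltelli.sample(problem, 1024)     # Saltelli sampling scheme
        A = np.prod(X, axis=1)                 # RUSLE soil loss per sample
        Si = sobol.analyze(problem, A)
        # Total-order indices capture each factor's effect including its
        # higher order interactions, the quantity local methods miss.
        print(dict(zip(problem["names"], Si["ST"])))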

  12. Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.

    PubMed

    Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael

    2012-07-01

    To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both a higher sensitivity and specificity when compared to 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided the context and direction for the diagnosis of early-stage ovarian cancer. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Importance analysis for Hudson River PCB transport and fate model parameters using robust sensitivity studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, S.; Toll, J.; Cothern, K.

    1995-12-31

    The authors have performed robust sensitivity studies of the physico-chemical Hudson River PCB model PCHEPM to identify the parameters and process uncertainties contributing the most to uncertainty in predictions of water column and sediment PCB concentrations over the time period 1977-1991 in one segment of the lower Hudson River. The term "robust sensitivity studies" refers to the use of several sensitivity analysis techniques to obtain a more accurate depiction of the relative importance of different sources of uncertainty. Local sensitivity analysis provided data on the sensitivity of PCB concentration estimates to small perturbations in nominal parameter values. Range sensitivity analysis provided information about the magnitude of prediction uncertainty associated with each input uncertainty. Rank correlation analysis indicated which parameters had the most dominant influence on model predictions. Factorial analysis identified important interactions among model parameters. Finally, term analysis looked at the aggregate influence of combinations of parameters representing physico-chemical processes. The authors scored the results of the local and range sensitivity and rank correlation analyses. They considered parameters that scored high on two of the three analyses to be important contributors to PCB concentration prediction uncertainty, and treated them probabilistically in simulations. They also treated probabilistically parameters identified in the factorial analysis as interacting with important parameters. The authors used the term analysis to better understand how uncertain parameters were influencing the PCB concentration predictions. The importance analysis allowed the authors to reduce the number of parameters to be modeled probabilistically from 16 to 5. This reduced the computational complexity of Monte Carlo simulations and, more importantly, provided a more lucid depiction of prediction uncertainty and its causes.
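
    A minimal sketch of the rank correlation screen, one of the techniques combined above; the parameter names and the toy model are hypothetical, not PCHEPM's:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(3)
        n = 2000
        params = rng.uniform(size=(n, 3))            # hypothetical model inputs
        pcb = 5 * params[:, 0] + params[:, 1] ** 2 + rng.normal(0, 0.1, n)

        # Rank correlation of each input with the predicted PCB concentration:
        # a screen for dominant parameters that tolerates monotone nonlinearity.
        for name, col in zip(["k_partition", "k_settling", "k_decay"], params.T):
            rho, _ = spearmanr(col, pcb)
            print(name, round(rho, 3))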

  14. Photochemical modeling and analysis of meteorological parameters during ozone episodes in Kaohsiung, Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min

    Episodes of high ozone concentrations and the associated meteorological conditions over the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. The sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions up to 30-40% VOC and 30-50% NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperature, sufficient sunlight, low wind, and high surface pressure are distinct conditions that tend to trigger ozone episodes in polluted urban areas like Kaohsiung.

  15. A comparative study on the environmental impact of supermarket refrigeration systems using low GWP refrigerants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beshr, M.; Aute, V.; Sharma, V.

    Supermarket refrigeration systems have high environmental impact due to their large refrigerant charge and high leak rates. Consequently, the interest in using low GWP refrigerants such as carbon dioxide (CO2) and new refrigerant blends is increasing. In this study, an open-source Life Cycle Climate Performance (LCCP) framework is presented and used to compare the environmental impact of four supermarket refrigeration systems: a transcritical CO2 booster system, a cascade CO2/N-40 system, a combined secondary circuit with central DX N-40/L-40 system, and a baseline multiplex direct expansion system utilizing R-404A and N-40. The study is performed for different climates within the USA using EnergyPlus to simulate the systems' hourly performance. Finally, further analyses are presented, such as parametric, sensitivity, and uncertainty analyses, to study the impact of different system parameters on the LCCP.
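
    A back-of-envelope sketch of the direct-plus-indirect structure of an LCCP calculation (the open-source framework in the study includes further terms, e.g. manufacturing and recharge emissions; all numbers below are illustrative placeholders):

        def lccp_kg_co2e(charge_kg, annual_leak_rate, life_yr, gwp,
                         annual_energy_kwh, grid_kg_co2_per_kwh,
                         eol_loss=0.15):
            # Direct emissions: refrigerant leaked over the system life plus
            # the fraction lost at end of life, weighted by its GWP.
            direct = charge_kg * gwp * (annual_leak_rate * life_yr + eol_loss)
            # Indirect emissions: electricity use times the grid carbon factor.
            indirect = annual_energy_kwh * grid_kg_co2_per_kwh * life_yr
            return direct + indirect

        # Illustrative comparison of a high-GWP baseline against CO2 (GWP = 1);
        # the CO2 system is assumed to use somewhat more energy.
        print(lccp_kg_co2e(150, 0.15, 15, 3922, 4.0e5, 0.45))  # R-404A-like
        print(lccp_kg_co2e(150, 0.15, 15, 1,    4.4e5, 0.45))  # transcritical CO2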

  16. Aerocapture Systems Analysis for a Neptune Mission

    NASA Technical Reports Server (NTRS)

    Lockwood, Mary Kae; Edquist, Karl T.; Starr, Brett R.; Hollis, Brian R.; Hrinda, Glenn A.; Bailey, Robert W.; Hall, Jeffery L.; Spilker, Thomas R.; Noca, Muriel A.; O'Kongo, N.

    2006-01-01

    A Systems Analysis was completed to determine the feasibility, benefit and risk of an aeroshell aerocapture system for Neptune and to identify technology gaps and technology performance goals. The systems analysis includes the following disciplines: science; mission design; aeroshell configuration; interplanetary navigation analyses; atmosphere modeling; computational fluid dynamics for aerodynamic performance and aeroheating environment; stability analyses; guidance development; atmospheric flight simulation; thermal protection system design; mass properties; structures; spacecraft design and packaging; and mass sensitivities. Results show that aerocapture is feasible and performance is adequate for the Neptune mission. Aerocapture can deliver 1.4 times more mass to Neptune orbit than an all-propulsive system for the same launch vehicle and results in a 3-4 year reduction in trip time compared to all-propulsive systems. Enabling technologies for this mission include TPS manufacturing and aerothermodynamic methods for determining coupled 3-D convection, radiation and ablation aeroheating rates and loads.

  17. Radioactive Waste Management Complex low-level waste radiological performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maheras, S.J.; Rood, A.S.; Magnuson, S.O.

    This report documents the projected radiological dose impacts associated with the disposal of radioactive low-level waste at the Radioactive Waste Management Complex at the Idaho National Engineering Laboratory. This radiological performance assessment was conducted to evaluate compliance with applicable radiological criteria of the US Department of Energy and the US Environmental Protection Agency for protection of the public and the environment. The calculations involved modeling the transport of radionuclides from buried waste to surface soil and subsurface media, and eventually to members of the public via air, groundwater, and food chain pathways. Projections of doses were made for both offsite receptors and individuals inadvertently intruding onto the site after closure. In addition, uncertainty and sensitivity analyses were performed. The results of the analyses indicate compliance with established radiological criteria and provide reasonable assurance that public health and safety will be protected.

  18. A comparative study on the environmental impact of supermarket refrigeration systems using low GWP refrigerants

    DOE PAGES

    Beshr, M.; Aute, V.; Sharma, V.; ...

    2015-04-09

    Supermarket refrigeration systems have high environmental impact due to their large refrigerant charge and high leak rates. Consequently, the interest in using low GWP refrigerants such as carbon dioxide (CO2) and new refrigerant blends is increasing. In this study, an open-source Life Cycle Climate Performance (LCCP) framework is presented and used to compare the environmental impact of four supermarket refrigeration systems: a transcritical CO2 booster system, a cascade CO2/N-40 system, a combined secondary circuit with central DX N-40/L-40 system, and a baseline multiplex direct expansion system utilizing R-404A and N-40. The study is performed for different climates within the USA using EnergyPlus to simulate the systems' hourly performance. Finally, further analyses are presented, such as parametric, sensitivity, and uncertainty analyses, to study the impact of different system parameters on the LCCP.

  19. Personality and attention: Levels of neuroticism and extraversion can predict attentional performance during a change detection task.

    PubMed

    Hahn, Sowon; Buttaccio, Daniel R; Hahn, Jungwon; Lee, Taehun

    2015-01-01

    The present study demonstrates that levels of extraversion and neuroticism can predict attentional performance during a change detection task. After completing a change detection task built on the flicker paradigm, participants were assessed for personality traits using the Revised Eysenck Personality Questionnaire (EPQ-R). Multiple regression analyses revealed that higher levels of extraversion predict increased change detection accuracies, while higher levels of neuroticism predict decreased change detection accuracies. In addition, neurotic individuals exhibited decreased sensitivity A' and increased fixation dwell times. Hierarchical regression analyses further revealed that eye movement measures mediate the relationship between neuroticism and change detection accuracies. Based on the current results, we propose that neuroticism is associated with decreased attentional control over the visual field, presumably due to decreased attentional disengagement. Extraversion can predict increased attentional performance, but the effect is smaller than the relationship between neuroticism and attention.

  20. Personal use of hair dyes and the risk of bladder cancer: results of a meta-analysis.

    PubMed Central

    Huncharek, Michael; Kupelnick, Bruce

    2005-01-01

    OBJECTIVE: This study examined the methodology of observational studies that explored an association between personal use of hair dye products and the risk of bladder cancer. METHODS: Data were pooled from epidemiological studies using a general variance-based meta-analytic method that employed confidence intervals. The outcome of interest was a summary relative risk (RR) reflecting the risk of bladder cancer development associated with use of hair dye products vs. non-use. Sensitivity analyses were performed to explain any observed statistical heterogeneity and to explore the influence of specific study characteristics on the summary estimate of effect. RESULTS: Initially combining homogeneous data from six case-control studies and one cohort study yielded a non-significant RR of 1.01 (0.92, 1.11), suggesting no association between hair dye use and bladder cancer development. Sensitivity analyses examining the influence of hair dye type, color, and study design on this suspected association showed that uncontrolled confounding and design limitations contributed to a spurious non-significant summary RR. The sensitivity analyses yielded statistically significant RRs ranging from 1.22 (1.11, 1.51) to 1.50 (1.30, 1.98), indicating that personal use of hair dye products increases bladder cancer risk by 22% to 50% vs. non-use. CONCLUSION: The available epidemiological data suggest an association between personal use of hair dye products and increased risk of bladder cancer. PMID:15736329
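
    The record above describes a general variance-based pooling method; the sketch below shows one common variant, fixed-effect inverse-variance pooling of relative risks on the log scale. The input numbers are illustrative, not the study's data, and the paper's exact weighting may differ.

```python
import math

def pooled_rr(rrs, ci_los, ci_his, z=1.96):
    """Fixed-effect inverse-variance pooling of relative risks, given
    point estimates and 95% confidence limits (work on the log scale)."""
    logs = [math.log(r) for r in rrs]
    # Standard error recovered from the width of the log-scale CI.
    ses = [(math.log(hi) - math.log(lo)) / (2 * z)
           for lo, hi in zip(ci_los, ci_his)]
    ws = [1.0 / se ** 2 for se in ses]
    log_pooled = sum(w * l for w, l in zip(ws, logs)) / sum(ws)
    se_pooled = (1.0 / sum(ws)) ** 0.5
    ci = (math.exp(log_pooled - z * se_pooled),
          math.exp(log_pooled + z * se_pooled))
    return math.exp(log_pooled), ci

# Illustrative study results only.
print(pooled_rr([1.10, 0.95, 1.30], [0.90, 0.80, 1.05], [1.34, 1.13, 1.61]))
```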

  1. Linear regression metamodeling as a tool to summarize and present simulation model results.

    PubMed

    Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M

    2013-10-01

    Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
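
    A minimal sketch of the metamodeling idea, assuming standardized inputs and an ordinary least-squares fit; the toy decision-model output and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
# Hypothetical PSA draws of two standardized inputs (e.g., cure rate, cost).
X = rng.standard_normal((n, 2))
# Toy decision-model output: net benefit as an unknown function of inputs.
y = 5.0 + 1.8 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(0, 0.3, n)

# Fit the linear metamodel y ~ b0 + b1*x1 + b2*x2 by least squares.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("base case (intercept):", coef[0])
print("standardized sensitivities:", coef[1:])
```

    With standardized inputs, each slope is directly comparable across parameters, which is what lets the metamodel stand in for 1-way and 2-way sensitivity analyses while using all PSA draws.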

  2. Exotic decays of the 125 GeV Higgs boson

    DOE PAGES

    Curtin, David; Essig, Rouven; Gori, Stefania; ...

    2014-10-13

    We perform an extensive survey of nonstandard Higgs decays that are consistent with the 125 GeV Higgs-like resonance. Our aim is to motivate a large set of new experimental analyses on the existing and forthcoming data from the Large Hadron Collider (LHC). The explicit search for exotic Higgs decays presents a largely untapped discovery opportunity for the LHC collaborations, as such decays may be easily missed by other searches. We emphasize that the Higgs is uniquely sensitive to the potential existence of new weakly coupled particles and provide a unified discussion of a large class of both simplified and complete models that give rise to characteristic patterns of exotic Higgs decays. We assess the status of exotic Higgs decays after LHC run I. In many cases we are able to set new nontrivial constraints by reinterpreting existing experimental analyses. We point out that improvements are possible with dedicated analyses and perform some preliminary collider studies. As a result, we prioritize the analyses according to their theoretical motivation and their experimental feasibility.

  3. A Cognition Analysis of QUASAR's Mathematics Performance Assessment Tasks and Their Sensitivity to Measuring Changes in Middle School Students' Thinking and Reasoning.

    ERIC Educational Resources Information Center

    Cai, Jinfa, And Others

    1996-01-01

    Presents a conceptual framework for analyzing students' mathematical understanding, reasoning, problem solving, and communication. Analyses of student responses indicated that the tasks appear to measure the complex thinking and reasoning processes that they were designed to assess. Concludes that the QUASAR assessment tasks can capture changes in…

  4. Review of nutritional screening and assessment tools and clinical outcomes in heart failure.

    PubMed

    Lin, Hong; Zhang, Haifeng; Lin, Zheng; Li, Xinli; Kong, Xiangqin; Sun, Gouzhen

    2016-09-01

    Recent studies have suggested that undernutrition, as defined using multidimensional nutritional evaluation tools, may affect clinical outcomes in heart failure (HF). The evidence supporting this correlation is unclear. Therefore, we conducted this systematic review to critically appraise the use of multidimensional evaluation tools in the prediction of clinical outcomes in HF. We performed descriptive analyses of all identified articles involving qualitative analyses. We used STATA to conduct meta-analyses when at least three studies that tested the same type of nutritional assessment or screening tool and used the same outcome were identified. Sensitivity analyses were conducted to validate our positive results. We identified 17 articles with qualitative analyses and 11 with quantitative analyses after comprehensive literature searching and screening. We determined that the prevalence of malnutrition is high in HF (range 16-90%), particularly in advanced and acute decompensated HF (approximate range 75-90%). Undernutrition as identified by multidimensional evaluation tools may be significantly associated with hospitalization, length of stay, and complications, and is particularly strongly associated with high mortality. The meta-analysis revealed that, compared with other tools, Mini Nutritional Assessment (MNA) scores were the strongest predictors of mortality in HF (HR 4.32, 95% CI 2.30-8.11). Our results remained reliable after conducting sensitivity analyses. The prevalence of malnutrition is high in HF, particularly in advanced and acute decompensated HF. Moreover, undernutrition as identified by multidimensional evaluation tools is significantly associated with unfavourable prognoses and high mortality in HF.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bozoki, G.E.; Fitzpatrick, R.G.; Bohn, M.P.

    This report details the review of the Diablo Canyon Probabilistic Risk Assessment (DCPRA). The study was performed under contract from the Probabilistic Risk Analysis Branch, Office of Nuclear Reactor Research, USNRC, by Brookhaven National Laboratory. The DCPRA is a full-scope Level I effort, and although the review touched on all aspects of the PRA, the internal events and seismic events received the vast majority of the review effort. The report includes a number of independent systems analyses, sensitivity studies, and importance analyses, as well as conclusions on the adequacy of the DCPRA for use in the Diablo Canyon Long Term Seismic Program.

  6. Sobol′ sensitivity analysis of NAPL-contaminated aquifer remediation process based on multiple surrogates

    NASA Astrophysics Data System (ADS)

    Luo, Jiannan; Lu, Wenxi

    2014-06-01

    Sobol′ sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables (remediation duration, surfactant concentration, and injection rates at four wells) to remediation efficiency. First, surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were compared. Based on the developed surrogate models, the Sobol′ method was used to calculate the sensitivity indices of the design variables that affect the remediation efficiency. The coefficient of determination (R2) and the mean square error (MSE) of the two surrogate models demonstrated that both had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. The Sobol′ sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, which indicates that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol′ sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
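
    The sketch below estimates first-order Sobol′ indices with the standard Saltelli sampling scheme; the toy function standing in for the surrogate of the remediation simulator is hypothetical, with one dominant input mimicking the role of remediation duration.

```python
import numpy as np

def sobol_first_order(f, d, n=2**14, rng=None):
    """Monte Carlo first-order Sobol' indices (Saltelli scheme) for a
    vectorized function f of d inputs uniform on [0, 1]."""
    rng = rng or np.random.default_rng(0)
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    s = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]       # A with column i taken from B
        s[i] = np.mean(fB * (f(ABi) - fA)) / var
    return s

# Toy surrogate (illustration only): output dominated by x0 ("duration"),
# weakly affected by the other two inputs.
g = lambda X: 0.8 * X[:, 0] + 0.1 * X[:, 1] + 0.05 * X[:, 2]
print(sobol_first_order(g, d=3))   # index for x0 should be near 1
```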

  7. H2Mab-77 is a Sensitive and Specific Anti-HER2 Monoclonal Antibody Against Breast Cancer.

    PubMed

    Itai, Shunsuke; Fujii, Yuki; Kaneko, Mika K; Yamada, Shinji; Nakamura, Takuro; Yanaka, Miyuki; Saidoh, Noriko; Chang, Yao-Wen; Handa, Saori; Takahashi, Maki; Suzuki, Hiroyoshi; Harada, Hiroyuki; Kato, Yukinari

    2017-08-01

    Human epidermal growth factor receptor 2 (HER2) plays a critical role in the progression of breast cancers, and HER2 overexpression is associated with poor clinical outcomes. Trastuzumab is an anti-HER2 humanized antibody that leads to significant survival benefits in patients with HER2-positive metastatic breast cancers. In this study, we developed novel anti-HER2 monoclonal antibodies (mAbs) and characterized their efficacy in flow cytometry, Western blot, and immunohistochemical analyses. Initially, we expressed the full length or ectodomain of HER2 in LN229 glioblastoma cells and then immunized mice with ectodomain of HER2 or LN229/HER2, and performed the first screening by enzyme-linked immunosorbent assays using ectodomain of HER2. Subsequently, we selected mAbs according to their efficacy in flow cytometry (second screening), Western blot (third screening), and immunohistochemical analyses (fourth screening). Among 100 mAb clones, only three mAbs reacted with HER2 in Western blot, and clone H2Mab-77 (IgG1, kappa) was selected. Finally, immunohistochemical analyses with H2Mab-77 showed sensitive and specific reactions against breast cancer cells, warranting the use of H2Mab-77 to detect HER2 in pathological analyses of breast cancers.

  8. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    PubMed Central

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution, and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. The first is the ability to determine, quickly and with high mass accuracy, the presence of unknown chemical residues in samples; for years, the field had been limited by mass spectrometric methods that depended on knowing in advance what the compounds of interest were. The second is that, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts can dilute the sample sufficiently to minimize ionization changes arising from varied matrices. PMID:26784175

  9. Groundwater contamination from waste management sites: The interaction between risk-based engineering design and regulatory policy: 2. Results

    NASA Astrophysics Data System (ADS)

    Massmann, Joel; Freeze, R. Allan

    1987-02-01

    The risk-cost-benefit analysis developed in the companion paper (J. Massmann and R. A. Freeze, this issue) is here applied to (1) an assessment of the relative worth of containment-construction activities, site-exploration activities, and monitoring activities as components of a design strategy for the owner/operator of a waste management facility; (2) an assessment of alternative policy options available to a regulatory agency; and (3) a case history. Sensitivity analyses designed to address the first issue show that the allocation of resources by the owner/operator is sensitive to the stochastic parameters used to describe the hydraulic conductivity field at a site. For the cases analyzed, the installation of a dense monitoring network is of less value to the owner/operator than a more conservative containment design. Sensitivity analyses designed to address the second issue suggest that from a regulatory perspective, design standards should be more effective than performance standards in reducing risk, and design specifications on the containment structure should be more effective than those on the monitoring network. Performance bonds posted before construction have a greater potential to influence design than prospective penalties to be imposed at the time of failure. Siting on low-conductivity deposits is a more effective method of risk reduction than any form of regulatory influence. Results of the case history indicate that the methodology can be successfully applied at field sites.

  10. Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.

    2007-01-01

    To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
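
    A minimal sketch of the parsing idea, assuming tolerances appear inline as `value +/- tol` and are sampled uniformly; the input-deck fragment and field names are invented, not LAURA/HARA/FIAT syntax, and the actual method may use other distributions.

```python
import re
import random

TOL = re.compile(r"(-?\d+\.?\d*)\s*\+/-\s*(\d+\.?\d*)")

def sample_inputs(text, rng=random.Random(42)):
    """Replace every 'value +/- tol' field in an input file with a draw
    from a uniform distribution on [value - tol, value + tol]."""
    def draw(match):
        v, t = float(match.group(1)), float(match.group(2))
        return f"{rng.uniform(v - t, v + t):.6g}"
    return TOL.sub(draw, text)

# Hypothetical input-file fragment; each call yields one Monte Carlo case.
deck = "wall_temp = 5.25 +/- 0.01\nemissivity = 0.85 +/- 0.05\n"
for _ in range(3):
    print(sample_inputs(deck))
```

    Because the tolerance annotation rides along inside existing input fields, the sampler never needs to understand each code's file format, which is the portability argument the abstract makes.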

  11. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
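
    A sketch of the approximate statistical moment method for independent inputs, using finite-difference sensitivity derivatives for compactness (the paper computes the derivatives efficiently within the CFD code itself); the output function is a toy stand-in.

```python
import numpy as np

def moment_propagation(f, mu, sigma, h=1e-5):
    """Approximate mean and variance of f(x) for independent inputs:
    mean ~= f(mu) + 0.5 * sum_i f_ii * sigma_i^2   (second order)
    var  ~= sum_i f_i^2 * sigma_i^2                (first order)"""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    f0 = f(mu)
    grad, curv = np.empty_like(mu), np.empty_like(mu)
    for i in range(mu.size):
        e = np.zeros_like(mu)
        e[i] = h
        fp, fm = f(mu + e), f(mu - e)
        grad[i] = (fp - fm) / (2 * h)            # first derivative
        curv[i] = (fp - 2 * f0 + fm) / h ** 2    # second derivative
    mean2 = f0 + 0.5 * np.sum(curv * sigma ** 2)
    var1 = np.sum(grad ** 2 * sigma ** 2)
    return mean2, var1

# Toy stand-in for a CFD output quantity (illustration only).
f = lambda x: x[0] ** 2 + np.sin(x[1])
print(moment_propagation(f, mu=[1.0, 0.5], sigma=[0.1, 0.05]))
```

    A Monte Carlo run over the same input distributions gives the reference moments against which such approximations are checked, mirroring the validation step the abstract describes.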

  12. Analyses of a heterogeneous lattice hydrodynamic model with low and high-sensitivity vehicles

    NASA Astrophysics Data System (ADS)

    Kaur, Ramanpreet; Sharma, Sapna

    2018-06-01

    The basic lattice model is extended to study heterogeneous traffic by considering the optimal current difference effect on a unidirectional single-lane highway. Heterogeneous traffic consisting of low- and high-sensitivity vehicles is modeled, and its impact on the stability of mixed traffic flow is examined through linear stability analysis. The stability of flow is investigated in five distinct regions of the neutral stability diagram corresponding to the proportion of higher-sensitivity vehicles present on the road. In order to investigate the propagating behaviour of density waves, nonlinear analysis is performed and, near the critical point, the kink-antikink soliton solution is obtained by deriving the mKdV equation. The effect of the fraction parameter corresponding to high-sensitivity vehicles is investigated, and the results indicate that stability increases with this fraction parameter. The theoretical findings are verified via direct numerical simulation.

  13. Age-related changes in perception of movement in driving scenes.

    PubMed

    Lacherez, Philippe; Turner, Laura; Lester, Robert; Burns, Zoe; Wood, Joanne M

    2014-07-01

    Age-related changes in motion sensitivity have been found to relate to reductions in various indices of driving performance and safety. The aim of this study was to investigate the basis of this relationship by determining which aspects of motion perception are most relevant to driving. Participants included 61 regular drivers (age range 22-87 years). Visual performance was measured binocularly. Measures included visual acuity, contrast sensitivity, and motion sensitivity assessed using four different approaches: (1) threshold minimum drift rate for a drifting Gabor patch, (2) Dmin from a random dot display, (3) threshold coherence from a random dot display, and (4) threshold drift rate for a second-order (contrast-modulated) sinusoidal grating. Participants then completed the Hazard Perception Test (HPT), in which they were required to identify moving hazards in videos of real driving scenes, and a Direction of Heading (DOH) task, in which they identified deviations from normal lane keeping in brief videos of driving filmed from the interior of a vehicle. In bivariate correlation analyses, all motion sensitivity measures significantly declined with age. Motion coherence thresholds and the minimum drift rate threshold for the first-order stimulus (Gabor patch) both significantly predicted HPT performance even after controlling for age, visual acuity, and contrast sensitivity. Bootstrap mediation analysis showed that individual differences in DOH accuracy partly explained these relationships: individuals with poorer motion sensitivity on the coherence and Gabor tests showed decreased ability to perceive deviations in motion in the driving videos, which related in turn to their ability to detect moving hazards. The ability to detect subtle movements in the driving environment (as determined by the DOH task) may be an important contributor to effective hazard perception, and is associated with age and an individual's performance on tests of motion sensitivity. The locus of the processing deficits appears to lie in first-order, rather than second-order, motion pathways.

  14. EFFECTIVE INDICES FOR MONITORING MENTAL WORKLOAD WHILE PERFORMING MULTIPLE TASKS.

    PubMed

    Hsu, Bin-Wei; Wang, Mao-Jiun J; Chen, Chi-Yuan; Chen, Fang

    2015-08-01

    This study identified several physiological indices that can accurately monitor mental workload while participants performed multiple tasks with the strategy of maintaining stable performance and maximizing accuracy. Thirty male participants completed three 10-min simulated multitask sessions using the MATB (Multi-Attribute Task Battery) at three workload levels. Twenty-five commonly used mental workload measures were collected, including heart rate, 12 HRV (heart rate variability) indices, 10 EEG (electroencephalography) indices (α, β, θ, α/θ, θ/β from O1-O2 and F4-C4), and two subjective measures. Analyses of index sensitivity showed that two EEG indices, θ and α/θ (F4-C4), the time-domain HRV index SDNN (standard deviation of inter-beat intervals), and four frequency-domain HRV indices, VLF (very low frequency), LF (low frequency), %HF (percentage of high frequency), and LF/HF, were sensitive enough to differentiate high workload. EEG α/θ (F4-C4) and LF/HF were the most effective for monitoring high mental workload. LF/HF showed the highest correlations with the other physiological indices, and EEG α/θ (F4-C4) showed strong correlations with the subjective measures across different mental workload levels. Operating strategy also affected the sensitivity of EEG α (F4-C4) and HF.

  15. High-sensitivity detection of breast tumors in vivo by use of a pH-sensitive near-infrared fluorescence probe

    NASA Astrophysics Data System (ADS)

    Mathejczyk, Julia Eva; Pauli, Jutta; Dullin, Christian; Resch-Genger, Ute; Alves, Frauke; Napp, Joanna

    2012-07-01

    We investigated the potential of the pH-sensitive dye CypHer5E, conjugated to Herceptin (pH-Her), for the sensitive detection of breast tumors in mice using noninvasive time-domain near-infrared fluorescence imaging and different methods of data analysis. First, the fluorescence properties of pH-Her were analyzed as a function of pH and/or dye-to-protein ratio, and binding specificity was confirmed in cell-based assays. Subsequently, the performance of pH-Her in nude mice bearing orthotopic HER2-positive (KPL-4) and HER2-negative (MDA-MB-231) breast carcinoma xenografts was compared with that of the always-on fluorescent conjugate Alexa Fluor 647-Herceptin (Alexa-Her). Subtraction of autofluorescence and lifetime (LT)-gated image analyses were performed for background fluorescence suppression. In mice bearing HER2-positive tumors, autofluorescence subtraction together with the selective fluorescence enhancement of pH-Her solely in the tumor's acidic environment provided high contrast-to-noise ratios (CNRs), leading to improved sensitivity of tumor detection compared with Alexa-Her. In contrast, LT-gated imaging using LTs determined in model systems did not improve tumor-detection sensitivity in vivo for either probe. In conclusion, pH-Her is suitable for sensitive in vivo monitoring of HER2-expressing breast tumors with imaging in the intensity domain and represents a promising tool for detection of weak fluorescent signals deriving from small tumors or metastases.

  16. A Cost-Minimization Analysis of Tissue-Engineered Constructs for Corneal Endothelial Transplantation

    PubMed Central

    Tan, Tien-En; Peh, Gary S. L.; George, Benjamin L.; Cajucom-Uy, Howard Y.; Dong, Di; Finkelstein, Eric A.; Mehta, Jodhbir S.

    2014-01-01

    Corneal endothelial transplantation or endothelial keratoplasty has become the preferred choice of transplantation for patients with corneal blindness due to endothelial dysfunction. Currently, there is a worldwide shortage of transplantable tissue, and demand is expected to increase further with aging populations. Tissue-engineered alternatives are being developed, and are likely to be available soon. However, the cost of these constructs may impair their widespread use. A cost-minimization analysis comparing tissue-engineered constructs to donor tissue procured from eye banks for endothelial keratoplasty was performed. Both initial investment costs and recurring costs were considered in the analysis to arrive at a final tissue cost per transplant. The clinical outcomes of endothelial keratoplasty with tissue-engineered constructs and with donor tissue procured from eye banks were assumed to be equivalent. One-way and probabilistic sensitivity analyses were performed to simulate various possible scenarios, and to determine the robustness of the results. A tissue engineering strategy was cheaper in both investment cost and recurring cost. Tissue-engineered constructs for endothelial keratoplasty could be produced at a cost of US$880 per transplant. In contrast, utilizing donor tissue procured from eye banks for endothelial keratoplasty required US$3,710 per transplant. Sensitivity analyses performed further support the results of this cost-minimization analysis across a wide range of possible scenarios. The use of tissue-engineered constructs for endothelial keratoplasty could potentially increase the supply of transplantable tissue and bring the costs of corneal endothelial transplantation down, making this intervention accessible to a larger group of patients. Tissue-engineering strategies for corneal epithelial constructs or other tissue types, such as pancreatic islet cells, should also be subject to similar pharmacoeconomic analyses. PMID:24949869

  17. A cost-minimization analysis of tissue-engineered constructs for corneal endothelial transplantation.

    PubMed

    Tan, Tien-En; Peh, Gary S L; George, Benjamin L; Cajucom-Uy, Howard Y; Dong, Di; Finkelstein, Eric A; Mehta, Jodhbir S

    2014-01-01

    Corneal endothelial transplantation or endothelial keratoplasty has become the preferred choice of transplantation for patients with corneal blindness due to endothelial dysfunction. Currently, there is a worldwide shortage of transplantable tissue, and demand is expected to increase further with aging populations. Tissue-engineered alternatives are being developed, and are likely to be available soon. However, the cost of these constructs may impair their widespread use. A cost-minimization analysis comparing tissue-engineered constructs to donor tissue procured from eye banks for endothelial keratoplasty was performed. Both initial investment costs and recurring costs were considered in the analysis to arrive at a final tissue cost per transplant. The clinical outcomes of endothelial keratoplasty with tissue-engineered constructs and with donor tissue procured from eye banks were assumed to be equivalent. One-way and probabilistic sensitivity analyses were performed to simulate various possible scenarios, and to determine the robustness of the results. A tissue engineering strategy was cheaper in both investment cost and recurring cost. Tissue-engineered constructs for endothelial keratoplasty could be produced at a cost of US$880 per transplant. In contrast, utilizing donor tissue procured from eye banks for endothelial keratoplasty required US$3,710 per transplant. Sensitivity analyses performed further support the results of this cost-minimization analysis across a wide range of possible scenarios. The use of tissue-engineered constructs for endothelial keratoplasty could potentially increase the supply of transplantable tissue and bring the costs of corneal endothelial transplantation down, making this intervention accessible to a larger group of patients. Tissue-engineering strategies for corneal epithelial constructs or other tissue types, such as pancreatic islet cells, should also be subject to similar pharmacoeconomic analyses.

  18. Health economic evaluation of Human Papillomavirus vaccines in women from Venezuela by a lifetime Markov cohort model.

    PubMed

    Bardach, Ariel Esteban; Garay, Osvaldo Ulises; Calderón, María; Pichón-Riviére, Andrés; Augustovski, Federico; Martí, Sebastián García; Cortiñas, Paula; Gonzalez, Marino; Naranjo, Laura T; Gomez, Jorge Alberto; Caporale, Joaquín Enzo

    2017-02-02

    Cervical cancer (CC) and genital warts (GW) are a significant public health issue in Venezuela. Our objective was to assess the cost-effectiveness of the two available vaccines against Human Papillomavirus (HPV), bivalent and quadrivalent, in Venezuelan girls in order to inform decision-makers. A previously published Markov cohort model, informed by the best available evidence, was adapted to the Venezuelan context to evaluate the effects of vaccination on health and healthcare costs from the perspective of the healthcare payer in a cohort of 264,489 11-year-old girls. Costs and quality-adjusted life years (QALYs) were discounted at 5%. Eight scenarios were analyzed to depict the cost-effectiveness under alternative vaccine prices, exchange rates, and dosing schemes. Deterministic and probabilistic sensitivity analyses were performed. Compared with screening only, the bivalent and quadrivalent vaccines were cost-saving in all scenarios, avoiding 2,310 and 2,143 deaths and 4,781 and 4,431 CC cases, respectively (plus up to 18,459 GW cases for the quadrivalent vaccine), and gaining 4,486 and 4,395 discounted QALYs, respectively. For both vaccines, the main determinants of variations in the incremental cost-effectiveness ratio after running deterministic and probabilistic sensitivity analyses were transition probabilities, vaccine and cancer-treatment costs, and the distribution of HPV 16 and 18 in CC cases. When comparing vaccines, neither was consistently more cost-effective than the other; in sensitivity analyses for these comparisons, the main determinants were GW incidence, the level of cross-protection and, for some scenarios, vaccine costs. Immunization with the bivalent or quadrivalent HPV vaccine was shown to be cost-saving or cost-effective in Venezuela, falling below the threshold of one Gross Domestic Product (GDP) per capita (104,404 VEF) per QALY gained. Deterministic and probabilistic sensitivity analyses confirmed the robustness of these results.

  19. Does McNemar's test compare the sensitivities and specificities of two diagnostic tests?

    PubMed

    Kim, Soeun; Lee, Woojoo

    2017-02-01

    McNemar's test is often used in practice to compare the sensitivities and specificities of two diagnostic tests. For correct evaluation of accuracy, an intuitive recommendation is to test the diseased and the non-diseased groups separately, so that sensitivities are compared within the diseased group and specificities within the healthy group. This paper provides a rigorous theoretical framework for this argument and studies the validity of McNemar's test regardless of the conditional independence assumption. We derive McNemar's test statistic under the null hypothesis considering both conditional independence and conditional dependence. We then perform power analyses to show how the result is affected by the amount of conditional dependence under the alternative hypothesis.
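
    For reference, McNemar's statistic depends only on the discordant pair counts, as in the following minimal sketch (the example counts are hypothetical):

```python
from math import erfc, sqrt

def mcnemar(b, c):
    """McNemar's chi-square for paired binary outcomes, where b and c
    are the discordant counts (test1+/test2- and test1-/test2+)."""
    stat = (b - c) ** 2 / (b + c)
    # Survival function of a chi-square with 1 df: P(X > x) = erfc(sqrt(x/2)).
    p = erfc(sqrt(stat / 2))
    return stat, p

# E.g., comparing two diagnostic tests within the diseased group only:
# 15 subjects positive only on test 1, 5 positive only on test 2.
print(mcnemar(15, 5))  # -> (5.0, ~0.025)
```

    Applying this separately within the diseased and non-diseased groups is what operationalizes the recommendation above: the diseased-group comparison addresses sensitivities, the healthy-group comparison addresses specificities.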

  20. Cost-effectiveness of minimally invasive sacroiliac joint fusion.

    PubMed

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W

    2016-01-01

    Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. To determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.
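
    A minimal sketch of the ICER computation underlying such results; the cost and QALY values below are invented to be consistent with the reported 0.74-QALY gain at US$13,313 per QALY, not taken from the trial data.

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Illustrative values: a 0.74-QALY gain at ~$9,852 incremental cost
# reproduces the reported ~$13,313 per QALY.
print(icer(cost_new=29_852, qaly_new=3.04, cost_old=20_000, qaly_old=2.30))
```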

  1. Cost-effectiveness of drug-eluting stents versus bare-metal stents in patients undergoing percutaneous coronary intervention.

    PubMed

    Baschet, Louise; Bourguignon, Sandrine; Marque, Sébastien; Durand-Zaleski, Isabelle; Teiger, Emmanuel; Wilquin, Fanny; Levesque, Karine

    2016-01-01

    To determine the cost-effectiveness of drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients requiring a percutaneous coronary intervention in France, a cost-effectiveness analysis was performed in the French National Health Insurance setting, using a recent meta-analysis including second-generation DES. Effectiveness estimates were taken from a meta-analysis of 117,762 patient-years across 76 randomised trials. The main effectiveness criterion was major cardiac event-free survival. Effectiveness and costs were modelled over a 5-year horizon using a three-state Markov model. Incremental cost-effectiveness ratios and a cost-effectiveness acceptability curve were calculated for a range of thresholds for willingness to pay per year free of major cardiac events. Deterministic and probabilistic sensitivity analyses were performed. Base-case results demonstrated that DES are dominant over BMS, with an increase in event-free survival and a cost reduction of €184, primarily due to fewer second revascularisations and the absence of myocardial infarction and stent thrombosis. These results were robust to uncertainty in one-way deterministic and probabilistic sensitivity analyses. Using a cost-effectiveness threshold of €7,000 per major cardiac event-free year gained, DES have a >95% probability of being cost-effective versus BMS. Following the decrease in DES prices, the development of new-generation DES, and recent meta-analysis results, DES can now be considered cost-effective in France regardless of selective indication, according to European recommendations.

  2. Cost-effectiveness of minimally invasive sacroiliac joint fusion

    PubMed Central

    Cher, Daniel J; Frasco, Melissa A; Arnold, Renée JG; Polly, David W

    2016-01-01

    BACKGROUND: Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. OBJECTIVE: To determine the cost-effectiveness of minimally invasive SIJ fusion. METHODS: Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. RESULTS: SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. CONCLUSION: Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption. PMID:26719717

  3. The Child Adolescent Bullying Scale (CABS): Psychometric evaluation of a new measure.

    PubMed

    Strout, Tania D; Vessey, Judith A; DiFazio, Rachel L; Ludlow, Larry H

    2018-06-01

    While youth bullying is a significant public health problem, healthcare providers have been limited in their ability to identify bullied youths due to the lack of a reliable and valid instrument appropriate for use in clinical settings. We conducted a multisite study to evaluate the psychometric properties of a new 22-item instrument for assessing youths' experiences of being bullied, the Child Adolescent Bullying Scale (CABS); the 20 items summed to produce the measure's score were evaluated here. Diagnostic performance was assessed through evaluation of sensitivity, specificity, predictive values, and the area under the receiver operating characteristic (AUROC) curve. A sample of 352 youths from diverse racial, ethnic, and geographic backgrounds (188 female, 159 male, 5 transgender; sample mean age 13.5 years) was recruited from two clinical sites. Participants completed the CABS and existing youth bullying measures. Analyses grounded in classical test theory, including assessments of reliability and validity, item analyses, and principal components analysis, were conducted. The diagnostic performance and test characteristics of the CABS were also evaluated. The CABS is comprised of one component, accounting for 67% of observed variance. Analyses established evidence of internal consistency reliability (Cronbach's α = 0.97), construct validity, and convergent validity. Sensitivity was 84%, specificity was 65%, and the AUROC was 0.74 (95% CI: 0.69-0.80). Findings suggest that the CABS holds promise as a reliable, valid tool for healthcare provider use in screening for bullying exposure in the clinical setting.
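
    An AUROC like the one reported above can be computed directly from case and non-case scores via its Mann-Whitney interpretation; a minimal sketch with hypothetical scores:

```python
def auroc(pos_scores, neg_scores):
    """AUROC via the Mann-Whitney relation: the probability that a
    randomly chosen positive case scores above a randomly chosen
    negative case (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical CABS scores for bullied (positive) and non-bullied youths.
print(auroc([62, 55, 48, 40], [45, 30, 28, 22]))  # -> 0.9375
```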

  4. Development of the multiple sclerosis (MS) early mobility impairment questionnaire (EMIQ).

    PubMed

    Ziemssen, Tjalf; Phillips, Glenn; Shah, Ruchit; Mathias, Adam; Foley, Catherine; Coon, Cheryl; Sen, Rohini; Lee, Andrew; Agarwal, Sonalee

    2016-10-01

    The Early Mobility Impairment Questionnaire (EMIQ) was developed to facilitate early identification of mobility impairments in multiple sclerosis (MS) patients. We describe the initial development of the EMIQ, with a focus on the psychometric evaluation of the questionnaire using classical and item response theory methods. The initial 20-item EMIQ was constructed by clinical specialists and qualitatively tested among people with MS and physicians via cognitive interviews. Data from an observational study were used to make additional updates to the instrument based on exploratory factor analysis (EFA) and item response theory (IRT) analysis, and psychometric analyses were performed to evaluate the reliability and validity of the final instrument's scores and its screening properties (i.e., sensitivity and specificity). Based on the qualitative interview analyses, a revised 15-item EMIQ was included in the observational study. EFA, IRT, and item-to-item correlation analyses revealed redundant items, which were removed, leading to the final nine-item EMIQ. The nine-item EMIQ performed well with respect to test-retest reliability (ICC = 0.858), internal consistency (α = 0.893), convergent validity, and known-groups construct validity. A cut-point of 41 on the 0-to-100 scale resulted in sufficient sensitivity and specificity for viably identifying patients with mobility impairment. The EMIQ is a content-valid and psychometrically sound instrument for capturing MS patients' experience of mobility impairments in a clinical practice setting. Additional research is suggested to further confirm the EMIQ's screening properties over time.

  5. How accurately does the Brief Job Stress Questionnaire identify workers with or without potential psychological distress?

    PubMed

    Tsutsumi, Akizumi; Inoue, Akiomi; Eguchi, Hisashi

    2017-07-27

    The manual for the Japanese Stress Check Program recommends use of the Brief Job Stress Questionnaire (BJSQ) from among the program's instruments and proposes criteria for defining "high-stress" workers. This study aimed to examine how accurately the BJSQ identifies workers with or without potential psychological distress. We used an online survey to administer the BJSQ together with a psychological distress scale (K6) to randomly selected workers (n=1,650). We conducted receiver operating characteristic curve analyses to estimate the screening performance of the cutoff points that the Stress Check Program manual recommends for the BJSQ. The prevalence of workers with potential psychological distress, defined as a K6 score ≥13, was 13%. The prevalence of "high-stress" workers defined using the criteria recommended by the program manual was 16.7% for the original version of the BJSQ. The estimated values were as follows: sensitivity, 60.5%; specificity, 88.9%; Youden index, 0.504; positive predictive value, 47.3%; negative predictive value, 93.8%; positive likelihood ratio, 6.0; and negative likelihood ratio, 0.4. Analyses based on the simplified BJSQ indicated lower sensitivity compared with the original version, although roughly the same screening performance was expected in the best scenario using the original version. Our analyses, in which psychological distress measured by the K6 was set as the target condition, indicate that less than half of the identified "high-stress" workers warrant consideration for secondary screening for psychological distress.
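
    A sketch of the screening measures reported above, computed from a 2x2 cross-tabulation; the counts below are illustrative values chosen to roughly reproduce the reported sensitivity and specificity at 13% prevalence in 1,650 workers, not the study's exact table.

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening performance measures from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "youden_index": sens + spec - 1,
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "lr_plus": sens / (1 - spec),
        "lr_minus": (1 - sens) / spec,
    }

# Illustrative counts: 215 distressed workers (13% of 1,650), of whom
# 130 screen "high-stress"; 159 false positives among the remaining 1,435.
print(screening_metrics(tp=130, fp=159, fn=85, tn=1276))
```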

  6. Resistance profiles to antimicrobial agents in bacteria isolated from acute endodontic infections: systematic review and meta-analysis.

    PubMed

    Lang, Pauline M; Jacinto, Rogério C; Dal Pizzol, Tatiane S; Ferreira, Maria Beatriz C; Montagner, Francisco

    2016-11-01

    Infected root canal or acute apical abscess exudates can harbour several species, including Fusobacterium, Porphyromonas, Prevotella, Parvimonas, Streptococcus, Treponema, Olsenella and not-yet-cultivable species. A systematic review and meta-analysis was performed to assess resistance rates to antimicrobial agents in clinical studies that isolated bacteria from acute endodontic infections. Electronic databases and the grey literature were searched up to May 2015. Clinical studies in humans evaluating the antimicrobial resistance of primary acute endodontic infection isolates were included, and PRISMA guidelines were followed. A random-effects meta-analysis was employed. The outcome was described as the pooled resistance rate for each antimicrobial agent. Heterogeneity and sensitivity analyses were performed. Subgroup analyses were conducted based on whether or not studies reported use of antibiotics prior to sampling as an exclusion criterion (subgroups A and B, respectively). Data from seven studies were extracted. Resistance rates for 15 different antimicrobial agents were evaluated (range, 3.5-40.0%). Lower resistance rates were observed for amoxicillin/clavulanic acid and amoxicillin; higher resistance rates were detected for tetracycline. Resistance rates varied according to previous use of an antimicrobial agent, as demonstrated by the subgroup analyses. Heterogeneity was observed for the resistance profiles of penicillin G in subgroup A and for amoxicillin, clindamycin, metronidazole and tetracycline in subgroup B. Sensitivity analyses demonstrated that resistance rates changed for metronidazole, clindamycin, tetracycline and amoxicillin. These findings suggest that clinical isolates had low resistance to β-lactams. Further well-designed studies are needed to clarify whether the differences in susceptibility among the antimicrobial agents may influence clinical responses to treatment.

  7. The use of selective serotonin reuptake inhibitors (SSRIs) is not associated with increased risk of endoscopy-refractory bleeding, rebleeding or mortality in peptic ulcer bleeding.

    PubMed

    Laursen, S B; Leontiadis, G I; Stanley, A J; Hallas, J; Schaffalitzky de Muckadell, O B

    2017-08-01

    Observational studies have consistently shown an increased risk of upper gastrointestinal bleeding in users of selective serotonin reuptake inhibitors (SSRIs), probably explained by their inhibition of platelet aggregation. Therefore, treatment with SSRIs is often temporarily withheld in patients with peptic ulcer bleeding. However, abrupt discontinuation of SSRIs is associated with development of withdrawal symptoms in one-third of patients. Further data are needed to clarify whether treatment with SSRIs is associated with poor outcomes, which would support temporary discontinuation of treatment. A nationwide cohort study was performed to identify whether treatment with SSRIs is associated with increased risk of (1) endoscopy-refractory bleeding, (2) rebleeding or (3) 30-day mortality due to peptic ulcer bleeding. Analyses were performed on prospectively collected data on consecutive patients admitted to hospital with peptic ulcer bleeding in Denmark in the period 2006-2014. Logistic regression analyses were used to investigate the association between treatment with SSRIs and outcome following adjustment for pre-defined confounders. Sensitivity and subgroup analyses were performed to evaluate the validity of the findings. A total of 14,343 patients were included. Following adjustment, treatment with SSRIs was not associated with increased risk of endoscopy-refractory bleeding (odds ratio [OR] [95% confidence interval (CI)]: 1.03 [0.79-1.33]), rebleeding (OR [95% CI]: 0.96 [0.83-1.11]) or 30-day mortality (OR [95% CI]: 1.01 [0.85-1.19]). These findings were supported by sensitivity and subgroup analyses. According to our data, treatment with SSRIs does not influence the risk of endoscopy-refractory bleeding, rebleeding or 30-day mortality in peptic ulcer bleeding.

  8. Methodological issues in assessing changes in costs pre- and post-medication switch: a schizophrenia study example.

    PubMed

    Faries, Douglas E; Nyhuis, Allen W; Ascher-Svanum, Haya

    2009-05-27

    Schizophrenia is a severe, chronic, and costly illness that adversely impacts patients' lives and health care payer budgets. Cost comparisons of treatment regimens are, therefore, important to health care payers and researchers. Pre-Post ("mirror-image") analyses, where outcomes prior to a medication switch are compared to outcomes post-switch, are commonly used in such research. However, medication changes often occur during a costly crisis event: patients may relapse, be hospitalized, have a medication change, and then spend a period of time with intense use of costly resources (post-medication switch). While many advantages and disadvantages of Pre-Post methodology have been discussed, issues regarding the attributability of costs incurred around the time of medication switching have not been fully investigated. Medical resource use data, including medications and acute-care services (hospitalizations, partial hospitalizations, emergency department visits), were collected for patients with schizophrenia who switched antipsychotics (n = 105) during a 1-year randomized, naturalistic, antipsychotic cost-effectiveness schizophrenia trial. Within-patient changes in total costs per day were computed during the pre- and post-medication change periods. In addition to the standard Pre-Post analysis comparing costs pre- and post-medication change, we investigated the sensitivity of results to varying assumptions regarding the attributability of acute-care service costs occurring just after a medication switch that were likely due to initial medication failure. Fifty-six percent of all costs incurred during the first week on the newly initiated antipsychotic were likely due to treatment failure with the previous antipsychotic. Standard analyses suggested an average increase in cost per day for each patient of $2.40 after switching medications. However, sensitivity analyses removing post-switch costs that were potentially due to the failure of the initial medication suggested decreases in costs in the range of $4.77 to $9.69 per day post-switch. Pre-Post cost analyses are sensitive to the approach used to handle acute-service costs occurring just after a medication change. Given the importance of quality economic research on the cost of switching treatments, thorough sensitivity analyses should be performed to identify the impact of crisis events around the time of medication change.
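
    A minimal sketch of the sensitivity approach described above, assuming daily cost streams and a configurable exclusion window after the switch; the function and numbers are illustrative.

```python
def pre_post_delta(pre_daily, post_daily, exclude_first=0):
    """Within-patient change in mean cost per day, optionally excluding
    the first `exclude_first` days after the medication switch."""
    post = post_daily[exclude_first:]
    return sum(post) / len(post) - sum(pre_daily) / len(pre_daily)

# Hypothetical daily cost streams around a switch (illustration only):
# a costly first week driven by the failing prior medication.
pre = [40.0] * 30
post = [400.0] * 7 + [30.0] * 23
print(pre_post_delta(pre, post))                   # naive: +76.33/day
print(pre_post_delta(pre, post, exclude_first=7))  # adjusted: -10.00/day
```

    Varying `exclude_first` across plausible windows is one simple way to probe how much of the apparent post-switch cost increase is attributable to the crisis that triggered the switch rather than to the new medication.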

  9. Texture analysis of pulmonary parenchymateous changes related to pulmonary thromboembolism in dogs - a novel approach using quantitative methods.

    PubMed

    Marschner, C B; Kokla, M; Amigo, J M; Rozanski, E A; Wiinberg, B; McEvoy, F J

    2017-07-11

    Diagnosis of pulmonary thromboembolism (PTE) in dogs relies on computed tomography pulmonary angiography (CTPA), but detailed interpretation of CTPA images is demanding for the radiologist, and only large vessels may be evaluated. New approaches for better detection of smaller thrombi include dual-energy computed tomography (DECT) as well as computer-assisted diagnosis (CAD) techniques. The purpose of this study was to investigate the performance of quantitative texture analysis for detecting dogs with PTE using grey-level co-occurrence matrices (GLCM) and multivariate statistical classification analyses. CT images from healthy (n = 6) and diseased (n = 29) dogs, with and without PTE confirmed on CTPA, were segmented so that only tissue with CT numbers between -1024 and -250 Hounsfield units (HU) was preserved. GLCM analysis and subsequent multivariate classification analyses were performed on texture parameters extracted from these images. Leave-one-dog-out cross-validation and receiver operating characteristic (ROC) analysis showed that the models generated from the texture analysis were able to predict healthy dogs with optimal levels of performance. Partial Least Squares Discriminant Analysis (PLS-DA) obtained a sensitivity of 94% and a specificity of 96%, while Support Vector Machines (SVM) yielded a sensitivity of 99% and a specificity of 100%. The models, however, performed worse in classifying the type of disease within the diseased group: in diseased dogs with PTE, sensitivities were 30% (PLS-DA) and 38% (SVM), and specificities were 80% (PLS-DA) and 89% (SVM); in diseased dogs without PTE, sensitivities were 59% (PLS-DA) and 79% (SVM), and specificities were 79% (PLS-DA) and 82% (SVM). The results indicate that texture analysis of CTPA images using GLCM is an effective tool for distinguishing healthy from abnormal lung, and that the texture of pulmonary parenchyma in dogs with PTE is altered compared with that of healthy dogs. The models' poorer performance in classifying dogs within the diseased group may be related to the low number of dogs relative to the number of texture variables, the unbalanced number of dogs in each group, or a genuine lack of difference in texture features among the diseased dogs.
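
    A minimal sketch of GLCM feature extraction, assuming a quantized grey-level patch and a single-pixel offset; the feature set and the random patch are illustrative, not the study's configuration.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix at offset (dx, dy) with a few
    classic Haralick-style features (contrast, energy, homogeneity)."""
    # Quantize the image into `levels` grey bins.
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)
    h, w = q.shape
    glcm = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[q[y, x], q[y + dy, x + dx]] += 1
    p = glcm / glcm.sum()           # normalize to a joint probability
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum(p * (i - j) ** 2),
        "energy": np.sum(p ** 2),
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }

# Hypothetical HU patch standing in for segmented lung tissue.
rng = np.random.default_rng(1)
patch = rng.uniform(-1024, -250, size=(64, 64))
print(glcm_features(patch))
```

    Features like these, computed over many offsets and directions, form the texture-parameter vectors that the PLS-DA and SVM classifiers are then trained on.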

  10. On non-symmetric axial corner-layer flow

    NASA Astrophysics Data System (ADS)

    Boiko, A. V.; Kirilovskiy, S. V.; Nechepurenko, Y. M.; Poplavskaya, T. V.

    2017-10-01

    The problem of asymmetric incompressible axial flow in a corner formed by two plates intersecting at a right angle is considered. The asymptotic behaviour of the flow far away from the corner is analysed, and two types of asymptotic behaviour are found. It is shown that the flow is very sensitive to the asymmetry parameter. The results are compared with computations based on the full Navier-Stokes equations.

  11. Identifying the Cognitive Decrements Caused By HIV

    DTIC Science & Technology

    1994-06-10

    critical analyses pointed to further assessment of frontal lobe structures. Most of the 15 different tests yielded more than one dependent variable ... is also one of the tests included in the Multicenter AIDS Cohort Study longitudinal research on the progression of HIV infection. Left frontal lobe ... structures underlie verbal fluency performance and the particular sensitivity of frontal lobe structures to perturbations with HIV infection would

  12. Cognitive and Neural Bases of Skilled Performance.

    DTIC Science & Technology

    1987-10-04

    advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic ... and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing ... recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive ...

  13. Evaluation of Uncertainty and Sensitivity in Environmental Modeling at a Radioactive Waste Management Site

    NASA Astrophysics Data System (ADS)

    Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.

    2002-05-01

    Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing the information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in complex, high-dimensional ecosystem models such as the RWMS model, the dynamics of the system can act in a non-linear manner, and quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, non-linearities, and non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used for global sensitivity analysis in these high-dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools is demonstrated using the RWMS model.
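
    For readers unfamiliar with FAST, a variance-based global sensitivity analysis of the kind described can be sketched with the open-source SALib package. The three-parameter model below, and its parameter names, are invented stand-ins for the RWMS model, which is not public:

      import numpy as np
      from SALib.sample import fast_sampler
      from SALib.analyze import fast

      # Invented parameter names and bounds; a stand-in for the real model.
      problem = {
          "num_vars": 3,
          "names": ["burrow_rate", "plant_uptake", "decay_const"],
          "bounds": [[0.1, 1.0], [0.0, 0.5], [0.01, 0.1]],
      }

      def model(x):
          # Deliberately non-linear and non-monotonic, the situation in which
          # global methods such as FAST outperform local derivatives.
          return np.sin(6 * x[:, 0]) + x[:, 1] ** 2 + 10 * x[:, 0] * x[:, 2]

      X = fast_sampler.sample(problem, 1024)   # eFAST sampling design
      Y = model(X)
      Si = fast.analyze(problem, Y)            # first-order (S1) and total (ST) indices
      for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
          print(f"{name:12s} S1={s1:.2f} ST={st:.2f}")

    A parameter whose total-order index ST greatly exceeds its first-order index S1 acts mainly through interactions, which is exactly the behaviour that motivates global rather than one-at-a-time sensitivity analysis.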

  14. Sensitivity and specificity of monochromatic photography of the ocular fundus in differentiating optic nerve head drusen and optic disc oedema: optic disc drusen and oedema.

    PubMed

    Gili, Pablo; Flores-Rodríguez, Patricia; Yangüela, Julio; Orduña-Azcona, Javier; Martín-Ríos, María Dolores

    2013-03-01

    Evaluation of the efficacy of monochromatic photography of the ocular fundus in differentiating optic nerve head drusen (ONHD) and optic disc oedema (ODE). Sixty-six patients with ONHD, 31 patients with ODE and 70 healthy subjects were studied. Colour and monochromatic fundus photography with different filters (green, red and autofluorescence) were performed. The results were analysed blindly by two observers. The sensitivity, specificity and interobserver agreement (k) of each test were assessed. Colour photography offers 65.5% sensitivity and 100% specificity for the diagnosis of ONHD. Monochromatic photography improves sensitivity while providing similar specificity: green filter (71.2% sensitivity, 96.7% specificity), red filter (80.3% sensitivity, 96.8% specificity), and autofluorescence technique (87.8% sensitivity, 100% specificity). The interobserver agreement was good with all techniques used: autofluorescence (k = 0.957), green filter (k = 0.897), red filter (k = 0.818) and colour (k = 0.809). Monochromatic fundus photography permits ONHD and ODE to be differentiated, with good sensitivity and very high specificity. The best results were obtained with the autofluorescence technique and the red filter.
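
    The accuracy and agreement statistics quoted above follow directly from 2x2 tables. As a reminder of the arithmetic (the counts below are made up for illustration and are not the study's):

      def sens_spec(tp, fn, tn, fp):
          """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
          return tp / (tp + fn), tn / (tn + fp)

      def cohen_kappa(both_yes, a_only, b_only, both_no):
          """Agreement between two observers beyond chance."""
          n = both_yes + a_only + b_only + both_no
          po = (both_yes + both_no) / n                       # observed agreement
          p_yes = ((both_yes + a_only) / n) * ((both_yes + b_only) / n)
          p_no = ((b_only + both_no) / n) * ((a_only + both_no) / n)
          pe = p_yes + p_no                                   # chance agreement
          return (po - pe) / (1 - pe)

      # Illustrative counts only:
      se, sp = sens_spec(tp=58, fn=8, tn=70, fp=0)
      print(f"sensitivity {se:.1%}, specificity {sp:.1%}")
      print(f"kappa {cohen_kappa(60, 3, 2, 35):.3f}")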

  15. Effects of Anxiety Sensitivity and Hearing Loss on Tinnitus Symptom Severity

    PubMed Central

    Moon, Kyung Ray; Park, Subin; Jung, YouJi; Lee, AhReum

    2018-01-01

    Objective: The aim of the present study was to examine the relative roles of anxiety sensitivity and hearing loss in tinnitus symptom severity in a large clinical sample of patients with tinnitus. Methods: A total of 1,705 patients with tinnitus who visited the tinnitus clinic underwent pure-tone audiometric testing and a battery of self-report questionnaires. Multiple linear regression analyses were performed to identify the relationship of anxiety sensitivity and hearing loss to tinnitus symptom severity. Results: Both anxiety sensitivity and hearing loss were significantly associated with annoyance (anxiety sensitivity β=0.11, p=0.010; hearing loss β=0.09, p=0.005) and THI score (anxiety sensitivity β=0.21, p<0.001; hearing loss β=0.10, p<0.001) after adjusting for confounding factors. Meanwhile, the awareness time (β=0.19, p<0.001) and loudness (β=0.11, p<0.001) of tinnitus were associated only with hearing loss, not with anxiety sensitivity. Conclusion: Our results indicate that both hearing loss and anxiety sensitivity were associated with increased tinnitus symptom severity, and that these associations differ according to the characteristics of the tinnitus symptoms. PMID:29422923
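
    A sketch of the kind of adjusted regression reported above, using the statsmodels package on synthetic data; the variable names and effect sizes are placeholders, not the study's data:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Synthetic stand-in for the clinical sample; predictors are generated
      # already standardized, so their coefficients behave like betas.
      rng = np.random.default_rng(1)
      n = 500
      df = pd.DataFrame({
          "anx_sens": rng.normal(size=n),     # anxiety sensitivity (z-scored)
          "hear_loss": rng.normal(size=n),    # hearing loss (z-scored)
          "age": rng.normal(50, 10, size=n),  # confounder kept in the model
      })
      df["thi"] = 0.2 * df.anx_sens + 0.1 * df.hear_loss + rng.normal(size=n)
      df["thi_z"] = (df.thi - df.thi.mean()) / df.thi.std()

      fit = smf.ols("thi_z ~ anx_sens + hear_loss + age", data=df).fit()
      print(fit.params[["anx_sens", "hear_loss"]])    # adjusted coefficients
      print(fit.pvalues[["anx_sens", "hear_loss"]])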

  16. Sensitive Amino Acid Composition and Chirality Analysis with the Mars Organic Analyzer (MOA)

    NASA Technical Reports Server (NTRS)

    Skelley, Alison M.; Scherer, James R.; Aubrey, Andrew D.; Grover, William H.; Ivester, Robin H. C.; Ehrenfreund, Pascale; Grunthaner, Frank J.; Bada, Jeffrey L.; Mathies, Richard A.

    2005-01-01

    Detection of life on Mars requires definition of a suitable biomarker and development of sensitive yet compact instrumentation capable of performing in situ analyses. Our studies are focused on amino acid analysis because amino acids are more resistant to decomposition than other biomolecules, and because amino acid chirality is a well-defined biomarker. Amino acid composition and chirality analysis has been previously demonstrated in the lab using microfabricated capillary electrophoresis (CE) chips. To analyze amino acids in the field, we have developed the Mars Organic Analyzer (MOA), a portable analysis system that consists of a compact instrument and a novel multi-layer CE microchip.

  17. Evaluation of SD BIOLINE H. pylori Ag rapid test against double ELISA with SD H. pylori Ag ELISA and EZ-STEP H. pylori Ag ELISA tests.

    PubMed

    Negash, Markos; Kassu, Afework; Amare, Bemnet; Yismaw, Gizachew; Moges, Beyene

    2018-01-01

    Helicobacter pylori antibody titres fall very slowly even after successful treatment; therefore, tests detecting H. pylori antibody lack specificity and sensitivity. H. pylori stool antigen tests, on the other hand, are reported as an alternative assay because of their reliability and simplicity. However, the comparative performance of H. pylori stool antigen tests for detecting the presence of the bacterium in clinical specimens in the study area has not been assessed. Therefore, in this study we evaluated the performance of the SD BIOLINE H. pylori Ag rapid test with reference to the commercially available EZ-STEP ELISA and SD BIOLINE H. pylori Ag ELISA tests. Stool samples were collected to analyse the diagnostic performance of the SD BIOLINE H. pylori Ag rapid test kit using the SD H. pylori Ag ELISA and EZ-STEP ELISA tests as the gold standard. Serum samples were also collected from each patient to test for the presence of H. pylori antibodies using the dBest H. pylori Test Disk. Sensitivity, specificity, predictive values and kappa values were assessed; p values < 0.05 were taken as statistically significant. Stool and serum samples were collected from 201 dyspeptic patients and analysed. The sensitivity, specificity, positive and negative predictive values of the SD BIOLINE H. pylori Ag rapid test were 95.6% (95% CI, 88.8-98.8), 92.5% (95% CI, 89-94.1), 86.7% (95% CI, 80.5-89.6), and 97.6% (95% CI, 93.9-99.3), respectively. The performance of the SD BIOLINE H. pylori Ag rapid test was better than that of the currently available antibody test in the study area. Therefore, the SD BIOLINE Ag rapid stool test could replace the antibody test for diagnosing active H. pylori infection before the commencement of therapy among dyspeptic patients.

  18. Orbit Transfer Vehicle Engine Study. Phase A, extension 1: Advanced expander cycle engine optimization

    NASA Technical Reports Server (NTRS)

    Mellish, J. A.

    1979-01-01

    The performance optimization of expander cycle engines at vacuum thrust levels of 10K, 15K, and 20K lb is discussed. The optimization is conducted for a maximum engine length with an extendible nozzle in the retracted position of 60 inches and an engine mixture ratio of 6.0:1. The thrust chamber geometry and cycle analyses are documented. In addition, the sensitivity of a recommended baseline expander cycle to component performance variations is determined and chilldown/start propellant consumptions are estimated.

  19. ADC as a useful diagnostic tool for differentiating benign and malignant vertebral bone marrow lesions and compression fractures: a systematic review and meta-analysis.

    PubMed

    Suh, Chong Hyun; Yun, Seong Jong; Jin, Wook; Lee, Sun Hwa; Park, So Young; Ryu, Chang-Woo

    2018-07-01

    To assess the sensitivity and specificity of quantitative assessment of the apparent diffusion coefficient (ADC) for differentiating benign and malignant vertebral bone marrow lesions (BMLs) and compression fractures (CFs). An electronic literature search of MEDLINE and EMBASE was conducted. Bivariate modelling and hierarchical summary receiver operating characteristic modelling were performed to evaluate the diagnostic performance of ADC for differentiating vertebral BMLs. Subgroup analysis was performed for differentiating benign and malignant vertebral CFs. Meta-regression analyses according to subject, study and diffusion-weighted imaging (DWI) characteristics were performed. Twelve eligible studies (748 lesions, 661 patients) were included. The ADC exhibited a pooled sensitivity of 0.89 (95% confidence interval [CI] 0.80-0.94) and a pooled specificity of 0.87 (95% CI 0.78-0.93) for differentiating benign and malignant vertebral BMLs. In addition, the pooled sensitivity and specificity for differentiating benign and malignant CFs were 0.92 (95% CI 0.82-0.97) and 0.91 (95% CI 0.87-0.94), respectively. In the meta-regression analysis, the DWI slice thickness was a significant factor affecting heterogeneity (p < 0.01); thinner slice thickness (< 5 mm) showed higher specificity (95%) than thicker slice thickness (81%). Quantitative assessment of ADC is a useful diagnostic tool for differentiating benign and malignant vertebral BMLs and CFs. • Quantitative assessment of ADC is useful in differentiating vertebral BMLs. • Quantitative ADC assessment for BMLs had sensitivity of 89%, specificity of 87%. • Quantitative ADC assessment for CFs had sensitivity of 92%, specificity of 91%. • The specificity is highest (95%) with thinner (< 5 mm) DWI slice thickness.

  20. A miniaturised laser ablation/ionisation analyser for investigation of elemental/isotopic composition with the sub-ppm detection sensitivity

    NASA Astrophysics Data System (ADS)

    Tulej, M.; Riedo, A.; Meyer, S.; Iakovleva, M.; Neuland, M.; Wurz, P.

    2012-04-01

    Detailed knowledge of the elemental and isotopic composition of solar system objects imposes critical constraints on models describing the origin of our solar system and can provide insight into chemical and physical processes taking place during planetary evolution. So far, the investigation of the chemical composition of planetary surfaces has been conducted almost exclusively by remotely controlled spectroscopic instruments from orbiting spacecraft, landers or rovers. With some exceptions, the sensitivity of these techniques is, however, limited, and often only abundant elements can be investigated. Nevertheless, the spectroscopic techniques proved to be successful for global chemical mapping of entire planetary objects such as the Moon, Mars and asteroids. A combined effort of measurements from orbit, landers and rovers can also yield the determination of local mineralogy. New instruments, including Laser Induced Breakdown Spectroscopy (LIBS) and the Laser Ablation/Ionisation Mass Spectrometer (LIMS), have recently been included in several landed missions. LIBS is thought to improve the flexibility of the investigations and offers well-localised chemical probing from distances up to 10-13 m. Since LIMS is a mass spectrometric technique, it allows for very sensitive measurements of elements and isotopes. We will demonstrate the results of current performance tests obtained by application of a miniaturised laser ablation/ionisation mass spectrometer, a LIMS instrument developed in Bern for the chemical analysis of solids. So far, the only LIMS instrument on a spacecraft is the LAZMA instrument; this spectrometer was part of the payload for the PHOBOS-GRUNT mission and is also currently selected for the LUNA-RESURS and LUNA-GLOB missions to the lunar south pole (Managadze et al., 2011). Our LIMS instrument has dimensions of 120 x Ø60 mm and, with a weight of about 1.5 kg (all electronics included), it is the lightest mass analyser designed for in situ chemical analysis of solid materials on planetary surfaces (Rohner et al., 2003). Initial laboratory tests conducted with IR laser radiation for the ablation, atomisation and ionisation of the material indicated a high performance of the instrument in terms of sensitivity, dynamic range and mass resolution (Tulej et al., 2011). After some technical improvements and the implementation of a computer-controlled performance optimiser, we have achieved further improvements in both the instrumental sensitivity, down to the sub-ppm level, and the reproducibility of the measurements. We will demonstrate the potential of the mass analyser to perform quantitative elemental analysis of solids with a spatial (vertical, lateral) resolution commensurate with typical grain sizes, and its capabilities for the investigation of isotopic patterns with accuracy and precision comparable to those of large analytical laboratory instruments, e.g., TIMS, SIMS, LA-ICP-MS. The results can be of considerable interest for in situ dating or investigation of other fine isotopic fractionation effects, including studies of biomarkers.

  1. Subgroup Economic Evaluation of Radiotherapy for Breast Cancer After Mastectomy.

    PubMed

    Wan, Xiaomin; Peng, Liubao; Ma, Jinan; Chen, Gannong; Li, Yuanjian

    2015-11-01

    A recent meta-analysis by the Early Breast Cancer Trialists' Collaborative Group found significant improvements achieved by postmastectomy radiotherapy (PMRT) for patients with breast cancer with 1 to 3 positive nodes (pN1-3). It is unclear whether PMRT is cost-effective for subgroups of patients with positive nodes. To determine the cost-effectiveness of PMRT for subgroups of patients with breast cancer with positive nodes, a semi-Markov model was constructed to estimate the expected lifetime costs, life expectancy, and quality-adjusted life-years for patients receiving or not receiving radiation therapy. Clinical and health utility data were from meta-analyses by the Early Breast Cancer Trialists' Collaborative Group or randomized clinical trials. Costs were estimated from the perspective of Chinese society. One-way and probabilistic sensitivity analyses were performed. The incremental cost-effectiveness ratio (ICER) was estimated as $7984, $4043, $3572, and $19,021 per quality-adjusted life-year for patients with positive nodes (pN+), patients with pN1-3, patients with pN1-3 who received systemic therapy, and patients with ≥4 positive nodes (pN4+), respectively. According to World Health Organization recommendations, these incremental cost-effectiveness ratios were judged cost-effective. However, the results of one-way sensitivity analyses suggested that the results were highly sensitive to the relative effectiveness of PMRT (rate ratio). Nonetheless, the addition of PMRT for patients with pN1-3 in China has a reasonable chance of being cost-effective and may be judged an efficient deployment of limited health resources, while the risk and uncertainty of PMRT are relatively greater for patients with pN4+. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
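
    The ICERs reported are the ratio of incremental cost to incremental quality-adjusted life-years between strategies, judged against a willingness-to-pay threshold. A one-line computation with illustrative numbers:

      def icer(cost_new, cost_old, qaly_new, qaly_old):
          """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
          return (cost_new - cost_old) / (qaly_new - qaly_old)

      # Illustrative values for PMRT vs. no PMRT in a pN1-3 subgroup:
      r = icer(cost_new=14000, cost_old=10000, qaly_new=9.0, qaly_old=8.0)
      threshold = 3 * 8000  # e.g. 3x GDP per capita, a WHO-style rule
      print(f"ICER = ${r:,.0f}/QALY ->",
            "cost-effective" if r < threshold else "not cost-effective")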

  2. Cost effectiveness of imatinib compared with interferon-alpha or hydroxycarbamide for first-line treatment of chronic myeloid leukaemia.

    PubMed

    Dalziel, Kim; Round, Ali; Garside, Ruth; Stein, Ken

    2005-01-01

    To evaluate the cost utility of imatinib compared with interferon-α (IFNα) or hydroxycarbamide (hydroxyurea) for first-line treatment of chronic myeloid leukaemia. A cost-utility (Markov) model within the setting of the UK NHS and viewed from a health system perspective was adopted. Transition probabilities and relative risks were estimated from published literature. Costs of drug treatment, outpatient care, bone marrow biopsies, radiography, blood transfusions and inpatient care were obtained from the British National Formulary and local hospital databases. Costs (£, year 2001-03 values) were discounted at 6%. Quality-of-life (QOL) data were obtained from the published literature and discounted at 1.5%. The main outcome measure was cost per QALY gained. Extensive one-way sensitivity analyses were performed along with probabilistic (stochastic) analysis. The incremental cost-effectiveness ratio (ICER) of imatinib, compared with IFNα, was £26,180 per QALY gained (one-way sensitivity analyses ranged from £19,449 to £51,870) and compared with hydroxycarbamide was £86,934 per QALY (one-way sensitivity analyses ranged from £69,701 to £147,095) [£1 = $US1.691 = €1.535 as at 31 December 2002]. Based on the probabilistic sensitivity analysis, 50% of the ICERs for imatinib, compared with IFNα, fell below a threshold of approximately £31,000 per QALY gained; 50% of the ICERs for imatinib, compared with hydroxycarbamide, fell below approximately £95,000 per QALY gained. This model suggests, given its underlying data and assumptions, that imatinib may be moderately cost effective when compared with IFNα but considerably less cost effective when compared with hydroxycarbamide. There are, however, many uncertainties due to the lack of long-term data.
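
    The probabilistic (stochastic) analysis summarised above draws model inputs from distributions and reports the share of simulated ICERs falling under a willingness-to-pay threshold. A compact sketch with invented input distributions:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 10_000

      # Invented distributions for incremental cost (GBP) and incremental QALYs.
      d_cost = rng.normal(25_000, 6_000, n)
      d_qaly = rng.gamma(shape=4, scale=0.25, size=n)   # mean 1.0 QALY, always > 0

      icers = d_cost / d_qaly
      for wtp in (31_000, 95_000):
          print(f"P(ICER < GBP {wtp:,}) = {np.mean(icers < wtp):.0%}")
      # A full cost-effectiveness acceptability curve would use net monetary
      # benefit rather than the ICER ratio, to handle negative increments.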

  3. Supraorbital Versus Endoscopic Endonasal Approaches for Olfactory Groove Meningiomas: A Cost-Minimization Study.

    PubMed

    Gandhoke, Gurpreet S; Pease, Matthew; Smith, Kenneth J; Sekula, Raymond F

    2017-09-01

    To perform a cost-minimization study comparing the supraorbital approach and the endoscopic endonasal approach (EEA), with or without additional craniotomy, for the resection of olfactory groove meningiomas (OGMs). We built a decision tree using probabilities of gross total resection (GTR) and cerebrospinal fluid (CSF) leak rates with the supraorbital approach versus EEA with and without additional craniotomy. The cost (not charge or reimbursement) at each "stem" of this decision tree for both surgical options was obtained from our hospital's finance department. After a base-case calculation, we applied plausible ranges to all parameters and carried out multiple 1-way sensitivity analyses; probabilistic sensitivity analyses confirmed our results. The probabilities of GTR (0.8) and CSF leak (0.2) for the supraorbital craniotomy were obtained from our series of 5 patients who underwent a supraorbital approach for the resection of an OGM. The mean tumor volume was 54.6 cm³ (range, 17-94.2 cm³). Literature-reported rates of GTR (0.6) and CSF leak (0.3) with EEA were applied to our economic analysis. Supraorbital craniotomy was the preferred strategy, with an expected value of $29,423, compared with an EEA cost of $83,838. On multiple 1-way sensitivity analyses, supraorbital craniotomy remained the preferred strategy, with a minimum cost savings of $46,000 and a maximum savings of $64,000. Probabilistic sensitivity analysis found the lowest cost difference between the 2 surgical options to be $37,431. Compared with EEA, supraorbital craniotomy provides substantial cost savings in the treatment of OGMs. Given the potential differences in effectiveness between approaches, a cost-effectiveness analysis should be undertaken. Copyright © 2017 Elsevier Inc. All rights reserved.
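
    The "expected value" quoted for each strategy is a probability-weighted cost over the branches of the decision tree. A minimal sketch using the CSF-leak probabilities given in the abstract; the branch costs are invented placeholders (the published tree also branches on GTR, omitted here for brevity):

      # Expected cost of a strategy as a probability-weighted sum over branches.
      # CSF-leak probabilities are from the abstract; base and leak costs below
      # are invented placeholders, not the hospital's figures.

      def expected_cost(p_csf_leak, base_cost, leak_cost):
          return base_cost + p_csf_leak * leak_cost

      supraorbital = expected_cost(p_csf_leak=0.2, base_cost=25_000, leak_cost=22_000)
      eea = expected_cost(p_csf_leak=0.3, base_cost=70_000, leak_cost=46_000)
      print(f"supraorbital: ${supraorbital:,.0f}  EEA: ${eea:,.0f}")

    One-way sensitivity analysis then amounts to sweeping each input over its plausible range and checking whether the preferred (lower-cost) strategy ever changes.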

  4. Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM

    NASA Astrophysics Data System (ADS)

    Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz

    2018-03-01

    The article presents cost of equity estimation for capital groups from the fuel and energy sector listed on the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of CAPM, based on actual financial and stock exchange data, and to carry out a sensitivity analysis of that cost depending on the financing structure of the entity. The objective formulated in this manner determined the article's structure. It focuses on substantive analyses related to the nature of equity and methods of estimating its cost, with special attention given to the CAPM. In the practical section, the cost of equity was estimated according to the CAPM methodology for leading fuel and energy companies, Tauron GE and PGE, and a sensitivity analysis of that cost was performed depending on the structure of financing the companies' operations.
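
    For reference, the CAPM estimate is r_e = r_f + beta x (r_m - r_f), and a financing-structure sensitivity analysis typically relevers beta with the Hamada equation. A sketch with illustrative inputs, not Tauron GE's or PGE's actual figures:

      def capm_cost_of_equity(risk_free, beta, market_return):
          """CAPM: r_e = r_f + beta * (r_m - r_f)."""
          return risk_free + beta * (market_return - risk_free)

      def relever_beta(beta_unlevered, debt_to_equity, tax_rate):
          """Hamada equation: beta_L = beta_U * (1 + (1 - t) * D/E)."""
          return beta_unlevered * (1 + (1 - tax_rate) * debt_to_equity)

      # Illustrative inputs only:
      rf, rm, bu, tax = 0.035, 0.09, 0.8, 0.19
      for de in (0.0, 0.5, 1.0):   # sensitivity to the financing structure
          bl = relever_beta(bu, de, tax)
          print(f"D/E={de:.1f}: beta={bl:.2f}, "
                f"cost of equity={capm_cost_of_equity(rf, bl, rm):.2%}")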

  5. The 2006 Kennedy Space Center Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the Performance of the National Aeronautics and Space Administration's Space Shuttle Vehicle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Decker, Ryan; Harrington, Brian; Merry, Carl

    2008-01-01

    The Kennedy Space Center (KSC) Range Reference Atmosphere (RRA) is a statistical model that summarizes wind and thermodynamic atmospheric variability from the surface to 70 km. The National Aeronautics and Space Administration's (NASA) Space Shuttle program, which launches from KSC, utilizes the KSC RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the KSC RRA was recently completed. As part of the update, the Natural Environments Branch at NASA's Marshall Space Flight Center (MSFC) conducted a validation study and a comparison against the existing 1983 version of the KSC RRA database. Assessments of the Space Shuttle vehicle's ascent profile characteristics were performed by the JSC Ascent Flight Design Division to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  6. Cost-Effectiveness of Diagnostic Strategies for Suspected Scaphoid Fractures.

    PubMed

    Yin, Zhong-Gang; Zhang, Jian-Bing; Gong, Ke-Tong

    2015-08-01

    The aim of this study was to assess the cost-effectiveness of multiple competing diagnostic strategies for suspected scaphoid fractures. Using published data, the authors created a decision-tree model simulating the diagnosis of suspected scaphoid fractures. Clinical outcomes, costs, and cost-effectiveness of immediate computed tomography (CT), day 3 magnetic resonance imaging (MRI), day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, week 2 radiographs-MRI, week 2 radiographs-bone scan, and immediate MRI were evaluated. The primary clinical outcome was the detection of scaphoid fractures. The authors adopted a societal perspective, including both the costs of health care and the cost of lost productivity. The incremental cost-effectiveness ratio (ICER), which expresses the incremental cost per incremental scaphoid fracture detected using a strategy, was calculated to compare these diagnostic strategies. Base-case analysis, 1-way sensitivity analyses, and "worst case scenario" and "best case scenario" sensitivity analyses were performed. In the base case, the average cost per scaphoid fracture detected with immediate CT was $2553. The ICER of immediate MRI and day 3 MRI compared with immediate CT was $7483 and $32,000 per scaphoid fracture detected, respectively. The ICER of week 2 radiographs-MRI was around $170,000. The day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, and week 2 radiographs-bone scan strategies were dominated or extendedly dominated by the MRI strategies. The results were generally robust in multiple sensitivity analyses. Immediate CT and MRI were the most cost-effective strategies for diagnosing suspected scaphoid fractures. Economic and Decision Analyses, Level II. See Instructions for Authors for a complete description of levels of evidence.

  7. Cost-effectiveness of breast cancer screening using mammography in Vietnamese women

    PubMed Central

    2018-01-01

    Background: The incidence of breast cancer is increasing, and it has become the most common cancer in Vietnamese women, while the survival rate remains lower than in developed countries. Early detection to improve breast cancer survival, together with the reduction of risk factors, remains the cornerstone of breast cancer control according to the World Health Organization (WHO). This study aims to evaluate the costs and outcomes of introducing a mammography screening program for Vietnamese women aged 45-64 years, compared to the current situation of no screening. Methods: Decision analytical modeling using Markov chain analysis was used to estimate costs and health outcomes over a lifetime horizon. Model inputs were derived from published literature and the results were reported as incremental cost-effectiveness ratios (ICERs) and/or incremental net monetary benefits (INMBs). One-way sensitivity analyses and probabilistic sensitivity analyses were performed to assess parameter uncertainty. Results: The ICER per life-year gained of the first round of mammography screening was US$3647.06 and US$4405.44 for women aged 50-54 years and 55-59 years, respectively. In probabilistic sensitivity analyses, mammography screening in the 50-54 and 55-59 age groups was cost-effective in 100% of cases at a threshold of three times the Vietnamese Gross Domestic Product (GDP) per capita, i.e., US$6332.70. However, less than 50% of the cases in the 60-64 age group and 0% of the cases in the 45-49 age group were cost-effective at the WHO threshold. The ICERs were sensitive to the discount rate, mammography sensitivity, and the transition probability from remission to distant recurrence in stage II for all age groups. Conclusion: From the healthcare payer viewpoint, offering the first round of mammography screening to Vietnamese women aged 50-59 years should be considered, given the threshold of three times the Vietnamese GDP per capita. PMID:29579131
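
    Incremental net monetary benefit, mentioned above, puts the same comparison on a money scale: INMB = lambda x deltaE - deltaC, where lambda is the willingness-to-pay threshold; a strategy is cost-effective when INMB > 0. A brief check against the abstract's own numbers, assuming one life-year gained as the unit of effect:

      def inmb(wtp, delta_effect, delta_cost):
          """Incremental net monetary benefit: wtp * dE - dC."""
          return wtp * delta_effect - delta_cost

      # Threshold of three times Vietnamese GDP per capita (from the abstract);
      # an ICER below this threshold implies a positive INMB per life-year.
      wtp = 6332.70
      for age_group, icer in (("50-54", 3647.06), ("55-59", 4405.44)):
          print(age_group, f"INMB = ${inmb(wtp, 1.0, icer):,.2f}")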

  8. Satellite broadcasting system study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum-cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy, and what is meant by a minimum-cost system, are shown graphically. Topics discussed include: the main-line control program, the ground segment model, the space segment model, cost models, and launch vehicle selection. Several examples of minimum-cost systems resulting from the computer program are presented. A listing of the computer program is also included.

  9. Influence of Cobalt on the Properties of Load-Sensitive Magnesium Alloys

    PubMed Central

    Klose, Christian; Demminger, Christian; Mroz, Gregor; Reimche, Wilfried; Bach, Friedrich-Wilhelm; Maier, Hans Jürgen; Kerber, Kai

    2013-01-01

    In this study, magnesium is alloyed with varying amounts of the ferromagnetic alloying element cobalt in order to obtain lightweight load-sensitive materials with sensory properties which allow online monitoring of mechanical forces applied to components made from Mg-Co alloys. An optimized casting process with the use of extruded Mg-Co powder rods is utilized, which enables the production of magnetic magnesium alloys with a reproducible Co concentration. The efficiency of the casting process is confirmed by SEM analyses. Microstructures and Co-rich precipitates of various Mg-Co alloys are investigated by means of EDS and XRD analyses. The Mg-Co alloys' mechanical strengths are determined by tensile tests. Magnetic properties of the Mg-Co sensor alloys, depending on the cobalt content and the acting mechanical load, are measured utilizing the harmonic analysis of eddy-current signals. Within the scope of this work, the influence of the element cobalt on magnesium is investigated in detail and an optimal cobalt concentration is defined based on the performed examinations. PMID:23344376

  10. Association between alcohol consumption and amyotrophic lateral sclerosis: a meta-analysis of five observational studies.

    PubMed

    E, Meng; Yu, Sufang; Dou, Jianrui; Jin, Wu; Cai, Xiang; Mao, Yiyang; Zhu, Daojian; Yang, Rumei

    2016-08-01

    The purpose of this study was to examine the association between alcohol consumption and amyotrophic lateral sclerosis. Published literature on the association between alcohol consumption and amyotrophic lateral sclerosis was retrieved from the PubMed and Embase databases, and two authors independently extracted the data. The quality of the identified studies was evaluated according to the Newcastle-Ottawa scale. Subgroup and sensitivity analyses were performed and publication bias was assessed. Five articles, comprising one cohort study and seven case-control studies with a total of 431,943 participants, were identified. The odds ratio for the association between alcohol consumption and amyotrophic lateral sclerosis was 0.57 (95% confidence interval 0.51-0.64), and subgroup and sensitivity analyses confirmed the result. Evidence for publication bias was detected. Alcohol consumption was associated with a reduced risk of developing amyotrophic lateral sclerosis compared with non-drinking; alcohol, therefore, has a potentially neuroprotective effect on the development of amyotrophic lateral sclerosis.
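
    For reference, the pooled odds ratio in such a meta-analysis is typically an inverse-variance-weighted average of log odds ratios (a fixed-effect version is shown below; random-effects models add a between-study variance term). The per-study estimates are invented, not those of the five articles analysed here:

      import math

      # (log OR, standard error) per study -- invented numbers.
      studies = [(-0.60, 0.15), (-0.45, 0.20), (-0.70, 0.25),
                 (-0.50, 0.12), (-0.55, 0.30)]

      w = [1 / se**2 for _, se in studies]            # inverse-variance weights
      pooled_log = sum(wi * lor for (lor, _), wi in zip(studies, w)) / sum(w)
      se_pooled = math.sqrt(1 / sum(w))
      lo, hi = pooled_log - 1.96 * se_pooled, pooled_log + 1.96 * se_pooled
      print(f"pooled OR {math.exp(pooled_log):.2f} "
            f"(95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")

    A leave-one-out sensitivity analysis simply repeats this pooling with each study removed in turn and checks whether the conclusion survives.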

  11. Can we use high precision metal isotope analysis to improve our understanding of cancer?

    PubMed

    Larner, Fiona

    2016-01-01

    High precision natural isotope analyses are widely used in geosciences to trace elemental transport pathways. The use of this analytical tool is increasing in nutritional and disease-related research. In recent months, a number of groups have shown the potential this technique has in providing new observations for various cancers when applied to trace metal metabolism. The deconvolution of isotopic signatures, however, relies on mathematical models and geochemical data, which are not representative of the system under investigation. In addition to relevant biochemical studies of protein-metal isotopic interactions, technological development both in terms of sample throughput and detection sensitivity of these elements is now needed to translate this novel approach into a mainstream analytical tool. Following this, essential background healthy population studies must be performed, alongside observational, cross-sectional disease-based studies. Only then can the sensitivity and specificity of isotopic analyses be tested alongside currently employed methods, and important questions such as the influence of cancer heterogeneity and disease stage on isotopic signatures be addressed.

  12. Boundary Layer Depth In Coastal Regions

    NASA Astrophysics Data System (ADS)

    Porson, A.; Schayes, G.

    Results of earlier studies of sea-breeze simulations have shown that the boundary layer depth is a relevant feature of the Planetary Boundary Layer that is still difficult to diagnose properly with atmospheric models. Based on observations made during the ESCOMPTE campaign over the Mediterranean Sea, different CBL and SBL height estimation processes have been tested with a meso-scale model, TVM. The aim was to compare the critical points of BL height determination computed using the turbulent kinetic energy profile with other standard evaluations. Moreover, these results have been analysed with different mixing-length formulations, and the sensitivity to the formulation is also analysed in a simple coastal configuration.

  13. A comparative analysis of area navigation systems in general aviation. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Dodge, S. M.

    1973-01-01

    Radio navigation systems which offer the capabilities of area navigation to general aviation operators are discussed. The systems considered are: (1) the VORTAC system, (2) the Loran-C system, and (3) the Differential Omega system. The initial analyses are directed toward a comparison of the systems with respect to their compliance with specified performance parameters and to the cost-effectiveness of each system in relation to those specifications. Further analyses lead to the development of system cost sensitivity charts, and the employment of these charts allows conclusions to be drawn relative to the cost-effectiveness of the candidate navigation systems.

  14. The Usefulness of Selected Physicochemical Indices, Cell Membrane Integrity and Sperm Chromatin Structure in Assessments of Boar Semen Sensitivity

    PubMed Central

    Wysokińska, A.; Kondracki, S.; Iwanina, M.

    2015-01-01

    The present work describes experiments undertaken to evaluate the usefulness of selected physicochemical indices of semen, cell membrane integrity and sperm chromatin structure for the assessment of boar semen sensitivity to processes connected with pre-insemination procedures. The experiments were carried out on 30 boars: 15 regarded as providers of sensitive semen and 15 regarded as providers of semen of low sensitivity to laboratory processing. The selection of boars for both groups was based on sperm morphology analyses, taking the incidence of secondary morphological changes in spermatozoa as the criterion. Two ejaculates were manually collected from each boar at an interval of 3 to 4 months. The following analyses were carried out for each ejaculate: sperm motility assessment, sperm pH measurement, sperm morphology assessment, sperm chromatin structure evaluation and cell membrane integrity assessment. The analyses were performed three times. Semen storage did not cause an increase in the incidence of secondary morphological changes in the group of boars considered to provide semen of low sensitivity. On the other hand, with continued storage there was a marked increase in the incidence of spermatozoa with secondary morphological changes in the group of boars regarded as producing more sensitive semen. Ejaculates of group I boars evaluated directly after collection had an approximately 6% smaller share of spermatozoa with undamaged cell membranes than the ejaculates of boars in group II (p≤0.05), and over time the percentage of spermatozoa with undamaged cell membranes decreased. The semen of group I boars was characterised by lower sperm motility than the semen of group II boars: after 1 hour of storing diluted semen, the sperm motility of boars producing highly sensitive semen was already 4% lower (p≤0.05), and after 24 hours of storage it was 6.33% lower than that of the boars that produced semen of low sensitivity. Factors that confirm the accuracy of insemination male selection include a low rate of sperm motility decrease during the storage of diluted semen, a low and stable incidence of secondary morphological changes in spermatozoa during semen storage, and a high frequency of spermatozoa with undamaged cell membranes. PMID:26580438

  15. New developments in supra-threshold perimetry.

    PubMed

    Henson, David B; Artes, Paul H

    2002-09-01

    To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for setting the supra-threshold test intensity at the beginning of a field test, to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity), and to compare these with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false-positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false-positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques, and may be easier for novice patients to perform than the more demanding threshold tests.
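
    The pass/fail criterion evaluated above can be made concrete with a small simulation: at each location, stimuli are presented until either 3 are seen (pass) or 3 are missed (fail), which takes between 3 and 5 presentations. A sketch with the per-presentation detection probability as a free parameter:

      import random

      def location_passes(p_seen, rng):
          """Present stimuli until 3 are seen (pass) or 3 are missed (fail);
          by construction this takes between 3 and 5 presentations."""
          seen = missed = 0
          while seen < 3 and missed < 3:
              if rng.random() < p_seen:
                  seen += 1
              else:
                  missed += 1
          return seen == 3

      rng = random.Random(7)
      trials = 100_000
      for p in (0.95, 0.5, 0.05):   # healthy, borderline, defective location
          passes = sum(location_passes(p, rng) for _ in range(trials))
          print(f"P(seen)={p:.2f}: pass rate {passes / trials:.3f}")

    Repeating up to 5 presentations sharpens the pass/fail decision relative to a single presentation, which is one plausible reading of the reduced test/retest variability reported above.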

  16. Gram staining of protected pulmonary specimens in the early diagnosis of ventilator-associated pneumonia.

    PubMed

    Mimoz, O; Karim, A; Mazoit, J X; Edouard, A; Leprince, S; Nordmann, P

    2000-11-01

    We prospectively evaluated the use of Gram staining of protected pulmonary specimens for the early diagnosis of ventilator-associated pneumonia (VAP), using 60 bronchoscopic protected specimen brush (PSB) samples and 126 blinded plugged telescopic catheter (PTC) samples obtained from 134 patients. Gram stains were prepared from Cytospin slides; they were studied for the presence of microorganisms in 10 and 50 fields by two independent observers and classified according to their Gram stain morphology. Quantitative cultures were performed after serial dilution and plating on appropriate culture medium. A final diagnosis of VAP, based on a culture of ≥10³ c.f.u. ml⁻¹, was established after 81 (44%) samplings. When 10 fields were analysed, a strong relationship was found between the presence of bacteria on Gram staining and the final diagnosis of VAP (for PSB and PTC respectively: sensitivity 74 and 81%, specificity 94 and 100%, positive predictive value 91 and 100%, negative predictive value 82 and 88%). The correlation was weaker when we compared the morphology of microorganisms observed on Gram staining with that of bacteria obtained from quantitative cultures (for PSB and PTC respectively: sensitivity 54 and 69%, specificity 86 and 89%, positive predictive value 72 and 78%, negative predictive value 74 and 84%). Increasing the number of fields read to 50 was associated with a slight decrease in the specificity and positive predictive value of Gram staining, but with a small increase in its sensitivity and negative predictive value. The results obtained by the two observers were similar for both numbers of fields analysed. Gram staining of protected pulmonary specimens performed on 10 fields predicted the presence of VAP and partially identified (using Gram stain morphology) the microorganisms growing at significant concentrations, and could help in the early choice of treatment for VAP. Increasing the number of fields read or having the Gram stain analysed by two independent individuals did not improve the results.

  17. Carbon nuclear magnetic resonance spectroscopic fingerprinting of commercial gasoline: pattern-recognition analyses for screening quality control purposes.

    PubMed

    Flumignan, Danilo Luiz; Boralle, Nivaldo; Oliveira, José Eduardo de

    2010-06-30

    In this work, the combination of carbon-13 nuclear magnetic resonance (¹³C NMR) fingerprinting with pattern-recognition analyses provides an original and alternative approach to screening commercial gasoline quality. Soft Independent Modelling of Class Analogy (SIMCA) was performed on spectroscopic fingerprints to classify representative commercial gasoline samples, selected by Hierarchical Cluster Analysis (HCA) over several months from retail gas stations, into previously quality-defined classes. With the optimized ¹³C NMR-SIMCA algorithm, sensitivity values of 99.0% were obtained in the training set, with leave-one-out cross-validation, and 92.0% in the external prediction set. Governmental laboratories could employ this method as a rapid screening analysis to discourage adulteration practices. Copyright 2010 Elsevier B.V. All rights reserved.

  18. Turbulent stresses in the surf-zone: Which way is up?

    USGS Publications Warehouse

    Haines, John W.; Gelfenbaum, Guy; Edge, B.L

    1997-01-01

    Velocity observations from a vertical stack of three-component Acoustic Doppler Velocimeters (ADVs) within the energetic surf-zone are presented. Rapid temporal sampling and a small sampling volume provide observations suitable for investigating the role of turbulent fluctuations in surf-zone dynamics. While sensor performance was good, failure to recover reliable measures of tilt from the vertical compromised the value of the data. We present some cursory observations supporting the ADV performance, and examine the sensitivity of stress estimates to uncertainty in the sensor orientation. It is well known that turbulent stress estimates are highly sensitive to orientation relative to vertical when wave motions are dominant. The analyses presented examine the potential of using observed flow-field characteristics to constrain sensor orientation. Results show that such an approach may provide a consistent orientation to a fraction of a degree, but the inherent sensitivity of stress estimates requires a still more restrictive constraint. Regardless, the observations indicate the degree to which stress estimates depend on orientation, and provide some indication of the temporal variability in time-averaged stress estimates.
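
    The orientation sensitivity the authors describe can be made explicit with a standard coordinate-rotation result (notation ours). If the sensor axes are tilted by an angle theta in the vertical plane of the waves, the measured covariance is

      \langle u'w' \rangle_{\mathrm{meas}}
        = \langle u'w' \rangle \cos 2\theta
        + \tfrac{1}{2}\bigl(\langle w'^{2} \rangle - \langle u'^{2} \rangle\bigr)\sin 2\theta ,

    so the wave orbital variances leak into the stress estimate in proportion to sin 2θ. Because those variances exceed the turbulent stress by orders of magnitude in the surf-zone, a tilt of even a fraction of a degree can contaminate ⟨u'w'⟩, which is why the orientation constraint discussed above is so restrictive.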

  19. Giardiasis: an update review on sensitivity and specificity of methods for laboratorial diagnosis.

    PubMed

    Soares, Renata; Tasca, Tiana

    2016-10-01

    Giardiasis is a major cause of diarrhoea, transmitted by ingestion of water and food contaminated with cysts, and it spreads among people with poor oral hygiene. The traditional diagnosis is performed by identifying trophozoites and cysts of Giardia duodenalis through microscopy of faecal samples. In addition to microscopy, different methods based on immunologic and molecular analyses have been validated for giardiasis diagnosis. The aim of this study was to review the main methods applied in the clinical laboratory for the diagnosis of giardiasis in the last 10 years, with regard to specificity and sensitivity. High variability was observed in the performance of the same methodology across studies; however, several techniques have been considered better than microscopy. The latter, although the gold standard, presents low sensitivity when few cysts are present in the sample, and the experience of the microscopist must also be considered. We conclude that microscopy should still be performed and a complementary technique is recommended, in order to provide a reliable diagnosis and proper treatment of the patient. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Preparation and characterization of AuNPs/CNTs-ErGO electrochemical sensors for highly sensitive detection of hydrazine.

    PubMed

    Zhao, Zhenting; Sun, Yongjiao; Li, Pengwei; Zhang, Wendong; Lian, Kun; Hu, Jie; Chen, Yong

    2016-09-01

    A highly sensitive electrochemical hydrazine sensor has been fabricated by coating a carbon nanotube-electrochemically reduced graphene oxide composite film (CNTs-ErGO) with Au nanoparticles (AuNPs) on a glassy carbon electrode (GCE). Cyclic voltammetry and potential amperometry were used to investigate the electrochemical properties of the fabricated sensors for hydrazine detection. The performance of the sensors was optimized by varying the CNTs-to-ErGO ratio and the quantity of Au nanoparticles. The results show that, under optimal conditions, a sensitivity of 9.73 μA μM⁻¹ cm⁻², a short response time of 3 s, and a low detection limit of 0.065 μM could be achieved, with a linear concentration response range from 0.3 μM to 319 μM. The enhanced electrochemical performance could be attributed to the synergistic effect between the AuNPs and the CNTs-ErGO film and the outstanding catalytic effect of the Au nanoparticles. Finally, the sensor was successfully used to analyse tap water, showing high potential for practical applications. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. [What is hidden behind the Baking Tray Task? A study of sensitivity and specificity in right-hemisphere stroke patients].

    PubMed

    Garcia-Fernandez, Juan; Garcia-Molina, Alberto; Aparicio-Lopez, Celeste; Sanchez-Carrion, Rocío; Ensenat, Antònia; Pena-Casanova, Jordi; Roig-Rovira, Teresa

    2015-12-16

    Tham and Tegner proposed the Baking Tray Task (BTT) as a fast, simple assessment test for detecting spatial neglect. However, very few studies have examined its validity as a diagnostic test. Our aim was to analyse the diagnostic validity of the BTT by measuring its specificity and sensitivity in a sample of subjects with right-hemisphere strokes. Forty-eight patients with right-hemisphere vascular lesions were distributed into two groups (neglect group, n = 35; non-neglect group, n = 13) according to the scores obtained in a battery of visuospatial examination tests. The participants' performance on the BTT was compared with that of a healthy control group (n = 12). The results showed a high level of sensitivity of the BTT, but low specificity: the performance on the BTT of eight of the 13 members of the non-neglect group was suggestive of neglect. The BTT has proved to be a sensitive test for the detection of spatial neglect; however, given its low specificity, its use alone as a single diagnostic test is not recommended.

  2. Validation of a portable nitric oxide analyzer for screening in primary ciliary dyskinesias.

    PubMed

    Harris, Amanda; Bhullar, Esther; Gove, Kerry; Joslin, Rhiannon; Pelling, Jennifer; Evans, Hazel J; Walker, Woolf T; Lucas, Jane S

    2014-02-10

    Nasal nitric oxide (nNO) levels are very low in primary ciliary dyskinesia (PCD), and nNO measurement is used as a screening test. We assessed the reliability and usability of a hand-held analyser in comparison to a stationary nitric oxide (NO) analyser in 50 participants (15 healthy, 13 PCD, 22 other respiratory diseases; age 6-79 years). Nasal NO was measured using a stationary NO analyser during a breath-holding maneuver, and using a hand-held analyser during tidal breathing, sampling at 2 ml/sec or 5 ml/sec. The three methods were compared for their specificity and sensitivity as a screen for PCD, their success rate in different age groups, within-subject repeatability and acceptability. Correlation between methods was also assessed. Valid nNO measurements were obtained in 94% of participants using the stationary analyser, 96% using the hand-held analyser at 5 ml/sec, and 76% at 2 ml/sec. The hand-held device at 5 ml/sec had excellent sensitivity and specificity as a screening test for PCD during tidal breathing (cut-off of 30 nL/min, 100% sensitivity, >95% specificity). The cut-off using the stationary analyser during breath-hold was 38 nL/min (100% sensitivity, 95% specificity). The stationary and hand-held analysers (5 ml/sec) showed reasonable within-subject repeatability (coefficient of variation = 15%). The hand-held NO analyser provides a promising screening tool for PCD.

  3. Comparison of standard- and nano-flow liquid chromatography platforms for MRM-based quantitation of putative plasma biomarker proteins.

    PubMed

    Percy, Andrew J; Chambers, Andrew G; Yang, Juncong; Domanski, Dominik; Borchers, Christoph H

    2012-09-01

    The analytical performance of a standard-flow ultra-high-performance liquid chromatography (UHPLC) system and a nano-flow high-performance liquid chromatography (HPLC) system, interfaced to the same state-of-the-art triple-quadrupole mass spectrometer, was compared for the multiple reaction monitoring (MRM)-mass spectrometry (MS)-based quantitation of a panel of 48 high-to-moderate-abundance cardiovascular disease-related plasma proteins. After optimization of the MRM transitions for sensitivity and testing for chemical interference, the optimum sensitivity, loading capacity, gradient, and retention-time reproducibilities were determined. We previously demonstrated the increased robustness of the standard-flow platform, but we expected that the standard-flow platform would have an overall lower sensitivity; this study was designed to determine whether that decreased sensitivity could be compensated for by increased sample loading. Significantly fewer interferences with the MRM transitions were found for the standard-flow platform than for the nano-flow platform (2 of 103 transitions compared with 42 of 103 transitions, respectively), which demonstrates the importance of interference testing when nano-flow systems are used. Using only interference-free transitions, 36 replicate LC/MRM-MS analyses resulted in equal signal reproducibilities between the two platforms (9.3% coefficient of variation (CV) for 88 peptide targets), with superior retention-time precision for the standard-flow platform (0.13 vs. 6.1% CV). Surprisingly, for 41 of the 81 proteotypic peptides in the final assay, the standard-flow platform was more sensitive, while for 9 of 81 the nano-flow platform was more sensitive. For these 81 peptides, there was a good correlation between the two sets of results (R² = 0.98, slope = 0.97). Overall, the standard-flow platform had superior performance metrics for most peptides, and is a good choice if sufficient sample is available.

  4. Determining the sensitivity of transrectal ultrasonography on a consecutive series of prostate cancers in a clinical and screening setting.

    PubMed

    Ciatto, Stefano; Bonardi, Rita; Lombardi, Claudio; Zappa, Marco; Gervasi, Ginetta

    2002-01-01

    To evaluate the sensitivity of transrectal ultrasonography (TRUS) for prostate cancer, a consecutive series of 170 prostate cancers was identified by matching the local cancer registry and TRUS archives at the Centro per lo Studio e la Prevenzione Oncologica of Florence. TRUS sensitivity was determined as the ratio of TRUS-positive to total prostate cancers occurring at different intervals from the TRUS date. Univariate and multivariate analyses of sensitivity determinants were performed. Sensitivity at 6 months, 1, 2 and 3 years after the test was 94.1% (95% CI, 90-98), 89.8% (95% CI, 85-95), 80.4% (95% CI, 74-87) and 74.1% (95% CI, 68-81), respectively. A statistically significantly higher sensitivity of TRUS was observed only if digital rectal examination was suspicious, whereas no association with sensitivity was observed for age, prostate-specific antigen or prostate-specific antigen density. The study provided a reliable estimate of TRUS sensitivity, which is particularly trustworthy because it was checked against a cancer registry: observed sensitivity was high, at least of the same magnitude as other cancer screening tests. TRUS, which is known to allow for considerable diagnostic anticipation and is more specific than prostate-specific antigen, might still be considered for its contribution to a screening approach.

  5. Prediction of amyloid-β pathology in amnestic mild cognitive impairment with neuropsychological tests.

    PubMed

    Bahar-Fuchs, Alex; Villemagne, Victor; Ong, Kevin; Chetélat, Gaël; Lamb, Fiona; Reininger, Cornelia B; Woodward, Michael; Rowe, Christopher C

    2013-01-01

    Assessment of disease biomarkers, particularly the in vivo assessment of amyloid-β (Aβ) burden with positron emission tomography (PET), is gradually becoming central to the diagnosis of mild cognitive impairment (MCI) due to Alzheimer's disease (AD). However, the incorporation of biomarker evidence into the diagnostic process is currently restricted mainly to research settings. The identification of memory measures that are associated with Aβ is of clinical relevance, as this may enhance the confidence in making a diagnosis of MCI due to AD in clinical settings. Forty-one persons with amnestic MCI underwent Aβ imaging with ¹⁸F-florbetaben PET, magnetic resonance imaging, and a comprehensive neuropsychological assessment. All measures of episodic memory were significantly correlated with Aβ burden, but regression analyses revealed a strong and selective association between story recall and Aβ over and beyond the effects of age, education, global cognition, hippocampal volume, or other memory tests. Analyses of the sensitivity and specificity of memory measures to detect high versus low Aβ scans suggested that word-list recall performed better when high sensitivity was preferred, whereas story recall performed better when high specificity was preferred. In conclusion, a measure of story recall may increase the confidence in making a diagnosis of MCI due to AD in clinical settings.

  6. Routine Use of Contrast Swallow After Total Gastrectomy and Esophagectomy: Is it Justified?

    PubMed

    El-Sourani, Nader; Bruns, Helge; Troja, Achim; Raab, Hans-Rudolf; Antolovic, Dalibor

    2017-01-01

    After gastrectomy or esophagectomy, esophagogastrostomy and esophagojejunostomy are commonly used for reconstruction. Water-soluble contrast swallow is often used as a routine screening test to exclude anastomotic leakage during the first postoperative week. In this retrospective study, the sensitivity and specificity of oral water-soluble contrast swallow for the detection of anastomotic leakage and its clinical symptoms were analysed. Records of 104 consecutive total gastrectomies and distal esophagectomies were analysed. In all cases, upper gastrointestinal contrast swallow with a water-soluble contrast agent was performed on the 5th postoperative day. Extravasation of the contrast agent was defined as anastomotic leakage. When anastomotic insufficiency was suspected but no extravasation was present, a computed tomography (CT) scan and upper endoscopy were performed. Oral contrast swallow detected 7 anastomotic leaks. Based on CT scans and upper endoscopy, the true number of anastomotic leaks was 15. The findings of the oral contrast swallow were falsely positive in 4 and falsely negative in 12 patients, respectively. The sensitivity and specificity of the oral contrast swallow were 20% and 96%, respectively. Routine radiological contrast swallow following total gastrectomy or distal esophagectomy cannot be recommended. When symptoms of anastomotic leakage are present, a CT scan and endoscopy are currently the methods of choice.
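
    The reported 20% sensitivity and 96% specificity can be reconstructed from the counts in the abstract (15 true leaks; 7 positive swallows, of which 4 were false positives; 12 false negatives; 104 patients):

      # Counts taken from the abstract.
      n_patients = 104
      true_leaks = 15
      fp = 4                              # positive swallow, no leak
      fn = 12                             # negative swallow, leak present
      tp = true_leaks - fn                # 3 leaks correctly detected
      tn = n_patients - true_leaks - fp   # 85 correctly negative

      sensitivity = tp / (tp + fn)        # 3/15 = 20%
      specificity = tn / (tn + fp)        # 85/89 ~ 96%
      print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")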

  7. Automated segmentation and dose-volume analysis with DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Thomas, S.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Wu, J.

    2014-03-01

Purpose: Exploration of historical data for regional organ dose sensitivity is limited by the effort needed to (sub-)segment large numbers of contours. A system has been developed which can rapidly perform autonomous contour sub-segmentation and generic dose-volume computations, substantially reducing the effort required for exploratory analyses. Methods: A contour-centric approach is taken which enables lossless, reversible segmentation and dramatically reduces computation time compared with voxel-centric approaches. Segmentation can be specified on a per-contour, per-organ, or per-patient basis, and can be performed along either an embedded plane or in terms of the contour's bounds (e.g., split organ into fractional-volume/dose pieces along any 3D unit vector). More complex segmentation techniques are available. Anonymized data from 60 head-and-neck cancer patients were used to compare dose-volume computations with Varian's Eclipse™ (Varian Medical Systems, Inc.). Results: Computed mean doses and dose-volume histograms agree strongly with Varian's Eclipse™. Contours which have been segmented can be injected back into patient data permanently and in a Digital Imaging and Communication in Medicine (DICOM)-conforming manner. Lossless segmentation persists across such injection, and remains fully reversible. Conclusions: DICOMautomaton allows researchers to rapidly, accurately, and autonomously segment large amounts of data into intricate structures suitable for analyses of regional organ dose sensitivity.
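
    A rough illustration of bounds-based sub-segmentation, as a 2D toy rather than DICOMautomaton's contour-centric 3D implementation: bisect the offset of a splitting line along a chosen unit vector until one piece holds the requested area fraction.

      import numpy as np

      def clip_halfplane(poly, d, t):
          # Sutherland-Hodgman clip of polygon (N,2) to the half-plane {p : p.d <= t}
          out = []
          for i in range(len(poly)):
              a, b = poly[i], poly[(i + 1) % len(poly)]
              da, db = a @ d - t, b @ d - t
              if da <= 0:
                  out.append(a)
              if (da <= 0) != (db <= 0):            # the edge crosses the split line
                  out.append(a + (b - a) * (da / (da - db)))
          return np.array(out)

      def area(poly):
          if len(poly) < 3:
              return 0.0
          x, y = poly[:, 0], poly[:, 1]
          return 0.5 * abs(x @ np.roll(y, -1) - y @ np.roll(x, -1))

      def split_at_fraction(poly, d, frac, iters=50):
          # Bisect offset t so the piece below the line holds `frac` of the area
          proj = poly @ d
          lo, hi = proj.min(), proj.max()
          target = frac * area(poly)
          for _ in range(iters):
              t = 0.5 * (lo + hi)
              if area(clip_halfplane(poly, d, t)) < target:
                  lo = t
              else:
                  hi = t
          return 0.5 * (lo + hi)

      square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
      print(split_at_fraction(square, d=np.array([1.0, 0.0]), frac=0.25))  # ~0.25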

  8. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE PAGES

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.; ...

    2017-06-21

Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically-cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.

  9. Sensitivity and Limitations of Structures from X-ray and Neutron-Based Diffraction Analyses of Transition Metal Oxide Lithium-Battery Electrodes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hao; Liu, Haodong; Lapidus, Saul H.

Lithium transition metal oxides are an important class of electrode materials for lithium-ion batteries. Binary or ternary (transition) metal doping brings about new opportunities to improve the electrode's performance and often leads to more complex stoichiometries and atomic structures than the archetypal LiCoO2. Rietveld structural analysis of X-ray and neutron diffraction data is a widely used approach for structural characterization of crystalline materials. However, different structural models and refinement approaches can lead to differing results, and some parameters can be difficult to quantify due to the inherent limitations of the data. Here, through the example of LiNi0.8Co0.15Al0.05O2 (NCA), we demonstrated the sensitivity of various structural parameters in Rietveld structural analysis to different refinement approaches and structural models, and proposed an approach to reduce refinement uncertainties due to the inexact X-ray scattering factors of the constituent atoms within the lattice. Furthermore, this refinement approach was implemented for electrochemically-cycled NCA samples and yielded accurate structural parameters using only X-ray diffraction data. The present work provides best practices for performing structural refinement of lithium transition metal oxides.

  10. Analysis of the causes of discrepancies in troponin I concentrations as measured by ARCHITECT High-Sensitive Troponin I ST and STACIA CLEIA cTnI.

    PubMed

    Kondo, Takashi; Kobayashi, Daisuke; Mochizuki, Maki; Asanuma, Kouichi; Takahashi, Satoshi

    2017-01-01

    Background Recently developed reagents for the highly sensitive measurement of cardiac troponin I are useful for early diagnosis of acute coronary syndrome. However, differences in measured values between these new reagents and previously used reagents have not been well studied. In this study, we aimed to compare the values between ARCHITECT High-Sensitive Troponin I ST (newly developed reagents), ARCHITECT Troponin I ST and STACIA CLEIA cardiac troponin I (two previously developed reagent kits). Methods Gel filtration high-performance liquid chromatography was used to analyse the causes of differences in measured values. Results The measured values differed between ARCHITECT High-Sensitive Troponin I ST and STACIA CLEIA cardiac troponin I reagents (r = 0.82). Cross-reactivity tests using plasma with added skeletal-muscle troponin I resulted in higher reactivity (2.17-3.03%) for the STACIA CLEIA cardiac troponin I reagents compared with that for the ARCHITECT High-Sensitive Troponin I ST reagents (less than 0.014%). In addition, analysis of three representative samples using gel filtration high-performance liquid chromatography revealed reagent-specific differences in the reactivity against each cardiac troponin I complex; this could explain the differences in values observed for some of the samples. Conclusion The newly developed ARCHITECT High-Sensitive Troponin I ST reagents were not affected by the presence of skeletal-muscle troponin I in the blood and may be useful for routine examinations.

  11. Lower sensitivity of serum (1,3)-β-d-glucan for the diagnosis of candidaemia due to Candida parapsilosis.

    PubMed

    Mikulska, M; Giacobbe, D R; Furfaro, E; Mesini, A; Marchese, A; Del Bono, V; Viscoli, C

    2016-07-01

The aim of this study was to evaluate the sensitivity and the levels of 1,3-β-d-glucan (BDG) among patients with candidaemia due to different Candida species. A retrospective study was performed of all patients who had single-species candidaemia and BDG testing within 48 h of the onset of candidaemia during 2009-2015. Factors influencing the sensitivity of BDG, including the presence of a central venous catheter, antifungal therapy and Candida species, were analysed in univariate and multivariate models. In all, 107 patients with the following Candida distribution were included: 46 (43%) Candida albicans, 37 (35%) Candida parapsilosis, and 24 (22%) other species. BDG sensitivity and levels were highest in C. albicans candidaemia and lowest for C. parapsilosis (respectively, 72% and 410 pg/mL for C. albicans, 41% and 39 pg/mL for C. parapsilosis, and 63% and 149 pg/mL for other species; p 0.015 and p 0.003). In multivariate analysis, Candida species (parapsilosis versus others) was the only factor influencing the sensitivity of BDG (OR 0.3, 95% CI 0.1-0.7, p 0.006). The sensitivity of BDG in candidaemia seems highly dependent on the fungal species, with the lowest being for C. parapsilosis. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.

  12. Epidermis Microstructure Inspired Graphene Pressure Sensor with Random Distributed Spinosum for High Sensitivity and Large Linearity.

    PubMed

    Pang, Yu; Zhang, Kunning; Yang, Zhen; Jiang, Song; Ju, Zhenyi; Li, Yuxing; Wang, Xuefeng; Wang, Danyang; Jian, Muqiang; Zhang, Yingying; Liang, Renrong; Tian, He; Yang, Yi; Ren, Tian-Ling

    2018-03-27

Recently, wearable pressure sensors have attracted tremendous attention because of their potential applications in monitoring physiological signals for human healthcare. Sensitivity and linearity are the two most essential parameters for pressure sensors. Although various designed micro/nanostructure morphologies have been introduced, the trade-off between sensitivity and linearity has not been well balanced. Human skin, which contains force receptors in a reticular layer, has a high sensitivity even for large external stimuli. Herein, inspired by the skin epidermis with high-performance force sensing, we have proposed a special surface morphology with randomly distributed spinosum microstructure, obtained by combining an abrasive paper template with reduced graphene oxide. The sensitivity of the graphene pressure sensor with the random-distribution spinosum (RDS) microstructure is as high as 25.1 kPa-1 over a wide linear range of 0-2.6 kPa. Our pressure sensor exhibits superior comprehensive properties compared with previous surface-modified pressure sensors. According to simulation and mechanism analyses, the spinosum microstructure and its random distribution contribute to the high sensitivity and large linear range, respectively. In addition, the pressure sensor shows promising potential in detecting human physiological signals, such as heartbeat, respiration, phonation, and human motions such as push-ups, arm bending, and walking. The wearable pressure sensor array was further used to detect gait states of supination, neutral, and pronation. The RDS microstructure provides an alternative strategy to improve the performance of pressure sensors and extend their potential applications in monitoring human activities.

  13. Lunar Exploration Architecture Level Key Drivers and Sensitivities

    NASA Technical Reports Server (NTRS)

    Goodliff, Kandyce; Cirillo, William; Earle, Kevin; Reeves, J. D.; Shyface, Hilary; Andraschko, Mark; Merrill, R. Gabe; Stromgren, Chel; Cirillo, Christopher

    2009-01-01

Strategic level analysis of the integrated behavior of lunar transportation and lunar surface systems architecture options is performed to assess the benefit, viability, affordability, and robustness of system design choices. This analysis employs both deterministic and probabilistic modeling techniques so that the extent of potential future uncertainties associated with each option is properly characterized. The results of these analyses are summarized in a predefined set of high-level Figures of Merit (FOMs) so as to provide senior NASA Constellation Program (CxP) and Exploration Systems Mission Directorate (ESMD) management with pertinent information to better inform strategic level decision making. The strategic level exploration architecture model is designed to perform analysis at as high a level as possible while still capturing those details that have major impacts on system performance. The strategic analysis methodology focuses on integrated performance, affordability, and risk analysis, and captures the linkages and feedbacks between these three areas. Each of these results leads into the determination of the high-level FOMs. This strategic level analysis methodology has been previously applied to Space Shuttle and International Space Station assessments and is now being applied to the development of the Constellation Program point-of-departure lunar architecture. This paper provides an overview of the strategic analysis methodology and the lunar exploration architecture analyses to date. In studying these analysis results, the strategic analysis team has identified and characterized key drivers affecting the integrated architecture behavior. These key drivers include inclusion of a cargo lander, mission rate, mission location, fixed-versus-variable costs/return on investment, and the requirement for probabilistic analysis. Results of sensitivity analyses performed on lunar exploration architecture scenarios are also presented.

  14. Vasa previa screening strategies: a decision and cost-effectiveness analysis.

    PubMed

    Sinkey, R G; Odibo, A O

    2018-05-22

The aim of this study is to perform a decision and cost-effectiveness analysis comparing four screening strategies for the antenatal diagnosis of vasa previa among singleton pregnancies. A decision-analytic model was constructed comparing vasa previa screening strategies. Published probabilities and costs were applied to four transvaginal screening scenarios which occurred at the time of mid-trimester ultrasound: no screening, ultrasound-indicated screening, screening pregnancies conceived by in vitro fertilization (IVF), and universal screening. Ultrasound-indicated screening was defined as performing a transvaginal ultrasound at the time of routine anatomy ultrasound in response to one of the following sonographic findings associated with an increased risk of vasa previa: low-lying placenta, marginal or velamentous cord insertion, or bilobed or succenturiate lobed placenta. The primary outcome was cost per quality adjusted life years (QALY) in U.S. dollars. The analysis was from a healthcare system perspective with a willingness to pay (WTP) threshold of $100,000 per QALY selected. One-way and multivariate sensitivity analyses (Monte-Carlo simulation) were performed. This decision-analytic model demonstrated that screening pregnancies conceived by IVF was the most cost-effective strategy with an incremental cost effectiveness ratio (ICER) of $29,186.50 / QALY. Ultrasound-indicated screening was the second most cost-effective with an ICER of $56,096.77 / QALY. These data were robust to all one-way and multivariate sensitivity analyses performed. Within our baseline assumptions, transvaginal ultrasound screening for vasa previa appears to be most cost-effective when performed among IVF pregnancies. However, both IVF and ultrasound-indicated screening strategies fall within contemporary willingness-to-pay thresholds, suggesting that both strategies may be appropriate to apply in clinical practice. This article is protected by copyright. All rights reserved.
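
    The decision rule underlying these comparisons is the incremental cost-effectiveness ratio judged against a willingness-to-pay threshold. A minimal sketch, with hypothetical costs and QALYs (only the $100,000/QALY threshold comes from the record):

      def icer(cost_new, qaly_new, cost_base, qaly_base):
          """Incremental cost-effectiveness ratio in $ per QALY gained."""
          return (cost_new - cost_base) / (qaly_new - qaly_base)

      WTP = 100_000  # willingness-to-pay threshold, $ per QALY
      value = icer(cost_new=5_000, qaly_new=29.95, cost_base=3_500, qaly_base=29.90)
      print(value, value < WTP)   # adopt the strategy when ICER < WTP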

  15. The 2006 Cape Canaveral Air Force Station Range Reference Atmosphere Model Validation Study and Sensitivity Analysis to the National Aeronautics and Space Administration's Space Shuttle

    NASA Technical Reports Server (NTRS)

    Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian

    2008-01-01

The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study on the 2006 version was conducted, as well as a comparison of the 2006 version to the existing CCAFS RRA database, version 1983. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.

  16. Efficient computation paths for the systematic analysis of sensitivities

    NASA Astrophysics Data System (ADS)

    Greppi, Paolo; Arato, Elisabetta

    2013-01-01

    A systematic sensitivity analysis requires computing the model on all points of a multi-dimensional grid covering the domain of interest, defined by the ranges of variability of the inputs. The issues to efficiently perform such analyses on algebraic models are handling solution failures within and close to the feasible region and minimizing the total iteration count. Scanning the domain in the obvious order is sub-optimal in terms of total iterations and is likely to cause many solution failures. The problem of choosing a better order can be translated geometrically into finding Hamiltonian paths on certain grid graphs. This work proposes two paths, one based on a mixed-radix Gray code and the other, a quasi-spiral path, produced by a novel heuristic algorithm. Some simple, easy-to-visualize examples are presented, followed by performance results for the quasi-spiral algorithm and the practical application of the different paths in a process simulation tool.
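
    One standard construction with the stated single-step property is the reflected mixed-radix Gray code; the paper's exact path and its quasi-spiral heuristic are not reproduced here, so this sketch is only one plausible instance:

      from itertools import product

      def mixed_radix_gray(radices):
          """Hamiltonian path on a grid: successive points differ by one step
          in a single coordinate, so each model solve can be warm-started
          from its neighbour's solution."""
          for idx in product(*(range(r) for r in radices)):
              point, parity = [], 0
              for d, r in zip(idx, radices):
                  if parity % 2:          # reflect this axis on odd parity
                      d = r - 1 - d
                  point.append(d)
                  parity += d
              yield tuple(point)

      for p in mixed_radix_gray([2, 3]):
          print(p)   # (0,0) (0,1) (0,2) (1,2) (1,1) (1,0)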

  17. Mid-L/D Lifting Body Entry Demise Analysis

    NASA Technical Reports Server (NTRS)

    Ling, Lisa

    2017-01-01

    The mid-lift-to-drag ratio (mid-L/D) lifting body is a fully autonomous spacecraft under design at NASA for enabling a rapid return of scientific payloads from the International Space Station (ISS). For contingency planning and risk assessment for the Earth-return trajectory, an entry demise analysis was performed to examine three potential failure scenarios: (1) nominal entry interface conditions with loss of control, (2) controlled entry at maximum flight path angle, and (3) controlled entry at minimum flight path angle. The objectives of the analysis were to predict the spacecraft breakup sequence and timeline, determine debris survival, and calculate the debris dispersion footprint. Sensitivity analysis was also performed to determine the effect of the initial pitch rate on the spacecraft stability and breakup during the entry. This report describes the mid-L/D lifting body and presents the results of the entry demise and sensitivity analyses.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyack, B.E.; Steiner, J.L.; Harmony, S.C.

The PIUS Advanced Reactor is a 640-MW(e) pressurized-water reactor developed by Asea Brown Boveri. A unique feature of the PIUS concept is the absence of mechanical control and shutdown rods. Reactivity normally is controlled by the boron concentration in the coolant and the temperature of the moderator coolant. Analyses of five initiating events have been completed on the basis of calculations performed with the system neutronic and thermal-hydraulic analysis code TRAC-PF1/MOD2. The initiating events analyzed are (1) reactor scram, (2) loss of off-site power, (3) main steam-line break, (4) small-break loss of coolant, and (5) large-break loss of coolant. In addition to the baseline calculation for each sequence, sensitivity studies were performed to explore the response of the PIUS reactor to severe off-normal conditions having a very low probability of occurrence. The sensitivity studies provide insights into the robustness of the design.

  19. Influential factors on thermoacoustic efficiency of multilayered graphene film loudspeakers for optimal design

    NASA Astrophysics Data System (ADS)

    Xing, Qianhe; Li, Shuang; Fan, Xueliang; Bian, Anhua; Cao, Shi-Jie; Li, Cheng

    2017-09-01

Graphene thermoacoustic loudspeakers, composed of a graphene film on a substrate, generate sound with heat. Improving the thermoacoustic efficiency of graphene speakers is a goal for optimal design. In this work, we first modified the existing thermoacoustic model with respect to small thermal wavelengths and then built an acoustic platform for model validation. Additionally, sensitivity analyses for influential factors on thermoacoustic efficiency were performed, including the thickness of multilayered graphene films, the thermal effusivity of substrates, and the characteristics of inserted gases. Higher sensitivity coefficients correspond to stronger effects on thermoacoustic efficiency. We find that the thickness (5 nm-15 nm) of graphene films plays a trivial role in efficiency, with a sensitivity coefficient of less than 0.02. The substrate thermal effusivity, however, has significant effects on efficiency, with a sensitivity coefficient around 1.7. Moreover, substrates with a lower thermal effusivity show better acoustic performance. For the influence of ambient gases, the sensitivity coefficients of density ρg, thermal conductivity κg, and specific heat cp,g are 2.7, 0.98, and 0.8, respectively. Furthermore, large magnitudes of both ρg and κg lead to a higher efficiency, and the sound pressure level generated by graphene films is approximately proportional to the inverse of cp,g. These findings can inform the optimal design of graphene thermoacoustic speakers.
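
    A common way to obtain such dimensionless sensitivity coefficients is a normalized finite difference, |ΔE/E| / |Δx/x|. The record does not give its exact definition, so this sketch is one plausible reading, with a toy model in which efficiency scales as the inverse of the gas specific heat:

      def sensitivity_coefficient(model, x0, rel_step=0.01):
          """Normalised sensitivity |dE/E| / |dx/x| by central finite difference."""
          e0 = model(x0)
          lo, hi = model(x0 * (1 - rel_step)), model(x0 * (1 + rel_step))
          return abs((hi - lo) / e0) / (2 * rel_step)

      # toy model: efficiency inversely proportional to specific heat c_p
      print(sensitivity_coefficient(lambda cp: 1.0 / cp, 1005.0))  # ~1.0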

  20. Temperature sensitivity analysis of polarity controlled electrostatically doped tunnel field-effect transistor

    NASA Astrophysics Data System (ADS)

    Nigam, Kaushal; Pandey, Sunil; Kondekar, P. N.; Sharma, Dheeraj

    2016-09-01

Conventional tunnel field-effect transistors (TFETs) have shown potential for scaling into the sub-22 nm regime due to their low sub-threshold slope and robustness against short-channel effects (SCEs); however, their sensitivity to temperature variation is a major concern. Therefore, for the first time, we investigate the temperature sensitivity of a polarity controlled electrostatically doped tunnel field-effect transistor (ED-TFET). Different performance metrics and analog/RF figures-of-merit were considered and compared for both devices, and simulations were performed using the Silvaco ATLAS device tool. We found that the variation in ON-state current in the ED-TFET is almost temperature independent due to the electrostatic doping mechanism, whereas it increases in the conventional TFET at higher temperatures. Above room temperature, the variations in ION, IOFF, and SS sensitivity in the ED-TFET are only 0.11%/K, 2.21%/K, and 0.63%/K, while in the conventional TFET the variations are 0.43%/K, 2.99%/K, and 0.71%/K, respectively. Below room temperature, the variation in ED-TFET ION is 0.195%/K compared with 0.27%/K for the conventional TFET. Moreover, the analysis shows that the incomplete ionization effect in the conventional TFET severely affects the drive current and the threshold voltage, while the ED-TFET remains unaffected. Hence, the proposed ED-TFET is less sensitive to temperature variation and can be used for cryogenic as well as high-temperature applications.

  1. Bayesian sensitivity analysis methods to evaluate bias due to misclassification and missing data using informative priors and external validation data.

    PubMed

    Luta, George; Ford, Melissa B; Bondy, Melissa; Shields, Peter G; Stamey, James D

    2013-04-01

Recent research suggests that the Bayesian paradigm may be useful for modeling biases in epidemiological studies, such as those due to misclassification and missing data. We used Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to the potential effect of these two important sources of bias. We used data from a study of the joint associations of radiotherapy and smoking with primary lung cancer among breast cancer survivors. We used Bayesian methods to provide an operational way to combine both validation data and expert opinion to account for misclassification of the two risk factors and missing data. For comparative purposes we considered a "full model" that allowed for both misclassification and missing data, along with alternative models that considered only misclassification or missing data, and the naïve model that ignored both sources of bias. We identified noticeable differences between the four models with respect to the posterior distributions of the odds ratios that described the joint associations of radiotherapy and smoking with primary lung cancer. Despite those differences we found that the general conclusions regarding the pattern of associations were the same regardless of the model used. Overall, our results indicate a nonsignificantly decreased lung cancer risk due to radiotherapy among nonsmokers, and a mildly increased risk among smokers. We described easy-to-implement Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to misclassification and missing data. Copyright © 2012 Elsevier Ltd. All rights reserved.
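
    In the same spirit, though far simpler than the record's full Bayesian model with validation data, a probabilistic bias analysis corrects a 2x2 table for exposure misclassification using sensitivity/specificity drawn from priors. All counts and prior ranges below are hypothetical:

      import random

      def corrected_or(a, b, c, d, se, sp):
          """Misclassification-corrected odds ratio for a 2x2 table.
          a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
          def correct(x, n):                    # estimated true exposed among n
              return (x - n * (1 - sp)) / (se + sp - 1)
          a_t, c_t = correct(a, a + b), correct(c, c + d)
          b_t, d_t = (a + b) - a_t, (c + d) - c_t
          return (a_t * d_t) / (b_t * c_t)

      random.seed(1)
      draws = [corrected_or(45, 55, 30, 70,
                            se=random.uniform(0.80, 0.95),
                            sp=random.uniform(0.90, 0.99))
               for _ in range(10_000)]
      draws.sort()
      print(draws[250], draws[5000], draws[9750])   # ~2.5th/50th/97.5th percentiles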

  2. Siemens Immulite Aspergillus-specific IgG assay for chronic pulmonary aspergillosis diagnosis.

    PubMed

    Page, Iain D; Richardson, Malcolm D; Denning, David W

    2018-05-14

    Chronic pulmonary aspergillosis (CPA) complicates underlying lung disease, including treated tuberculosis. Measurement of Aspergillus-specific immunoglobulin G (IgG) is a key diagnostic step. Cutoffs have been proposed based on receiver operating characteristic (ROC) curve analyses comparing CPA cases to healthy controls, but performance in at-risk populations with underlying lung disease is unclear. We evaluated optimal cutoffs for the Siemens Immulite Aspergillus-specific IgG assay for CPA diagnosis in relation to large groups of healthy and diseased controls with treated pulmonary tuberculosis. Sera from 241 patients with CPA attending the UK National Aspergillosis Centre, 299 Ugandan blood donors (healthy controls), and 398 Ugandans with treated pulmonary tuberculosis (diseased controls) were tested. Radiological screening removed potential CPA cases from diseased controls (234 screened diseased controls). ROC curve analyses were performed and optimal cutoffs identified by Youden J statistic. CPA versus control ROC area under curve (AUC) results were: healthy controls 0.984 (95% confidence interval 0.972-0.997), diseased controls 0.972 (0.959-0.985), screened diseased controls 0.979 (0.967-0.992). Optimal cutoffs were: healthy controls 15 mg/l (94.6% sensitivity, 98% specificity), unscreened diseased controls 15 mg/l (94.6% sensitivity, 94.5% specificity), screened diseased controls 25 mg/l (92.9% sensitivity, 98.7% specificity). Results were similar in healthy and diseased controls. We advocate a cutoff of 20 mg/l as this is the midpoint of the range of optimal cutoffs. Cutoffs calculated in relation to healthy controls for other assays are likely to remain valid for use in a treated tuberculosis population.
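
    The cutoff-selection step used here generalizes: scan candidate cutoffs and keep the one maximizing Youden's J = sensitivity + specificity - 1. A sketch with hypothetical IgG titres (mg/l):

      def optimal_cutoff(cases, controls, candidates):
          """Pick the cutoff maximising Youden's J = sensitivity + specificity - 1."""
          return max(
              candidates,
              key=lambda c: (sum(v >= c for v in cases) / len(cases)
                             + sum(v < c for v in controls) / len(controls) - 1),
          )

      cases = [48, 35, 22, 90, 17, 60, 41]        # hypothetical CPA titres
      controls = [3, 8, 12, 22, 5, 9, 14, 2]      # hypothetical control titres
      print(optimal_cutoff(cases, controls, candidates=range(1, 101)))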

  3. Cost-utility analysis of an advanced pressure ulcer management protocol followed by trained wound, ostomy, and continence nurses.

    PubMed

    Kaitani, Toshiko; Nakagami, Gojiro; Iizaka, Shinji; Fukuda, Takashi; Oe, Makoto; Igarashi, Ataru; Mori, Taketoshi; Takemura, Yukie; Mizokami, Yuko; Sugama, Junko; Sanada, Hiromi

    2015-01-01

The high prevalence of severe pressure ulcers (PUs) is an important issue requiring attention in Japan. In a previous study, we devised an advanced PU management protocol to enable early detection of and intervention for deep tissue injury and critical colonization. This protocol was effective for preventing more severe PUs. The present study aimed to compare, from a medical provider's perspective, the cost-effectiveness of care provided by trained wound, ostomy, and continence nurses (WOCNs) using the advanced PU management protocol with that of conventional care provided by a control group of WOCNs. A Markov model was constructed for a 1-year time horizon to determine the incremental cost-effectiveness ratio of advanced PU management compared with conventional care. The number of quality-adjusted life-years gained and the cost in Japanese yen (¥) ($US1 = ¥120; 2015) were used as the outcomes. Model inputs for clinical probabilities and related costs were based on our previous clinical trial results. Univariate sensitivity analyses were performed. Furthermore, a Bayesian multivariate probabilistic sensitivity analysis was performed using Monte Carlo simulations with advanced PU management. Two different models were created for the initial cohort distribution. For both models, the expected effectiveness for the intervention group using advanced PU management techniques was high, with a low expected cost value. The sensitivity analyses suggested that the results were robust. Intervention by WOCNs using advanced PU management techniques was more effective and cost-effective than conventional care. © 2015 by the Wound Healing Society.

  4. Genome-Wide Association Study of the Modified Stumvoll Insulin Sensitivity Index Identifies BCL2 and FAM19A2 as Novel Insulin Sensitivity Loci

    PubMed Central

    Gustafsson, Stefan; Rybin, Denis; Stančáková, Alena; Chen, Han; Liu, Ching-Ti; Hong, Jaeyoung; Jensen, Richard A.; Rice, Ken; Morris, Andrew P.; Mägi, Reedik; Tönjes, Anke; Prokopenko, Inga; Kleber, Marcus E.; Delgado, Graciela; Silbernagel, Günther; Jackson, Anne U.; Appel, Emil V.; Grarup, Niels; Lewis, Joshua P.; Montasser, May E.; Landenvall, Claes; Staiger, Harald; Luan, Jian’an; Frayling, Timothy M.; Weedon, Michael N.; Xie, Weijia; Morcillo, Sonsoles; Martínez-Larrad, María Teresa; Biggs, Mary L.; Chen, Yii-Der Ida; Corbaton-Anchuelo, Arturo; Færch, Kristine; Gómez-Zumaquero, Juan Miguel; Goodarzi, Mark O.; Kizer, Jorge R.; Koistinen, Heikki A.; Leong, Aaron; Lind, Lars; Lindgren, Cecilia; Machicao, Fausto; Manning, Alisa K.; Martín-Núñez, Gracia María; Rojo-Martínez, Gemma; Rotter, Jerome I.; Siscovick, David S.; Zmuda, Joseph M.; Zhang, Zhongyang; Serrano-Rios, Manuel; Smith, Ulf; Soriguer, Federico; Hansen, Torben; Jørgensen, Torben J.; Linnenberg, Allan; Pedersen, Oluf; Walker, Mark; Langenberg, Claudia; Scott, Robert A.; Wareham, Nicholas J.; Fritsche, Andreas; Häring, Hans-Ulrich; Stefan, Norbert; Groop, Leif; O’Connell, Jeff R.; Boehnke, Michael; Bergman, Richard N.; Collins, Francis S.; Mohlke, Karen L.; Tuomilehto, Jaakko; März, Winfried; Kovacs, Peter; Stumvoll, Michael; Psaty, Bruce M.; Kuusisto, Johanna; Laakso, Markku; Meigs, James B.; Dupuis, Josée; Ingelsson, Erik; Florez, Jose C.

    2016-01-01

    Genome-wide association studies (GWAS) have found few common variants that influence fasting measures of insulin sensitivity. We hypothesized that a GWAS of an integrated assessment of fasting and dynamic measures of insulin sensitivity would detect novel common variants. We performed a GWAS of the modified Stumvoll Insulin Sensitivity Index (ISI) within the Meta-Analyses of Glucose and Insulin-Related Traits Consortium. Discovery for genetic association was performed in 16,753 individuals, and replication was attempted for the 23 most significant novel loci in 13,354 independent individuals. Association with ISI was tested in models adjusted for age, sex, and BMI and in a model analyzing the combined influence of the genotype effect adjusted for BMI and the interaction effect between the genotype and BMI on ISI (model 3). In model 3, three variants reached genome-wide significance: rs13422522 (NYAP2; P = 8.87 × 10−11), rs12454712 (BCL2; P = 2.7 × 10−8), and rs10506418 (FAM19A2; P = 1.9 × 10−8). The association at NYAP2 was eliminated by conditioning on the known IRS1 insulin sensitivity locus; the BCL2 and FAM19A2 associations were independent of known cardiometabolic loci. In conclusion, we identified two novel loci and replicated known variants associated with insulin sensitivity. Further studies are needed to clarify the causal variant and function at the BCL2 and FAM19A2 loci. PMID:27416945

  5. Sensitivity to Change and Responsiveness of Four Balance Measures for Community-Dwelling Older Adults

    PubMed Central

    Latham, Nancy K.; Jette, Alan M.; Wagenaar, Robert C.; Ni, Pengsheng; Slavin, Mary D.; Bean, Jonathan F.

    2012-01-01

Background Impaired balance has a significant negative impact on mobility, functional independence, and fall risk in older adults. Although several well-respected balance measures are currently in use, there is limited evidence regarding the most appropriate measure to assess change in community-dwelling older adults. Objective The aim of this study was to compare floor and ceiling effects, sensitivity to change, and responsiveness across the following balance measures in community-dwelling elderly people with functional limitations: Berg Balance Scale (BBS), Performance-Oriented Mobility Assessment total scale (POMA-T), POMA balance subscale (POMA-B), and Dynamic Gait Index (DGI). Design Retrospective data from a 16-week exercise trial were used. Secondary analyses were conducted on the total sample and by subgroups of baseline functional limitation or baseline balance scores. Methods Participants were 111 community-dwelling older adults 65 years of age or older, with functional limitations. Sensitivity to change was assessed using effect size, standardized response mean, and paired t tests. Responsiveness was assessed using minimally important difference (MID) estimates. Results No floor effects were noted. Ceiling effects were observed on all measures, including in people with moderate to severe functional limitations. The POMA-T, POMA-B, and DGI showed significantly larger ceiling effects compared with the BBS. All measures had low sensitivity to change in total sample analyses. Subgroup analyses revealed significantly better sensitivity to change in people with lower compared with higher baseline balance scores. Although both the total sample and lower baseline balance subgroups showed statistically significant improvement from baseline to 16 weeks on all measures, only the lower balance subgroup showed change scores that consistently exceeded corresponding MID estimates. Limitations This study was limited to comparing 4 measures of balance, and anchor-based methods for assessing MID could not be reported. Conclusions Important limitations, including ceiling effects and relatively low sensitivity to change and responsiveness, were noted across all balance measures, highlighting their limited utility across the full spectrum of the community-dwelling elderly population. New, more challenging measures are needed for better discrimination of balance ability in community-dwelling elderly people at higher functional levels. PMID:22114200

  6. Integrating frequency and magnitude information in decision-making in schizophrenia: An account of patient performance on the Iowa Gambling Task.

    PubMed

    Brown, Elliot C; Hack, Samantha M; Gold, James M; Carpenter, William T; Fischer, Bernard A; Prentice, Kristen P; Waltz, James A

    2015-01-01

    The Iowa Gambling Task (IGT; Bechara et al., 1994) has frequently been used to assess risky decision making in clinical populations, including patients with schizophrenia (SZ). Poor performance on the IGT is often attributed to reduced sensitivity to punishment, which contrasts with recent findings from reinforcement learning studies in schizophrenia. In order to investigate possible sources of IGT performance deficits in SZ patients, we combined data from the IGT from 59 SZ patients and 43 demographically-matched controls with data from the Balloon Analog Risk Task (BART) in the same participants. Our analyses sought to specifically uncover the role of punishment sensitivity and delineate the capacity to integrate frequency and magnitude information in decision-making under risk. Although SZ patients, on average, made more choices from disadvantageous decks than controls did on the IGT, they avoided decks with frequent punishments at a rate similar to controls. Patients also exhibited excessive loss-avoidance behavior on the BART. We argue that, rather than stemming from reduced sensitivity to negative consequences, performance deficits on the IGT in SZ patients are more likely the result of a reinforcement learning deficit, specifically involving the integration of frequencies and magnitudes of rewards and punishments in the trial-by-trial estimation of expected value. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Integrating frequency and magnitude information in decision-making in schizophrenia: An account of patient performance on the Iowa Gambling Task

    PubMed Central

    Brown, Elliot C.; Hack, Samantha M.; Gold, James M.; Carpenter, William T.; Fischer, Bernard A.; Prentice, Kristen P.; Waltz, James A.

    2015-01-01

    Background The Iowa Gambling Task (IGT; Bechara, Damasio, Damasio, & Anderson, 1994) has frequently been used to assess risky decision making in clinical populations, including patients with schizophrenia (SZ). Poor performance on the IGT is often attributed to reduced sensitivity to punishment, which contrasts with recent findings from reinforcement learning studies in schizophrenia. Methods In order to investigate possible sources of IGT performance deficits in SZ patients, we combined data from the IGT from 59 SZ patients and 43 demographically-matched controls with data from the Balloon Analog Risk Task (BART) in the same participants. Our analyses sought to specifically uncover the role of punishment sensitivity and delineate the capacity to integrate frequency and magnitude information in decision-making under risk. Results Although SZ patients, on average, made more choices from disadvantageous decks than controls did on the IGT, they avoided decks with frequent punishments at a rate similar to controls. Patients also exhibited excessive loss-avoidance behavior on the BART. Conclusions We argue that, rather than stemming from reduced sensitivity to negative consequences, performance deficits on the IGT in SZ patients are more likely the result of a reinforcement learning deficit, specifically involving the integration of frequencies and magnitudes of rewards and punishments in the trial-by-trial estimation of expected value. PMID:25959618

  8. Behavioral metabolomics analysis identifies novel neurochemical signatures in methamphetamine sensitization

    PubMed Central

    Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.

    2014-01-01

    Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544

  9. A Customizable Flow Injection System for Automated, High Throughput, and Time Sensitive Ion Mobility Spectrometry and Mass Spectrometry Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orton, Daniel J.; Tfaily, Malak M.; Moore, Ronald J.

To better understand disease conditions and environmental perturbations, multi-omic studies (i.e. proteomic, lipidomic, metabolomic, etc. analyses) are vastly increasing in popularity. In a multi-omic study, a single sample is typically extracted in multiple ways and numerous analyses are performed using different instruments. Thus, one sample becomes many analyses, making high throughput and reproducible evaluations a necessity. One way to address the numerous samples and varying instrumental conditions is to utilize a flow injection analysis (FIA) system for rapid sample injection. While some FIA systems have been created to address these challenges, many have limitations such as high consumable costs, low pressure capabilities, limited pressure monitoring and fixed flow rates. To address these limitations, we created an automated, customizable FIA system capable of operating at diverse flow rates (~50 nL/min to 500 µL/min) to accommodate low- and high-flow instrument sources. This system can also operate at varying analytical throughputs from 24 to 1200 samples per day to enable different MS analysis approaches. Applications ranging from native protein analyses to molecular library construction were performed using the FIA system. The results from these studies showed a highly robust platform, providing consistent performance over many days without carryover as long as washing buffers specific to each molecular analysis were utilized.

  10. Cell Culture System for Analysis of Genetic Heterogeneity Within Hepatocellular Carcinomas and Response to Pharmacologic Agents.

    PubMed

    Gao, Qiang; Wang, Zhi-Chao; Duan, Meng; Lin, Yi-Hui; Zhou, Xue-Ya; Worthley, Daniel L; Wang, Xiao-Ying; Niu, Gang; Xia, Yuchao; Deng, Minghua; Liu, Long-Zi; Shi, Jie-Yi; Yang, Liu-Xiao; Zhang, Shu; Ding, Zhen-Bin; Zhou, Jian; Liang, Chun-Min; Cao, Ya; Xiong, Lei; Xi, Ruibin; Shi, Yong-Yong; Fan, Jia

    2017-01-01

No targeted therapies have been found to be effective against hepatocellular carcinoma (HCC), possibly due to the large degree of intratumor heterogeneity. We performed genetic analyses of different regions of HCCs to evaluate levels of intratumor heterogeneity and associate alterations with responses to different pharmacologic agents. We obtained samples of HCCs (associated with hepatitis B virus infection) from 10 patients undergoing curative resection, before adjuvant therapy, at hospitals in China. We collected 4-9 spatially distinct samples from each tumor (55 regions total), performed histologic analyses, isolated cancer cells, and established them in low-passage culture. We performed whole-exome sequencing, copy-number analysis, and high-throughput screening of the cultured primary cancer cells. We tested responses of an additional 105 liver cancer cell lines to a fibroblast growth factor receptor 4 (FGFR4) inhibitor. We identified a total of 3670 non-silent mutations (3192 missense, 94 splice-site variants, and 222 insertions or deletions) in the tumor samples. We observed considerable intratumor heterogeneity and branched evolution in all 10 tumors; the mean percentage of heterogeneous mutations in each tumor was 39.7% (range, 12.9%-68.5%). We found significant mutation shifts toward C>T and C>G substitutions in branches of phylogenetic trees among samples from each tumor (P < .0001). Of note, 14 of the 26 oncogenic alterations (53.8%) varied among subclones that mapped to different branches. Genetic alterations that can be targeted by existing pharmacologic agents (such as those in FGF19, DDR2, PDGFRA, and TOP1) were identified in intratumor subregions from 4 HCCs and were associated with sensitivity to these agents. However, cells from the remaining subregions, which did not have these alterations, were not sensitive to these drugs; high-throughput screening nevertheless identified pharmacologic agents to which these cells were sensitive. Overexpression of FGF19 correlated with sensitivity of cells to an FGFR4 inhibitor; this observation was validated in 105 liver cancer cell lines (P = .0024). By analyzing genetic alterations in different tumor regions of 10 HCCs, we observed extensive intratumor heterogeneity. Our patient-derived cell line-based model, integrating genetic and pharmacologic data from multiregional cancer samples, provides a platform to elucidate how intratumor heterogeneity affects sensitivity to different therapeutic agents. Copyright © 2017 AGA Institute. Published by Elsevier Inc. All rights reserved.

  11. Behavioral Training as New Treatment for Adult Amblyopia: A Meta-Analysis and Systematic Review.

    PubMed

    Tsirlin, Inna; Colpa, Linda; Goltz, Herbert C; Wong, Agnes M F

    2015-06-01

New behavioral treatment methods, including dichoptic training, perceptual learning, and video gaming, have been proposed to improve visual function in adult amblyopia. Here, we conducted a meta-analysis of these methods to investigate the factors involved in amblyopia recovery and their clinical significance. Mean and individual participant data meta-analyses were performed on 24 studies using the new behavioral methods in adults. Studies were identified using PubMed, Google Scholar, and published reviews. The new methods yielded a mean improvement in visual acuity of 0.17 logMAR, with 32% of participants achieving gains ≥0.2 logMAR, and a mean improvement in stereo sensitivity of 0.01 arcsec-1, with 42% of participants improving ≥2 octaves. The most significant predictor of treatment outcome was visual acuity at the onset of treatment. Participants with more severe amblyopia improved more on visual acuity and less on stereo sensitivity than those with milder amblyopia. Better initial stereo sensitivity was a predictor of greater gains in stereo sensitivity following treatment. Treatment type, amblyopia type, age, and training duration did not have any significant influence on visual and stereo acuity outcomes. Our analyses showed that some participants may benefit from the new treatments; however, clinical trials are required to confirm these findings. Despite the diverse nature of the new behavioral methods, the lack of significant differences in visual and stereo sensitivity outcomes among them suggests that visual attention, a common element among the varied treatment methods, may play an important role in amblyopia recovery.

  12. Comparative Analysis of Normalised Difference Spectral Indices Derived from MODIS for Detecting Surface Water in Flooded Rice Cropping Systems

    PubMed Central

    Boschetti, Mirco; Nutini, Francesco; Manfron, Giacinto; Brivio, Pietro Alessandro; Nelson, Andrew

    2014-01-01

    Identifying managed flooding in paddy fields is commonly used in remote sensing to detect rice. Such flooding, followed by rapid vegetation growth, is a reliable indicator to discriminate rice. Spectral indices (SIs) are often used to perform this task. However, little work has been done on determining which spectral combination in the form of Normalised Difference Spectral Indices (NDSIs) is most appropriate for surface water detection or which thresholds are most robust to separate water from other surfaces in operational contexts. To address this, we conducted analyses on satellite and field spectral data from an agronomic experiment as well as on real farming situations with different soil and plant conditions. Firstly, we review and select NDSIs proposed in the literature, including a new combination of visible and shortwave infrared bands. Secondly, we analyse spectroradiometric field data and satellite data to evaluate mixed pixel effects. Thirdly, we analyse MODIS data and Landsat data at four sites in Europe and Asia to assess NDSI performance in real-world conditions. Finally, we test the performance of the NDSIs on MODIS temporal profiles in the four sites. We also compared the NDSIs against a combined index previously used for agronomic flood detection. Analyses suggest that NDSIs using MODIS bands 4 and 7, 1 and 7, 4 and 6 or 1 and 6 perform best. A common threshold for each NDSI across all sites was more appropriate than locally adaptive thresholds. In general, NDSIs that use band 7 have a negligible increase in Commission Error over those that use band 6 but are more sensitive to water presence in mixed land cover conditions typical of moderate spatial resolution analyses. The best performing NDSI is comparable to the combined index but with less variability in performance across sites, suggesting a more succinct and robust flood detection method. PMID:24586381
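
    The index family being compared is the Normalised Difference Spectral Index, NDSI(a, b) = (a - b) / (a + b), thresholded to flag surface water. A sketch with hypothetical reflectances (the study's calibrated common threshold is not reproduced here; zero is used only for illustration):

      import numpy as np

      def ndsi(band_a, band_b):
          """Normalised Difference Spectral Index: (a - b) / (a + b)."""
          a, b = np.asarray(band_a, float), np.asarray(band_b, float)
          return (a - b) / (a + b)

      # e.g. MODIS band 4 (green) vs band 7 (SWIR): water tends positive, dry soil negative
      green = np.array([0.08, 0.10, 0.06])
      swir = np.array([0.02, 0.25, 0.30])
      flooded = ndsi(green, swir) > 0.0     # one common threshold across sites
      print(flooded)                        # [ True False False]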

  13. Pharmacoeconomic evaluation of fluconazole, posaconazole and voriconazole for antifungal prophylaxis in patients with acute myeloid leukaemia undergoing first consolidation chemotherapy.

    PubMed

    Heng, Siow-Chin; Slavin, Monica A; Al-Badriyeh, Daoud; Kirsa, Sue; Seymour, John F; Grigg, Andrew; Thursky, Karin; Bajel, Ashish; Nation, Roger L; Kong, David C M

    2013-07-01

Fluconazole, posaconazole and voriconazole are used prophylactically in patients with acute myeloid leukaemia (AML). This study evaluated the clinical and economic outcomes of these agents when used in AML patients undergoing consolidation chemotherapy. A retrospective chart review (2003-10) of AML patients receiving consolidation chemotherapy was performed. Patients were followed through their first cycle of consolidation chemotherapy. Antifungal prescribing patterns, clinical outcomes and resource consumptions were recorded. A decision analytical model was developed to depict the downstream consequences of using each antifungal agent, with success defined as completion of the designated course of initial antifungal prophylaxis without developing invasive fungal disease (IFD). Cost-effectiveness and sensitivity analyses were performed. A total of 106 consecutive patients were analysed. Baseline characteristics and predisposing factors for IFD were comparable between groups. Three IFDs (one proven, one probable and one suspected) occurred, all in the posaconazole group. Patients receiving posaconazole had the highest rate of intolerance requiring drug cessation (13% versus 7% in each of the fluconazole and voriconazole groups). Fluconazole conferred overall savings per patient of 26% over posaconazole and 13% over voriconazole. Monte Carlo simulation demonstrated a mean cost saving with fluconazole of AU$8430 per patient (95% CI AU$5803-AU$11,054) versus posaconazole and AU$3681 per patient (95% CI AU$990-AU$6319) versus voriconazole. One-way sensitivity analyses confirmed the robustness of the model. This is the first study to show that, in the setting of consolidation therapy for AML, fluconazole is the most cost-effective approach to antifungal prophylaxis compared with posaconazole or voriconazole.

  14. [Comparison of simple pooling and bivariate model used in meta-analyses of diagnostic test accuracy published in Chinese journals].

    PubMed

    Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan

    2015-06-18

To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results between these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and fourfold-table data were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I2, was explored. In total, 55 systematic reviews containing 58 DTA meta-analyses were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effect model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance, the larger the deviation of simple pooling is likely to be. It is necessary to raise the level of knowledge of statistical methods and software for meta-analyses of DTA data.
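
    Simple pooling, the method most of the included reviews used, just sums the fourfold-table cells across studies and ignores between-study variance; the bivariate model instead fits logit-sensitivity and logit-specificity jointly with random effects. A sketch of the simple-pooling side, with hypothetical per-study tables:

      def simple_pooled(studies):
          """Simple pooling: sum the 2x2 cells (tp, fn, fp, tn) across studies.

          Diverges most from the bivariate model when between-study
          variance is high (e.g. I^2 > 75%, as the record reports)."""
          tp = sum(s[0] for s in studies)
          fn = sum(s[1] for s in studies)
          fp = sum(s[2] for s in studies)
          tn = sum(s[3] for s in studies)
          return tp / (tp + fn), tn / (tn + fp)

      # hypothetical per-study fourfold tables
      print(simple_pooled([(80, 20, 10, 90), (45, 15, 5, 135), (60, 40, 30, 70)]))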

  15. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines

    PubMed Central

    Kurç, Tahsin M.; Taveira, Luís F. R.; Melo, Alba C. M. A.; Gao, Yi; Kong, Jun; Saltz, Joel H.

    2017-01-01

Motivation: Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. Results: The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Conclusions: Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Availability and Implementation: Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28062445

  16. Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.

    PubMed

    Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H

    2017-04-01

Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  17. Kastle-Meyer blood test reagents are deleterious to DNA.

    PubMed

    Sloots, James; Lalonde, Wendy; Reid, Barbara; Millman, Jonathan

    2017-12-01

    The Kastle-Meyer (KM) test is a quick and easy chemical test for blood used in forensic analyses. Two practical variations of this test are the KM-rub (indirect) test and the more sensitive KM-direct test, the latter of which is performed by applying reagents directly to a suspected blood stain. This study found that sodium hydroxide present in the KM reagents eliminated the potential to generate a DNA profile when applied directly to small quantities of blood. A modified approach to the KM-rub test that increases its sensitivity is presented as a method to replace destructive KM-direct testing. Crown Copyright © 2017. Published by Elsevier B.V. All rights reserved.

  18. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the Soil Conservation Service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data of the Nagwan watershed (area = 92.46 km²) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distribution) as well as total sediment yield.
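
    For context, the SCS-CN rainfall-runoff relation that these sediment graph models build on is Q = (P - Ia)² / (P - Ia + S), with retention S = 25400/CN - 254 (mm) and initial abstraction Ia commonly taken as 0.2S. A short sketch with illustrative values, not the Nagwan data:

```python
def scs_cn_runoff(p_mm: float, cn: float, ia_ratio: float = 0.2) -> float:
    """Direct runoff depth Q (mm) from rainfall P (mm) via the SCS-CN method."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = ia_ratio * s          # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0             # all rainfall absorbed before runoff begins
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(p_mm=75.0, cn=80.0))  # ~31 mm of direct runoff
```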

  19. Characterizing the binaural contribution to speech-in-noise reception in elderly hearing-impaired listeners.

    PubMed

    Neher, Tobias

    2017-02-01

    To scrutinize the binaural contribution to speech-in-noise reception, four groups of elderly participants with or without audiometric asymmetry <2 kHz and with or without near-normal binaural intelligibility level difference (BILD) completed tests of monaural and binaural phase sensitivity as well as cognitive function. Groups did not differ in age, overall degree of hearing loss, or cognitive function. Analyses revealed an influence of BILD status but not audiometric asymmetry on monaural phase sensitivity, strong correlations between monaural and binaural detection thresholds, and monaural and binaural but not cognitive BILD contributions. Furthermore, the N₀Sπ threshold at 500 Hz predicted BILD performance effectively.

  20. The psychometrics of mental workload: multiple measures are sensitive but divergent.

    PubMed

    Matthews, Gerald; Reinerman-Jones, Lauren E; Barber, Daniel J; Abich, Julian

    2015-02-01

    A study was run to test the sensitivity of multiple workload indices to the differing cognitive demands of four military monitoring task scenarios and to investigate relationships between indices. Various psychophysiological indices of mental workload exhibit sensitivity to task factors. However, the psychometric properties of multiple indices, including the extent to which they intercorrelate, have not been adequately investigated. One hundred fifty participants performed in four task scenarios based on a simulation of unmanned ground vehicle operation. Scenarios required threat detection and/or change detection. Both single- and dual-task scenarios were used. Workload metrics for each scenario were derived from the electroencephalogram (EEG), electrocardiogram, transcranial Doppler sonography, functional near infrared, and eye tracking. Subjective workload was also assessed. Several metrics showed sensitivity to the differing demands of the four scenarios. Eye fixation duration and the Task Load Index metric derived from EEG were diagnostic of single- versus dual-task performance. Several other metrics differentiated the two single tasks but were less effective in differentiating single- from dual-task performance. Psychometric analyses confirmed the reliability of individual metrics but failed to identify any general workload factor. An analysis of difference scores between low- and high-workload conditions suggested an effort factor defined by heart rate variability and frontal cortex oxygenation. General workload is not well defined psychometrically, although various individual metrics may satisfy conventional criteria for workload assessment. Practitioners should exercise caution in using multiple metrics that may not correspond well, especially at the level of the individual operator.

  1. Type-specific identification of anogenital herpes simplex virus infections by use of a commercially available nucleic acid amplification test.

    PubMed

    Van Der Pol, Barbara; Warren, Terri; Taylor, Stephanie N; Martens, Mark; Jerome, Keith R; Mena, Leandro; Lebed, Joel; Ginde, Savita; Fine, Paul; Hook, Edward W

    2012-11-01

    Herpes infections are among the most common sexually transmitted infections (STI), but diagnostic methods for genital herpes have not kept pace with the movement toward molecular testing. Here, we describe an FDA-approved molecular assay that identifies and types herpes simplex virus (HSV) infections for use in routine clinical settings. Paired samples from anogenital lesions were tested using the BD ProbeTec HSV Q(x) (HSVQ(x)) system, HSV culture, and a laboratory-developed PCR assay. Family planning, obstetrics/gynecology (OB/GYN), or sexually transmitted disease (STD) clinics in the United States served as recruitment sites. Sensitivity and specificity estimates, head-to-head comparisons, measures of agreement, and latent-class analyses were performed to provide robust estimates of performance. A total of 508 participants (174 men and 334 women) with anogenital lesions were included; 260 HSV-2 and 73 HSV-1 infections were identified. No differences in test performance based on gender, clinic type, location of the lesion, or type of lesion were observed. The sensitivity of HSV-2 detection ranged from 98.4 to 100% depending on the analytical approach, while the specificity ranged from 80.6%, compared to the less sensitive culture method, to 97.0%, compared to PCR. For HSV-1, the sensitivity and specificity ranges were 96.7 to 100% and 95.1 to 99.4%, respectively. This assay may improve our ability to accurately diagnose anogenital lesions due to herpes infection.

  2. Can paediatric early warning scores (PEWS) be used to guide the need for hospital admission and predict significant illness in children presenting to the emergency department? An assessment of PEWS diagnostic accuracy using sensitivity and specificity.

    PubMed

    Lillitos, Peter J; Hadley, Graeme; Maconochie, Ian

    2016-05-01

    Paediatric early warning scores (PEWS) were designed to detect early deterioration in hospitalised children, but their validity in the emergency department (ED) is less established. We aimed to evaluate the sensitivity and specificity of two commonly used PEWS (Brighton and COAST) in predicting hospital admission and, for the first time, significant illness. Retrospective analysis of PEWS data for paediatric ED attendances at St Mary's Hospital, London, UK, in November 2012. Patients with missing data were excluded. Diagnoses were grouped as medical or surgical; to classify diagnoses as significant, established guidelines were used and, where these were not available, common agreement between three acute paediatricians. 1921 patients were analysed. There were 211 admissions (11%). 1630 attendances were medical (86%) and 273 (14%) surgical. Brighton and COAST PEWS performed similarly. For hospital admission, a PEWS of ≥3 was specific (93%) but poorly sensitive (32%), and the area under the receiver operating curve (AUC) was low at 0.690. For significant medical illness, PEWS ≥3 was highly specific (96%) but poorly sensitive (44%), with AUCs of 0.754 and 0.755 for Brighton and COAST PEWS, respectively. Both scores performed poorly for predicting significant surgical illness (AUC 0.642). PEWS ≥3 performed well in predicting significant respiratory illness: sensitivity 75%, specificity 91%. A score of ≥3 has good specificity but poor sensitivity for predicting hospital admission and significant illness; therefore, a high PEWS should be taken seriously, but a low score is poor at ruling out the requirement for admission or serious underlying illness. PEWS was better at detecting significant medical illness than the need for admission, performed poorly in detecting significant surgical illness, and may be particularly useful in evaluating respiratory illness in a paediatric ED. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
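
    The arithmetic behind these figures is a simple 2×2 table. The sketch below uses counts reconstructed to be consistent with the reported totals (211 admissions of 1921 attendances) and the 32%/93% operating point; the false-positive split is assumed, not taken from the paper.

```python
# Hypothetical 2x2 table for the rule "PEWS >= 3 predicts admission".
tp, fn = 68, 143     # admitted patients: flagged / missed (211 total)
fp, tn = 120, 1590   # non-admitted patients: flagged / correctly ruled out

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)  # positive predictive value
npv = tn / (tn + fn)  # negative predictive value

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, "
      f"PPV {ppv:.2f}, NPV {npv:.2f}")
```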

  3. Parameterization and Sensitivity Analysis of a Complex Simulation Model for Mosquito Population Dynamics, Dengue Transmission, and Their Control

    PubMed Central

    Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.

    2011-01-01

    Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844

  4. Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.

    2001-01-01

    This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
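
    A minimal sketch of the first-order moment-matching idea, using a toy two-variable function in place of the Euler CFD code and finite-difference rather than the paper's analytic sensitivity derivatives, with a Monte Carlo cross-check of the kind used for validation:

```python
import numpy as np

def f(x):
    """Stand-in for a CFD output as a function of two input variables."""
    return x[0] ** 2 + np.sin(x[1])

mu = np.array([1.0, 0.5])      # means of the random inputs
sigma = np.array([0.05, 0.1])  # standard deviations of the random inputs

# First-order sensitivity derivatives by central differences.
h = 1e-5
grad = np.array([
    (f(mu + h * np.eye(2)[i]) - f(mu - h * np.eye(2)[i])) / (2 * h)
    for i in range(2)
])

# First-order moment matching: mean ~ f(mu), var ~ sum (df/dx_i)^2 sigma_i^2.
approx_mean = f(mu)
approx_std = np.sqrt(np.sum((grad * sigma) ** 2))

# Monte Carlo comparison.
rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=(100_000, 2))
mc = f(samples.T)

print(f"moment method: {approx_mean:.4f} +/- {approx_std:.4f}")
print(f"Monte Carlo:   {mc.mean():.4f} +/- {mc.std():.4f}")
```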

  5. Proteomic Signatures of the Zebrafish (Danio rerio) Embryo: Sensitivity and Specificity in Toxicity Assessment of Chemicals.

    PubMed

    Hanisch, Karen; Küster, Eberhard; Altenburger, Rolf; Gündel, Ulrike

    2010-01-01

    Studies using embryos of the zebrafish Danio rerio (DarT) instead of adult fish for characterising the (eco-) toxic potential of chemicals have been proposed as animal-replacing methods. Effect analysis at the molecular level might enhance the sensitivity, specificity, and predictive value of the embryonal studies. The present paper aimed to test the potential of toxicoproteomics with zebrafish eleutheroembryos for sensitive and specific toxicity assessment. 2-DE-based toxicoproteomics was performed applying low-dose (EC10) exposure for 48 h with three model substances: Rotenone, 4,6-dinitro-o-cresol (DNOC) and Diclofenac. By multivariate "pattern-only" PCA and univariate statistical analyses, alterations in the embryonal proteome were detectable in nonetheless visibly intact organisms, and treatment with the three substances was distinguishable at the molecular level. Toxicoproteomics enhanced the sensitivity and specificity of the embryonal toxicity assay and bears the potential to identify protein markers serving as general stress markers and enabling early diagnosis of toxic stress.

  6. Proteomic Signatures of the Zebrafish (Danio rerio) Embryo: Sensitivity and Specificity in Toxicity Assessment of Chemicals

    PubMed Central

    Hanisch, Karen; Küster, Eberhard; Altenburger, Rolf; Gündel, Ulrike

    2010-01-01

    Studies using embryos of the zebrafish Danio rerio (DarT) instead of adult fish for characterising the (eco-) toxic potential of chemicals have been proposed as animal-replacing methods. Effect analysis at the molecular level might enhance the sensitivity, specificity, and predictive value of the embryonal studies. The present paper aimed to test the potential of toxicoproteomics with zebrafish eleutheroembryos for sensitive and specific toxicity assessment. 2-DE-based toxicoproteomics was performed applying low-dose (EC10) exposure for 48 h with three model substances: Rotenone, 4,6-dinitro-o-cresol (DNOC) and Diclofenac. By multivariate “pattern-only” PCA and univariate statistical analyses, alterations in the embryonal proteome were detectable in nonetheless visibly intact organisms, and treatment with the three substances was distinguishable at the molecular level. Toxicoproteomics enhanced the sensitivity and specificity of the embryonal toxicity assay and bears the potential to identify protein markers serving as general stress markers and enabling early diagnosis of toxic stress. PMID:22084678

  7. Structural, optical and photovoltaic properties of co-doped CdTe QDs for quantum dots sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Ayyaswamy, Arivarasan; Ganapathy, Sasikala; Alsalme, Ali; Alghamdi, Abdulaziz; Ramasamy, Jayavel

    2015-12-01

    Zinc- and sulfur-alloyed CdTe quantum dot (QD) sensitized TiO2 photoelectrodes have been fabricated for quantum dot sensitized solar cells. Alloyed CdTe QDs were prepared in the aqueous phase using mercaptosuccinic acid (MSA) as a capping agent. The influence of co-doping on the structural properties of the CdTe QDs was studied by XRD analysis. The enhanced optical absorption of the alloyed CdTe QDs was studied using UV-vis absorption and fluorescence emission spectra. The capping of MSA molecules over the CdTe QDs was confirmed by FTIR and XPS analyses. Thermogravimetric analysis confirms that the prepared QDs were thermally stable up to 600 °C. The photovoltaic performance of the alloyed CdTe QD sensitized TiO2 photoelectrodes was studied using J-V characteristics under illumination at 1 Sun intensity. These results show the highest photoconversion efficiency, η = 1.21%, for the 5% Zn- and S-alloyed CdTe QDs.

  8. Reward sensitivity predicts ice cream-related attentional bias assessed by inattentional blindness.

    PubMed

    Li, Xiaoming; Tao, Qian; Fang, Ya; Cheng, Chen; Hao, Yangyang; Qi, Jianjun; Li, Yu; Zhang, Wei; Wang, Ying; Zhang, Xiaochu

    2015-06-01

    The cognitive mechanism underlying the association between individual differences in reward sensitivity and food craving is unknown. The present study explored the mechanism by examining the role of reward sensitivity in attentional bias toward ice cream cues. Forty-nine college students who displayed a high level of ice cream craving (HICs) and 46 who displayed a low level of ice cream craving (LICs) performed an inattentional blindness (IB) task which was used to assess attentional bias for ice cream. In addition, reward sensitivity and coping style were assessed by the Behavior Inhibition System/Behavior Activation System Scales and the Simplified Coping Style Questionnaire. Results showed a significantly higher identification rate of the critical stimulus in the HICs than in the LICs, suggesting greater attentional bias for ice cream in the HICs. This indicates that attentional bias for food cues persists even under inattentional conditions. Furthermore, a significant correlation was found between attentional bias and reward sensitivity after controlling for coping style, and reward sensitivity predicted attentional bias for food cues. The mediation analyses showed that attentional bias mediated the relationship between reward sensitivity and food craving. These findings suggest that the association between individual differences in reward sensitivity and food craving may be attributed to attentional bias for food-related cues. Copyright © 2015 Elsevier Ltd. All rights reserved.
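
    The mediation logic reported here (reward sensitivity → attentional bias → craving) can be sketched with two regressions. The snippet below simulates data with assumed path coefficients and recovers the indirect effect as the product a×b; a full analysis would add a standard error (e.g., Sobel test or bootstrapping).

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300

# Simulated data: reward sensitivity (X) -> attentional bias (M) -> craving (Y).
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # path a (assumed)
y = 0.6 * m + 0.1 * x + rng.normal(size=n)   # paths b and c' (assumed)

def ols(design, target):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(target)), design])
    return np.linalg.lstsq(X, target, rcond=None)[0]

a = ols(x, m)[1]                        # X -> M
b = ols(np.column_stack([x, m]), y)[2]  # M -> Y, controlling for X
c = ols(x, y)[1]                        # total effect of X on Y

indirect = a * b
print(f"total {c:.3f}, direct {c - indirect:.3f}, indirect (mediated) {indirect:.3f}")
```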

  9. Steady State Thermal Analyses of SCEPTOR X-57 Wingtip Propulsion

    NASA Technical Reports Server (NTRS)

    Schnulo, Sydney L.; Chin, Jeffrey C.; Smith, Andrew D.; Dubois, Arthur

    2017-01-01

    Electric aircraft concepts enable advanced propulsion airframe integration approaches that promise increased efficiency as well as reduced emissions and noise. NASA's fully electric Maxwell X-57, developed under the SCEPTOR program, features distributed propulsion across a high-aspect-ratio wing. There are 14 propulsors in all: 12 high-lift motors that are active only during takeoff and climb, and 2 larger motors positioned on the wingtips that operate over the entire mission. The power electronics involved in the wingtip propulsion are temperature-sensitive and therefore require thermal management. This work focuses on the high- and low-fidelity heat transfer analysis methods used to ensure that the wingtip motor inverters do not reach their temperature limits. It also explores the different geometry configurations involved in the X-57 development and any associated thermal concerns. All analyses presented are performed at steady state under stressful operating conditions, so the predicted temperatures represent a conservative, worst-case scenario.

  10. Effects of system factors on the economics of and demand for small solar thermal power systems

    NASA Technical Reports Server (NTRS)

    1981-01-01

    Market penetration as a function of time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure the attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the database developed, together with projected energy price and consumption data, to perform sensitivity analyses and estimate the potential market for SPS.

  11. Inelastic neutron scattering experiments with the monochromatic imaging mode of the RITA-II spectrometer

    NASA Astrophysics Data System (ADS)

    Bahl, C. R. H.; Lefmann, K.; Abrahamsen, A. B.; Rønnow, H. M.; Saxild, F.; Jensen, T. B. S.; Udby, L.; Andersen, N. H.; Christensen, N. B.; Jakobsen, H. S.; Larsen, T.; Häfliger, P. S.; Streule, S.; Niedermayer, Ch.

    2006-05-01

    Recently a monochromatic multiple data taking mode has been demonstrated for diffraction experiments using a RITA type cold neutron spectrometer with a multi-bladed analyser and a position-sensitive detector. Here, we show how this mode can be used in combination with a flexible radial collimator to perform real inelastic neutron scattering experiments. We present the results from inelastic powder, single crystal dispersion and single crystal constant energy mapping experiments. The advantages and complications of performing these experiments are discussed along with a comparison between the imaging mode and the traditional monochromatic focussing mode.

  12. Effects of system factors on the economics of and demand for small solar thermal power systems

    NASA Astrophysics Data System (ADS)

    1981-09-01

    Market penetration as a function of time, SPS performance factors, and market/economic considerations was estimated, and commercialization strategies were formulated. A market analysis task included personal interviews and supplemental mail surveys to acquire statistical data and to identify and measure the attitudes, reactions and intentions of prospective SPS users. Interviews encompassed three ownership classes of electric utilities and industrial firms in the SIC codes for energy consumption. A market demand model was developed which utilized the database developed, together with projected energy price and consumption data, to perform sensitivity analyses and estimate the potential market for SPS.

  13. CFD analysis of jet mixing in low NOx flametube combustors

    NASA Technical Reports Server (NTRS)

    Talpallikar, M. V.; Smith, C. E.; Lai, M. C.; Holdeman, J. D.

    1991-01-01

    The Rich-burn/Quick-mix/Lean-burn (RQL) combustor was identified as a potential gas turbine combustor concept to reduce NO(x) emissions in High Speed Civil Transport (HSCT) aircraft. To demonstrate reduced NO(x) levels, cylindrical flametube versions of RQL combustors are being tested at NASA Lewis Research Center. A critical technology needed for the RQL combustor is a method of quickly mixing by-pass combustion air with rich-burn gases. Jet mixing in a cylindrical quick-mix section was numerically analyzed. The quick-mix configuration was five inches in diameter and employed twelve radial-inflow slots. The numerical analyses were performed with an advanced, validated 3-D Computational Fluid Dynamics (CFD) code named REFLEQS. Parametric variation of jet-to-mainstream momentum flux ratio (J) and slot aspect ratio was investigated. Both non-reacting and reacting analyses were performed. Results showed mixing and NO(x) emissions to be highly sensitive to J and slot aspect ratio. Lowest NO(x) emissions occurred when the dilution jet penetrated to approximately mid-radius. The viability of using 3-D CFD analyses for optimizing jet mixing was demonstrated.

  14. CFD analysis of jet mixing in low NO(x) flametube combustors

    NASA Technical Reports Server (NTRS)

    Talpallikar, M. V.; Smith, C. E.; Lai, M. C.; Holdeman, J. D.

    1991-01-01

    The Rich-burn/Quick-mix/Lean-burn (RQL) combustor has been identified as a potential gas turbine combustor concept to reduce NO(x) emissions in High Speed Civil Transport (HSCT) aircraft. To demonstrate reduced NO(x) levels, cylindrical flametube versions of RQL combustors are being tested at NASA Lewis Research Center. A critical technology needed for the RQL combustor is a method of quickly mixing by-pass combustion air with rich-burn gases. Jet mixing in a cylindrical quick-mix section was numerically analyzed. The quick-mix configuration was five inches in diameter and employed twelve radial-inflow slots. The numerical analyses were performed with an advanced, validated 3D Computational Fluid Dynamics (CFD) code named REFLEQS. Parametric variation of jet-to-mainstream momentum flux ratio (J) and slot aspect ratio was investigated. Both non-reacting and reacting analyses were performed. Results showed mixing and NO(x) emissions to be highly sensitive to J and slot aspect ratio. Lowest NO(x) emissions occurred when the dilution jet penetrated to approximately mid-radius. The viability of using 3D CFD analyses for optimizing jet mixing was demonstrated.

  15. Maternal sensitivity: a concept analysis.

    PubMed

    Shin, Hyunjeong; Park, Young-Joo; Ryu, Hosihn; Seomun, Gyeong-Ae

    2008-11-01

    The aim of this paper is to report a concept analysis of maternal sensitivity. Maternal sensitivity is a broad concept encompassing a variety of interrelated affective and behavioural caregiving attributes. It is used interchangeably with the terms maternal responsiveness or maternal competency, with no consistency of use. There is a need to clarify the concept of maternal sensitivity for research and practice. A search was performed on the CINAHL and Ovid MEDLINE databases using 'maternal sensitivity', 'maternal responsiveness' and 'sensitive mothering' as key words. The searches yielded 54 records for the years 1981-2007. Rodgers' method of evolutionary concept analysis was used to analyse the material. Four critical attributes of maternal sensitivity were identified: (a) dynamic process involving maternal abilities; (b) reciprocal give-and-take with the infant; (c) contingency on the infant's behaviour and (d) quality of maternal behaviours. Maternal identity and infant's needs and cues are antecedents for these attributes. The consequences are infant's comfort, mother-infant attachment and infant development. In addition, three positive affecting factors (social support, maternal-foetal attachment and high self-esteem) and three negative affecting factors (maternal depression, maternal stress and maternal anxiety) were identified. A clear understanding of the concept of maternal sensitivity could be useful for developing ways to enhance maternal sensitivity and to maximize the developmental potential of infants. Knowledge of the attributes of maternal sensitivity identified in this concept analysis may be helpful for constructing measuring items or dimensions.

  16. Comparison of Two Global Sensitivity Analysis Methods for Hydrologic Modeling over the Columbia River Basin

    NASA Astrophysics Data System (ADS)

    Hameed, M.; Demirel, M. C.; Moradkhani, H.

    2015-12-01

    The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated with the two sensitivity analysis methods: simulation length, parameter range, model initial conditions, and the reliability of the GSA methods themselves. The reliability of the sensitivity analysis results is compared based on (1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and (2) how consistently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient for evaluating the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
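
    For readers wanting to try a variance-based GSA of this kind, here is a short sketch using the open-source SALib package and its bundled Ishigami test function in place of SAC-SMA (which is not wrapped here). The Saltelli/Sobol' workflow follows SALib's documented API, though module locations can differ between SALib versions.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol
from SALib.test_functions import Ishigami

# Define the uncertain inputs and their ranges (three parameters here).
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

# Saltelli sampling, model evaluation, and Sobol' index estimation.
param_values = saltelli.sample(problem, 1024)
Y = Ishigami.evaluate(param_values)
Si = sobol.analyze(problem, Y)

# First-order (S1) and total-order (ST) indices rank parameter influence.
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: S1={s1:.2f}, ST={st:.2f}")
```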

  17. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) was evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel, letting the sensitivity information identify the important parameters; the probabilistic analysis methods illustrated here support this objective. A probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed, assessing the panel in the presence of uncertainties in the loading, fabrication process variables, and material properties. A deterministic structural analysis at mean values was performed first, and the resulting stress and displacement contours are presented; this is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as input for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as the maximum displacement, the maximum tensile and compressive stresses of the facesheet in the x and y directions, and the maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.

  18. hhjj production at the LHC

    DOE PAGES

    Dolan, Matthew J.; Englert, Christoph; Greiner, Nicolas; ...

    2015-08-25

    The search for di-Higgs production at the LHC in order to set limits on the Higgs trilinear coupling and constraints on new physics is one of the main motivations for the LHC high-luminosity phase. Recent experimental analyses suggest that such analyses will only be successful if information from a range of channels is included. We therefore investigate di-Higgs production in association with two hadronic jets and give a detailed discussion of both the gluon- and the weak boson-fusion (WBF) contributions, with a particular emphasis on the phenomenology with modified Higgs trilinear and quartic gauge couplings. We perform a detailed investigation of the full hadronic final state and find that hhjj production should add sensitivity to a di-Higgs search combination at the HL-LHC with 3 ab⁻¹. Since the WBF and GF contributions are sensitive to different sources of physics beyond the Standard Model, we devise search strategies to disentangle and isolate these production modes. In addition, while gluon fusion remains non-negligible in WBF-type selections, sizeable new physics contributions to the latter can still be constrained. As an example of the latter point we investigate the sensitivity that can be obtained for a measurement of the quartic Higgs–gauge boson couplings.

  19. The Anxiety Sensitivity Index--Revised: Confirmatory Factor Analyses, Structural Invariance in Caucasian and African American Samples, and Score Reliability and Validity

    ERIC Educational Resources Information Center

    Arnau, Randolph C.; Broman-Fulks, Joshua J.; Green, Bradley A.; Berman, Mitchell E.

    2009-01-01

    The most commonly used measure of anxiety sensitivity is the 36-item Anxiety Sensitivity Index--Revised (ASI-R). Exploratory factor analyses have produced several different factors structures for the ASI-R, but an acceptable fit using confirmatory factor analytic approaches has only been found for a 21-item version of the instrument. We evaluated…

  20. Clinical effects of pre-adjusted edgewise orthodontic brackets: a systematic review and meta-analysis.

    PubMed

    Papageorgiou, Spyridon N; Konstantinidis, Ioannis; Papadopoulou, Konstantina; Jäger, Andreas; Bourauel, Christoph

    2014-06-01

    Fixed-appliance treatment is a major part of orthodontic treatment, but clinical evidence remains scarce. The objective of this systematic review was to investigate how the therapeutic effects and side-effects of brackets used during fixed-appliance orthodontic treatment are affected by their characteristics. SEARCH METHODS AND SELECTION CRITERIA: We searched MEDLINE and 18 other databases through April 2012 without restrictions for randomized controlled trials and quasi-randomized controlled trials investigating any bracket characteristic. After duplicate selection and extraction procedures, risk of bias was assessed, also in duplicate, according to Cochrane guidelines, and quality of evidence according to the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach. Random-effects meta-analyses, subgroup analyses, and sensitivity analyses were performed with the corresponding 95 per cent confidence intervals (CI) and 95 per cent prediction intervals (PI). We included 25 trials on 1321 patients, most comparing self-ligated (SL) and conventional brackets. Based on the meta-analyses, the duration of orthodontic treatment was on average 2.01 months longer among patients with SL brackets (95 per cent CI: 0.45 to 3.57). The 95 per cent PIs for a future trial indicated that the difference could be considerable (-1.46 to 5.47 months). Treatment characteristics, outcomes, and side-effects were clinically similar between SL and conventional brackets. For most bracket characteristics, evidence is insufficient. Some meta-analyses included trials with high risk of bias, but sensitivity analyses indicated robustness. Based on existing evidence, no clinical recommendation can be made regarding the bracket material or different ligation modules. For SL brackets, no conclusive benefits could be proven, while their use was associated with longer treatment durations.
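
    The prediction interval quoted above follows the standard random-effects formula PI = μ ± t(k-2) · sqrt(τ² + SE(μ)²) (Higgins, Thompson and Spiegelhalter, 2009). A sketch with the abstract's summary numbers plugged in; τ² and the trial count are assumed for illustration, since neither is given in the abstract:

```python
import numpy as np
from scipy import stats

def prediction_interval(mu, se_mu, tau2, k, level=0.95):
    """Prediction interval for the effect expected in a new trial."""
    t = stats.t.ppf(0.5 + level / 2, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu ** 2)
    return mu - half, mu + half

mu = 2.01                            # months longer with SL brackets
se_mu = (3.57 - 0.45) / (2 * 1.96)   # SE recovered from the reported 95% CI
# tau2 and k below are illustrative assumptions, not values from the review.
print(prediction_interval(mu, se_mu, tau2=1.8, k=10))
```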

  1. Ceramic automotive Stirling engine study

    NASA Technical Reports Server (NTRS)

    Musikant, S.; Chiu, W.; Darooka, D.; Mullings, D. M.; Johnson, C. A.

    1985-01-01

    A conceptual design study for a Ceramic Automotive Stirling Engine (CASE) is performed, assuming year-1990 structural ceramic technology. Structural and performance analyses of the conceptual design are performed, as well as a manufacturing and cost analysis. The general conclusion from this study is that such an engine would be 10-26% more efficient over its performance map than the current metal Automotive Stirling Reference Engine (ASRE). The cost of such a ceramic engine is likely to be somewhat higher than that of the ASRE, but engine cost is very sensitive to the ultimate cost of the high-purity ceramic powder raw materials required to fabricate high-performance parts. When the design study is projected to year-2000 technology, substantial net efficiency improvements, on the order of 25 to 46% over the ASRE, are computed.

  2. Solar and Magnetic Attitude Determination for Small Spacecraft

    NASA Technical Reports Server (NTRS)

    Woodham, Kurt; Blackman, Kathie; Sanneman, Paul

    1997-01-01

    During the Phase B development of the NASA New Millennium Program (NMP) Earth Orbiter-1 (EO-1) spacecraft, detailed analyses were performed for on-board attitude determination using the Sun and the Earth's magnetic field. This work utilized the TRMM 'Contingency Mode' as a starting point but concentrated on implementation for a small spacecraft without a high performance mechanical gyro package. The analyses and simulations performed demonstrate a geographic dependence due to diurnal variations in the Earth magnetic field with respect to the Sun synchronous, nearly polar orbit. Sensitivity to uncompensated residual magnetic fields of the spacecraft and field modeling errors is shown to be the most significant obstacle for maximizing performance. Performance has been evaluated with a number of inertial reference units and various mounting orientations for the two-axis Fine Sun Sensors. Attitude determination accuracy using the six state Kalman Filter executing at 2 Hz is approximately 0.2 deg, 3-sigma, per axis. Although EO-1 was subsequently driven to a stellar-based attitude determination system as a result of tighter pointing requirements, solar/magnetic attitude determination is demonstrated to be applicable to a range of small spacecraft with medium precision pointing requirements.

  3. Dandelions, tulips and orchids: evidence for the existence of low-sensitive, medium-sensitive and high-sensitive individuals.

    PubMed

    Lionetti, Francesca; Aron, Arthur; Aron, Elaine N; Burns, G Leonard; Jagiellowicz, Jadzia; Pluess, Michael

    2018-01-22

    According to empirical studies and recent theories, people differ substantially in their reactivity or sensitivity to environmental influences with some being generally more affected than others. More sensitive individuals have been described as orchids and less-sensitive ones as dandelions. Applying a data-driven approach, we explored the existence of sensitivity groups in a sample of 906 adults who completed the highly sensitive person (HSP) scale. According to factor analyses, the HSP scale reflects a bifactor model with a general sensitivity factor. In contrast to prevailing theories, latent class analyses consistently suggested the existence of three rather than two groups. While we were able to identify a highly sensitive (orchids, 31%) and a low-sensitive group (dandelions, 29%), we also detected a third group (40%) characterised by medium sensitivity, which we refer to as tulips in keeping with the flower metaphor. Preliminary cut-off scores for all three groups are provided. In order to characterise the different sensitivity groups, we investigated group differences regarding the Big Five personality traits, as well as experimentally assessed emotional reactivity in an additional independent sample. According to these follow-up analyses, the three groups differed in neuroticism, extraversion and emotional reactivity to positive mood induction with orchids scoring significantly higher in neuroticism and emotional reactivity and lower in extraversion than the other two groups (dandelions also differed significantly from tulips). Findings suggest that environmental sensitivity is a continuous and normally distributed trait but that people fall into three distinct sensitive groups along a sensitivity continuum.

  4. Tea consumption and risk of type 2 diabetes mellitus: a systematic review and meta-analysis update

    PubMed Central

    Yang, Jian; Mao, Qun-Xia; Xu, Hong-Xia; Ma, Xu; Zeng, Chun-Yu

    2014-01-01

    Objective: Tea has been suggested to decrease blood glucose levels and protect pancreatic β cells in diabetic mice. However, human epidemiological studies have shown inconsistent results for the association between tea consumption and type 2 diabetes mellitus (T2DM) risk. The aim of this study was to conduct a meta-analysis to further explore the association between tea consumption and incidence of T2DM. Design: Systematic review and meta-analysis. Methods: We performed a systematic literature search up to 30 August 2013 in PubMed, EMBASE, the Chinese Wanfang Database and the CNKI database. Pooled relative risks (RRs) were estimated with random-effects models. Two kinds of subgroup analyses (according to sex and region) were performed, and sensitivity analyses were performed according to type of tea. Results: Overall, no statistically significant relationship between tea consumption and risk of T2DM was found based on 12 eligible studies (pooled RR 0.99, 95% CI 0.95 to 1.03). Compared with the lowest/non-tea group, daily tea consumption (≥3 cups/day) was associated with a lower T2DM risk (RR 0.84, 95% CI 0.73 to 0.97). Subgroup analyses showed a difference between men and women: the RRs (95% CI) were 0.92 (0.84 to 1.00) for men and 1.00 (0.96 to 1.05) for women, respectively. Tea consumption of ≥3 cups/day was associated with decreased T2DM risk in women (RR 0.84, 95% CI 0.71 to 1.00). By region, the RRs (95% CIs) were 0.84 (0.71 to 1.00) for Asians and 1.00 (0.97 to 1.04) for Americans and Europeans, respectively. No obvious change was found in the sensitivity analyses. Conclusions: The results suggest that daily tea consumption (≥3 cups/day) is associated with a lower T2DM risk. However, further studies are needed to enrich the related evidence, especially with regard to type of tea and sex. PMID:25052177

  5. Evaluation of risk factors for perforated peptic ulcer.

    PubMed

    Yamamoto, Kazuki; Takahashi, Osamu; Arioka, Hiroko; Kobayashi, Daiki

    2018-02-15

    The aim of this study was to evaluate prediction factors for perforated peptic ulcer (PPU). At St. Luke's International Hospital in Tokyo, Japan, a case-control study was performed between August 2004 and March 2016. All patients diagnosed with PPU were included. As control subjects, patients with age, sex and date of CT scan corresponding to those of the PPU subjects were included in the study at a proportion of 2 controls for every PPU subject. All data, such as past medical histories, physical findings, and laboratory data, were collected through chart reviews. Univariate and multivariate analyses with logistic regression were conducted, and receiver operating characteristic curves (ROCs) were calculated to show validity. Sensitivity analyses were performed to confirm the results using a stepwise method and conditional logistic regression. A total of 408 patients were included in this study; 136 were patients with PPU, and 272 formed the control group. Univariate analysis showed statistical significance in many categories. Four different models of multivariate analyses were conducted, and significant differences were found for muscular defense and a history of peptic ulcer disease (PUD) in all models. The conditional forced-entry analysis of muscular defense showed an odds ratio (OR) of 23.8 (95% confidence interval [CI]: 5.70-100.0), and the analysis of PUD history showed an OR of 6.40 (95% CI: 1.13-36.2). The sensitivity analysis showed consistent results, with an OR of 23.8-366.2 for muscular defense and an OR of 3.67-7.81 for PUD history. The area under the curve (AUC) of all models was high enough to confirm the results. However, anticoagulants, known risk factors for PUD, did not increase the risk for PPU in our study; the conditional forced-entry analysis of anticoagulant use showed an OR of 0.85 (95% CI: 0.03-22.3). The evaluation of prediction factors and the development of a prediction rule for PPU may help decision making in performing a CT scan for patients with acute abdominal pain.
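
    For illustration, the odds-ratio arithmetic behind such case-control results, with a Wald confidence interval; the 2×2 counts below are invented, chosen only to be of the same order as the reported OR for muscular defense.

```python
import numpy as np
from scipy import stats

def odds_ratio_ci(a, b, c, d, level=0.95):
    """Odds ratio and Wald CI from a 2x2 table:
    a, b = exposed cases/controls; c, d = unexposed cases/controls."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    z = stats.norm.ppf(0.5 + level / 2)
    lo = np.exp(np.log(or_) - z * se_log)
    hi = np.exp(np.log(or_) + z * se_log)
    return or_, (lo, hi)

# Hypothetical counts for "muscular defense" among PPU cases vs controls.
print(odds_ratio_ci(a=40, b=5, c=96, d=267))
```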

  6. Quantitative aspects of inductively coupled plasma mass spectrometry

    NASA Astrophysics Data System (ADS)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
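
    A minimal sketch of the external-standards calibration approach mentioned above, with invented intensities; real ICP-MS work would add internal-standard correction, drift checks and blank subtraction.

```python
import numpy as np

# External calibration: ICP-MS intensities for standards of known concentration.
conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])          # ng/mL (hypothetical)
signal = np.array([210, 5320, 26100, 52400, 261000])  # counts per second

slope, intercept = np.polyfit(conc, signal, 1)        # linear calibration fit

def quantify(sample_signal):
    """Back-calculate analyte concentration from a measured intensity."""
    return (sample_signal - intercept) / slope

print(f"{quantify(31500.0):.2f} ng/mL")
```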

  7. Percolation analyses of observed and simulated galaxy clustering

    NASA Astrophysics Data System (ADS)

    Bhavsar, S. P.; Barrow, J. D.

    1983-11-01

    A percolation cluster analysis is performed on equivalent regions of the CFA redshift survey of galaxies and the 4000-body simulations of gravitational clustering made by Aarseth, Gott and Turner (1979). The observed and simulated percolation properties are compared and, unlike correlation and multiplicity function analyses, favour high-density (Ω = 1) models with n = -1 initial data. The present results show that the three-dimensional data are consistent with the degree of filamentary structure present in isothermal models of galaxy formation at the level of percolation analysis. It is also found that the percolation structure of the CFA data is a function of depth. Percolation structure does not appear to be a sensitive probe of intrinsic filamentary structure.

  8. Sensitivity analysis of complex coupled systems extended to second and higher order derivatives

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    In the design of engineering systems, "what if" questions often arise, such as: what will be the change in aircraft payload if the wing aspect ratio is incremented by 10 percent? Answers to such questions are commonly sought by incrementing the pertinent variable and re-evaluating the major disciplinary analyses involved. These analyses are contributed by engineering disciplines that are usually coupled, as are aerodynamics, structures, and performance in the context of the question above. The "what if" questions can be answered precisely by computation of the derivatives. A method for calculation of the first derivatives was developed previously; here, an algorithm is presented for calculation of the second and higher order derivatives.
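
    A compact illustration of answering a "what if" question from derivatives. This sketch uses central finite differences on a made-up payload function rather than the paper's analytic method for coupled systems; the second-order Taylor estimate shows why higher derivatives sharpen the answer.

```python
def payload(aspect_ratio):
    """Stand-in for a coupled-system analysis returning aircraft payload."""
    return 1000.0 - 2.0 * (aspect_ratio - 8.0) ** 2

def derivatives(f, x, h=1e-3):
    """First and second derivatives by central differences."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2
    return d1, d2

x = 7.0
d1, d2 = derivatives(payload, x)
dx = 0.1 * x  # a 10 percent increment in aspect ratio

# Second-order Taylor estimate of the "what if" answer vs. full re-analysis:
print(payload(x) + d1 * dx + 0.5 * d2 * dx ** 2, payload(x + dx))
```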

  9. The CERAD Neuropsychological Assessment Battery Is Sensitive to Alcohol-Related Cognitive Deficiencies in Elderly Patients: A Retrospective Matched Case-Control Study.

    PubMed

    Kaufmann, Liane; Huber, Stefan; Mayer, Daniel; Moeller, Korbinian; Marksteiner, Josef

    2018-04-01

    Adverse effects of heavy drinking on cognition have frequently been reported. In the present study, we systematically examined for the first time whether clinical neuropsychological assessments may be sensitive to alcohol abuse in elderly patients with suspected minor neurocognitive disorder. A total of 144 elderly patients with and without alcohol abuse (each group n=72; mean age 66.7 years) were selected from a patient pool of 738 by applying propensity score matching (a statistical method that matches participants in experimental and control groups by balancing various covariates to reduce selection bias). Accordingly, the study groups were almost perfectly matched regarding age, education, gender, and Mini Mental State Examination score. Neuropsychological performance was measured using the CERAD (Consortium to Establish a Registry for Alzheimer's Disease). Classification analyses (i.e., decision tree and boosted tree models) were conducted to examine whether CERAD variables or the total score contributed to group classification. Decision tree models disclosed that the groups could be reliably classified based on the CERAD variables "Word List Discriminability" (tapping verbal recognition memory, 64% classification accuracy) and "Trail Making Test A" (measuring visuo-motor speed, 59% classification accuracy). Boosted tree analyses further indicated the sensitivity of "Word List Recall" (measuring free verbal recall) for discriminating elderly patients with versus without a history of alcohol abuse. This indicates that specific CERAD variables seem to be sensitive to alcohol-related cognitive dysfunctions in elderly patients with suspected minor neurocognitive disorder. (JINS, 2018, 24, 360-371).

  10. Aeroelastic optimization methodology for viscous and turbulent flows

    NASA Astrophysics Data System (ADS)

    Barcelos Junior, Manuel Nascimento Dias

    2007-12-01

    In recent years, the development of faster computers and parallel processing allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to the final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows by using high-fidelity analysis and sensitivity analysis techniques. Most of the research in aeroelastic optimization, for practical reasons, treat the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is a strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the aeroelastic and sensitivity analyses linearized systems of equations. The methodologies developed in this work are tested and verified by using realistic aeroelastic systems.
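
    A toy illustration of the nonlinear block Gauss-Seidel fixed-point iteration described above, with made-up one-variable "fluid" and "structure" blocks standing in for the Navier-Stokes and structural solvers:

```python
import numpy as np

# Toy coupled aeroelastic system: the aerodynamic load depends on the
# structural deflection u, and u depends on the load. Solve the fixed
# point by nonlinear block Gauss-Seidel (the staggered strategy).

def aero(u):
    """Fluid block: load as a function of structural deflection."""
    return 5.0 + 2.0 * np.tanh(u)

def structure(load):
    """Structural block: deflection under the current load."""
    return load / 10.0

u = 0.0
for it in range(50):
    load = aero(u)           # update the fluid with the latest structure state
    u_new = structure(load)  # update the structure with the latest load
    if abs(u_new - u) < 1e-10:
        break                # fixed point reached
    u = u_new

print(f"converged in {it} iterations: u = {u:.6f}, load = {aero(u):.6f}")
```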

  11. Methodological issues in assessing changes in costs pre- and post-medication switch: a schizophrenia study example

    PubMed Central

    Faries, Douglas E; Nyhuis, Allen W; Ascher-Svanum, Haya

    2009-01-01

    Background: Schizophrenia is a severe, chronic, and costly illness that adversely impacts patients' lives and health care payer budgets. Cost comparisons of treatment regimens are, therefore, important to health care payers and researchers. Pre-Post analyses ("mirror-image"), where outcomes prior to a medication switch are compared to outcomes post-switch, are commonly used in such research. However, medication changes often occur during a costly crisis event. Patients may relapse, be hospitalized, have a medication change, and then spend a period of time with intense use of costly resources (post-medication switch). While many advantages and disadvantages of Pre-Post methodology have been discussed, issues regarding the attributability of costs incurred around the time of medication switching have not been fully investigated. Methods: Medical resource use data, including medications and acute-care services (hospitalizations, partial hospitalizations, emergency department) were collected for patients with schizophrenia who switched antipsychotics (n = 105) during a 1-year randomized, naturalistic, antipsychotic cost-effectiveness schizophrenia trial. Within-patient changes in total costs per day were computed during the pre- and post-medication change periods. In addition to the standard Pre-Post analysis comparing costs pre- and post-medication change, we investigated the sensitivity of results to varying assumptions regarding the attributability of acute care service costs occurring just after a medication switch that were likely due to initial medication failure. Results: Fifty-six percent of all costs incurred during the first week on the newly initiated antipsychotic were likely due to treatment failure with the previous antipsychotic. Standard analyses suggested an average increase in cost-per-day for each patient of $2.40 after switching medications. However, sensitivity analyses removing costs incurred post-switch that were potentially due to the failure of the initial medication suggested decreases in costs in the range of $4.77 to $9.69 per day post-switch. Conclusion: Pre-Post cost analyses are sensitive to the approach used to handle acute-service costs occurring just after a medication change. Given the importance of quality economic research on the cost of switching treatments, thorough sensitivity analyses should be performed to identify the impact of crisis events around the time of medication change. PMID:19473545

  12. Transportation systems analyses: Volume 1: Executive Summary

    NASA Astrophysics Data System (ADS)

    1993-05-01

    The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the necessary elements to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed, but not limited to, include the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This executive summary of the transportation systems analyses (TSM) semi-annual report addresses the SSF logistics resupply. Our analysis parallels the ongoing NASA SSF redesign effort. Therefore, there could be no SSF design to drive our logistics analysis. Consequently, the analysis attempted to bound the reasonable SSF design possibilities (and the subsequent transportation implications). No other strategy really exists until after a final decision is rendered on the SSF configuration.

  13. A multi-model assessment of terrestrial biosphere model data needs

    NASA Astrophysics Data System (ADS)

    Gardella, A.; Cowdery, E.; De Kauwe, M. G.; Desai, A. R.; Duveneck, M.; Fer, I.; Fisher, R.; Knox, R. G.; Kooper, R.; LeBauer, D.; McCabe, T.; Minunno, F.; Raiho, A.; Serbin, S.; Shiklomanov, A. N.; Thomas, A.; Walker, A.; Dietze, M.

    2017-12-01

    Terrestrial biosphere models provide us with the means to simulate the impacts of climate change and their uncertainties. Going beyond direct observation and experimentation, models synthesize our current understanding of ecosystem processes and can give us insight on data needed to constrain model parameters. In previous work, we leveraged the Predictive Ecosystem Analyzer (PEcAn) to assess the contribution of different parameters to the uncertainty of the Ecosystem Demography model v2 (ED) model outputs across various North American biomes (Dietze et al., JGR-G, 2014). While this analysis identified key research priorities, the extent to which these priorities were model- and/or biome-specific was unclear. Furthermore, because the analysis only studied one model, we were unable to comment on the effect of variability in model structure to overall predictive uncertainty. Here, we expand this analysis to all biomes globally and a wide sample of models that vary in complexity: BioCro, CABLE, CLM, DALEC, ED2, FATES, G'DAY, JULES, LANDIS, LINKAGES, LPJ-GUESS, MAESPA, PRELES, SDGVM, SIPNET, and TEM. Prior to performing uncertainty analyses, model parameter uncertainties were assessed by assimilating all available trait data from the combination of the BETYdb and TRY trait databases, using an updated multivariate version of PEcAn's Hierarchical Bayesian meta-analysis. Next, sensitivity analyses were performed for all models across a range of sites globally to assess sensitivities for a range of different outputs (GPP, ET, SH, Ra, NPP, Rh, NEE, LAI) at multiple time scales from the sub-annual to the decadal. Finally, parameter uncertainties and model sensitivities were combined to evaluate the fractional contribution of each parameter to the predictive uncertainty for a specific variable at a specific site and timescale. Facilitated by PEcAn's automated workflows, this analysis represents the broadest assessment of the sensitivities and uncertainties in terrestrial models to date, and provides a comprehensive roadmap for constraining model uncertainties through model development and data collection.

  14. Cell size and wall dimensions drive distinct variability of earlywood and latewood density in Northern Hemisphere conifers.

    PubMed

    Björklund, Jesper; Seftigen, Kristina; Schweingruber, Fritz; Fonti, Patrick; von Arx, Georg; Bryukhanova, Marina V; Cuny, Henri E; Carrer, Marco; Castagneri, Daniele; Frank, David C

    2017-11-01

    Interannual variability of wood density - an important plant functional trait and environmental proxy - in conifers is poorly understood. We therefore explored the anatomical basis of density. We hypothesized that earlywood density is determined by tracheid size and latewood density by wall dimensions, reflecting their different functional tasks. To determine general patterns of variability, density parameters from 27 species and 349 sites across the Northern Hemisphere were correlated to tree-ring width parameters and local climate. We performed the same analyses with density and width derived from anatomical data comprising two species and eight sites. The contributions of tracheid size and wall dimensions to density were disentangled with sensitivity analyses. Notably, correlations between density and width shifted from negative to positive moving from earlywood to latewood. Temperature responses of density varied intraseasonally in strength and sign. The sensitivity analyses revealed tracheid size as the main determinant of earlywood density, while wall dimensions become more influential for latewood density. Our novel approach of integrating detailed anatomical data with large-scale tree-ring data allowed us to contribute to an improved understanding of interannual variations of conifer growth and to illustrate how conifers balance investments in the competing xylem functions of hydraulics and mechanical support. © 2017 The Authors. New Phytologist © 2017 New Phytologist Trust.

  15. Sensitivity Analysis for Coupled Aero-structural Systems

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.

    1999-01-01

    A novel method has been developed for calculating gradients of aerodynamic force and moment coefficients for an aeroelastic aircraft model. This method uses the Global Sensitivity Equations (GSE) to account for the aero-structural coupling, and a reduced-order modal analysis approach to condense the coupling bandwidth between the aerodynamic and structural models. Parallel computing is applied to reduce the computational expense of the numerous high fidelity aerodynamic analyses needed for the coupled aero-structural system. Good agreement is obtained between aerodynamic force and moment gradients computed with the GSE/modal analysis approach and the same quantities computed using brute-force, computationally expensive, finite difference approximations. A comparison between the computational expense of the GSE/modal analysis method and a pure finite difference approach is presented. These results show that the GSE/modal analysis approach is the more computationally efficient technique if sensitivity analysis is to be performed for two or more aircraft design parameters.
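
    The closing cost comparison is easy to make concrete: a central-difference gradient needs two full coupled analyses per design parameter, so its expense grows linearly with the number of parameters, which is the scaling the GSE/modal approach avoids. The sketch below substitutes a toy scalar function for the expensive aero-structural analysis; everything in it is illustrative.

    ```python
    import numpy as np

    def central_difference_gradient(f, x, h=1e-6):
        """Brute-force gradient of a scalar objective f at x.

        Each of the n design parameters costs two evaluations of f, so an
        expensive coupled aero-structural analysis would run 2n times.
        """
        x = np.asarray(x, dtype=float)
        grad = np.empty_like(x)
        for i in range(x.size):
            step = np.zeros_like(x)
            step[i] = h
            grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
        return grad

    # Toy stand-in for a lift coefficient as a function of two parameters.
    cl = lambda x: 0.1 * x[0] + 0.05 * x[0] * x[1]
    print(central_difference_gradient(cl, [2.0, 3.0]))  # ~[0.25, 0.10]
    ```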

  16. Geometrical analysis of an optical fiber bundle displacement sensor

    NASA Astrophysics Data System (ADS)

    Shimamoto, Atsushi; Tanaka, Kohichi

    1996-12-01

    The performance of a multifiber optical lever was geometrically analyzed by extending the Cook and Hamm model [Appl. Opt. 34, 5854-5860 (1995)] for a basic seven-fiber optical lever. The generalized relationships of sensitivity and the displacement detection limit to the fiber core radius, illumination irradiance, and coupling angle were obtained by analyses of three different types of light source: a parallel beam light source, an infinite plane light source, and a point light source. The analysis of the point light source was confirmed by a measurement that used a light-emitting diode as the light source. The sensitivity of the fiber-optic lever is inversely proportional to the fiber core radius, whereas the received light power is proportional to the number of illuminating and receiving fibers. Thus, bundling finer fibers in larger numbers of illuminating and receiving fibers is more effective for improving sensitivity and the displacement detection limit.

  17. Sensitivity derivatives for advanced CFD algorithm and viscous modelling parameters via automatic differentiation

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.; Haigler, Kara J.

    1993-01-01

    The computational technique of automatic differentiation (AD) is applied to a three-dimensional thin-layer Navier-Stokes multigrid flow solver to assess the feasibility and computational impact of obtaining exact sensitivity derivatives typical of those needed for sensitivity analyses. Calculations are performed for an ONERA M6 wing in transonic flow with both the Baldwin-Lomax and Johnson-King turbulence models. The wing lift, drag, and pitching moment coefficients are differentiated with respect to two different groups of input parameters. The first group consists of the second- and fourth-order damping coefficients of the computational algorithm, whereas the second group consists of two parameters in the viscous turbulent flow physics modelling. Results obtained via AD are compared, for both accuracy and computational efficiency, with the results obtained with divided differences (DD). The AD results are accurate, extremely simple to obtain, and show a significant computational advantage over those obtained by DD for some cases.
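
    The AD-versus-divided-differences contrast can be demonstrated with a minimal forward-mode implementation based on dual numbers. This is a generic sketch, not the AD tool applied to the flow solver in the study; the function is a toy stand-in for a force coefficient's dependence on a damping coefficient.

    ```python
    class Dual:
        """Minimal forward-mode automatic differentiation via dual numbers."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.dot + o.dot)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.val * o.dot + self.dot * o.val)
        __rmul__ = __mul__

    def f(c):                          # toy stand-in for a force coefficient
        return 3.0 * c * c + 2.0 * c   # as a function of a damping coefficient

    x = 1.5
    exact = f(Dual(x, 1.0)).dot            # AD: exact derivative, one pass
    h = 1e-6
    dd = (f(x + h) - f(x - h)) / (2 * h)   # divided difference: approximate
    print(exact, dd)                       # 11.0 vs ~11.0 (truncation error)
    ```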

  18. Rapid and Sensitive Detection of Lymphocystis Disease Virus Genotype VII by Loop-Mediated Isothermal Amplification.

    PubMed

    Valverde, Estefanía J; Cano, Irene; Castro, Dolores; Paley, Richard K; Borrego, Juan J

    2017-03-01

    Lymphocystis disease virus (LCDV) infections have been described in gilthead seabream (Sparus aurata L.) and Senegalese sole (Solea senegalensis, Kaup), two of the most important marine fish species in the Mediterranean aquaculture. In this study, a rapid, specific, and sensitive detection method for LCDV genotype VII based on loop-mediated isothermal amplification (LAMP) was developed. The LAMP assay, performed using an apparatus with real-time amplification monitoring, was able to specifically detect LCDV genotype VII from clinically positive samples in less than 12 min. In addition, the assay allowed the detection of LCDV in all asymptomatic carrier fish analysed, identified by qPCR, showing an analytical sensitivity of ten copies of viral DNA per reaction. The LCDV LAMP assay has proven to be a promising diagnostic method that can be used easily in fish farms to detect the presence and spread of this iridovirus.

  19. Behavior, sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation.

    PubMed

    Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R

    2016-08-15

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we show that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Behavior, Sensitivity, and power of activation likelihood estimation characterized by massive empirical simulation

    PubMed Central

    Eickhoff, Simon B.; Nichols, Thomas E.; Laird, Angela R.; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T.

    2016-01-01

    Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and thus to formulate recommendations for future ALE studies. As a first consequence, we show that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments in an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. PMID:27179606

  1. An optimal search filter for retrieving systematic reviews and meta-analyses

    PubMed Central

    2012-01-01

    Background Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. However, search filters have been developed to assist in searching the extensive body of indexed literature. Search filters can be designed to find literature related to a certain subject (i.e. content-specific filter) or particular study designs (i.e. methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance with other available systematic review filters. Methods This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate each filter's sensitivity, specificity, precision, and number needed to read. Results Nineteen of the 31 search filters were effective in retrieving a high proportion of relevant articles (sensitivity scores greater than 85%). The majority achieved a high degree of sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter in comparison with the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. Its narrower focus saves considerable time and resources during updates of this online resource without sacrificing sensitivity. PMID:22512835
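
    The four reported performance measures follow from a 2x2 cross-tabulation of filter retrieval against the validation set. A minimal sketch, with hypothetical counts (the function name is ours):

    ```python
    def filter_metrics(tp, fp, fn, tn):
        """Performance of a search filter against a validation set.

        tp: relevant articles retrieved; fn: relevant articles missed;
        fp: irrelevant articles retrieved; tn: irrelevant articles excluded.
        """
        sensitivity = tp / (tp + fn)        # share of relevant records found
        specificity = tn / (tn + fp)
        precision = tp / (tp + fp)
        number_needed_to_read = 1.0 / precision  # records screened per hit
        return sensitivity, specificity, precision, number_needed_to_read

    # Hypothetical counts for one filter run against a 219-article gold set.
    print(filter_metrics(tp=190, fp=1200, fn=29, tn=8000))
    ```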

  2. The countermovement jump to monitor neuromuscular status: A meta-analysis.

    PubMed

    Claudino, João Gustavo; Cronin, John; Mezêncio, Bruno; McMaster, Daniel Travis; McGuigan, Michael; Tricoli, Valmor; Amadio, Alberto Carlos; Serrão, Julio Cerca

    2017-04-01

    The primary objective of this meta-analysis was to compare countermovement jump (CMJ) performance in studies that reported the highest value as opposed to average value for the purposes of monitoring neuromuscular status (i.e., fatigue and supercompensation). The secondary aim was to determine the sensitivity of the dependent variables. Systematic review with meta-analysis. The meta-analysis was conducted on the highest or average of a number of CMJ variables. Multiple literature searches were undertaken in Pubmed, Scopus, and Web of Science to identify articles utilizing CMJ to monitor training status. Effect sizes (ES) with 95% confidence interval (95% CI) were calculated using the mean and standard deviation of the pre- and post-testing data. The coefficient of variation (CV) with 95% CI was also calculated to assess the level of instability of each variable. Heterogeneity was assessed using a random-effects model. 151 articles were included providing a total of 531 ESs for the meta-analyses; 85.4% of articles used highest CMJ height, 13.2% used average and 1.3% used both when reporting changes in CMJ performance. Based on the meta-analysis, average CMJ height was more sensitive than highest CMJ height in detecting CMJ fatigue and supercompensation. Furthermore, other CMJ variables such as peak power, mean power, peak velocity, peak force, mean impulse, and power were sensitive in tracking the supercompensation effects of training. The average CMJ height was more sensitive than highest CMJ height in monitoring neuromuscular status; however, further investigation is needed to determine the sensitivity of other CMJ performance variables. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
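
    For context, the effect sizes pooled in such meta-analyses are standardized mean differences computed from pre- and post-test means and standard deviations. The sketch below shows one common form (Cohen's d with a large-sample confidence interval); it treats pre and post as independent samples, which is a simplification for repeated measures, and all numbers are invented.

    ```python
    import math

    def cohens_d_ci(m_pre, sd_pre, n_pre, m_post, sd_post, n_post):
        """Effect size (Cohen's d) with an approximate 95% CI.

        Pooled-SD standardized mean difference with the large-sample
        variance formula; real CMJ analyses may need to account for the
        pre-post correlation in repeated measures.
        """
        sd_pooled = math.sqrt(((n_pre - 1) * sd_pre ** 2 +
                               (n_post - 1) * sd_post ** 2) /
                              (n_pre + n_post - 2))
        d = (m_post - m_pre) / sd_pooled
        var_d = ((n_pre + n_post) / (n_pre * n_post) +
                 d ** 2 / (2 * (n_pre + n_post)))
        se = math.sqrt(var_d)
        return d, (d - 1.96 * se, d + 1.96 * se)

    # Hypothetical CMJ heights (cm) before and after a fatiguing session.
    print(cohens_d_ci(m_pre=40.0, sd_pre=4.0, n_pre=12,
                      m_post=38.2, sd_post=4.2, n_post=12))
    ```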

  3. Nutritional therapy in cirrhosis or alcoholic hepatitis: a systematic review and meta-analysis.

    PubMed

    Fialla, Annette D; Israelsen, Mads; Hamberg, Ole; Krag, Aleksander; Gluud, Lise Lotte

    2015-09-01

    Patients with cirrhosis and alcoholic hepatitis are often malnourished and have a superimposed stress metabolism, which increases nutritional demands. We performed a systematic review on the effects of nutritional therapy vs. no intervention for patients with cirrhosis or alcoholic hepatitis. We included trials on nutritional therapy designed to fulfil at least 75% of daily nutritional demand. Authors extracted data in an independent manner. Random-effects and fixed-effect meta-analyses were performed and the results expressed as risk ratios (RR) with 95% confidence intervals (CI). Sequential analyses were performed to evaluate the risk of spurious findings because of random and systematic errors. Subgroup and sensitivity analyses were performed to evaluate the risk of bias and sources of between-trial heterogeneity. Thirteen randomized controlled trials were included, with 329 patients allocated to enteral (nine trials) or intravenous (four trials) nutrition and 334 to control. All trials were classed as having a high risk of bias. Random-effects meta-analysis showed that nutritional therapy reduced mortality (RR 0.80; 95% CI, 0.64 to 0.99). The result was not confirmed in sequential analysis. Fixed-effect analysis suggested that nutrition prevented overt hepatic encephalopathy (RR 0.73; 95% CI, 0.55 to 0.96) and infection (RR 0.66; 95% CI, 0.45 to 0.98), but the results were not confirmed in random-effects analyses. Our review suggests that nutritional therapy may have beneficial effects on clinical outcomes in cirrhosis and alcoholic hepatitis. High-quality trials are needed to verify our findings. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
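
    A minimal sketch of the random-effects pooling used in reviews of this kind (DerSimonian-Laird on the log risk ratio scale) follows; the trial-level inputs are hypothetical, not data from this review.

    ```python
    import math

    def dl_random_effects(log_rr, variances):
        """DerSimonian-Laird random-effects pooling of log risk ratios."""
        w = [1.0 / v for v in variances]               # fixed-effect weights
        fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
        df = len(log_rr) - 1
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)                  # between-trial variance
        w_re = [1.0 / (v + tau2) for v in variances]
        pooled = sum(wi * yi for wi, yi in zip(w_re, log_rr)) / sum(w_re)
        se = math.sqrt(1.0 / sum(w_re))
        return (math.exp(pooled),
                (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)))

    # Hypothetical per-trial log risk ratios and variances for mortality.
    print(dl_random_effects([-0.35, -0.10, -0.51, 0.05],
                            [0.05, 0.08, 0.12, 0.09]))
    ```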

  4. Cost-effectiveness of left ventricular assist devices for patients with end-stage heart failure: analysis of the French hospital discharge database.

    PubMed

    Tadmouri, Abir; Blomkvist, Josefin; Landais, Cécile; Seymour, Jerome; Azmoun, Alexandre

    2018-02-01

    Although left ventricular assist devices (LVADs) are currently approved for coverage and reimbursement in France, no French cost-effectiveness (CE) data are available to support this decision. This study aimed at estimating the CE of LVADs compared with medical management in the French health system. Individual patient data from the French hospital discharge database (Program for the Medicalization of Information Systems) were analysed using the Kaplan-Meier method. Outcomes were time to death, time to heart transplantation (HTx), and time to death after HTx. A micro-costing method was used to calculate the monthly costs extracted from the Program for the Medicalization of Information Systems. A multistate Markov model with a monthly cycle was developed to assess CE. The analysis over a lifetime horizon was performed from the perspective of the French healthcare payer; discount rates were 4%. Probabilistic and deterministic sensitivity analyses were performed. Outcomes were quality-adjusted life years (QALYs) and the incremental CE ratio (ICER). Mean QALYs for an LVAD patient were 1.5 at a lifetime cost of €190 739, delivering a probabilistic ICER of €125 580/QALY [95% confidence interval: 105 587 to 150 314]. The sensitivity analysis showed that the ICER was mainly sensitive to two factors: (i) the high acquisition cost of the device and (ii) the device performance in terms of patient survival. Our economic evaluation showed that the use of LVADs in patients with end-stage heart failure yields greater benefit in terms of survival than medical management, at an extra lifetime cost exceeding €100 000/QALY. Technological advances and device cost reductions should hence lead to an improvement in overall CE. © 2017 The Authors. ESC Heart Failure published by John Wiley & Sons Ltd on behalf of the European Society of Cardiology.
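
    A multistate Markov cost-effectiveness model of this kind reduces to repeated matrix-vector products with per-cycle discounting. The sketch below is a deliberately simplified three-state version with invented transition probabilities, costs, and utilities; it illustrates only the mechanics of deriving an ICER, not the published model.

    ```python
    import numpy as np

    def markov_ce(p_transition, cost, utility, cycles=240, disc=0.04 / 12):
        """Discounted lifetime costs and QALYs for one strategy.

        p_transition: monthly transition matrix over health states;
        cost, utility: per-cycle cost and (annual) utility of each state.
        All inputs in this sketch are hypothetical.
        """
        state = np.array([1.0, 0.0, 0.0])  # on-therapy, transplanted, dead
        total_cost = total_qaly = 0.0
        for t in range(cycles):
            d = 1.0 / (1.0 + disc) ** t            # monthly discount factor
            total_cost += d * state @ cost
            total_qaly += d * state @ utility / 12.0
            state = state @ p_transition
        return total_cost, total_qaly

    P_lvad = np.array([[0.97, 0.01, 0.02], [0.0, 0.995, 0.005], [0, 0, 1.0]])
    P_mm = np.array([[0.93, 0.01, 0.06], [0.0, 0.995, 0.005], [0, 0, 1.0]])
    cost_lvad = np.array([3000.0, 1500.0, 0.0])    # EUR per month, invented
    cost_mm = np.array([1200.0, 1500.0, 0.0])
    util = np.array([0.60, 0.75, 0.0])             # annual utilities, invented

    c1, q1 = markov_ce(P_lvad, cost_lvad, util)
    c0, q0 = markov_ce(P_mm, cost_mm, util)
    print("ICER (EUR/QALY):", (c1 - c0) / (q1 - q0))
    ```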

  5. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    PubMed

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, the input space is reduced to only those inputs that drive the variance of the initial ABM, resulting in a model with an output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean output of the initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
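
    The variance decomposition underlying this framework can be illustrated with a first-order Sobol estimator. The sketch below applies the pick-freeze (Saltelli-style) estimator to a toy function standing in for an ABM output; a real application would use quasi-random sampling and the ABM itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def model(x):
        """Toy stand-in for an ABM output, e.g. hectares of farmland conserved."""
        return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

    def first_order_sobol(f, n_inputs, n=200_000):
        """First-order Sobol indices S_i = Var(E[Y|X_i]) / Var(Y).

        Pick-freeze estimator: two independent samples A and B, plus hybrid
        matrices equal to B with column i taken from A (Saltelli 2010).
        """
        A = rng.uniform(0.0, 1.0, (n, n_inputs))
        B = rng.uniform(0.0, 1.0, (n, n_inputs))
        yA, yB = f(A), f(B)
        var_y = np.var(np.concatenate([yA, yB]))
        indices = []
        for i in range(n_inputs):
            AB_i = B.copy()
            AB_i[:, i] = A[:, i]
            indices.append(np.mean(yA * (f(AB_i) - yB)) / var_y)
        return np.array(indices)

    print(first_order_sobol(model, 3))  # input 0 dominates; input 2 is minor
    ```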

  6. Using Uncertainty and Sensitivity Analyses in Socioecological Agent-Based Models to Improve Their Analytical Performance and Policy Relevance

    PubMed Central

    Ligmann-Zielinska, Arika; Kramer, Daniel B.; Spence Cheruvelil, Kendra; Soranno, Patricia A.

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, the input space is reduced to only those inputs that drive the variance of the initial ABM, resulting in a model with an output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean output of the initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system. PMID:25340764

  7. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks.

    PubMed

    Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large-scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method and to explore the influence of the parameter K. The results demonstrated that our proposed approach is faster and more effective at hiding privacy-sensitive sequence rules, achieving higher ratios of hidden sensitive rules and thereby eliminating inference attacks. Our method also had fewer side effects than the traditional spatial-temporal k-anonymity method in terms of the ratio of newly generated sensitive rules, and essentially the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.

  8. A novel on-line spatial-temporal k-anonymity method for location privacy protection from sequence rules-based inference attacks

    PubMed Central

    Wu, Chenxue; Liu, Zhao; Zhu, Yunhong

    2017-01-01

    Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large-scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method and to explore the influence of the parameter K. The results demonstrated that our proposed approach is faster and more effective at hiding privacy-sensitive sequence rules, achieving higher ratios of hidden sensitive rules and thereby eliminating inference attacks. Our method also had fewer side effects than the traditional spatial-temporal k-anonymity method in terms of the ratio of newly generated sensitive rules, and essentially the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules. PMID:28767687

  9. Longitudinal study of factors affecting taste sense decline in old-old individuals.

    PubMed

    Ogawa, T; Uota, M; Ikebe, K; Arai, Y; Kamide, K; Gondo, Y; Masui, Y; Ishizaki, T; Inomata, C; Takeshita, H; Mihara, Y; Hatta, K; Maeda, Y

    2017-01-01

    The sense of taste plays a pivotal role in personal assessment of the nutritional value, safety and quality of foods. Although it is commonly recognised that taste sensitivity decreases with age, alterations in that sensitivity over time in an old-old population have not been previously reported. Furthermore, no known studies utilised comprehensive variables regarding taste changes and related factors for assessments. Here, we report novel findings from a 3-year longitudinal study model aimed to elucidate taste sensitivity decline and its related factors in old-old individuals. We utilised 621 subjects aged 79-81 years who participated in the Septuagenarians, Octogenarians, Nonagenarians Investigation with Centenarians Study for baseline assessments performed in 2011 and 2012, and then conducted follow-up assessments 3 years later in 328 of them. Assessment of general health, an oral examination and determination of taste sensitivity were performed for each. We also evaluated cognitive function using Montreal Cognitive Assessment findings, then excluded from analysis those with a score lower than 20 in order to secure the validity and reliability of the subjects' answers. Contributing variables were selected using univariate analysis, then analysed with multivariate logistic regression analysis. We found that males showed significantly greater declines in taste sensitivity for sweet and sour tastes than females. Additionally, subjects with lower cognitive scores showed a significantly greater decrease in sensitivity to salty taste in multivariate analysis. In conclusion, our longitudinal study revealed that gender and cognitive status are major factors affecting taste sensitivity in geriatric individuals. © 2016 John Wiley & Sons Ltd.

  10. Familial covariation of facial emotion recognition and IQ in schizophrenia.

    PubMed

    Andric, Sanja; Maric, Nadja P; Mihaljevic, Marina; Mirjanic, Tijana; van Os, Jim

    2016-12-30

    Alterations in general intellectual ability and social cognition in schizophrenia are core features of the disorder, evident at the illness' onset and persistent throughout its course. However, previous studies examining cognitive alterations in siblings discordant for schizophrenia have yielded inconsistent results. The present study aimed to investigate the nature of the association between facial emotion recognition and general IQ by applying a genetically sensitive cross-trait cross-sibling design. Participants (total n=158; patients, unaffected siblings, controls) were assessed using the Benton Facial Recognition Test, the Degraded Facial Affect Recognition Task (DFAR) and the Wechsler Adult Intelligence Scale-III. Patients had lower IQ and altered facial emotion recognition in comparison to the other groups. Healthy siblings and controls did not differ significantly in IQ and DFAR performance, but siblings exhibited intermediate recognition of angry facial expressions. Cross-trait within-subject analyses showed significant associations between overall DFAR performance and IQ in all participants. Within-trait cross-sibling analyses found significant associations between patients' and siblings' IQ and overall DFAR performance, suggesting their familial clustering. Finally, cross-trait cross-sibling analyses revealed familial covariation of facial emotion recognition and IQ in siblings discordant for schizophrenia, further indicating their familial etiology. Both traits are important phenotypes for genetic studies and potential early clinical markers of schizophrenia-spectrum disorders. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  11. Montreal Cognitive Assessment for screening mild cognitive impairment: variations in test performance and scores by education in Singapore.

    PubMed

    Ng, Tze Pin; Feng, Lei; Lim, Wee Shiong; Chong, Mei Sian; Lee, Tih Shih; Yap, Keng Bee; Tsoi, Tung; Liew, Tau Ming; Gao, Qi; Collinson, Simon; Kandiah, Nagaendran; Yap, Philip

    2015-01-01

    The Montreal Cognitive Assessment (MoCA) was developed as a screening instrument for mild cognitive impairment (MCI). We evaluated the MoCA's test performance by educational group among older Singaporean Chinese adults. The MoCA and Mini-Mental State Examination (MMSE) were evaluated in two independent studies (a clinic-based sample and a community-based sample) of MCI and normal cognition (NC) controls, using receiver operating characteristic curve analyses: area under the curve (AUC), sensitivity (Sn), and specificity (Sp). The MoCA modestly discriminated MCI from NC in both study samples (AUC = 0.63 and 0.65): Sn = 0.64 and Sp = 0.36 at a cut-off of 28/29 in the clinic-based sample, and Sn = 0.65 and Sp = 0.55 at a cut-off of 22/23 in the community-based sample. The MoCA's test performance was least satisfactory in the highest (>6 years) education group: AUC = 0.50 (p = 0.98), Sn = 0.54, and Sp = 0.51 at a cut-off of 27/28. Overall, the MoCA's test performance was not better than that of the MMSE. In multivariate analyses controlling for age and gender, MCI diagnosis was associated with a <1-point decrement in MoCA score (η² = 0.010), but lower (1-6 years) and no education were associated with a 3- to 5-point decrement (η² = 0.115 and η² = 0.162, respectively). The MoCA's ability to discriminate MCI from NC was modest in this Chinese population, because it was far more sensitive to the effect of education than to MCI diagnosis. © 2015 S. Karger AG, Basel.

  12. Is functional MR imaging assessment of hemispheric language dominance as good as the Wada test?: a meta-analysis.

    PubMed

    Dym, R Joshua; Burns, Judah; Freeman, Katherine; Lipton, Michael L

    2011-11-01

    To perform a systematic review and meta-analysis to quantitatively assess functional magnetic resonance (MR) imaging lateralization of language function in comparison with the Wada test. This study was determined to be exempt from review by the institutional review board. A systematic review and meta-analysis were performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. A structured Medline search was conducted to identify all studies that compared functional MR imaging with the Wada test for determining hemispheric language dominance prior to brain surgery. Studies meeting predetermined inclusion criteria were selected independently by two radiologists who also assessed their quality using the Quality Assessment of Diagnostic Accuracy Studies tool. Language dominance was classified as typical (left hemispheric language dominance) or atypical (right hemispheric language dominance or bilateral language representation) for each patient. A meta-analysis was then performed by using a bivariate random-effects model to derive estimates of sensitivity and specificity, with Wada as the standard of reference. Subgroup analyses were also performed to compare the different functional MR imaging techniques utilized by the studies. Twenty-three studies, comprising 442 patients, met inclusion criteria. The sensitivity and specificity of functional MR imaging for atypical language dominance (compared with the Wada test) were 83.5% (95% confidence interval: 80.2%, 86.7%) and 88.1% (95% confidence interval: 87.0%, 89.2%), respectively. Functional MR imaging provides an excellent, noninvasive alternative for language lateralization and should be considered for the initial preoperative assessment of hemispheric language dominance. Further research may help determine which functional MR methods are most accurate for specific patient populations. RSNA, 2011

  13. Diagnostic Accuracy of Computer Tomography Angiography and Magnetic Resonance Angiography in the Stenosis Detection of Autologuous Hemodialysis Access: A Meta-Analysis

    PubMed Central

    Liu, Shiyuan

    2013-01-01

    Purpose To compare the diagnostic performance of computed tomography angiography (CTA) and magnetic resonance angiography (MRA) for detection and assessment of stenosis in patients with autologous hemodialysis access. Materials and Methods PubMed, MEDLINE, EMBASE and the Cochrane Library database were searched from January 1984 to May 2013 for studies comparing CTA or MRA with DSA or surgery for autologous hemodialysis access. Eligible studies were in English, aimed to detect more than 50% stenosis or occlusion of autologous vascular access in hemodialysis patients with CTA or MRA, and provided sufficient data about diagnostic performance. Methodological quality was assessed with the Quality Assessment of Diagnostic Studies (QUADAS) instrument. Sensitivities (SEN), specificities (SPE), positive likelihood ratios (PLR), negative likelihood ratios (NLR), diagnostic odds ratios (DOR) and areas under the receiver operator characteristic curve (AUC) were pooled statistically. Potential threshold effects, heterogeneity and publication bias were evaluated. The clinical utility of CTA and MRA in the detection of stenosis was also investigated. Results Sixteen eligible studies were included, with a total of 500 patients. Both CTA and MRA were accurate modalities (sensitivity, 96.2% and 95.4%, respectively; specificity, 97.1% and 96.1%, respectively; diagnostic odds ratio [DOR], 393.69 and 211.47, respectively) for hemodialysis vascular access. No significant difference was detected between the diagnostic performance of CTA (AUC, 0.988) and MRA (AUC, 0.982). Meta-regression analyses and subgroup analyses revealed no statistical difference. Deeks' funnel plot suggested publication bias. Conclusion The diagnostic performance of CTA and MRA for detecting stenosis of hemodialysis vascular access showed no statistical difference. Both techniques may function as an alternative or an important complement to conventional digital subtraction angiography (DSA) and may be able to help guide medical management. PMID:24194928
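
    For reference, each study's SEN, SPE, PLR, NLR and DOR follow directly from its 2x2 table, as sketched below with hypothetical counts; pooling across studies additionally requires a bivariate or hierarchical model.

    ```python
    def dta_statistics(tp, fp, fn, tn):
        """Per-study diagnostic accuracy statistics from a 2x2 table."""
        sen = tp / (tp + fn)
        spe = tn / (tn + fp)
        plr = sen / (1.0 - spe)            # positive likelihood ratio
        nlr = (1.0 - sen) / spe            # negative likelihood ratio
        dor = plr / nlr                    # diagnostic odds ratio
        return dict(SEN=sen, SPE=spe, PLR=plr, NLR=nlr, DOR=dor)

    # Hypothetical counts for one CTA study of access stenosis.
    print(dta_statistics(tp=50, fp=2, fn=2, tn=65))
    ```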

  14. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi

    2017-05-01

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
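
    One of the simplest sensitivity analyses in this spirit is leave-one-out: recompute the pooled estimate with each study removed to see whether a single (possibly nonindependent) study drives the result. A minimal inverse-variance sketch with invented effect sizes:

    ```python
    def leave_one_out(effects, variances):
        """Recompute the pooled (inverse-variance) mean, dropping one study
        at a time, to flag studies that drive the overall estimate."""
        def pooled(es, vs):
            w = [1.0 / v for v in vs]
            return sum(wi * e for wi, e in zip(w, es)) / sum(w)
        full = pooled(effects, variances)
        for i in range(len(effects)):
            es = effects[:i] + effects[i + 1:]
            vs = variances[:i] + variances[i + 1:]
            print(f"without study {i}: {pooled(es, vs):+.3f} (full: {full:+.3f})")

    # Hypothetical effect sizes; studies 2 and 3 share a source population,
    # so watch whether dropping either one shifts the pooled mean.
    leave_one_out([0.42, 0.15, 0.55, 0.58, 0.10],
                  [0.02, 0.05, 0.01, 0.01, 0.04])
    ```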

  15. Global sensitivity analysis in stochastic simulators of uncertain reaction networks.

    PubMed

    Navarro Jimenez, M; Le Maître, O P; Knio, O M

    2016-12-28

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
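
    The split between parametric and inherent variability can be illustrated more simply than with a full Sobol decomposition: by the law of total variance, a nested Monte Carlo loop over an uncertain kinetic parameter separates the two contributions. The sketch below does this for a small birth-death process; it is a simplified stand-in for the paper's method, and all rates are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def birth_death(k_birth, k_death=1.0, x0=20, t_end=2.0):
        """Gillespie simulation of a birth-death process; returns the final count."""
        x, t = x0, 0.0
        while t < t_end and x > 0:
            total = (k_birth + k_death) * x
            t += rng.exponential(1.0 / total)
            if t < t_end:
                x += 1 if rng.random() < k_birth / (k_birth + k_death) else -1
        return x

    # Law of total variance: Var(Y) = Var_k(E[Y|k]) + E_k[Var(Y|k)], i.e. a
    # parametric part (uncertain birth rate) plus an inherent stochastic part.
    cond_means, cond_vars = [], []
    for _ in range(100):                       # outer loop: sample the parameter
        k = rng.uniform(0.8, 1.2)              # uncertain kinetic parameter
        runs = [birth_death(k) for _ in range(50)]  # inner loop: replicate runs
        cond_means.append(np.mean(runs))
        cond_vars.append(np.var(runs))
    # Crude estimates; inner-loop noise slightly inflates the parametric term.
    print("parametric variance ~", round(float(np.var(cond_means)), 1))
    print("inherent variance  ~", round(float(np.mean(cond_vars)), 1))
    ```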

  16. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    DOE PAGES

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-23

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. Here, a sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  17. Global sensitivity analysis in stochastic simulators of uncertain reaction networks

    NASA Astrophysics Data System (ADS)

    Navarro Jimenez, M.; Le Maître, O. P.; Knio, O. M.

    2016-12-01

    Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability with the uncertain kinetic parameters of the first statistical moments of model predictions. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subset of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.

  18. Cost-effectiveness analysis in the Spanish setting of the PEAK trial of panitumumab plus mFOLFOX6 compared with bevacizumab plus mFOLFOX6 for first-line treatment of patients with wild-type RAS metastatic colorectal cancer.

    PubMed

    Rivera, Fernando; Valladares, Manuel; Gea, Salvador; López-Martínez, Noemí

    2017-06-01

    To assess the cost-effectiveness of panitumumab in combination with mFOLFOX6 (oxaliplatin, 5-fluorouracil, and leucovorin) vs bevacizumab in combination with mFOLFOX6 as first-line treatment of patients with wild-type RAS metastatic colorectal cancer (mCRC) in Spain. A semi-Markov model was developed including the following health states: Progression free; Progressive disease: Treat with best supportive care; Progressive disease: Treat with subsequent active therapy; Attempted resection of metastases; Disease free after metastases resection; Progressive disease: after resection and relapse; and Death. Parametric survival analyses of patient-level progression free survival and overall survival data from the PEAK Phase II clinical trial were used to estimate health state transitions. Additional data from the PEAK trial were considered for the dose and duration of therapy, the use of subsequent therapy, the occurrence of adverse events, and the incidence and probability of time to metastasis resection. Utility weightings were calculated from patient-level data from panitumumab trials evaluating first-, second-, and third-line treatments. The study was performed from the Spanish National Health System (NHS) perspective including only direct costs. A life-time horizon was applied. Probabilistic sensitivity analyses and scenario sensitivity analyses were performed to assess the robustness of the model. Based on the PEAK trial, which demonstrated greater efficacy of panitumumab vs bevacizumab, both in combination with mFOLFOX6 first-line in wild-type RAS mCRC patients, the estimated incremental cost per life-year gained was €16,567 and the estimated incremental cost per quality-adjusted life year gained was €22,794. The sensitivity analyses showed the model was robust to alternative parameters and assumptions. The analysis was based on a simulation model and, therefore, the results should be interpreted cautiously. Based on the PEAK Phase II clinical trial and taking into account Spanish costs, the results of the analysis showed that first-line treatment of mCRC with panitumumab + mFOLFOX6 could be considered a cost-effective option compared with bevacizumab + mFOLFOX6 for the Spanish NHS.

  19. Sensitization to reactive diluents and hardeners in epoxy resin systems. IVDK data 2002-2011. Part I: reaction frequencies.

    PubMed

    Geier, Johannes; Lessmann, Holger; Hillen, Uwe; Skudlik, Christoph; Jappe, Uta

    2016-02-01

    Epoxy resin systems (ERSs), consisting of resins, reactive diluents, and hardeners, are indispensable in many branches of industry. In order to develop less sensitizing ERS formulations, knowledge of the sensitizing properties of single components is mandatory. To analyse the frequency of sensitization in the patients concerned, as one integral part of a research project on the sensitizing potency of epoxy resin compounds (FP-0324). A retrospective analysis of data from the Information Network of Departments of Dermatology (IVDK), 2002-2011, and a comparison of reaction frequencies with (surrogate) exposure data, were performed. Almost half of the patients sensitized to epoxy resin were additionally sensitized to reactive diluents or hardeners. Among the reactive diluents, 1,6-hexanediol diglycidyl ether was the most frequent allergen, followed by 1,4-butanediol diglycidyl ether, phenyl glycidyl ether, and p-tert-butylphenyl glycidyl ether. Among the hardeners, m-xylylene diamine (MXDA) and isophorone diamine (IPDA) were the most frequent allergens. According to the calculated exposure-related frequency of sensitization, MXDA seems to be a far more important sensitizer than IPDA. Up to 60% of the patients sensitized to hardeners and 15-20% of those sensitized to reactive diluents do not react to epoxy resin. In cases of suspected contact allergy to an ERS, a complete epoxy resin series must be patch tested from the start. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. Diagnostic Accuracy of Abdominal Ultrasound for Diagnosis of Acute Appendicitis: Systematic Review and Meta-analysis.

    PubMed

    Giljaca, Vanja; Nadarevic, Tin; Poropat, Goran; Nadarevic, Vesna Stefanac; Stimac, Davor

    2017-03-01

    To determine the diagnostic accuracy of abdominal ultrasound (US) for the diagnosis of acute appendicitis (AA), in terms of sensitivity, specificity and post-test probabilities for positive and negative results. A systematic search of MEDLINE, Embase, The Cochrane Library and Science Citation Index Expanded from January 1994 to October 2014 was performed. Two authors independently evaluated studies for inclusion, extracted data and performed analyses. The reference standard for evaluation of the final diagnosis was the pathohistological report on tissue obtained at appendectomy. Summary sensitivity, specificity and post-test probabilities of AA after positive and negative results of US, with corresponding 95% confidence intervals (CI), were calculated. Out of 3306 references identified through electronic searches, 17 reports met the inclusion criteria, with 2841 included participants. The summary sensitivity and specificity of US for the diagnosis of AA were 69% (95% CI 59-78%) and 81% (95% CI 73-88%), respectively. At the median pretest probability of AA of 76.4%, the post-test probability for a positive result of US was 92% (95% CI 88-95%) and for a negative result 55% (95% CI 46-63%). Abdominal ultrasound does not seem to have a role in the diagnostic pathway for AA in suspected patients. The summary sensitivity and specificity of US do not exceed those of physical examination. Patients who require additional diagnostic workup should be referred to more sensitive and specific diagnostic procedures, such as computed tomography.
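
    The reported post-test probabilities follow from Bayes' rule applied to the pooled sensitivity and specificity at the median pretest probability; the short check below reproduces the 92% and 55% figures.

    ```python
    def post_test_probabilities(pretest, sensitivity, specificity):
        """Bayes' rule: disease probability after a positive or negative test."""
        p_pos = (pretest * sensitivity) / (
            pretest * sensitivity + (1 - pretest) * (1 - specificity))
        p_neg = (pretest * (1 - sensitivity)) / (
            pretest * (1 - sensitivity) + (1 - pretest) * specificity)
        return p_pos, p_neg

    # Values reported in the review: pretest 76.4%, Sn 69%, Sp 81%.
    pos, neg = post_test_probabilities(0.764, 0.69, 0.81)
    print(f"post-test (US positive): {pos:.0%}")   # ~92%
    print(f"post-test (US negative): {neg:.0%}")   # ~55%
    ```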

  1. Cognitive Screening in Brain Tumors: Short but Sensitive Enough?

    PubMed Central

    Robinson, Gail A.; Biggs, Vivien; Walker, David G.

    2015-01-01

    Cognitive deficits in brain tumors are generally thought to be relatively mild and non-specific, although recent evidence challenges this notion. One possibility is that cognitive screening tools are being used to assess cognitive functions but their sensitivity to detect cognitive impairment is limited. For improved sensitivity to mild and/or focal cognitive deficits in brain tumors, neuropsychological evaluation tailored to detect specific impairments has been thought crucial. This study investigates the sensitivity of a cognitive screening tool, the Montreal Cognitive Assessment (MoCA), compared with a brief but tailored cognitive assessment (CA) for identifying cognitive deficits in an unselected primary brain tumor sample (i.e., low/high-grade gliomas, meningiomas). Performance is compared on broad measures of impairment: (a) the number of patients impaired on the global screening measure or in any cognitive domain; and (b) the number of cognitive domains impaired, together with specific comparisons of MoCA-Intact and MoCA-Impaired patients on individual cognitive tests. The MoCA-Impaired group obtained lower naming and word fluency scores than the MoCA-Intact group, but otherwise performed comparably on cognitive tests. Overall, based on our results from patients with brain tumors, the MoCA has extremely poor sensitivity for detecting cognitive impairments, and a brief but tailored CA is necessary. These findings are discussed in relation to broader issues of clinical management and planning, as well as specific considerations for the neuropsychological assessment of patients with brain tumors. PMID:25815273

  2. Space station electrical power system availability study

    NASA Technical Reports Server (NTRS)

    Turnquist, Scott R.; Twombly, Mark A.

    1988-01-01

    ARINC Research Corporation performed a preliminary reliability, availability, and maintainability (RAM) analysis of the NASA space station Electrical Power System (EPS). The analysis was performed using the ARINC Research developed UNIRAM RAM assessment methodology and software program. The analysis was performed in two phases: EPS modeling and EPS RAM assessment. The EPS was modeled in four parts: the insolar power generation system, the eclipse power generation system, the power management and distribution system (both ring and radial power distribution control unit (PDCU) architectures), and the power distribution to the inner keel PDCUs. The EPS RAM assessment was conducted in five steps: the use of UNIRAM to perform baseline EPS model analyses and to determine the orbital replacement unit (ORU) criticalities; the determination of EPS sensitivity to on-orbit sparing of ORUs, providing an indication of which ORUs may need to be spared on-orbit; the determination of EPS sensitivity to changes in ORU reliability; the determination of the expected annual number of ORU failures; and the integration of the power generation system model results with the distribution system model results to assess the full EPS. Conclusions were drawn and recommendations were made.

  3. The Prognostic Value of the Work Ability Index for Sickness Absence among Office Workers.

    PubMed

    Reeuwijk, Kerstin G; Robroek, Suzan J W; Niessen, Maurice A J; Kraaijenhagen, Roderik A; Vergouwe, Yvonne; Burdorf, Alex

    2015-01-01

    The work ability index (WAI) is a frequently used tool in occupational health to identify workers at risk of reduced work performance and work-related disability. However, information about the prognostic value of the WAI for identifying workers at risk of sickness absence is scarce. To investigate the prognostic value of the WAI for sickness absence, and whether its discriminative ability differs across demographic subgroups. At baseline, the WAI (score 7-49) was assessed among 1,331 office workers from a Dutch financial services company. Sickness absence was registered during 12-month follow-up and categorised as 0 days, 0

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Nicholas R.; Powers, Jeffrey J.; Mueller, Don

    In September 2016, reactor physics measurements were conducted at Research Centre Rez (RC Rez) using the FLiBe (2 ⁷LiF + BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) in the LR-0 low-power nuclear reactor. These experiments were intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems using FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL), in collaboration with RC Rez, performed sensitivity/uncertainty (S/U) analyses of these experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objectives of these analyses were (1) to identify potential sources of bias in fluoride salt-cooled and salt-fueled reactor simulations resulting from cross section uncertainties, and (2) to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report documents the final S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. In the future, these S/U analyses could be used to inform the design of additional FLiBe-based experiments using the salt from MSRE. The key finding of this work is that, for both solid and liquid fueled fluoride salt reactors, radiative capture in ⁷Li is the most significant contributor to potential bias in neutronics calculations within the FLiBe salt.

  5. Using computer-based video analysis in the study of fidgety movements.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild

    2009-09-01

    Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risks of developing CP. The same videos were analysed using the developed software called General Movement Toolbox (GMT) with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in displacement of a spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
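
    A toy Python version of the quantitative feature the study found most discriminative, the variability of the spatial centre of "active" (moving) pixels; the threshold and the synthetic frames are assumptions standing in for real video and the GMT implementation:

      import numpy as np

      def centroid_variability(frames, thresh=15):
          # 'Active' pixels change by more than `thresh` between frames
          centroids = []
          for prev, curr in zip(frames[:-1], frames[1:]):
              motion = np.abs(curr.astype(int) - prev.astype(int)) > thresh
              ys, xs = np.nonzero(motion)
              if len(xs):                      # skip frames with no movement
                  centroids.append((xs.mean(), ys.mean()))
          centroids = np.asarray(centroids)
          # Standard deviation of the centroid position across frames:
          # high variability is the feature reported as most discriminative
          return centroids.std(axis=0)

      # Random frames standing in for a short video clip
      rng = np.random.default_rng(0)
      frames = rng.integers(0, 255, size=(50, 120, 160), dtype=np.uint8)
      print(centroid_variability(frames))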

  6. Different Imaging Strategies in Patients With Possible Basilar Artery Occlusion: Cost-Effectiveness Analysis.

    PubMed

    Beyer, Sebastian E; Hunink, Myriam G; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H

    2015-07-01

    This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80,000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80,000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80,000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. © 2015 The Authors.
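
    The decision rule behind "highest net monetary benefit" is compact enough to show directly; the absolute cost/QALY pairs below are invented placeholders (the abstract reports only QALY differences), combined with the study's $80,000/QALY threshold:

      WTP = 80_000       # willingness-to-pay threshold, $/QALY (as in the study)

      # Hypothetical (lifetime cost $, QALYs) per strategy
      strategies = {
          "CTA": (41_000, 10.44),
          "MRI": (42_500, 10.42),
          "DUS then CTA": (40_800, 10.40),
          "Nonenhanced CT": (39_900, 10.30),
      }

      def nmb(cost, qalys, wtp=WTP):
          # Net monetary benefit: monetised health gain minus cost
          return qalys * wtp - cost

      for name, (c, q) in strategies.items():
          print(f"{name:15s} NMB = ${nmb(c, q):,.0f}")
      print("highest NMB:", max(strategies, key=lambda s: nmb(*strategies[s])))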

  7. Early-life gut microbiome and egg allergy.

    PubMed

    Fazlollahi, M; Chun, Y; Grishin, A; Wood, R A; Burks, A W; Dawson, P; Jones, S M; Leung, D Y M; Sampson, H A; Sicherer, S H; Bunyavanich, S

    2018-07-01

    Gut microbiota may play a role in egg allergy. We sought to examine the association between early-life gut microbiota and egg allergy. We studied 141 children with egg allergy and controls from the multicenter Consortium of Food Allergy Research study. At enrollment (age 3 to 16 months), fecal samples were collected, and clinical evaluation, egg-specific IgE measurement, and egg skin prick test were performed. Gut microbiome was profiled by 16S rRNA sequencing. Analyses for the primary outcome of egg allergy at enrollment, and the secondary outcomes of egg sensitization at enrollment and resolution of egg allergy by age 8 years, were performed using Quantitative Insights into Microbial Ecology, Phylogenetic Investigation of Communities by Reconstruction of Unobserved States, and Statistical Analysis of Metagenomic Profiles. Compared to controls, increased alpha diversity and distinct taxa (PERMANOVA P = 5.0 × 10 -4 ) characterized the early-life gut microbiome of children with egg allergy. Genera from the Lachnospiraceae, Streptococcaceae, and Leuconostocaceae families were differentially abundant in children with egg allergy. Predicted metagenome functional analyses showed differential purine metabolism by the gut microbiota of egg-allergic subjects (Kruskal-Wallis P adj  = 0.021). Greater gut microbiome diversity and genera from Lachnospiraceae and Ruminococcaceae were associated with egg sensitization (PERMANOVA P = 5.0 × 10 -4 ). Among those with egg allergy, there was no association between early-life gut microbiota and egg allergy resolution by age 8 years. The distinct early-life gut microbiota in egg-allergic and egg-sensitized children identified by our study may point to targets for preventive or therapeutic intervention. © 2018 EAACI and John Wiley and Sons A/S. Published by John Wiley and Sons Ltd.

  8. Value of lower respiratory tract surveillance cultures to predict bacterial pathogens in ventilator-associated pneumonia: systematic review and diagnostic test accuracy meta-analysis.

    PubMed

    Brusselaers, Nele; Labeau, Sonia; Vogelaers, Dirk; Blot, Stijn

    2013-03-01

    In ventilator-associated pneumonia (VAP), early appropriate antimicrobial therapy may be hampered by involvement of multidrug-resistant (MDR) pathogens. A systematic review and diagnostic test accuracy meta-analysis were performed to analyse whether lower respiratory tract surveillance cultures accurately predict the causative pathogens of subsequent VAP in adult patients. Selection and assessment of eligibility were performed by three investigators by mutual consideration. Of the 525 studies retrieved, 14 were eligible for inclusion (all in English; published since 1994), accounting for 791 VAP episodes. The following data were collected: study and population characteristics; in- and exclusion criteria; diagnostic criteria for VAP; microbiological workup of surveillance and diagnostic VAP cultures. Sub-analyses were conducted for VAP caused by Staphylococcus aureus, Pseudomonas spp., and Acinetobacter spp., MDR microorganisms, frequency of sampling, and consideration of all versus the most recent surveillance cultures. The meta-analysis showed a high accuracy of surveillance cultures, with pooled sensitivities up to 0.75 and specificities up to 0.92 in culture-positive VAP. The area under the curve (AUC) of the hierarchical summary receiver-operating characteristic curve demonstrates moderate accuracy (AUC: 0.90) in predicting multidrug resistance. A sampling frequency of >2/week (sensitivity 0.79; specificity 0.96) and consideration of only the most recent surveillance culture (sensitivity 0.78; specificity 0.96) are associated with a higher accuracy of prediction. This study provides evidence for the benefit of surveillance cultures in predicting MDR bacterial pathogens in VAP. However, clinical and statistical heterogeneity, limited sample sizes, and bias remain important limitations of this meta-analysis.
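
    For intuition, a deliberately simplified fixed-effect pooling of study sensitivities on the logit scale; real diagnostic test accuracy meta-analyses such as this one use hierarchical bivariate models, and the counts below are invented:

      import math

      def pool_logit(events, totals):
          # Inverse-variance pooling of proportions on the logit scale
          num = den = 0.0
          for e, n in zip(events, totals):
              p = (e + 0.5) / (n + 1.0)                    # continuity correction
              logit = math.log(p / (1 - p))
              var = 1.0 / (e + 0.5) + 1.0 / (n - e + 0.5)  # approx. variance
              num += logit / var
              den += 1.0 / var
          pooled = num / den
          return 1.0 / (1.0 + math.exp(-pooled))           # back-transform

      # True positives / diseased patients per study (hypothetical)
      print(pool_logit(events=[30, 22, 48], totals=[40, 31, 60]))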

  9. Use of drug-eluting stents versus bare-metal stents in Korea: a cost-minimization analysis using population data.

    PubMed

    Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo

    2013-07-01

    The goal of this study was to perform an economic analysis of primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea, using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and from a retrospective analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.
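
    A sketch of the expected-cost arithmetic in such a cost-minimization tree, using the DES repeat-procedure mix quoted above; the unit costs, index cost, and overall repeat probability are hypothetical placeholders, not the study's inputs:

      # Repeat-revascularization mix for the DES arm, from the abstract
      p_repeat = {"DES": 0.597, "BMS": 0.006, "CABG": 0.043, "balloon": 0.353}

      # Everything below is a hypothetical placeholder (2011 won)
      cost_repeat = {"DES": 7_000_000, "BMS": 5_000_000,
                     "CABG": 15_000_000, "balloon": 3_000_000}
      p_any_repeat = 0.10        # assumed probability of any repeat procedure
      index_cost = 9_500_000     # assumed index DES procedure cost

      expected = index_cost + p_any_repeat * sum(
          p_repeat[k] * cost_repeat[k] for k in p_repeat)
      print(f"expected 2-year cost: {expected:,.0f} won/person")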

  10. The peripheral artery questionnaire: a new disease-specific health status measure for patients with peripheral arterial disease.

    PubMed

    Spertus, John; Jones, Philip; Poler, Sherri; Rocha-Singh, Krishna

    2004-02-01

    The most common indication for treating patients with peripheral arterial disease is to improve their health status: their symptoms, function, and quality of life. Quantifying health status requires a valid, reproducible, and sensitive disease-specific measure. The Peripheral Artery Questionnaire (PAQ) is a 20-item questionnaire developed to meet this need by quantifying patients' physical limitations, symptoms, social function, treatment satisfaction, and quality of life. Psychometric and clinical properties of the PAQ were evaluated in a prospective cohort study of 44 patients undergoing elective percutaneous peripheral revascularization. To establish reproducibility, 2 assessments were performed 2 weeks apart and before revascularization. The change in scores before and 6 weeks after revascularization was used to determine the instrument's responsiveness and was compared with the Short Form-36 and the Walking Impairment Questionnaire. A series of cross-sectional analyses were performed to establish the construct validity of the PAQ. The 7 domains of the PAQ were internally reliable, with Cronbach alpha = 0.80 to 0.94. The test-retest reliability analyses revealed insignificant mean changes of 0.6 to 2.3 points (P = not significant for all). Conversely, the change after revascularization ranged from 13.7 to 41.9 points (P ≤ .001 for all), reflecting substantial sensitivity of the PAQ to clinical improvement. The PAQ Summary Scale was the most sensitive of all scales tested. Construct validity was established by demonstrating correlations with other measures of patient health status. The PAQ is a valid, reliable, and responsive disease-specific measure for patients with peripheral arterial disease. It may prove to be a useful end point in clinical trials and a potential aid in disease management.
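
    Internal reliability of this kind is summarised by Cronbach's alpha, which is straightforward to compute; the item scores below are invented:

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_subjects, n_items) score matrix for one domain
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      # Hypothetical 5 subjects x 4 items from one questionnaire domain
      scores = [[3, 4, 3, 5], [2, 2, 3, 2], [5, 5, 4, 5], [1, 2, 1, 2], [4, 4, 5, 4]]
      print(round(cronbach_alpha(scores), 3))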

  11. Analysis of carbohydrates by anion exchange chromatography and mass spectrometry.

    PubMed

    Bruggink, Cees; Maurer, Rolf; Herrmann, Heiko; Cavalli, Silvano; Hoefler, Frank

    2005-08-26

    A versatile liquid chromatographic platform has been developed for analysing underivatized carbohydrates using high performance anion exchange chromatography (HPAEC) followed by an inert PEEK splitter that splits the effluent to the integrated pulsed amperometric detector (IPAD) and to an on-line single quadrupole mass spectrometer (MS). Common eluents for HPAEC such as sodium hydroxide and sodium acetate are beneficial for the amperometric detection but not compatible with electrospray ionisation (ESI). Therefore a membrane-desalting device was installed after the splitter and prior to the ESI interface converting sodium hydroxide into water and sodium acetate into acetic acid. To enhance the sensitivity for the MS detection, 0.5 mmol/l lithium chloride was added after the membrane desalter to form lithium adducts of the carbohydrates. To compare sensitivity of IPAD and MS detection glucose, fructose, and sucrose were used as analytes. A calibration with external standards from 2.5 to 1000 pmole was performed showing a linear range over three orders of magnitude. Minimum detection limits (MDL) with IPAD were determined at 5 pmole levels for glucose to be 0.12 pmole, fructose 0.22 pmole and sucrose 0.11 pmole. With MS detection in the selected ion mode (SIM) the lithium adducts of the carbohydrates were detected obtaining MDL's for glucose of 1.49 pmole, fructose 1.19 pmole, and sucrose 0.36 pmole showing that under these conditions IPAD is 3-10 times more sensitive for those carbohydrates. The applicability of the method was demonstrated analysing carbohydrates in real world samples such as chicory inulin where polyfructans up to a molecular mass of 7000 g/mol were detected as quadrupoly charged lithium adducts. Furthermore mono-, di-, tri-, and oligosaccharides were detected in chicory coffee, honey and beer samples.

  12. Different Imaging Strategies in Patients With Possible Basilar Artery Occlusion

    PubMed Central

    Beyer, Sebastian E.; Hunink, Myriam G.; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E.; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F.

    2015-01-01

    Background and Purpose— This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. Methods— A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80 000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. Results— In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80 000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80 000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Conclusions— Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. PMID:26022634

  13. VBM-DTI correlates of verbal intelligence: a potential link to Broca's area.

    PubMed

    Konrad, Andreas; Vucurevic, Goran; Musso, Francesco; Winterer, Georg

    2012-04-01

    Human brain lesion studies first investigated the biological roots of cognitive functions including language in the late 1800s. Neuroimaging studies have reported correlation findings with general intelligence predominantly in fronto-parietal cortical areas. However, there is still little evidence about the relationship between verbal intelligence and structural properties of the brain. We predicted that verbal performance is related to the language regions of Broca's and Wernicke's areas. Verbal intelligence quotient (vIQ) was assessed in 30 healthy young subjects. T1-weighted MRI and diffusion tensor imaging data sets were acquired. Voxel-wise regression analyses were used to correlate fractional anisotropy (FA) and mean diffusivity values with vIQ. Moreover, regression analyses of regional brain volume with vIQ were performed adopting voxel-based morphometry (VBM) and ROI methodology. Our analyses revealed a significant negative correlation between vIQ and FA and a significant positive correlation between vIQ and mean diffusivity in the left-hemispheric Broca's area. VBM regression analyses did not show significant results, whereas a subsequent ROI analysis of the Broca's area FA peak cluster demonstrated a positive correlation between gray matter volume and vIQ. These findings suggest that cortical thickness in Broca's area contributes to verbal intelligence. Diffusion parameters predicted gray matter ratio in Broca's area more sensitively than VBM methodology.

  14. A simple technique investigating baseline heterogeneity helped to eliminate potential bias in meta-analyses.

    PubMed

    Hicks, Amy; Fairhurst, Caroline; Torgerson, David J

    2018-03-01

    To perform a worked example of an approach that can be used to identify and remove potentially biased trials from meta-analyses via the analysis of baseline variables. True randomisation produces treatment groups that differ only by chance; therefore, a meta-analysis of a baseline measurement should produce no overall difference and zero heterogeneity. A meta-analysis from the British Medical Journal, known to contain significant heterogeneity and imbalance in baseline age, was chosen. Meta-analyses of baseline variables were performed and trials systematically removed, starting with those with the largest t-statistic, until the I² measure of heterogeneity became 0%; then the outcome meta-analysis was repeated with only the remaining trials as a sensitivity check. We argue that heterogeneity in a meta-analysis of baseline variables should not exist, and therefore removing trials which contribute to heterogeneity from a meta-analysis will produce a more valid result. In our example, none of the overall outcomes changed when studies contributing to heterogeneity were removed. We recommend routine use of this technique, using age and a second baseline variable predictive of outcome for the particular study chosen, to help eliminate potential bias in meta-analyses. Copyright © 2017 Elsevier Inc. All rights reserved.
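
    A small Python sketch of the proposed procedure under simplifying assumptions (fixed-effect I², trials ranked by their baseline t-statistic, invented data):

      import numpy as np

      def i_squared(effects, variances):
          # Higgins' I^2 from a fixed-effect meta-analysis of one baseline variable
          y, w = np.asarray(effects, float), 1.0 / np.asarray(variances, float)
          pooled = np.sum(w * y) / np.sum(w)
          q = np.sum(w * (y - pooled) ** 2)
          df = len(y) - 1
          return 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

      # Invented baseline mean-age differences (treatment - control) and variances
      effects = [0.1, -0.2, 2.5, 0.0, 0.3]
      variances = [0.4, 0.5, 0.3, 0.6, 0.5]
      kept = list(range(len(effects)))

      # Drop the trial with the largest baseline t-statistic until I^2 reaches 0%
      while len(kept) > 2 and i_squared([effects[i] for i in kept],
                                        [variances[i] for i in kept]) > 0:
          kept.remove(max(kept, key=lambda i: abs(effects[i]) / variances[i] ** 0.5))

      print("trials retained for the repeated outcome meta-analysis:", kept)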

  15. Critical processes and parameters in the development of accident tolerant fuels drop-in capsule irradiation tests

    DOE PAGES

    Barrett, K. E.; Ellis, K. D.; Glass, C. R.; ...

    2015-12-01

    The goal of the Accident Tolerant Fuel (ATF) program is to develop the next generation of Light Water Reactor (LWR) fuels with improved performance, reliability, and safety characteristics during normal operations and accident conditions and with reduced waste generation. An irradiation test series has been defined to assess the performance of proposed ATF concepts under normal LWR operating conditions. The Phase I ATF irradiation test series is planned to be performed as a series of drop-in capsule tests to be irradiated in the Advanced Test Reactor (ATR) operated by the Idaho National Laboratory (INL). Design, analysis, and fabrication processes for ATR drop-in capsule experiment preparation are presented in this paper to demonstrate the importance of special design considerations, parameter sensitivity analysis, and precise fabrication and inspection techniques for the innovative materials used in ATF experiment assemblies. A Taylor Series Method sensitivity analysis approach was used to identify the most critical variables in cladding and rodlet stress, temperature, and pressure calculations for design analyses. The results showed that internal rodlet pressure calculations are most sensitive to the fission gas release rate uncertainty, while temperature calculations are most sensitive to cladding I.D. and O.D. dimensional uncertainty. The analysis showed that stress calculations are most sensitive to rodlet internal pressure uncertainties; however, the results also indicated that uncertainties in the inside radius, outside radius, and internal pressure are all magnified as they propagate through the stress equation. This study demonstrates the importance for ATF concept development teams to provide the fabricators as much information as possible about the material properties and behavior observed in prototype testing, mock-up fabrication and assembly, and chemical and mechanical testing of the materials that may have been performed in the concept development phase. Special handling, machining, welding, and inspection requirements of materials, if known, should also be communicated to the experiment fabrication and inspection team.
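
    The Taylor Series Method referred to is first-order root-sum-square uncertainty propagation. A minimal example for a thin-wall hoop-stress response, with illustrative values rather than the ATF rodlet design numbers:

      import math

      # Hoop stress sigma = P * r / t (thin-wall approximation)
      P, dP = 2.0e6, 0.2e6        # internal pressure and uncertainty, Pa
      r, dr = 4.75e-3, 0.05e-3    # mean radius and uncertainty, m
      t, dt = 0.57e-3, 0.02e-3    # wall thickness and uncertainty, m

      sigma = P * r / t
      # Partial derivatives of sigma with respect to each input
      d_dP, d_dr, d_dt = r / t, P / t, -P * r / t**2
      d_sigma = math.sqrt((d_dP * dP) ** 2 + (d_dr * dr) ** 2 + (d_dt * dt) ** 2)

      print(f"sigma = {sigma/1e6:.1f} +/- {d_sigma/1e6:.1f} MPa")
      # Relative contributions show which input uncertainty dominates
      for name, term in [("P", d_dP * dP), ("r", d_dr * dr), ("t", d_dt * dt)]:
          print(name, f"{(term / d_sigma) ** 2:.1%}")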

  16. Uncertainty and Sensitivity Analyses of a Pebble Bed HTGR Loss of Cooling Event

    DOE PAGES

    Strydom, Gerhard

    2013-01-01

    The Very High Temperature Reactor Methods Development group at the Idaho National Laboratory identified the need for a defensible and systematic uncertainty and sensitivity approach in 2009. This paper summarizes the results of an uncertainty and sensitivity quantification investigation performed with the SUSA code, utilizing the International Atomic Energy Agency CRP 5 Pebble Bed Modular Reactor benchmark and the INL code suite PEBBED-THERMIX. Eight model input parameters were selected for inclusion in this study, and after the input parameter variations and probability density functions were specified, a total of 800 steady state and depressurized loss of forced cooling (DLOFC) transient PEBBED-THERMIX calculations were performed. The six data sets were statistically analyzed to determine the 5% and 95% DLOFC peak fuel temperature tolerance intervals with 95% confidence levels. It was found that the uncertainties in the decay heat and graphite thermal conductivities were the most significant contributors to the propagated DLOFC peak fuel temperature uncertainty. No significant differences were observed between the results of Simple Random Sampling (SRS) or Latin Hypercube Sampling (LHS) data sets, and use of uniform or normal input parameter distributions also did not lead to any significant differences between these data sets.
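
    A compact illustration of the sampling machinery involved: a basic Latin hypercube sampler, the order-statistics (Wilks-type) run count for a one-sided 95%/95% tolerance bound, and a toy stand-in for the peak-temperature response (the response function and inputs are invented):

      import numpy as np

      rng = np.random.default_rng(1)

      def latin_hypercube(n, d):
          # One draw per equal-probability stratum, per input dimension
          strata = np.array([rng.permutation(n) for _ in range(d)]).T
          return (strata + rng.random((n, d))) / n

      # Wilks-type one-sided 95%/95% rule: smallest n with 1 - 0.95**n >= 0.95
      n = 1
      while 1 - 0.95 ** n < 0.95:
          n += 1
      print("runs for a one-sided 95%/95% tolerance bound:", n)   # -> 59

      # Toy peak-fuel-temperature response over 8 uniform inputs, 800 runs
      x = latin_hypercube(800, 8)
      temp = 1500 + 120 * x[:, 0] - 80 * x[:, 1] + 30 * x[:, 2:].sum(axis=1)
      lo, hi = np.percentile(temp, [5, 95])
      print(f"5%/95% peak-temperature interval: {lo:.0f} to {hi:.0f} (toy units)")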

  17. Automated detection of diabetic retinopathy on digital fundus images.

    PubMed

    Sinthanayothin, C; Boyce, J F; Williamson, T H; Cook, H L; Mensah, E; Lal, S; Usher, D

    2002-02-01

    The aim was to develop an automated screening system to analyse digital colour retinal images for important features of non-proliferative diabetic retinopathy (NPDR). High performance pre-processing of the colour images was performed. Previously described automated image analysis systems were used to detect major landmarks of the retinal image (optic disc, blood vessels and fovea). Recursive region growing segmentation algorithms combined with the use of a new technique, termed a 'Moat Operator', were used to automatically detect features of NPDR. These features included haemorrhages and microaneurysms (HMA), which were treated as one group, and hard exudates as another group. Sensitivity and specificity data were calculated by comparison with an experienced fundoscopist. The algorithm for exudate recognition was applied to 30 retinal images of which 21 contained exudates and nine were without pathology. The sensitivity and specificity for exudate detection were 88.5% and 99.7%, respectively, when compared with the ophthalmologist. HMA were present in 14 retinal images. The algorithm achieved a sensitivity of 77.5% and specificity of 88.7% for detection of HMA. Fully automated computer algorithms were able to detect hard exudates and HMA. This paper presents encouraging results in automatic identification of important features of NPDR.
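
    Region growing, the core segmentation step named above, is easy to sketch; this iterative (queue-based) variant and its tolerance/test image are illustrative assumptions, not the authors' exact algorithm:

      from collections import deque
      import numpy as np

      def region_grow(img, seed, tol=10):
          # Collect connected pixels whose intensity stays within
          # `tol` of the seed value (4-connectivity, BFS traversal)
          h, w = img.shape
          seed_val = int(img[seed])
          seen = np.zeros_like(img, dtype=bool)
          seen[seed] = True
          queue, region = deque([seed]), []
          while queue:
              y, x = queue.popleft()
              region.append((y, x))
              for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  ny, nx = y + dy, x + dx
                  if 0 <= ny < h and 0 <= nx < w and not seen[ny, nx] \
                          and abs(int(img[ny, nx]) - seed_val) <= tol:
                      seen[ny, nx] = True
                      queue.append((ny, nx))
          return region

      img = np.array([[200, 198, 50], [199, 197, 52], [60, 55, 58]], dtype=np.uint8)
      print(region_grow(img, (0, 0)))  # grows over the bright exudate-like patch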

  18. Accuracy of mucocutaneous leishmaniasis diagnosis using polymerase chain reaction: systematic literature review and meta-analysis

    PubMed Central

    Gomes, Ciro Martins; Mazin, Suleimy Cristina; dos Santos, Elisa Raphael; Cesetti, Mariana Vicente; Bächtold, Guilherme Albergaria Brízida; Cordeiro, João Henrique de Freitas; Theodoro, Fabrício Claudino Estrela Terra; Damasco, Fabiana dos Santos; Carranza, Sebastián Andrés Vernal; Santos, Adriana de Oliveira; Roselino, Ana Maria; Sampaio, Raimunda Nonata Ribeiro

    2015-01-01

    The diagnosis of mucocutaneous leishmaniasis (MCL) is hampered by the absence of a gold standard. An accurate diagnosis is essential because of the high toxicity of the medications for the disease. This study aimed to assess the ability of polymerase chain reaction (PCR) to identify MCL and to compare these results with clinical research recently published by the authors. A systematic literature review based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA Statement was performed using comprehensive search criteria and communication with the authors. A meta-analysis considering the estimates of the univariate and bivariate models was performed. Specificity near 100% was common among the papers. The primary reason for accuracy differences was sensitivity. The meta-analysis, which was only possible for PCR samples of lesion fragments, revealed a sensitivity of 71% [95% confidence interval (CI) = 0.59; 0.81] and a specificity of 93% (95% CI = 0.83; 0.98) in the bivariate model. The search for measures that could increase the sensitivity of PCR should be encouraged. The quality of the collected material and the optimisation of the amplification of genetic material should be prioritised. PMID:25946238

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruan, D; Shao, W; Low, D

    Purpose: To evaluate and test the hypothesis that plan quality may be systematically affected by treatment delivery techniques and target-to-critical-structure geometric relationship in radiotherapy for brain tumor. Methods: Thirty-four consecutive brain tumor patients treated between 2011–2014 were analyzed. Among this cohort, 10 were planned with 3DCRT, 11 with RapidArc, and 13 with helical IMRT on TomoTherapy. The selected dosimetric endpoints (i.e., PTV V100, maximum brainstem/chiasm/optic nerve doses) were considered as a vector in a high-dimensional space. A Pareto analysis was performed to identify the subset of Pareto-efficient plans. The geometric relationships, specifically the overlapping volume and centroid-of-mass distance between each critical structure and the PTV, were extracted as potential geometric features. The classification-tree analyses were repeated using these geometric features with and without the treatment modality as an additional categorical predictor. In both scenarios, the dominant features to prognosticate the Pareto membership were identified and the tree structures to provide optimal inference were recorded. The classification performance was further analyzed to determine the role of treatment modality in affecting plan quality. Results: Seven Pareto-efficient plans were identified based on dosimetric endpoints (3 from 3DCRT, 3 from RapidArc, 1 from TomoTherapy), which implies that the evaluated treatment modality may have a minor influence on plan quality. Classification trees with/without the treatment modality as a predictor both achieved accuracy of 88.2%: with 100% sensitivity and 87.1% specificity for the former, and 66.7% sensitivity and 96.0% specificity for the latter. The coincidence of accuracy from both analyses further indicates no-to-weak dependence of plan quality on treatment modality. Both analyses identified the brainstem-to-PTV distance as the primary predictive feature for Pareto-efficiency. Conclusion: Pareto evaluation and classification-tree analyses indicated that plan quality depends strongly on geometry for brain tumor, specifically the PTV-to-brainstem distance, but minimally on treatment modality.
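
    Identifying the Pareto-efficient subset reduces to a dominance check over the endpoint vectors; the plans and dosimetric values below are invented, with PTV coverage negated so that lower is better on every axis:

      def dominates(a, b):
          # a dominates b if a is no worse on every endpoint and better on one
          return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

      # Hypothetical plans: (-PTV V100, brainstem Dmax, chiasm Dmax) in Gy
      plans = {
          "3DCRT-1": (-0.95, 52.0, 48.0),
          "RapidArc-1": (-0.97, 49.5, 50.0),
          "Tomo-1": (-0.96, 50.5, 47.0),
          "3DCRT-2": (-0.93, 54.0, 51.0),
      }
      pareto = [p for p in plans
                if not any(dominates(plans[q], plans[p]) for q in plans if q != p)]
      print("Pareto-efficient plans:", pareto)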

  20. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for detecting abrupt changes (such as failures) in stochastic dynamical systems are surveyed. The survey concentrates on linear systems, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.
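
    One of the surveyed ideas, statistical testing of filter innovations, can be sketched compactly: under nominal conditions the normalized innovations squared are chi-square distributed, so a windowed sum exceeding a critical value flags an abrupt change. The window length, threshold, and simulated failure below are assumptions for illustration:

      import numpy as np

      def innovation_alarm(innovations, S, window=10, threshold=45.3):
          # Normalized innovations squared: each term is chi-square with
          # dim dof under no-failure, so a 10-step window of 2-D innovations
          # is chi-square with 20 dof (45.3 ~ its 99.9% critical value)
          nis = np.array([v @ np.linalg.solve(S, v) for v in innovations])
          sums = np.convolve(nis, np.ones(window), mode="valid")
          return np.flatnonzero(sums > threshold)

      rng = np.random.default_rng(2)
      S = np.eye(2)                     # innovation covariance from the filter
      nu = rng.normal(size=(100, 2))    # whitened innovations, nominal behaviour
      nu[60:] += 1.5                    # abrupt change (failure) biases them
      alarms = innovation_alarm(nu, S)
      print("first alarmed window starts at step:", alarms[0] if alarms.size else None)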

  1. Analyzing Feedback Control Systems

    NASA Technical Reports Server (NTRS)

    Bauer, Frank H.; Downing, John P.

    1987-01-01

    Interactive controls analysis (INCA) program developed to provide user-friendly environment for design and analysis of linear control systems, primarily feedback control. Designed for use with both small- and large-order systems. Using interactive-graphics capability, INCA user quickly plots root locus, frequency response, or time response of either continuous-time system or sampled-data system. Configuration and parameters easily changed, allowing user to design compensation networks and perform sensitivity analyses in very convenient manner. Written in Pascal and FORTRAN.
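
    A few lines of Python reproduce the flavour of such a frequency-response calculation for an example loop transfer function G(s) = K/(s(s+2)); the gain and frequency grid are arbitrary choices, not INCA's internals:

      import numpy as np

      K = 4.0                                  # example loop gain
      w = np.logspace(-1, 2, 400)              # frequency grid, rad/s
      G = K / (1j * w * (1j * w + 2.0))        # G(jw) evaluated on the grid
      mag_db = 20 * np.log10(np.abs(G))
      phase_deg = np.degrees(np.angle(G))

      i = np.argmin(np.abs(mag_db))            # gain crossover (|G| = 0 dB)
      print(f"crossover ~ {w[i]:.2f} rad/s, "
            f"phase margin ~ {180 + phase_deg[i]:.1f} deg")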

  2. Solid oxide fuel cell short stack performance testing - Part A: Experimental analysis and μ-combined heat and power unit comparison

    NASA Astrophysics Data System (ADS)

    Mastropasqua, L.; Campanari, S.; Brouwer, J.

    2017-12-01

    The need to experimentally understand the detailed performance of SOFC stacks under operating conditions typical of commercial SOFC systems has prompted this two-part study. The steady state performance of a 6-cell short stack of yttria (Y2O3) stabilised zirconia (YSZ) with Ni/YSZ anodes and composite Sr-doped lanthanum manganite (LaMnO3, LSM)/YSZ cathodes is experimentally evaluated. In Part A, the stack characterisation is carried out by means of sensitivity analyses on the fuel utilisation factor and the steam-to-carbon ratio. Electrical and environmental performances are assessed and the results are compared with a commercial full-scale micro-CHP system, which comprises the same cells. The results show that the measured temperature dynamics of the short stack in a test stand environment are on the order of many minutes; therefore, one cannot neglect temperature dynamics for a precise measurement of the steady state polarisation behaviour. The overall polarisation performance is comparable to that of the full stack employed in the micro-CHP system, confirming the good representation that short-stack analyses can give of the entire SOFC module. The environmental performance is measured verifying the negligible values of NO emissions (<10 ppb) across the whole polarisation curve.

  3. Remote sensing requirements as suggested by watershed model sensitivity analyses

    NASA Technical Reports Server (NTRS)

    Salomonson, V. V.; Rango, A.; Ormsby, J. P.; Ambaruch, R.

    1975-01-01

    A continuous simulation watershed model has been used to perform sensitivity analyses that provide guidance in defining remote sensing requirements for the monitoring of watershed features and processes. The results show that out of 26 input parameters having meaningful effects on simulated runoff, six appear to be obtainable with existing remote sensing techniques. Of these six parameters, three require the measurement of the areal extent of surface features (impervious areas, water bodies, and the extent of forested area), two require the discrimination of land use that can be related to overland flow roughness coefficient or the density of vegetation so as to estimate the magnitude of precipitation interception, and one requires the measurement of distance to get the length over which overland flow typically occurs. Observational goals are also suggested for monitoring such fundamental watershed processes as precipitation, soil moisture, and evapotranspiration. A case study on the Patuxent River in Maryland shows that runoff simulation is improved if recent satellite land use observations are used as model inputs as opposed to less timely topographic map information.

  4. Development of an estimation model for the evaluation of the energy requirement of dilute acid pretreatments of biomass.

    PubMed

    Mafe, Oluwakemi A T; Davies, Scott M; Hancock, John; Du, Chenyu

    2015-01-01

    This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second generation ethanol. A dilute acid pretreatment process reported by National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change of internal energy of the substances, the reaction energy, the heat lost and the work done to/by the system based on a number of simplifying assumptions. Sensitivity analyses were performed on the solid loading rate, temperature, acid concentration and water evaporation rate. The results from the sensitivity analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Application of this model on other dilute acid pretreatment processes reported in the literature illustrated that although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly.
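
    A back-of-envelope version of the dominant terms in such a model, sensible heat plus evaporation, showing why solids loading drives the result (water heat capacity and latent heat are standard values; the solids heat capacity and process numbers are assumptions):

      CP_WATER, CP_SOLIDS = 4.18, 1.5      # kJ/(kg K); solids cp is an assumption
      LATENT_HEAT = 2260.0                 # kJ/kg water evaporated

      def pretreatment_energy(solids_kg, water_kg, t_in, t_rxn, evap_frac):
          sensible = (solids_kg * CP_SOLIDS + water_kg * CP_WATER) * (t_rxn - t_in)
          latent = evap_frac * water_kg * LATENT_HEAT
          return sensible + latent         # kJ per batch

      # Higher solids loading means less water heated per kg of biomass
      for loading in (0.10, 0.20, 0.30):
          solids, water = 100 * loading, 100 * (1 - loading)
          e = pretreatment_energy(solids, water, t_in=25, t_rxn=160, evap_frac=0.05)
          print(f"{loading:.0%} solids: {e / solids:,.0f} kJ per kg solids")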

  5. A Systematic Review of the Cost-Effectiveness of Biologics for Ulcerative Colitis.

    PubMed

    Stawowczyk, Ewa; Kawalec, Paweł

    2018-04-01

    Ulcerative colitis (UC) is a chronic autoimmune inflammation of the colon. The condition significantly decreases quality of life and generates a substantial economic burden for healthcare payers, patients and the society in which they live. Some patients require chronic pharmacotherapy, and access to novel biologic drugs might be crucial for long-term remission. Analyses of cost-effectiveness for biologic drugs are necessary to assess their efficiency and provide the best available drugs to patients. Our aim was to collect and assess the quality of economic analyses carried out for biologic agents used in the treatment of UC, as well as to summarize evidence on the drivers of cost-effectiveness and evaluate the transferability and generalizability of conclusions. A systematic database review was conducted using MEDLINE (via PubMed), EMBASE, the Cost-Effectiveness Analysis Registry and the Centre for Reviews and Dissemination (CRD) database. Both authors independently reviewed the identified articles to determine their eligibility for final review. Hand searching of references in collected papers was also performed to find any relevant articles. The reporting quality of the economic analyses included was evaluated by two reviewers using the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement checklist. We reviewed the sensitivity analyses in cost-effectiveness analyses to identify the variables that may have changed the conclusions of the study. Key drivers of cost-effectiveness were selected by identifying uncertain parameters that caused the largest change in the results of the analyses compared with base-case results. Of the 576 identified records, 87 were excluded as duplicates and 16 studies were included in the final review; evaluations were most often performed for Canada, the UK and Poland. The majority of the evaluations were performed for infliximab (approximately 75% of the total volume); however, some assessments were also performed for adalimumab (50%) and golimumab (31%). Only three analyses were conducted for vedolizumab, whereas no relevant studies were found for etrolizumab and tofacitinib. The reporting quality of the included economic analyses was assessed as high, with an average score of 21 of a maximum possible 24 points (range 14-23) on the ISPOR CHEERS statement checklist. In most analyses, quality-adjusted life-years were used as the clinical outcome, and endpoints such as remission, response and mucosal healing were less common. The included analyses showed higher clinical effectiveness (based on response rates) of biological treatment over non-biological treatments. The incremental cost-utility ratios for biologics, compared with standard care, varied significantly between the studies and ranged from US$36,309 to US$456,979. The lowest value was obtained for infliximab and the highest for the treatment scheme including infliximab 5 mg/kg and infliximab 10 mg/kg + adalimumab. Changes in utility weights and clinical parameters had the most significant influence on the results of the analyses; the variable related to surgery was the least sensitive. Limited data on the cost-effectiveness of UC therapy were identified. In the majority of studies, biologics were found not to be cost-effective, which was associated with their high costs. Clinical outcomes are transferable to other countries and could be generalized; however, cost inputs are country-specific and therefore limit the transferability and generalizability of conclusions. The key drivers and variables that showed the greatest effect on the analysis results were utility weights and clinical parameters.

  6. Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations

    PubMed Central

    Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán

    2016-01-01

    Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal-signal-to-noise ratio weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
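
    The echo-combination strategies compared are all weighted averages differing only in the weights; a schematic Python version, with generic weight formulas and synthetic data (published pipelines differ in per-voxel detail):

      import numpy as np

      def combine_echoes(e1, e2, te1=0.012, te2=0.035, method="average"):
          # e1, e2: (time, voxel) arrays for the two echoes
          if method == "average":            # simple arithmetic mean
              w1 = w2 = 1.0
          elif method == "tsnr":             # temporal-SNR weighting
              w1, w2 = e1.mean(0) / e1.std(0), e2.mean(0) / e2.std(0)
          elif method == "bold":             # BOLD-sensitivity-style TE weighting
              w1, w2 = te1 * e1.mean(0), te2 * e2.mean(0)
          else:
              raise ValueError(method)
          return (w1 * e1 + w2 * e2) / (w1 + w2)

      rng = np.random.default_rng(3)
      e1 = 1000 + rng.normal(0, 8, (200, 5))   # synthetic early-echo time series
      e2 = 600 + rng.normal(0, 12, (200, 5))   # synthetic late-echo time series
      for m in ("average", "tsnr", "bold"):
          print(m, combine_echoes(e1, e2, method=m).shape)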

  7. Possible roles of vacuolar H+-ATPase and mitochondrial function in tolerance to air-drying stress revealed by genome-wide screening of Saccharomyces cerevisiae deletion strains.

    PubMed

    Shima, Jun; Ando, Akira; Takagi, Hiroshi

    2008-03-01

    Yeasts used in bread making are exposed to air-drying stress during dried yeast production processes. To clarify the genes required for air-drying tolerance, we performed genome-wide screening using the complete deletion strain collection of diploid Saccharomyces cerevisiae. The screening identified 278 gene deletions responsible for air-drying sensitivity. These genes were classified based on their cellular function and on the localization of their gene products. The results showed that the genes required for air-drying tolerance were frequently involved in mitochondrial functions and in connection with vacuolar H(+)-ATPase, which plays a role in vacuolar acidification. To determine the role of vacuolar acidification in air-drying stress tolerance, we monitored intracellular pH. The results showed that intracellular acidification was induced during air-drying and that this acidification was amplified in a deletion mutant of the VMA2 gene encoding a component of vacuolar H(+)-ATPase, suggesting that vacuolar H(+)-ATPase helps maintain intracellular pH homeostasis, which is affected by air-drying stress. To determine the effects of air-drying stress on mitochondria, we analysed the mitochondrial membrane potential under air-drying stress conditions using MitoTracker. The results showed that mitochondria were extremely sensitive to air-drying stress, suggesting that a mitochondrial function is required for tolerance to air-drying stress. We also analysed the correlation between oxidative-stress sensitivity and air-drying-stress sensitivity. The results suggested that oxidative stress is a critical determinant of sensitivity to air-drying stress, although ROS-scavenging systems are not necessary for air-drying stress tolerance. (c) 2008 John Wiley & Sons, Ltd.

  8. Streptococcus A in paediatric accident and emergency: are rapid streptococcal tests and clinical examination of any help?

    PubMed

    Van Limbergen, J; Kalima, P; Taheri, S; Beattie, T F

    2006-01-01

    Rapid streptococcal tests (RSTs) for streptococcal pharyngitis have made diagnosis at once simpler and more complicated. The American Academy of Pediatrics recommends that all RSTs be confirmed by a follow-up throat culture unless local validation has proved the RST to be equally sensitive. To evaluate (a) RST as a single diagnostic tool, compared with RST with or without throat culture; (b) clinical diagnosis and the relative contribution of different symptoms. The study included 213 patients with clinical signs of pharyngitis. Throat swabs were analysed using Quickvue+ Strep A Test; negative RSTs were backed up by throat culture. Thirteen clinical features commonly associated with strep throat were analysed using backward stepwise logistic regression. Positive results (RST or throat culture) were obtained in 33 patients; RST correctly identified 21. Eleven samples were false negative on RST. At a strep throat prevalence of 15.9%, sensitivity of RST was 65.6% (95% CI 46.8% to 81.4%) and specificity 99.4% (96.7% to 99.9%). Sensitivity of clinical diagnosis alone was 57% (34% to 78%) and specificity 71% (61% to 80%). Clinically, only history of sore throat, rash, and pyrexia contributed to the diagnosis of strep throat (p<0.05). The high specificity of RST facilitates early diagnosis of strep throat. However, the low sensitivity of RST does not support its use as a single diagnostic tool. The sensitivity in the present study is markedly different from that reported by the manufacturer. Clinical examination is of limited value in the diagnosis of strep throat. It is important to audit the performance of new diagnostic tests, previously validated in different settings.
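
    The sensitivity arithmetic is simple to reproduce; here with the Wilson score interval (the paper's exact CI method is not stated, so the bounds differ slightly from those quoted):

      import math

      def wilson_ci(k, n, z=1.96):
          # 95% Wilson score interval for a binomial proportion k/n
          p = k / n
          centre = (p + z * z / (2 * n)) / (1 + z * z / n)
          half = (z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
                  / (1 + z * z / n))
          return centre - half, centre + half

      tp, fn = 21, 11                      # RST vs culture-confirmed strep throat
      sens = tp / (tp + fn)
      lo, hi = wilson_ci(tp, tp + fn)
      print(f"sensitivity {sens:.1%} (95% CI {lo:.1%} to {hi:.1%})")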

  9. Streptococcus A in paediatric accident and emergency: are rapid streptococcal tests and clinical examination of any help?

    PubMed Central

    Van Limbergen, J; Kalima, P; Taheri, S; Beattie, T F

    2006-01-01

    Background Rapid streptococcal tests (RSTs) for streptococcal pharyngitis have made diagnosis at once simpler and more complicated. The American Academy of Pediatrics recommends that all RSTs be confirmed by a follow up throat culture unless local validation has proved the RST to be equally sensitive. Aims To evaluate (a) RST as a single diagnostic tool, compared with RST with or without throat culture; (b) clinical diagnosis and the relative contribution of different symptoms. Methods The study included 213 patients with clinical signs of pharyngitis. Throat swabs were analysed using Quickvue+ Strep A Test; negative RSTs were backed up by throat culture. Thirteen clinical features commonly associated with strep throat were analysed using backward stepwise logistic regression. Results Positive results (RST or throat culture) were obtained in 33 patients; RST correctly identified 21. Eleven samples were false negative on RST. At a strep throat prevalence of 15.9%, sensitivity of RST was 65.6% (95% CI 46.8% to 81.4%) and specificity 99.4% (96.7% to 99.9%). Sensitivity of clinical diagnosis alone was 57% (34% to 78%) and specificity 71% (61% to 80%). Clinically, only history of sore throat, rash, and pyrexia contributed to the diagnosis of strep throat (p<0.05). Conclusion The high specificity of RST facilitates early diagnosis of strep throat. However, the low sensitivity of RST does not support its use as a single diagnostic tool. The sensitivity in the present study is markedly different from that reported by the manufacturer. Clinical examination is of limited value in the diagnosis of strep throat. It is important to audit the performance of new diagnostic tests, previously validated in different settings. PMID:16373800

  10. Advanced Echocardiography in Adult Zebrafish Reveals Delayed Recovery of Heart Function after Myocardial Cryoinjury

    PubMed Central

    Kossack, Mandy; Juergensen, Lonny; Fuchs, Dieter; Katus, Hugo A.; Hassel, David

    2015-01-01

    Translucent zebrafish larvae represent an established model to analyze genetics of cardiac development and human cardiac disease. More recently adult zebrafish are utilized to evaluate mechanisms of cardiac regeneration and by benefiting from recent genome editing technologies, including TALEN and CRISPR, adult zebrafish are emerging as a valuable in vivo model to evaluate novel disease genes and specifically validate disease causing mutations and their underlying pathomechanisms. However, methods to sensitively and non-invasively assess cardiac morphology and performance in adult zebrafish are still limited. We here present a standardized examination protocol to broadly assess cardiac performance in adult zebrafish by advancing conventional echocardiography with modern speckle-tracking analyses. This allows accurate detection of changes in cardiac performance and further enables highly sensitive assessment of regional myocardial motion and deformation in high spatio-temporal resolution. Combining conventional echocardiography measurements with radial and longitudinal velocity, displacement, strain, strain rate and myocardial wall delay rates after myocardial cryoinjury permitted to non-invasively determine injury dimensions and to longitudinally follow functional recovery during cardiac regeneration. We show that functional recovery of cryoinjured hearts occurs in three distinct phases. Importantly, the regeneration process after cryoinjury extends far beyond the proposed 45 days described for ventricular resection with reconstitution of myocardial performance up to 180 days post-injury (dpi). The imaging modalities evaluated here allow sensitive cardiac phenotyping and contribute to further establish adult zebrafish as valuable cardiac disease model beyond the larval developmental stage. PMID:25853735

  11. GetReal in mathematical modelling: a review of studies predicting drug effectiveness in the real world.

    PubMed

    Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi

    2016-09-01

    The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE, EMBASE from inception to March 2014, the Cochrane Methodology Register, and websites of key journals and organisations and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.

  12. Sensitivity and Uncertainty Analysis of Plutonium and Cesium Isotopes in Modeling of BR3 Reactor Spent Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Conant, Andrew; Erickson, Anna; Robel, Martin

    Nuclear forensics has a broad task to characterize recovered nuclear or radiological material and interpret the results of investigation. One approach to isotopic characterization of nuclear material obtained from a reactor is to chemically separate and perform isotopic measurements on the sample and verify the results with modeling of the sample history, for example, operation of a nuclear reactor. The major actinide plutonium and fission product cesium are commonly measured signatures of the fuel history in a reactor core. This study investigates the uncertainty of the plutonium and cesium isotope ratios of a fuel rod discharged from a research pressurized water reactor when the location of the sample is not known a priori. A sensitivity analysis showed overpredicted values for the 240Pu/239Pu ratio toward the axial center of the rod and revealed a lower probability of the rod of interest (ROI) being on the periphery of the assembly. The uncertainty analysis found the relative errors due to only the rod position and boron concentration to be 17% to 36% and 7% to 15% for the 240Pu/239Pu and 137Cs/135Cs ratios, respectively. Lastly, this study provides a method for uncertainty quantification of isotope concentrations due to the location of the ROI. Similar analyses can be performed to verify future chemical and isotopic analyses.

  13. Long-Term Time Variability in the X-Ray Pulse Shape of the Crab Nebula Pulsar

    NASA Astrophysics Data System (ADS)

    Fazio, Giovanni G.

    2000-01-01

    This is the final performance report for our grant 'Long-Term Time Variability in the X-Ray Pulse Shape of the Crab Nebula Pulsar.' In the first year of this grant, we received the 50,000-second ROSAT (German acronym for X-ray satellite) High Resolution Imager (HRI) observation of the Crab Nebula pulsar. We used the data to create a 65-ms-resolution pulse profile and compared it to a similar pulse profile obtained in 1991. No statistically significant differences were found. These results were presented at the January 1998 meeting of the American Astronomical Society. Since then, we have performed more sensitive analyses to search for potential changes in the pulse profile shape between the two data sets. Again, no significant variability was found. In order to augment this long (six-year) baseline data set, we have analyzed archival observations of the Crab Nebula pulsar with the Rossi X-Ray Timing Explorer (RXTE). While these observations have shorter time baselines than the ROSAT data set, their higher signal-to-noise offers similar sensitivity to long-term variability. Again, no significant variations have been found, confirming our ROSAT results. This work was done in collaboration with Prof. Stephen Eikenberry, Cornell University. These analyses will be included in Cornell University graduate student Dae-Sik Moon's doctoral thesis.

  14. Not rare, but endangered: Elemental profiles of three corticolous lichen species on red spruce in Maine. [Usnea subfloridana; Platismatia glauca; Hypogymnia physodes]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stubbs, C.S.; Homola, R.H.

    1990-01-01

    Usnea subfloridana Stirton, Platismatia glauca (L.) Club. and Club., and Hypogymnia physodes (L.) Nyl. are lichen species moderately to highly sensitive to air pollutants, including acid deposition and ozone. Some researchers have attributed depauperate populations and local extinctions of these species to poor air quality. Since 1985, areas of Maine annually experienced mean summer rain and fog events of pH 4.5 or lower and ozone levels above national standards. Given this possible threat to these and other pollution sensitive species, baseline elemental analyses for Ca, K, P, Mg, Al, B, Fe, Cu, Mn, Zn, N, S, Na, and Pb were performed in 1986 on coastal and inland populations on Picea rubens L. Elemental analyses were again performed on nontransplanted and transplanted lichens from the same populations in 1988. There were statistically significant differences in elemental profiles between nontransplanted 1986 and 1988 samples for all three species, such as significant decreases in Ca and Mg concentrations, and increases in Al, Cu, Fe, and Zn for U. subfloridana. Elemental concentrations between nontransplanted and transplanted material differed significantly, but no consistent pattern emerged. These results, coupled with other evidence (such as luxuriance and density ratings), suggest that both inland and coastal populations of U. subfloridana on red spruce are experiencing ecophysiological stress.

  15. Sensitivity and Uncertainty Analysis of Plutonium and Cesium Isotopes in Modeling of BR3 Reactor Spent Fuel

    DOE PAGES

    Conant, Andrew; Erickson, Anna; Robel, Martin; ...

    2017-02-03

    Nuclear forensics has a broad task to characterize recovered nuclear or radiological material and interpret the results of investigation. One approach to isotopic characterization of nuclear material obtained from a reactor is to chemically separate and perform isotopic measurements on the sample and verify the results with modeling of the sample history, for example, operation of a nuclear reactor. The major actinide plutonium and fission product cesium are commonly measured signatures of the fuel history in a reactor core. This study investigates the uncertainty of the plutonium and cesium isotope ratios of a fuel rod discharged from a research pressurized water reactor when the location of the sample is not known a priori. A sensitivity analysis showed overpredicted values for the 240Pu/239Pu ratio toward the axial center of the rod and revealed a lower probability of the rod of interest (ROI) being on the periphery of the assembly. The uncertainty analysis found the relative errors due to only the rod position and boron concentration to be 17% to 36% and 7% to 15% for the 240Pu/239Pu and 137Cs/135Cs ratios, respectively. Lastly, this study provides a method for uncertainty quantification of isotope concentrations due to the location of the ROI. Similar analyses can be performed to verify future chemical and isotopic analyses.

  16. Using archived ITS data for sensitivity analyses in the estimation of mobile source emissions

    DOT National Transportation Integrated Search

    2000-12-01

    The study described in this paper demonstrates the use of archived ITS data from San Antonio's TransGuide traffic management center (TMC) for sensitivity analyses in the estimation of on-road mobile source emissions. Because of the stark comparison b...

  17. Sensitivity analysis in economic evaluation: an audit of NICE current practice and a review of its use and value in decision-making.

    PubMed

    Andronis, L; Barton, P; Bryan, S

    2009-06-01

    To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
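
    The mechanics of a probabilistic sensitivity analysis of the kind audited above can be illustrated with a short Monte Carlo sketch. Nothing below is drawn from a NICE appraisal: the distributions, costs, utilities and willingness-to-pay threshold are hypothetical placeholders, chosen only to show how sampled parameter uncertainty propagates to the incremental cost-effectiveness ratio (ICER) and to the probability of cost-effectiveness.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 10_000  # Monte Carlo draws

      # Hypothetical parameter distributions for a two-strategy comparison:
      # costs ~ gamma (non-negative, right-skewed), utilities ~ beta (bounded 0-1).
      cost_new = rng.gamma(shape=100, scale=120, size=n)   # mean ~ 12,000
      cost_old = rng.gamma(shape=100, scale=80, size=n)    # mean ~  8,000
      qaly_new = rng.beta(a=80, b=20, size=n) * 10         # mean ~ 8 QALYs
      qaly_old = rng.beta(a=70, b=30, size=n) * 10         # mean ~ 7 QALYs

      delta_c = cost_new - cost_old
      delta_e = qaly_new - qaly_old

      icer = delta_c.mean() / delta_e.mean()  # ratio of means, not mean of ratios

      # Probability the new strategy is cost-effective at a given threshold.
      threshold = 20_000  # willingness to pay per QALY
      p_ce = np.mean(delta_c < threshold * delta_e)
      print(f"ICER ~ {icer:,.0f} per QALY; P(cost-effective) = {p_ce:.2f}")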

  18. Skin sensitizer identification by IL-8 secretion and CD86 expression on THP-1 cells.

    PubMed

    Parise, Carolina Bellini; Sá-Rocha, Vanessa Moura; Moraes, Jane Zveiter

    2015-12-25

    Substantial progress has been made in the development of alternative methods for skin sensitization in the last decade in several countries around the world. Brazil is experiencing increasing concern about using animals for product development, since the publication of Law 9605/1998, which prohibits the use of animals when an alternative method is available. In this way, an in vitro test to evaluate allergenic potential is a pressing need. This preliminary study began by establishing the use of the myelomonocytic THP-1 cell line, according to the human cell line activation test (h-CLAT), already under validation. We found that 48-h chemical exposure was necessary to identify 22 out of 23 sensitizers by the analysis of CD86 expression. In addition, the CD54 expression analyses were poorly efficient at discriminating sensitizers from non-sensitizers under our conditions. In view of these results, we looked for changes in the pro-inflammatory interleukin profile. The IL-8 secretion analysis after 24-h chemical incubation seemed to be an alternative to CD54 expression assessment. Altogether, our findings showed that the combination of CD86 expression and IL-8 secretion analyses allowed the prediction of allergenicity.

  19. Flat tensile specimen design for advanced composites

    NASA Technical Reports Server (NTRS)

    Worthem, Dennis W.

    1990-01-01

    Finite element analyses of flat, reduced gage section tensile specimens with various transition region contours were performed. Within dimensional constraints, such as maximum length, tab region width, gage width, gage length, and minimum tab length, a transition contour radius of 41.9 cm produced the lowest stress values in the specimen transition region. The stresses in the transition region were not sensitive to specimen material properties. The stresses in the tab region were sensitive to specimen composite and/or tab material properties. An evaluation of stresses with different specimen composite and tab material combinations must account for material nonlinearity of both the tab and the specimen composite. Material nonlinearity can either relieve stresses in the composite under the tab or elevate them to cause failure under the tab.

  20. Rapid solution of large-scale systems of equations

    NASA Technical Reports Server (NTRS)

    Storaasli, Olaf O.

    1994-01-01

    The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration and flutter modes, structural optimization and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
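
    The FORTRAN algorithms described above are not reproduced here, but the flavour of the iterative kernels that parallelise well on such machines can be suggested with a serial sketch of the conjugate gradient method for symmetric positive-definite systems (a standard textbook formulation, not the presentation's code):

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
          # Plain conjugate gradients for SPD systems: the dominant costs are
          # matrix-vector products and dot products, both easy to parallelise.
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          rs = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs) * p
              rs = rs_new
          return x

      # Small SPD test system; solution is [1/11, 7/11].
      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(conjugate_gradient(A, b))   # ~ [0.0909, 0.6364]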

  1. [Diagnosis of septic loosening of hip prosthesis with LeukoScan. SPECT scan with 99mTc-labeled monoclonal antibodies].

    PubMed

    Kaisidis, A; Megas, P; Apostolopoulos, D; Spiridonidis, T; Koumoundourou, D; Zouboulis, P; Lambiris, E; Vassilakos, P

    2005-05-01

    Diagnosis of septic loosening of hip endoprostheses with antigranulocyte scintigraphy (AGS) was analysed. Twenty-one hip prostheses were studied using laboratory tests and, in cases of elevated values, three-phase bone scan (BS) and AGS. Elective SPECT/CT scans were performed. Histologic and microbiologic exams verified the diagnosis. The AGS analysis revealed sensitivity, specificity and accuracy values of 1, while positive and negative predictive values were also 1. BS showed a sensitivity of 1 and a specificity of 0.33. In three cases, SPECT/CT scans corroborated the AGS interpretation. This diagnostic algorithm proved effective in the detection of septic loosening of hip prostheses. AGS can be avoided without risk of infection being overlooked.

  2. Performance evaluation of a novel high performance pinhole array detector module using NEMA NU-4 image quality phantom for four head SPECT Imaging

    NASA Astrophysics Data System (ADS)

    Rahman, Tasneem; Tahtali, Murat; Pickering, Mark R.

    2015-03-01

    Radiolabeled tracer distribution imaging of gamma rays using pinhole collimation is considered promising for small animal imaging. The recent availability of various radiolabeled tracers has enhanced the field of diagnostic study and is simultaneously creating demand for high-resolution imaging devices. This paper presents analyses of the optimized parameters of a high performance pinhole array detector module using two phantoms with different characteristics. Monte Carlo simulations using the Geant4 application for tomographic emission (GATE) were executed to assess the performance of a four-head SPECT system incorporating pinhole array collimators. The system is based on a pixelated array of NaI(Tl) crystals coupled to an array of position sensitive photomultiplier tubes (PSPMTs). The detector module was simulated with a 48 mm by 48 mm active area and different pinhole apertures on a tungsten plate. The performance of this system was evaluated using a uniform cylindrical water phantom along with the NEMA NU-4 image quality (IQ) phantom, both filled with 99mTc-labeled radiotracers. SPECT images were reconstructed in which the activity distribution is well visualized. This system offers a combination of excellent intrinsic spatial resolution, good sensitivity and signal-to-noise ratio, and high detection efficiency over an energy range of 20-160 keV. Increasing the number of heads in a stationary system configuration offers increased sensitivity at a spatial resolution similar to that obtained with the current four-head SPECT system design.

  3. Comparing surgical trays with redundant instruments with trays with reduced instruments: a cost analysis

    PubMed Central

    John-Baptiste, A.; Sowerby, L.J.; Chin, C.J.; Martin, J.; Rotenberg, B.W.

    2016-01-01

    Background: When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments with surgical trays with reduced instruments ("reduced trays"). Methods: We performed a cost-analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. Results: The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume to a maximum of $52 012 for high-volume institutions. Interpretation: Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems. PMID:27975045

  4. Comparing surgical trays with redundant instruments with trays with reduced instruments: a cost analysis.

    PubMed

    John-Baptiste, A; Sowerby, L J; Chin, C J; Martin, J; Rotenberg, B W

    2016-01-01

    When prearranged standard surgical trays contain instruments that are repeatedly unused, the redundancy can result in unnecessary health care costs. Our objective was to estimate potential savings by performing an economic evaluation comparing the cost of surgical trays with redundant instruments with surgical trays with reduced instruments ("reduced trays"). We performed a cost-analysis from the hospital perspective over a 1-year period. Using a mathematical model, we compared the direct costs of trays containing redundant instruments to reduced trays for 5 otolaryngology procedures. We incorporated data from several sources including local hospital data on surgical volume, the number of instruments on redundant and reduced trays, wages of personnel and time required to pack instruments. From the literature, we incorporated instrument depreciation costs and the time required to decontaminate an instrument. We performed 1-way sensitivity analyses on all variables, including surgical volume. Costs were estimated in 2013 Canadian dollars. The cost of redundant trays was $21 806 and the cost of reduced trays was $8803, for a 1-year cost saving of $13 003. In sensitivity analyses, cost savings ranged from $3262 to $21 395, based on the surgical volume at the institution. Variation in surgical volume resulted in a wider range of estimates, with a minimum of $3253 for low-volume to a maximum of $52 012 for high-volume institutions. Our study suggests moderate savings may be achieved by reducing surgical tray redundancy and, if applied to other surgical specialties, may result in savings to Canadian health care systems.
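
    A one-way sensitivity analysis of the kind reported can be sketched as follows. The cost model and every number in it are invented for illustration; the study's actual model drew volumes, wages, packing times, decontamination times and depreciation from hospital data and the literature.

      # Illustrative one-way sensitivity analysis on an annual tray-cost model.
      # All numbers are hypothetical placeholders, not the study's data.

      def annual_tray_cost(volume, instruments, wage_per_hr,
                           min_per_instrument, depreciation_per_use):
          # Direct annual cost of processing one tray type.
          labour = volume * instruments * (min_per_instrument / 60) * wage_per_hr
          depreciation = volume * instruments * depreciation_per_use
          return labour + depreciation

      base = dict(volume=500, instruments=40, wage_per_hr=25.0,
                  min_per_instrument=0.5, depreciation_per_use=0.10)

      # Vary one variable at a time over a plausible range, holding the rest
      # at base-case values, and record the resulting cost swing.
      for var, (lo, hi) in {"volume": (250, 1000),
                            "instruments": (20, 60),
                            "wage_per_hr": (20.0, 35.0)}.items():
          low = annual_tray_cost(**{**base, var: lo})
          high = annual_tray_cost(**{**base, var: hi})
          print(f"{var:20s} cost range: {low:8.0f} .. {high:8.0f}")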

  5. Partially covered self-expandable metal stents versus polyethylene stents for malignant biliary obstruction: A cost-effectiveness analysis

    PubMed Central

    Barkun, Alan N; Adam, Viviane; Martel, Myriam; AlNaamani, Khalid; Moses, Peter L

    2015-01-01

    BACKGROUND/OBJECTIVE: Partially covered self-expandable metal stents (SEMS) and polyethylene stents (PES) are both commonly used in the palliation of malignant biliary obstruction. Although SEMS are significantly more expensive, they are more efficacious than PES. Accordingly, a cost-effectiveness analysis was performed. METHODS: A cost-effectiveness analysis compared the approach of initial placement of PES versus SEMS for the study population. Patients with malignant biliary obstruction underwent an endoscopic retrograde cholangiopancreatography to insert the initial stent. If the insertion failed, a percutaneous transhepatic cholangiogram was performed. If stent occlusion occurred, a PES was inserted at repeat endoscopic retrograde cholangiopancreatography, either in an outpatient setting or after admission to hospital if cholangitis was present. A third-party payer perspective was adopted. Effectiveness was expressed as the likelihood of no occlusion over the one-year adopted time horizon. Probabilities were based on a contemporary randomized clinical trial, and costs were issued from national references. Deterministic and probabilistic sensitivity analyses were performed. RESULTS: A PES-first strategy was both more expensive and less efficacious than an SEMS-first approach. The mean per-patient costs were US$6,701 for initial SEMS and US$20,671 for initial PES, which were associated with effectiveness probabilities of 65.6% and 13.9%, respectively. Sensitivity analyses confirmed the robustness of these results. CONCLUSION: At the time of initial endoscopic drainage for patients with malignant biliary obstruction undergoing palliative stenting, an initial SEMS insertion approach was both more effective and less costly than a PES-first strategy. PMID:26125107

  6. Correlation between the complex PSA/total PSA ratio and the free PSA/total PSA ratio, sensitivity and specificity of both markers for the diagnosis of prostate cancer.

    PubMed

    Pérez-Lanzac-Lorca, A; Barco-Sánchez, A; Romero, E; Martinez-Peinado, A; López-Elorza, F; Sanchez-Sanchez, E; Alvarez-Ossorio-Fernandez, J L; Castiñeiras-Fernández, J

    2013-09-01

    To compare the behaviour of the PSAcomplex/PSAtotal percentage (PSAc%) against the PSA free/PSA total (PSAl%) and analyse both markers for their usefulness in diagnosing prostate cancer. We measured total PSA (PSAt), free PSA (PSAl), complex PSA (PSAc), PSAl% and PSAc% levels in 158 patients. Of these, 98 (62%) were biopsied for presenting PSAt≥3 ng/dl and PSAl%<20, PSAt>10, suspicious rectal examination or a suspicious ultrasound node. We performed linear regression and Passing-Bablok regression analyses. ROC curves were calculated to study the sensitivity and specificity of PSAl% and PSAc% and were compared to each other. The prostate cancer diagnoses were analysed by PSAl% and PSAc% by applying the χ(2) test. The correlation coefficient (r) was good (0.7447, P<.0001), and the index of determination (r(2)) was 0.5. The result of the Passing-Bablok analysis was a slope of 1.658 (1.452 to 1.897) and an intersection of 2.044 (-0.936 to 5.393). The optimal cutoff for PSAl% (≤14.7854) showed a sensitivity of 89.29% (95% CI, 0.642-0.823) and a specificity of 54.29% (95% CI, 0.642-0.823). The optimal cutoff for PSAc% (>89.7796) had a sensitivity of 71.43% (95% CI, 0.616-0.802) and a specificity of 71.43% (95% CI, 0.616-0.802). There were no significant differences when comparing the areas under the curve of both markers (P=.59). The PPV of PSAl% was less than that of PSAc% (45.7% vs. 71%). There was a good correlation between PSAl% and PSAc%. PSAc% has demonstrated greater specificity and efficacy than PSAl% in the diagnosis of prostate cancer. Copyright © 2012 AEU. Published by Elsevier Espana. All rights reserved.
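
    The cutoff-selection step behind such ROC analyses can be sketched with an empirical curve and Youden's J statistic. The data below are toy values, not the study's; and since low PSAl% (rather than high) indicates cancer, that marker's scores would be negated before applying a 'higher is positive' rule like this one.

      import numpy as np

      def roc_auc_youden(scores, labels):
          # Empirical ROC curve; returns AUC and the cutoff maximising
          # Youden's J = sensitivity + specificity - 1.
          scores = np.asarray(scores, float)
          labels = np.asarray(labels, bool)
          best_j, best_cut = -1.0, None
          pts = [(0.0, 0.0), (1.0, 1.0)]       # anchor the curve's endpoints
          for c in np.unique(scores):
              pred = scores >= c               # "positive" = score at/above cutoff
              se = np.mean(pred[labels])       # sensitivity among true positives
              sp = np.mean(~pred[~labels])     # specificity among true negatives
              pts.append((1 - sp, se))
              if se + sp - 1 > best_j:
                  best_j, best_cut = se + sp - 1, c
          pts.sort()
          auc = sum((x2 - x1) * (y1 + y2) / 2   # trapezoidal area under curve
                    for (x1, y1), (x2, y2) in zip(pts, pts[1:]))
          return auc, best_cut

      # Toy data: label 1 = biopsy-proven cancer.
      scores = [0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9]
      labels = [0, 0, 0, 1, 0, 1, 1, 1]
      print(roc_auc_youden(scores, labels))     # AUC 0.9375, cutoff 0.55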

  7. Experiment Evaluation of Bifurcation in Sands

    NASA Technical Reports Server (NTRS)

    Alshibi, Khalid A.; Sture, Stein

    2000-01-01

    The basic principles of bifurcation analysis have been established by several investigators; however, several issues remain unresolved, specifically how stress level, grain size distribution, and boundary conditions affect general bifurcation phenomena in pressure-sensitive and dilatant materials. General geometrical and kinematic conditions for moving surfaces of discontinuity were derived and applied to problems of instability of solids. In 1962, a theoretical framework for bifurcation was presented through the study of acceleration waves in elasto-plastic (J2) solids. Bifurcation analysis for more specific forms of constitutive behavior was examined by studying localization in pressure-sensitive, dilatant materials; however, those analyses were restricted to plane deformation states only. Bifurcation analyses were presented and applied to predict shear band formation in sand under plane strain conditions. The properties of discontinuous bifurcation solutions for elastic-plastic solids under axisymmetric and plane strain loading conditions were studied. That study focused on theory, but references and comparisons to experiments were also made. The current paper presents a summary of bifurcation analyses for biaxial and triaxial (axisymmetric) loading conditions. The Coulomb model is implemented using an incremental piecewise scheme to predict the constitutive relations and shear band inclination angles. A comprehensive evaluation of bifurcation phenomena is then presented based on data from triaxial experiments performed under microgravity conditions aboard the Space Shuttle under very low effective confining pressures (0.05 to 1.30 kPa), in which very high peak friction angles (47 to 75 degrees) and dilatancy angles (30 to 31 degrees) were measured. The evaluation will be extended to include biaxial experiments performed on the same material under low (10 kPa) and moderate (100 kPa) confining pressures. A comparison between the behavior under biaxial and triaxial loading conditions will be presented, and related issues concerning the influence of confining pressure will be discussed.

  8. Economic evaluation of osteoporosis liaison service for secondary fracture prevention in postmenopausal osteoporosis patients with previous hip fracture in Japan.

    PubMed

    Moriwaki, K; Noto, S

    2017-02-01

    A model-based cost-effectiveness analysis was performed to evaluate the cost-effectiveness of secondary fracture prevention by osteoporosis liaison service (OLS) relative to no therapy in patients with osteoporosis and a history of hip fracture. Secondary fracture prevention by OLS is cost-effective in Japanese women with osteoporosis who have suffered a hip fracture. The purpose of this study was to estimate, from the perspective of Japan's healthcare system, the cost-effectiveness of secondary fracture prevention by OLS relative to no therapy in patients with osteoporosis and a history of hip fracture. A patient-level state transition model was developed to predict lifetime costs and quality-adjusted life years (QALYs) in patients with or without secondary fracture prevention by OLS. The incremental cost-effectiveness ratio (ICER) of secondary fracture prevention compared with no therapy was estimated. Sensitivity analyses were performed to examine the influence of parameter uncertainty on the base case results. Compared with no therapy, secondary fracture prevention in patients aged 65 with T-score of -2.5 resulted in an additional lifetime cost of $3396 per person and conferred an additional 0.118 QALY, resulting in an ICER of $28,880 per QALY gained. Deterministic sensitivity analyses showed that treatment duration and offset time strongly affect the cost-effectiveness of OLS. According to the results of scenario analyses, secondary fracture prevention by OLS was cost-saving compared with no therapy in patients with a family history of hip fracture and high alcohol intake. Secondary fracture prevention by OLS is cost-effective in Japanese women with osteoporosis who have suffered a hip fracture. In addition, secondary fracture prevention is less expensive than no therapy in high-risk patients with multiple risk factors.
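
    The study's patient-level state-transition model is considerably richer than anything shown here, but a minimal cohort-level Markov sketch illustrates how lifetime costs and QALYs are accumulated and discounted. All states, transition probabilities, costs and utilities below are invented for illustration.

      import numpy as np

      states = ["well", "post-fracture", "dead"]
      P = np.array([[0.94, 0.04, 0.02],   # annual transition probabilities
                    [0.00, 0.93, 0.07],   # (each row sums to 1)
                    [0.00, 0.00, 1.00]])
      cost = np.array([200.0, 2500.0, 0.0])   # annual cost per state
      qaly = np.array([0.85, 0.65, 0.0])      # annual utility per state
      disc = 0.02                             # discount rate

      dist = np.array([1.0, 0.0, 0.0])        # cohort starts in "well"
      total_cost = total_qaly = 0.0
      for year in range(40):                  # lifetime horizon
          w = 1.0 / (1 + disc) ** year
          total_cost += w * dist @ cost
          total_qaly += w * dist @ qaly
          dist = dist @ P                     # advance the cohort one cycle
      print(f"lifetime cost {total_cost:,.0f}, QALYs {total_qaly:.2f}")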

  9. An Overview of the HST Advanced Camera for Surveys' On-orbit Performance

    NASA Astrophysics Data System (ADS)

    Hartig, G. F.; Ford, H. C.; Illingworth, G. D.; Clampin, M.; Bohlin, R. C.; Cox, C.; Krist, J.; Sparks, W. B.; De Marchi, G.; Martel, A. R.; McCann, W. J.; Meurer, G. R.; Sirianni, M.; Tsvetanov, Z.; Bartko, F.; Lindler, D. J.

    2002-05-01

    The Advanced Camera for Surveys (ACS) was installed in the HST on 7 March 2002 during the fourth servicing mission to the observatory, and is now beginning science operations. The ACS provides HST observers with a considerably more sensitive, higher-resolution camera with wider field and polarimetric, coronagraphic, low-resolution spectrographic and solar-blind FUV capabilities. We review selected results of the early verification and calibration program, comparing the achieved performance with the advertised specifications. Emphasis is placed on the optical characteristics of the camera, including image quality, throughput, geometric distortion and stray-light performance. More detailed analyses of various aspects of the ACS performance are presented in other papers at this meeting. This work was supported by a NASA contract and a NASA grant.

  10. Validation of an automated seizure detection algorithm for term neonates

    PubMed Central

    Mathieson, Sean R.; Stevenson, Nathan J.; Low, Evonne; Marnane, William P.; Rennie, Janet M.; Temko, Andrey; Lightbody, Gordon; Boylan, Geraldine B.

    2016-01-01

    Objective The objective of this study was to validate the performance of a seizure detection algorithm (SDA) developed by our group, on previously unseen, prolonged, unedited EEG recordings from 70 babies from 2 centres. Methods EEGs of 70 babies (35 seizure, 35 non-seizure) were annotated for seizures by experts as the gold standard. The SDA was tested on the EEGs at a range of sensitivity settings. Annotations from the expert and SDA were compared using event and epoch based metrics. The effect of seizure duration on SDA performance was also analysed. Results Between sensitivity settings of 0.5 and 0.3, the algorithm achieved seizure detection rates of 52.6–75.0%, with false detection (FD) rates of 0.04–0.36 FD/h for event based analysis, which was deemed to be acceptable in a clinical environment. Time based comparison of expert and SDA annotations using Cohen’s Kappa Index revealed a best performing SDA threshold of 0.4 (Kappa 0.630). The SDA showed improved detection performance with longer seizures. Conclusion The SDA achieved promising performance and warrants further testing in a live clinical evaluation. Significance The SDA has the potential to improve seizure detection and provide a robust tool for comparing treatment regimens. PMID:26055336
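
    Event-based metrics of the kind reported (seizure detection rate and false detections per hour) reduce to interval-overlap bookkeeping. A minimal sketch, assuming annotations are given as simple (start, end) intervals in seconds:

      def event_metrics(expert, detected, record_hours):
          # A seizure counts as detected if any detection overlaps it; a
          # detection overlapping no expert event counts as a false detection.
          def overlaps(a, b):
              return a[0] < b[1] and b[0] < a[1]

          hits = sum(any(overlaps(e, d) for d in detected) for e in expert)
          false_dets = sum(not any(overlaps(d, e) for e in expert)
                           for d in detected)
          sensitivity = hits / len(expert) if expert else float("nan")
          fd_per_hour = false_dets / record_hours
          return sensitivity, fd_per_hour

      # Example: 3 of 4 events found (0.75), 0.1 false detections/h over 10 h.
      print(event_metrics([(10, 40), (100, 130), (300, 360), (500, 520)],
                          [(12, 35), (101, 140), (420, 430), (505, 515)], 10.0))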

  11. Comprehensive Design Reliability Activities for Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Christenson, R. L.; Whitley, M. R.; Knight, K. C.

    2000-01-01

    This technical publication describes the methodology, model, software tool, input data, and analysis results that support aerospace design reliability studies. The focus of these activities is on propulsion system mechanical design reliability. The goal of these activities is to support design from a reliability perspective. Paralleling performance analyses in schedule and method, this requires the proper use of metrics in a validated reliability model useful for design, sensitivity, and trade studies. Design reliability analysis in this view is one of several critical design functions. A design reliability method is detailed and two example analyses are provided: one qualitative and the other quantitative. The use of aerospace and commercial data sources for quantification is discussed and sources are listed. A tool that was developed to support both types of analyses is presented. Finally, special topics discussed include the development of design criteria, issues of reliability quantification, quality control, and reliability verification.

  12. Structure of the Upper Troposphere-Lower Stratosphere (UTLS) in GEOS-5

    NASA Technical Reports Server (NTRS)

    Pawson, Steven

    2011-01-01

    This study examines the structure of the upper troposphere and lower stratosphere in the GEOS-5 data assimilation system. Near-real-time analyses, with a horizontal resolution of one-half or one-quarter degree and a vertical resolution of about 1 km in the tropopause region, are examined with an emphasis on spatial structures at and around the tropopause. The contributions of in-situ observations of temperature and microwave and infrared radiances to the analyses are discussed, with some focus on the interplay between these types of observations. For a historical analysis (MERRA) performed with GEOS-5, the impacts of changing observations on the assimilation system are examined in some detail; this documents some aspects of the time dependence of the analyses that must be taken into account in the isolation of true geophysical trends. Finally, some sensitivities of the ozone analyses to input data and correlated errors between temperature and ozone are discussed.

  13. Clinical Comparison of At-Home and In-Office Dental Bleaching Procedures: A Randomized Trial of a Split-Mouth Design.

    PubMed

    Machado, Lucas Silveira; Anchieta, Rodolfo Bruniera; dos Santos, Paulo Henrique; Briso, André Luiz; Tovar, Nick; Janal, Malvin N; Coelho, Paulo Guilherme; Sundfeld, Renato Herman

    2016-01-01

    The objective of this split-mouth clinical study was to compare a combination of in-office and at-home dental bleaching with at-home bleaching alone. Two applications of in-office bleaching were performed, with one appointment per week, using 38% hydrogen peroxide. At-home bleaching was performed with or without in-office bleaching using 10% carbamide peroxide in a custom-made tray every night for 2 weeks. The factor studied was the bleaching technique on two levels: Technique 1 (in-office bleaching combined with home bleaching) and Technique 2 (home bleaching only). The response variables were color change, dental sensitivity, morphology, and surface roughness. The maxillary right and left hemiarches of the participants were submitted to in-office placebo treatment and in-office bleaching, respectively (Phase 1), and at-home bleaching (Phase 2) treatment was performed on both hemiarches, characterizing a split-mouth design. Enamel surface changes and roughness were analyzed with scanning electron microscopy and optical interferometry using epoxy replicas. No statistically significant differences were observed between the bleaching techniques for either the visual or the digital analyses. There was a significant difference in dental sensitivity when both dental bleaching techniques were used, with in-office bleaching producing the highest levels of dental sensitivity after the baseline. Microscopic analysis of the morphology and roughness of the enamel surface showed no significant changes between the bleaching techniques. The two techniques produced similar results in color change, and the combination technique produced the highest levels of sensitivity. Neither technique promoted changes in morphology or surface roughness of enamel.

  14. Sensitivity enhancement by chromatographic peak concentration with ultra-high performance liquid chromatography-nuclear magnetic resonance spectroscopy for minor impurity analysis.

    PubMed

    Tokunaga, Takashi; Akagi, Ken-Ichi; Okamoto, Masahiko

    2017-07-28

    High performance liquid chromatography can be coupled with nuclear magnetic resonance (NMR) spectroscopy to give a powerful analytical method known as liquid chromatography-nuclear magnetic resonance (LC-NMR) spectroscopy, which can be used to determine the chemical structures of the components of complex mixtures. However, intrinsic limitations in the sensitivity of NMR spectroscopy have restricted the scope of this procedure, and resolving these limitations remains a critical problem for analysis. In this study, we coupled ultra-high performance liquid chromatography (UHPLC) with NMR to give a simple and versatile analytical method with higher sensitivity than conventional LC-NMR. UHPLC separation enabled the concentration of individual peaks into a volume similar to that of the NMR flow cell, thereby maximizing the sensitivity to the theoretical upper limit. The UHPLC concentration of compound peaks present at typical impurity levels (5.0-13.1 nmol) in a mixture led to an increase of up to three-fold in the signal-to-noise ratio compared with LC-NMR. Furthermore, we demonstrated the use of UHPLC-NMR for obtaining structural information on a minor impurity in a reaction mixture during actual laboratory-scale development of a synthetic process. Using UHPLC-NMR, the experimental run times for chromatography and NMR were greatly reduced compared with LC-NMR. UHPLC-NMR successfully overcomes the difficulties associated with analyses of minor components in a complex mixture by LC-NMR, which are problematic even when an ultra-high field magnet and cryogenic probe are used. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Results of an integrated structure-control law design sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Gilbert, Michael G.

    1988-01-01

    Next generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of design integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to have some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing it with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite difference methods for the computation of the equivalent sensitivity information.
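
    The paper's analytical LQG sensitivity equations are not reproduced here, but the finite-difference baseline they are compared against can be sketched for a continuous-time LQR gain. The toy plant, its 'damping' parameter and the use of SciPy's Riccati solver are illustrative assumptions, not the aircraft example above.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(A, B, Q, R):
          # Optimal state-feedback gain K for the continuous-time LQR problem,
          # via the algebraic Riccati equation: K = R^{-1} B^T P.
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)

      # Toy 2-state plant; 'damping' is the fixed problem parameter we vary.
      def plant(damping):
          A = np.array([[0.0, 1.0], [-1.0, -damping]])
          B = np.array([[0.0], [1.0]])
          return A, B

      Q, R = np.eye(2), np.array([[1.0]])
      d0, h = 0.5, 1e-6
      K0 = lqr_gain(*plant(d0), Q, R)
      K1 = lqr_gain(*plant(d0 + h), Q, R)
      dK_dd = (K1 - K0) / h        # finite-difference sensitivity of the gain
      print("K =", K0, "\ndK/d(damping) =", dK_dd)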

  16. Validation of a patient-centered culturally sensitive health care office staff inventory.

    PubMed

    Tucker, Carolyn M; Wall, Whitney; Marsiske, Michael; Nghiem, Khanh; Roncoroni, Julia

    2015-09-01

    Research suggests that patient-perceived culturally sensitive health care encompasses multiple components of the health care delivery system, including the cultural sensitivity of front desk office staff. Despite this, research on culturally sensitive health care focuses almost exclusively on provider behaviors, attitudes, and knowledge. This is due in part to the paucity of instruments available to assess the cultural sensitivity of front desk office staff. Thus, the objective of the present study is to determine the psychometric properties of the pilot Tucker-Culturally Sensitive Health Care Office Staff Inventory-Patient Form (T-CSHCOSI-PF), an instrument designed to enable patients to evaluate the patient-defined cultural sensitivity of their front desk office staff. A sample of 1648 adult patients was recruited by staff at 67 health care sites across the United States. These patients anonymously completed the T-CSHCOSI-PF, a demographic data questionnaire, and a patient satisfaction questionnaire. Confirmatory factor analyses of the T-CSHCOSI-PF revealed that this inventory has two factors with high internal consistency reliability and validity (Cronbach's αs=0.97 and 0.95). It is concluded that the T-CSHCOSI-PF is a psychometrically strong and useful inventory for assessing the cultural sensitivity of front desk office staff. This inventory can be used to support culturally sensitive health care research, evaluate the job performance of front desk office staff, and aid in the development of trainings designed to improve the cultural sensitivity of these office staff.
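
    For reference, the internal-consistency statistic cited (Cronbach's alpha) follows directly from item and total-score variances. A minimal sketch with invented Likert-scale data:

      import numpy as np

      def cronbach_alpha(items):
          # Cronbach's alpha for an (n_respondents x n_items) score matrix:
          # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
          items = np.asarray(items, float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)

      # Example: 5 respondents rating 4 items on a Likert scale.
      scores = [[4, 5, 4, 5], [3, 3, 4, 3], [5, 5, 5, 4],
                [2, 3, 2, 3], [4, 4, 5, 4]]
      print(f"alpha = {cronbach_alpha(scores):.2f}")   # ~0.91 for these data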

  17. Steady-State Thermal-Hydraulics Analyses for the Conversion of BR2 to Low Enriched Uranium Fuel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Licht, J.; Bergeron, A.; Dionne, B.

    The code PLTEMP/ANL version 4.2 was used to perform the steady-state thermal-hydraulic analyses of the BR2 research reactor for conversion from Highly Enriched to Low Enriched Uranium fuel (HEU and LEU, respectively). Calculations were performed to evaluate different fuel assemblies with respect to the onset of nucleate boiling (ONB), flow instability (FI), critical heat flux (CHF) and fuel temperature at beginning-of-cycle conditions. The fuel assemblies were characteristic of fresh fuel (0% burnup), highest heat flux (16% burnup), highest power (32% burnup) and highest burnup (46% burnup). Results show that the high heat flux fuel element is limiting for ONB, FI, and CHF, for both HEU and LEU fuel, but that the high power fuel element produces similar margins in a few cases. The maximum fuel temperature similarly occurs in both the high heat flux and high power fuel assemblies for both HEU and LEU fuel. A sensitivity study was also performed to evaluate the variation in fuel temperature due to uncertainties in the thermal conductivity degradation associated with burnup.

  18. Cross-platform evaluation of commercial real-time SYBR green RT-PCR kits for sensitive and rapid detection of European bat Lyssavirus type 1.

    PubMed

    Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence

    2015-01-01

    This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The performance evaluation criteria analysed included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, as well as the costs and practicability of the kits, and thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, and intra- and inter-reproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient kit was the Quantitect SYBR Green qRT-PCR, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes.

  19. Cross-Platform Evaluation of Commercial Real-Time SYBR Green RT-PCR Kits for Sensitive and Rapid Detection of European Bat Lyssavirus Type 1

    PubMed Central

    Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence

    2015-01-01

    This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The performance evaluation criteria analysed included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, as well as the costs and practicability of the kits, and thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, and intra- and inter-reproducibility between the two methods. The limit of detection at the 95% confidence level varied from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient kit was the Quantitect SYBR Green qRT-PCR, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes. PMID:25785274
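
    The reaction efficiency and R² reported for such kits derive from the standard curve of Ct against log10 copy number, with efficiency = 10^(-1/slope) - 1 (a slope near -3.32 corresponds to ~100%, i.e. perfect doubling). A sketch with an invented ten-fold dilution series:

      def pcr_metrics(log10_copies, ct):
          # Standard-curve slope, efficiency and R^2 from a qPCR dilution series,
          # using plain least-squares sums.
          n = len(ct)
          mx, my = sum(log10_copies) / n, sum(ct) / n
          sxy = sum((x - mx) * (y - my) for x, y in zip(log10_copies, ct))
          sxx = sum((x - mx) ** 2 for x in log10_copies)
          syy = sum((y - my) ** 2 for y in ct)
          slope = sxy / sxx
          efficiency = 10 ** (-1 / slope) - 1   # 1.0 means 100% efficiency
          r2 = sxy ** 2 / (sxx * syy)
          return slope, efficiency, r2

      dil = [6, 5, 4, 3, 2]                     # log10 copies/µL, hypothetical
      ct = [15.1, 18.5, 21.8, 25.2, 28.5]
      print(pcr_metrics(dil, ct))               # slope ~ -3.35, eff ~ 0.99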

  20. Visually assessed colour overlay features in shear-wave elastography for breast masses: quantification and diagnostic performance.

    PubMed

    Gweon, Hye Mi; Youk, Ji Hyun; Son, Eun Ju; Kim, Jeong-Ah

    2013-03-01

    To determine whether colour overlay features can be quantified by the standard deviation (SD) of the elasticity measured in shear-wave elastography (SWE) and to evaluate the diagnostic performance for breast masses. One hundred thirty-three breast lesions in 119 consecutive women who underwent SWE before US-guided core needle biopsy or surgical excision were analysed. SWE colour overlay features were assessed using two different colour overlay pattern classifications. The quantitative SD of the elasticity value was measured with a region of interest including the whole breast lesion. For the four-colour overlay pattern, the area under the ROC curve (Az) was 0.947; with a cutoff point between patterns 2 and 3, sensitivity and specificity were 94.4 % and 81.4 %. According to the homogeneity of the elasticity, the Az was 0.887; with a cutoff point between reasonably homogeneous and heterogeneous, sensitivity and specificity were 86.1 % and 82.5 %. For the SD of the elasticity, the Az was 0.944; with a cutoff point of 12.1, sensitivity and specificity were 88.9 % and 89.7 %. The colour overlay features showed significant correlations with the quantitative SD of the elasticity (P < 0.001). The colour overlay features and the SD of the elasticity in SWE showed excellent diagnostic performance and correlated well with each other.

  1. A comparative study of the sensitivity of diffusion-related parameters obtained from diffusion tensor imaging, diffusional kurtosis imaging, q-space analysis and bi-exponential modelling in the early disease course (24 h) of hyperacute (6 h) ischemic stroke patients.

    PubMed

    Duchêne, Gaëtan; Peeters, Frank; Peeters, André; Duprez, Thierry

    2017-08-01

    To compare the sensitivity and early temporal changes of diffusion parameters obtained from diffusion tensor imaging (DTI), diffusional kurtosis imaging (DKI), q-space analysis (QSA) and bi-exponential modelling in hyperacute stroke patients. A single investigational acquisition allowing the four diffusion analyses was performed on seven hyperacute stroke patients with a 3T system. The percentage changes between ipsi- and contralateral regions were compared at admission and 24 h later. Two of the seven patients were imaged every 6 h during this period. Kurtoses from both DKI and QSA were the most sensitive of the tested diffusion parameters in the few hours following ischemia. An early increase-maximum-decrease pattern of evolution was highlighted during the 24-h period for all parameters proportional to diffusion coefficients. A similar pattern was observed for both kurtoses in only one of the two patients. Our comparison was performed using identical diffusion encoding timings and on patients in the same stage of their condition. Although preliminary, our findings confirm those of previous studies that showed enhanced sensitivity of kurtosis. A fine time mapping of diffusion metrics in hyperacute stroke patients was presented, which advocates for further investigations on larger animal or human cohorts.
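
    The kurtosis representation underlying DKI expands the log-signal to second order in b, ln S(b) = ln S0 - bD + (1/6)b²D²K, so D and K can be read off a quadratic fit. A synthetic-data sketch (values are illustrative, not the patients' data):

      import numpy as np

      # b-values converted to ms/µm² so D is in µm²/ms.
      b = np.array([0, 500, 1000, 1500, 2000, 2500]) * 1e-3
      D_true, K_true, S0 = 1.0, 0.8, 100.0
      S = S0 * np.exp(-b * D_true + (b * D_true) ** 2 * K_true / 6)

      c2, c1, c0 = np.polyfit(b, np.log(S), 2)   # coefficients of b², b¹, b⁰
      D_fit = -c1                                # linear term gives -D
      K_fit = 6 * c2 / D_fit ** 2                # quadratic term gives D²K/6
      print(f"D = {D_fit:.3f} µm²/ms, K = {K_fit:.3f}")   # recovers 1.0 and 0.8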

  2. Cost/Effort Drivers and Decision Analysis

    NASA Technical Reports Server (NTRS)

    Seidel, Jonathan

    2010-01-01

    Engineering trade study analyses demand consideration of performance, cost and schedule impacts across the spectrum of alternative concepts and in direct reference to product requirements. Prior to detailed design, requirements are too often ill-defined (only goals) and prone to creep, extending well beyond the Systems Requirements Review. Though the lack of engineering design and definitive requirements inhibits the ability to perform detailed cost analyses, affordability trades still comprise the foundation of these future product decisions and must evolve in concert. This presentation excerpts results of the recent NASA subsonic Engine Concept Study for an Advanced Single Aisle Transport to demonstrate an affordability evaluation of performance characteristics and the subsequent impacts on engine architecture decisions. Applying the Process Based Economic Analysis Tool (PBEAT), development cost, production cost, as well as operation and support costs were considered in a traditional weighted ranking of the following system-level figures of merit: mission fuel burn, take-off noise, NOx emissions, and cruise speed. Weighting factors were varied to ascertain the architecture ranking sensitivities to these performance figures of merit with companion cost considerations. A more detailed examination of supersonic variable cycle engine cost is also briefly presented, with observations and recommendations for further refinements.

  3. Comparative costs and cost-effectiveness of behavioural interventions as part of HIV prevention strategies.

    PubMed

    Hsu, Justine; Zinsou, Cyprien; Parkhurst, Justin; N'Dour, Marguerite; Foyet, Léger; Mueller, Dirk H

    2013-01-01

    Behavioural interventions have been widely integrated in HIV/AIDS social marketing prevention strategies and are considered valuable in settings with high levels of risk behaviours and low levels of HIV/AIDS awareness. Despite their widespread application, there is a lack of economic evaluations comparing different behaviour change communication methods. This paper analyses the costs to increase awareness and the cost-effectiveness to influence behaviour change for five interventions in Benin. Cost and cost-effectiveness analyses used economic costs and primary effectiveness data drawn from surveys. Costs were collected for provider inputs required to implement the interventions in 2009 and analysed by 'person reached'. Cost-effectiveness was analysed by 'person reporting systematic condom use'. Sensitivity analyses were performed on all uncertain variables and major assumptions. Cost-per-person reached varies by method, with public outreach events the least costly (US$2.29) and billboards the most costly (US$25.07). Influence on reported behaviour was limited: only three of the five interventions were found to have a significant statistical correlation with reported condom use (i.e. magazines, radio broadcasts, public outreach events). Cost-effectiveness ratios per person reporting systematic condom use resulted in the following ranking: magazines, radio and public outreach events. Sensitivity analyses indicate rankings are insensitive to variation of key parameters although ratios must be interpreted with caution. This analysis suggests that while individual interventions are an attractive use of resources to raise awareness, this may not translate into a cost-effective impact on behaviour change. The study found that the extensive reach of public outreach events did not seem to influence behaviour change as cost-effectively when compared with magazines or radio broadcasts. Behavioural interventions are context-specific and their effectiveness influenced by a multitude of factors. Further analyses using a quasi-experimental design would be useful to programme implementers and policy makers as they face decisions regarding which HIV prevention activities to prioritize.

  4. Enhanced Photoacoustic Gas Analyser Response Time and Impact on Accuracy at Fast Ventilation Rates during Multiple Breath Washout

    PubMed Central

    Horsley, Alex; Macleod, Kenneth; Gupta, Ruchi; Goddard, Nick; Bell, Nicholas

    2014-01-01

    Background The Innocor device contains a highly sensitive photoacoustic gas analyser that has been used to perform multiple breath washout (MBW) measurements using very low concentrations of the tracer gas SF6. Use in smaller subjects has been restricted by the requirement for a gas analyser response time of <100 ms, in order to ensure accurate estimation of lung volumes at rapid ventilation rates. Methods A series of previously reported and novel enhancements were made to the gas analyser to produce a clinically practical system with a reduced response time. An enhanced lung model system, capable of delivering highly accurate ventilation rates and volumes, was used to assess the in vitro accuracy of functional residual capacity (FRC) volume calculation and the effects of flow and gas signal alignment on this. Results The 10-90% rise time was reduced from 154 to 88 ms. In an adult/child lung model, the accuracy of volume calculation was -0.9 to 2.9% for all measurements, including those with a ventilation rate of 30/min and an FRC of 0.5 L; for the un-enhanced system, accuracy deteriorated at higher ventilation rates and smaller FRC. In a separate smaller lung model (ventilation rate 60/min, FRC 250 ml, tidal volume 100 ml), the mean accuracy of FRC measurement for the enhanced system was -0.95% (range -3.8 to 2.0%). Error sensitivity to flow and gas signal alignment was increased by ventilation rate, smaller FRC and slower analyser response time. Conclusion The Innocor analyser can be enhanced to reliably generate highly accurate FRC measurements at volumes as low as those simulating infant lung settings. Signal alignment is a critical factor. With these enhancements, the Innocor analyser exceeds key technical component recommendations for MBW apparatus. PMID:24892522
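
    The FRC computation at the heart of such multiple-breath washout measurements is the classic mass-balance ratio of washed-out tracer volume to the drop in end-tidal tracer concentration. A sketch with illustrative numbers:

      def frc_from_washout(net_tracer_volume_ml, cet_start, cet_end):
          # Classic multiple-breath washout estimate:
          # FRC = net expired tracer volume / (end-tidal conc. start - end).
          return net_tracer_volume_ml / (cet_start - cet_end)

      # Example: 9.4 ml of SF6 washed out while end-tidal SF6 falls from
      # 0.40% to 0.01% gives FRC ~ 2410 ml.
      print(frc_from_washout(9.4, 0.0040, 0.0001))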

  5. Sensitivity analysis of static resistance of slender beam under bending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valeš, Jan

    2016-06-08

    The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the programme Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to perform the statistical and sensitivity analyses of resistance.
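
    Latin Hypercube Sampling, as used above, stratifies each random variable into equal-probability bins and takes exactly one sample per bin. A sketch, with the mapping to hypothetical material and imperfection distributions done through inverse CDFs:

      import numpy as np
      from scipy.stats import norm, lognorm

      def latin_hypercube(n_samples, n_vars, rng=None):
          # LHS on [0,1)^n_vars: for every variable, exactly one point falls
          # in each of n_samples equal-probability strata.
          rng = rng or np.random.default_rng(0)
          samples = np.empty((n_samples, n_vars))
          for j in range(n_vars):
              strata = rng.permutation(n_samples)   # shuffled stratum indices
              samples[:, j] = (strata + rng.random(n_samples)) / n_samples
          return samples

      # Map unit-cube samples to physical quantities via inverse CDFs.
      # Both distributions below are hypothetical, not the paper's inputs.
      u = latin_hypercube(100, 2)
      fy = norm.ppf(u[:, 0], loc=235.0, scale=18.0)    # yield stress, MPa
      imp = lognorm.ppf(u[:, 1], s=0.5, scale=5.0)     # imperfection, mm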

  6. Prediction of coefficients of thermal expansion for unidirectional composites

    NASA Technical Reports Server (NTRS)

    Bowles, David E.; Tompkins, Stephen S.

    1989-01-01

    Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other, and with experimental data on different graphite fiber reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2), than the less rigorous analyses. All of the analyses predicted similar values of alpha(1), and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1), and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties and the prediction of alpha(2) was most sensitive to matrix properties.
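
    Closed-form estimates of the kind compared in such studies include the stiffness-weighted rule of mixtures for alpha(1) and a Schapery-type expression for alpha(2) that accounts for Poisson restraint. The sketch below uses that pair of formulas with hypothetical graphite/epoxy constituent properties; it is one of several published micromechanics estimates, not the specific analyses evaluated above.

      def cte_unidirectional(Ef, Em, af, am, nuf, num, Vf):
          # alpha1: rule of mixtures weighted by axial stiffness.
          # alpha2: Schapery-type estimate including Poisson restraint
          #         of the matrix by the fibres (an assumed form).
          Vm = 1.0 - Vf
          a1 = (Ef * af * Vf + Em * am * Vm) / (Ef * Vf + Em * Vm)
          nu12 = nuf * Vf + num * Vm
          a2 = (1 + nuf) * af * Vf + (1 + num) * am * Vm - a1 * nu12
          return a1, a2

      # Hypothetical graphite/epoxy values (moduli in Pa, CTEs in 1/°C):
      # the stiff, near-zero-CTE fibre dominates alpha1; matrix dominates alpha2.
      print(cte_unidirectional(Ef=230e9, Em=3.5e9, af=-0.5e-6, am=55e-6,
                               nuf=0.2, num=0.35, Vf=0.6))
      # ~ (0.06e-6, 29e-6) with these inputs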

  7. Are biotic indices sensitive to river toxicants? A comparison of metrics based on diatoms and macro-invertebrates.

    PubMed

    Blanco, S; Bécares, E

    2010-03-01

    Biotic indices based on macro-invertebrates and diatoms are frequently used to diagnose ecological quality in watercourses, but few published works have assessed their effectiveness as biomonitors of the concentration of micropollutants. A biological survey was performed at 188 sites in the basin of the River Duero in north-western Spain. Nineteen diatom and six macro-invertebrate indices were calculated and compared with the concentrations of 37 different toxicants by means of a correlation analysis. Several of the chemical variables analysed correlated significantly with at least one biotic index. Sládecek's diatom index and the number of macro-invertebrate families exhibited particularly high correlation coefficients. Methods based on macro-invertebrates performed better in detecting biocides, while diatom indices showed stronger correlations with potentially toxic elements such as heavy metals. All biotic indices, and particularly diatom indices, were especially sensitive to the concentration of fats and oils and trichloroethene. Copyright 2010 Elsevier Ltd. All rights reserved.

  8. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.

  9. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis :

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.

  10. Prosodic constraints on inflected words: an area of difficulty for German-speaking children with specific language impairment?

    PubMed

    Kauschke, Christina; Renner, Lena; Domahs, Ulrike

    2013-08-01

    Recent studies suggest that morphosyntactic difficulties may result from prosodic problems. We therefore address the interface between inflectional morphology and prosody in typically developing children (TD) and children with SLI by testing whether these groups are sensitive to prosodic constraints that guide plural formation in German. A plural elicitation task was designed consisting of 60 words and 20 pseudowords. The performance of 14 German-speaking children with SLI (mean age 7.5) was compared to age-matched controls and to younger children matched for productive vocabulary. TD children performed significantly better than children with SLI. Error analyses revealed that children with SLI produced more forms that did not meet the optimal shape of a noun plural. Beyond the fact that children with SLI have deficits in plural marking, the findings suggest that they also show reduced sensitivity to prosodic requirements. In other words, the prosodic structure of inflected words seems to be vulnerable in children with SLI.

  11. Manned geosynchronous mission requirements and systems analysis study extension. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    1981-01-01

    A study was performed to determine the types of manned missions likely to be performed in the late 1980s or early 1990s timeframe, to define MOTV configurations which satisfy these mission requirements, and to develop a program plan for MOTV development. Twenty generic missions were originally defined for the MOTV but, to simplify the selection process, five of these missions were selected as typical and used as Design Reference Missions. Systems and subsystems requirements were re-examined and sensitivity analyses performed to determine optimum point designs. Turnaround modes were considered to determine the most effective combination of ground-based and space-based activities. A preferred concept for the crew capsule and for the mission mode was developed.

  12. Wechsler Memory Scale-III Faces test performance in patients with mild cognitive impairment and mild Alzheimer's disease.

    PubMed

    Seelye, Adriana M; Howieson, Diane B; Wild, Katherine V; Moore, Mindy Milar; Kaye, Jeffrey A

    2009-08-01

    Little is known about the sensitivity of the Wechsler Memory Scale-Third Edition (WMS-III) Faces subtest to memory impairment associated with mild cognitive impairment (MCI). In this study, Faces performance was examined in 24 MCI patients, 46 mild Alzheimer's disease (AD) patients, and 98 elderly controls. We hypothesized that participants with diagnoses of MCI or AD would be impaired relative to controls on Faces. Analyses showed that AD participants performed significantly worse than MCI and intact participants, although there were no significant differences between MCI and intact participants. Data suggest that brain areas specialized for face recognition memory may be less affected by MCI and mild AD than regions specialized for verbal memory.

  13. Quantitative aspects of inductively coupled plasma mass spectrometry

    PubMed Central

    Wagner, Barbara

    2016-01-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
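
    As a concrete illustration of the pure-standard (external) calibration approach mentioned above, the short Python sketch below fits a least-squares calibration line and inverts it to quantify an unknown. All counts and concentrations are invented for illustration.

        import numpy as np

        # Hypothetical external calibration for one isotope: blank plus pure
        # standards of known concentration (ppb) versus signal (counts/s).
        conc = np.array([0.0, 1.0, 5.0, 10.0, 50.0])
        signal = np.array([120.0, 980.0, 4850.0, 9700.0, 48300.0])

        # Ordinary least-squares calibration line: signal = slope*conc + intercept.
        slope, intercept = np.polyfit(conc, signal, 1)

        # Quantify an unknown sample by inverting the calibration line.
        unknown_signal = 23100.0
        print(f"sensitivity = {slope:.1f} counts/s per ppb")
        print(f"estimated concentration = {(unknown_signal - intercept) / slope:.2f} ppb")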

  14. Development of a takeoff performance monitoring system. Ph.D. Thesis. Contractor Report, Jan. 1984 - Jun. 1985

    NASA Technical Reports Server (NTRS)

    Srivatsan, Raghavachari; Downing, David R.

    1987-01-01

    Discussed are the development and testing of a real-time takeoff performance monitoring algorithm. The algorithm is made up of two segments: a pretakeoff segment and a real-time segment. One-time inputs of ambient conditions and airplane configuration information are used in the pretakeoff segment to generate scheduled performance data for that takeoff. The real-time segment uses the scheduled performance data generated in the pretakeoff segment, runway length data, and measured parameters to monitor the performance of the airplane throughout the takeoff roll. Airplane and engine performance deficiencies are detected and annunciated. An important feature of this algorithm is the one-time estimation of the runway rolling friction coefficient. The algorithm was tested using a six-degree-of-freedom airplane model in a computer simulation. Results from a series of sensitivity analyses are also included.

  15. Effect of the time of day and queue position in the endoscopic schedule on the performance characteristics of endoscopic ultrasound-guided fine-needle aspiration for diagnosing pancreatic malignancies

    PubMed Central

    Korenblit, Jason; Tholey, Danielle M.; Tolin, Joanna; Loren, David; Kowalski, Thomas; Adler, Douglas G.; Davolos, Julie; Siddiqui, Ali A.

    2016-01-01

    Background and Objectives: Recent reports have indicated that the time of day may impact the detection rate of abnormal cytology on gynecologic cytology samples. The aim of this study was to determine if procedure time or queue position affected the performance characteristics of endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) for diagnosing solid pancreatic malignancies. Patients and Methods: We conducted a retrospective study evaluating patients with solid pancreatic lesions in whom EUS-FNA was performed. Three timing variables were evaluated as surrogate markers for endoscopist fatigue: Procedure start times, morning versus afternoon procedures, and endoscopy queue position. Statistical analyses were performed to determine whether the timing variables predicted performance characteristics of EUS-FNA. Results: We identified 609 patients (mean age: 65.8 years, 52.1% males) with solid pancreatic lesions who underwent EUS-FNA. The sensitivity of EUS-FNA was 100% for procedures that started at 7 AM while cases that started at 4 PM had a sensitivity of 81%. Using start time on a continuous scale, each elapsed hour was associated with a 1.9% decrease in EUS-FNA sensitivity (P = 0.003). Similarly, a 10% reduction in EUS-FNA sensitivity was detected between morning and afternoon procedures (92% vs. 82% respectively, P = 0.0006). A linear regression comparing the procedure start time and diagnostic accuracy revealed a decrease of approximately 1.7% in procedure accuracy for every hour later a procedure was started. A 16% reduction in EUS-FNA accuracy was detected between morning and afternoon procedures (100% vs. 84% respectively, P = 0.0009). When the queue position was assessed, a 2.4% reduction in accuracy was noted for each increase in the queue position (P = 0.013). Conclusion: Sensitivity and diagnostic accuracy of EUS-FNA for solid pancreatic lesions decline with progressively later EUS starting times and increasing numbers of procedures before a given EUS, potentially from endoscopist fatigue and cytotechnologist fatigue. PMID:27080605

  16. Time-series analyses of air pollution and mortality in the United States: a subsampling approach.

    PubMed

    Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie

    2013-01-01

    Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from ≤ 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.
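
    The subsampling procedure can be sketched in a few lines of Python. The city-level effect estimates below are simulated stand-ins, not the study's data; the sqrt(b/n) rescaling is the standard subsampling correction that makes the replicate spread mimic full-sample variability.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated stand-ins for city-specific estimates of the % increase in
        # daily mortality per unit pollutant; the national effect is their mean.
        city_effects = rng.normal(loc=0.5, scale=0.8, size=108)

        n, b, reps = len(city_effects), 60, 2000
        est = city_effects.mean()

        # Subsample without replacement, recompute the estimate, and rescale
        # deviations so their spread mimics the full-sample variability.
        replicates = np.array([
            rng.choice(city_effects, size=b, replace=False).mean()
            for _ in range(reps)
        ])
        dev = np.sqrt(b / n) * (replicates - est)
        lo, hi = est - np.percentile(dev, 97.5), est - np.percentile(dev, 2.5)
        print(f"national effect ~ {est:.3f}%, 95% bounds ({lo:.3f}, {hi:.3f})")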

  17. The analysis sensitivity to tropical winds from the Global Weather Experiment

    NASA Technical Reports Server (NTRS)

    Paegle, J.; Paegle, J. N.; Baker, W. E.

    1986-01-01

    The global scale divergent and rotational flow components of the Global Weather Experiment (GWE) are diagnosed from three different analyses of the data. The rotational flow shows closer agreement between the analyses than does the divergent flow. Although the major outflow and inflow centers are similarly placed in all analyses, the global kinetic energy of the divergent wind varies by about a factor of 2 between different analyses while the global kinetic energy of the rotational wind varies by only about 10 percent between the analyses. A series of real data assimilation experiments has been performed with the GLA general circulation model using different amounts of tropical wind data during the First Special Observing Period of the Global Weather Experiment. In exeriment 1, all available tropical wind data were used; in the second experiment, tropical wind data were suppressed; while, in the third and fourth experiments, only tropical wind data with westerly and easterly components, respectively, were assimilated. The rotational wind appears to be more sensitive to the presence or absence of tropical wind data than the divergent wind. It appears that the model, given only extratropical observations, generates excessively strong upper tropospheric westerlies. These biases are sufficiently pronounced to amplify the globally integrated rotational flow kinetic energy by about 10 percent and the global divergent flow kinetic energy by about a factor of 2. Including only easterly wind data in the tropics is more effective in controlling the model error than including only westerly wind data. This conclusion is especially noteworthy because approximately twice as many upper tropospheric westerly winds were available in these cases as easterly winds.

  18. Imaging for Appendicitis: Should Radiation-induced Cancer Risks Affect Modality Selection?

    PubMed Central

    Kiatpongsan, Sorapop; Meng, Lesley; Eisenberg, Jonathan D.; Herring, Maurice; Avery, Laura L.; Kong, Chung Yin

    2014-01-01

    Purpose: To compare life expectancy (LE) losses attributable to three imaging strategies for appendicitis in adults—computed tomography (CT), ultrasonography (US) followed by CT for negative or indeterminate US results, and magnetic resonance (MR) imaging—by using a decision-analytic model. Materials and Methods: In this model, for each imaging strategy, LE losses for 20-, 40-, and 65-year-old men and women were computed as a function of five key variables: baseline cohort LE, test performance, surgical mortality, risk of death from delayed diagnosis (missed appendicitis), and LE loss attributable to radiation-induced cancer death. Appendicitis prevalence, test performance, mortality rates from surgery and missed appendicitis, and radiation doses from CT were elicited from the published literature and institutional data. LE loss attributable to radiation exposure was projected by using a separate organ-specific model that accounted for anatomic coverage during a typical abdominopelvic CT examination. One- and two-way sensitivity analyses were performed to evaluate effects of model input variability on results. Results: Outcomes across imaging strategies differed minimally—for example, for 20-year-old men, corresponding LE losses were 5.8 days (MR imaging), 6.8 days (combined US and CT), and 8.2 days (CT). This order was sensitive to differences in test performance but was insensitive to variation in radiation-induced cancer deaths. For example, in the same cohort, MR imaging sensitivity had to be 91% at minimum (if specificity were 100%), and MR imaging specificity had to be 62% at minimum (if sensitivity were 100%) to incur the least LE loss. Conversely, LE loss attributable to radiation exposure would need to decrease by 74-fold for combined US and CT, instead of MR imaging, to incur the least LE loss. Conclusion: The specific imaging strategy used to diagnose appendicitis minimally affects outcomes. Paradigm shifts to MR imaging owing to concerns over radiation should be considered only if MR imaging test performance is very high. © RSNA, 2014 PMID:24988435
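
    The expected-value bookkeeping behind such a decision-analytic comparison can be sketched as follows. Every number here is an illustrative placeholder rather than a model input from the study, and the real model further stratified by age, sex and organ-specific radiation risk.

        # Highly simplified expected life-expectancy (LE) loss, in days,
        # for one hypothetical imaging strategy.
        prevalence = 0.30          # P(appendicitis) among imaged patients
        sensitivity = 0.94         # assumed test performance
        specificity = 0.95
        le_loss_surgery = 1.5      # LE loss from appendectomy mortality risk
        le_loss_missed = 60.0      # LE loss from a delayed/missed diagnosis
        le_loss_radiation = 2.0    # LE loss from radiation-induced cancer risk

        p_tp = prevalence * sensitivity
        p_fn = prevalence * (1 - sensitivity)
        p_fp = (1 - prevalence) * (1 - specificity)

        expected_loss = (
            (p_tp + p_fp) * le_loss_surgery   # all operated patients
            + p_fn * le_loss_missed           # missed appendicitis
            + le_loss_radiation               # applies to everyone imaged with CT
        )
        print(f"expected LE loss ~ {expected_loss:.2f} days")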

  19. Pharmacogenetics of clozapine response and induced weight gain: A comprehensive review and meta-analysis.

    PubMed

    Gressier, Florence; Porcelli, Stefano; Calati, Raffaella; Serretti, Alessandro

    2016-02-01

    Clozapine (CLZ) is the prototype atypical antipsychotic and it has many advantages over other antipsychotic drugs. Several data suggest that both CLZ response and induced weight gain are strongly determined by genetic variability. However, results remain mainly inconclusive. We aimed to review the literature on pharmacogenetic studies of CLZ efficacy, focusing on pharmacodynamic genes. Further, we performed meta-analyses on response when at least three studies for each polymorphism were available. Sensitivity analyses were conducted on the Caucasian population when feasible. An electronic literature search was performed to identify pertinent studies published until May 2014 using the PubMed, ISI Web of Knowledge and PsycINFO databases. For meta-analyses, data were entered and analyzed through RevMan version 5.2 using a random-effect model. Our literature search yielded 9266 articles on CLZ; among these, we identified 59 pertinent pharmacogenetic studies. Genotype data were retrieved for 14 polymorphisms in 9 genes. Among these, we had data available from at least three independent samples for 8 SNPs in 6 genes to perform meta-analyses: DRD2 rs1799732, DRD3 rs6280, HTR2A rs6313, rs6311, rs6314, HTR2C rs6318, HTR3A rs1062613, TNFa rs1800629. Although the literature review provided conflicting results, in the meta-analyses three genetic variants within serotonin genes were associated with CLZ response: rs6313 and rs6314 within the HTR2A gene and rs1062613 within the HTR3A gene. On the other hand, no clear finding emerged for CLZ-induced weight gain. Our results suggest a possible serotonergic modulation of CLZ clinical response. Copyright © 2015 Elsevier B.V. and ECNP. All rights reserved.
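
    A random-effects pooling of the kind performed in RevMan can be sketched with the DerSimonian-Laird estimator. The three study effects below are hypothetical log odds ratios, not data from the review.

        import numpy as np

        # Hypothetical per-study log odds ratios and within-study variances
        # for one polymorphism versus clozapine response (three samples).
        y = np.array([0.42, 0.15, 0.58])
        v = np.array([0.04, 0.09, 0.06])

        # DerSimonian-Laird random-effects pooling.
        w_fixed = 1.0 / v
        y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
        q = np.sum(w_fixed * (y - y_fixed) ** 2)              # Cochran's Q
        c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)               # between-study variance

        w = 1.0 / (v + tau2)
        pooled = np.sum(w * y) / np.sum(w)
        se = np.sqrt(1.0 / np.sum(w))
        print(f"pooled log OR = {pooled:.3f} "
              f"(95% CI {pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f})")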

  20. Evaluation of the BD Max Cdiff assay for the detection of toxigenic Clostridium difficile in human stool specimens.

    PubMed

    Putsathit, Papanin; Morgan, Justin; Bradford, Damien; Engelhardt, Nelly; Riley, Thomas V

    2015-02-01

    The Becton Dickinson (BD) PCR-based GeneOhm Cdiff assay has demonstrated high sensitivity and specificity for detecting Clostridium difficile. Recently, the BD Max platform, using the same principles as BD GeneOhm, has become available in Australia. This study aimed to investigate the sensitivity and specificity of the BD Max Cdiff assay for the detection of toxigenic C. difficile in an Australian setting. Between December 2013 and January 2014, 406 stool specimens from 349 patients were analysed with the BD Max Cdiff assay. Direct and enrichment toxigenic culture were performed on bioMérieux ChromID C. difficile agar as the reference method. Isolates from specimens with discrepant results were further analysed with an in-house PCR to detect the presence of toxin genes. The overall prevalence of toxigenic C. difficile was 7.2%. Concordance between the BD Max assay and enrichment culture was 98.5%. The sensitivity, specificity, positive predictive value and negative predictive value for the BD Max Cdiff assay were 95.5%, 99.0%, 87.5% and 99.7%, respectively, when compared to direct culture, and 91.7%, 99.0%, 88.0% and 99.4%, respectively, when compared to enrichment culture. The new BD Max Cdiff assay appeared to be an excellent platform for rapid and accurate detection of toxigenic C. difficile.
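
    The four reported performance figures follow directly from a 2x2 table of assay results against the reference culture. A minimal sketch, with hypothetical counts chosen only to land near the reported percentages:

        def diagnostic_metrics(tp, fp, fn, tn):
            """Sensitivity, specificity, PPV and NPV from a 2x2 table of
            assay results versus the reference method (toxigenic culture)."""
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
            }

        # Illustrative counts only; the study reports the derived
        # percentages, not this exact table.
        print(diagnostic_metrics(tp=22, fp=3, fn=2, tn=379))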

  1. Cost-effectiveness analysis of cochlear dose reduction by proton beam therapy for medulloblastoma in childhood.

    PubMed

    Hirano, Emi; Fuji, Hiroshi; Onoe, Tsuyoshi; Kumar, Vinay; Shirato, Hiroki; Kawabuchi, Koichi

    2014-03-01

    The aim of this study is to evaluate the cost-effectiveness of proton beam therapy with cochlear dose reduction compared with conventional X-ray radiotherapy for medulloblastoma in childhood. We developed a Markov model to describe the health states of 6-year-old children with medulloblastoma after treatment with proton or X-ray radiotherapy. The risks of hearing loss were calculated from the cochlear dose for each treatment. Three health-related quality of life (HRQOL) measures, EQ-5D, HUI3 and SF-6D, were used for estimation of quality-adjusted life years (QALYs). The incremental cost-effectiveness ratio (ICER) for proton beam therapy compared with X-ray radiotherapy was calculated for each HRQOL measure. Sensitivity analyses were performed to model uncertainty in these parameters. The ICERs for EQ-5D, HUI3 and SF-6D were $21,716/QALY, $11,773/QALY and $20,150/QALY, respectively. One-way sensitivity analyses found that the results were sensitive to the discount rate, the risk of hearing loss after proton therapy, and the costs of proton irradiation. Cost-effectiveness acceptability curve analysis revealed a 99% probability of proton therapy being cost effective at a societal willingness-to-pay value. Proton beam therapy with cochlear dose reduction improves health outcomes at a cost that is within the acceptable cost-effectiveness range from the payer's standpoint.
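
    The core ICER arithmetic, together with a one-way sensitivity scan of a single input, can be sketched as below. All costs and QALY values are illustrative placeholders, not outputs of the study's Markov model.

        # Minimal ICER sketch with a one-way sensitivity scan.
        def icer(cost_new, qaly_new, cost_old, qaly_old):
            """Incremental cost per QALY gained for the new strategy."""
            return (cost_new - cost_old) / (qaly_new - qaly_old)

        base = icer(cost_new=60000, qaly_new=14.0, cost_old=35000, qaly_old=12.8)
        print(f"base-case ICER = ${base:,.0f}/QALY")

        # One-way sensitivity: vary proton irradiation cost, hold the rest fixed.
        for proton_cost in (45000, 60000, 75000):
            val = icer(proton_cost, 14.0, 35000, 12.8)
            print(f"proton cost ${proton_cost:,}: ICER = ${val:,.0f}/QALY")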

  2. The sensitivity of the ESA DELTA model

    NASA Astrophysics Data System (ADS)

    Martin, C.; Walker, R.; Klinkrad, H.

    Long-term debris environment models play a vital role in furthering our understanding of the future debris environment, and in aiding the determination of a strategy to preserve the Earth orbital environment for future use. By their very nature these models have to make certain assumptions to enable informative future projections to be made. Examples of these assumptions include the projection of future traffic, including launch and explosion rates, and the methodology used to simulate break-up events. To ensure a sound basis for future projections, and consequently for assessing the effectiveness of various mitigation measures, it is essential that the sensitivity of these models to variations in key assumptions is examined. The DELTA (Debris Environment Long Term Analysis) model, developed by QinetiQ for the European Space Agency, allows the future projection of the debris environment throughout Earth orbit. Extensive analyses with this model have been performed under the auspices of the ESA Space Debris Mitigation Handbook and following the recent upgrade of the model to DELTA 3.0. This paper draws on these analyses to present the sensitivity of the DELTA model to changes in key model parameters and assumptions. Specifically the paper will address the variation in future traffic rates, including the deployment of satellite constellations, and the variation in the break-up model and criteria used to simulate future explosion and collision events.

  3. A review of 241 subjects who were patch tested twice: could fragrance mix I cause active sensitization?

    PubMed

    White, J M L; McFadden, J P; White, I R

    2008-03-01

    Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. To perform a retrospective analysis of the results of 241 subjects who were patch tested twice in a monocentre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended Baseline Series of contact allergens (European) was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significant, increased rate of positivity on the second reading compared with the first (P=0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances with a wide interval between both tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women in the age range 40-60 years. We suspect that active sensitization to fragrance mix I may occur. Similar published analysis from another large group using standard methodology supports our data.

  4. Adaptive statistical iterative reconstruction and Veo: assessment of image quality and diagnostic performance in CT colonography at various radiation doses.

    PubMed

    Yoon, Min A; Kim, Se Hyung; Lee, Jeong Min; Woo, Hyoun Sik; Lee, Eun Sun; Ahn, Se Jin; Han, Joon Koo

    2012-01-01

    To evaluate the diagnostic performance of computed tomography (CT) colonography (CTC) reconstructed with different levels of adaptive statistical iterative reconstruction (ASiR, GE Healthcare) and Veo (model-based iterative reconstruction, GE Healthcare) at various tube currents in detection of polyps in porcine colon phantoms. Five porcine colon phantoms with 46 simulated polyps were scanned at different radiation doses (10, 30, and 50 mAs) and were reconstructed using filtered back projection (FBP), ASiR (20%, 40%, and 60%) and Veo. Eleven data sets for each phantom (10-mAs FBP, 10-mAs 20% ASiR, 10-mAs 40% ASiR, 10-mAs 60% ASiR, 10-mAs Veo, 30-mAs FBP, 30-mAs 20% ASiR, 30-mAs 40% ASiR, 30-mAs 60% ASiR, 30-mAs Veo, and 50-mAs FBP) yielded a total of 55 data sets. Polyp detection sensitivity and confidence level of 2 independent observers were evaluated with the McNemar test, the Fisher exact test, and receiver operating characteristic curve analysis. Comparative analyses of overall image quality score, measured image noise, and interpretation time were also performed. Per-polyp detection sensitivities and specificities were highest for 10-mAs Veo, 30-mAs FBP, 30-mAs 60% ASiR, and 50-mAs FBP (sensitivity, 100%; specificity, 100%). The area-under-the-curve values for the overall performance of each data set were also highest (1.000) at 50-mAs FBP, 30-mAs FBP, 30-mAs 60% ASiR, and 10-mAs Veo. Images reconstructed with ASiR showed statistically significant improvement in per-polyp detection sensitivity as the percentage level of ASiR increased (10-mAs FBP vs 10-mAs 20% ASiR, P = 0.011; 10-mAs FBP vs 10-mAs 40% ASiR, P < 0.001; 10-mAs FBP vs 10-mAs 60% ASiR, P < 0.001; 10-mAs 20% ASiR vs 40% ASiR, P = 0.034). Overall image quality score was highest at 30-mAs Veo and 50-mAs FBP. The quantitative measurement of the image noise was lowest at 30-mAs Veo and second lowest at 10-mAs Veo. There was a trend of decreasing interpretation time as the percentage level of ASiR increased and as ASiR or Veo was used instead of FBP; however, differences in overall image quality score, measured image noise, and interpretation time did not reach statistical significance. ASiR and Veo showed improved diagnostic performance, with excellent sensitivity and specificity, less image noise, and good image quality compared with FBP reconstruction at the same radiation dose. Our study confirmed the feasibility of low-dose CTC with iterative reconstruction as a promising screening tool with excellent diagnostic performance similar to that of standard-dose CTC with FBP.

  5. Cost-Utility Analysis: Sartorius Flap versus Negative Pressure Therapy for Infected Vascular Groin Graft Management.

    PubMed

    Chatterjee, Abhishek; Macarios, David; Griffin, Leah; Kosowski, Tomasz; Pyfer, Bryan J; Offodile, Anaeze C; Driscoll, Daniel; Maddali, Sirish; Attwood, John

    2015-11-01

    Sartorius flap coverage and adjunctive negative pressure wound therapy (NPWT) have been described in managing infected vascular groin grafts with varying cost and clinical success. We performed a cost-utility analysis comparing sartorius flap with NPWT in managing an infected vascular groin graft. A literature review compiling outcomes for sartorius flap and NPWT interventions was conducted from peer-reviewed journals in MEDLINE (PubMed) and EMBASE. Utility scores were derived from expert opinion and used to estimate quality-adjusted life years (QALYs). Medicare current procedure terminology and diagnosis-related groups codes were used to assess the costs for successful graft salvage with the associated complications. Incremental cost-effectiveness was assessed at $50,000/QALY, and both univariate and probabilistic sensitivity analyses were conducted to assess robustness of the conclusions. Thirty-two studies were used pooling 384 patients (234 sartorius flaps and 150 NPWT). NPWT had better clinical outcomes (86.7% success rate, 0.9% minor complication rate, and 13.3% major complication rate) than sartorius flap (81.6% success rate, 8.0% minor complication rate, and 18.4% major complication rate). NPWT was less costly ($12,366 versus $23,516) and slightly more effective (12.06 QALY versus 12.05 QALY) compared with sartorius flap. Sensitivity analyses confirmed the robustness of the base case findings; NPWT was either cost-effective at $50,000/QALY or dominated sartorius flap in 81.6% of all probabilistic sensitivity analyses. In our cost-utility analysis, use of adjunctive NPWT, along with debridement and antibiotic treatment, for managing infected vascular groin graft wounds was found to be a more cost-effective option when compared with sartorius flaps.

  6. Cost-Utility Analysis: Sartorius Flap versus Negative Pressure Therapy for Infected Vascular Groin Graft Management

    PubMed Central

    Macarios, David; Griffin, Leah; Kosowski, Tomasz; Pyfer, Bryan J.; Offodile, Anaeze C.; Driscoll, Daniel; Maddali, Sirish; Attwood, John

    2015-01-01

    Background: Sartorius flap coverage and adjunctive negative pressure wound therapy (NPWT) have been described in managing infected vascular groin grafts with varying cost and clinical success. We performed a cost–utility analysis comparing sartorius flap with NPWT in managing an infected vascular groin graft. Methods: A literature review compiling outcomes for sartorius flap and NPWT interventions was conducted from peer-reviewed journals in MEDLINE (PubMed) and EMBASE. Utility scores were derived from expert opinion and used to estimate quality-adjusted life years (QALYs). Medicare current procedure terminology and diagnosis-related groups codes were used to assess the costs for successful graft salvage with the associated complications. Incremental cost-effectiveness was assessed at $50,000/QALY, and both univariate and probabilistic sensitivity analyses were conducted to assess robustness of the conclusions. Results: Thirty-two studies were used pooling 384 patients (234 sartorius flaps and 150 NPWT). NPWT had better clinical outcomes (86.7% success rate, 0.9% minor complication rate, and 13.3% major complication rate) than sartorius flap (81.6% success rate, 8.0% minor complication rate, and 18.4% major complication rate). NPWT was less costly ($12,366 versus $23,516) and slightly more effective (12.06 QALY versus 12.05 QALY) compared with sartorius flap. Sensitivity analyses confirmed the robustness of the base case findings; NPWT was either cost-effective at $50,000/QALY or dominated sartorius flap in 81.6% of all probabilistic sensitivity analyses. Conclusion: In our cost–utility analysis, use of adjunctive NPWT, along with debridement and antibiotic treatment, for managing infected vascular groin graft wounds was found to be a more cost-effective option when compared with sartorius flaps. PMID:26893991

  7. Segregation of face sensitive areas within the fusiform gyrus using global signal regression? A study on amygdala resting-state functional connectivity.

    PubMed

    Kruschwitz, Johann D; Meyer-Lindenberg, Andreas; Veer, Ilya M; Wackerhagen, Carolin; Erk, Susanne; Mohnke, Sebastian; Pöhland, Lydia; Haddad, Leila; Grimm, Oliver; Tost, Heike; Romanczuk-Seiferth, Nina; Heinz, Andreas; Walter, Martin; Walter, Henrik

    2015-10-01

    The application of global signal regression (GSR) to resting-state functional magnetic resonance imaging data and its usefulness is a widely discussed topic. In this article, we report an observation of segregated distribution of amygdala resting-state functional connectivity (rs-FC) within the fusiform gyrus (FFG) as an effect of GSR in a multi-center-sample of 276 healthy subjects. Specifically, we observed that amygdala rs-FC was distributed within the FFG as distinct anterior versus posterior clusters delineated by positive versus negative rs-FC polarity when GSR was performed. To characterize this effect in more detail, post hoc analyses revealed the following: first, direct overlays of task-functional magnetic resonance imaging derived face sensitive areas and clusters of positive versus negative amygdala rs-FC showed that the positive amygdala rs-FC cluster corresponded best with the fusiform face area, whereas the occipital face area corresponded to the negative amygdala rs-FC cluster. Second, as expected from a hierarchical face perception model, these amygdala rs-FC defined clusters showed differential rs-FC with other regions of the visual stream. Third, dynamic connectivity analyses revealed that these amygdala rs-FC defined clusters also differed in their rs-FC variance across time to the amygdala. Furthermore, subsample analyses of three independent research sites confirmed reliability of the effect of GSR, as revealed by similar patterns of distinct amygdala rs-FC polarity within the FFG. In this article, we discuss the potential of GSR to segregate face sensitive areas within the FFG and furthermore discuss how our results may relate to the functional organization of the face-perception circuit. © 2015 Wiley Periodicals, Inc.

  8. Biogenic volatile organic compound analyses by PTR-TOF-MS: Calibration, humidity effect and reduced electric field dependency.

    PubMed

    Pang, Xiaobing

    2015-06-01

    Green leaf volatiles (GLVs) emitted by plants after stress or damage induction are a major part of biogenic volatile organic compounds (BVOCs). Proton transfer reaction time-of-flight mass spectrometry (PTR-TOF-MS) is a high-resolution and sensitive technique for in situ GLV analyses, while its performance is dramatically influenced by humidity, electric field, etc. In this study the influence of gas humidity and the effect of reduced field (E/N) were examined in addition to measuring calibration curves for the GLVs. Calibration curves measured for seven of the GLVs in dry air were linear, with sensitivities ranging from 5 to 10 ncps/ppbv (normalized counts per second/parts per billion by volume). The sensitivities for most GLV analyses were found to increase by between 20% and 35% when the humidity of the sample gas was raised from 0% to 70% relative humidity (RH) at 21°C, with the exception of (E)-2-hexenol. Product ion branching ratios were also affected by humidity, with the relative abundance of the protonated molecular ions and higher mass fragment ions increasing with humidity. The effect of reduced field (E/N) on the fragmentation of GLVs was examined in the drift tube of the PTR-TOF-MS. The structurally similar GLVs are acutely susceptible to fragmentation following ionization and the fragmentation patterns are highly dependent on E/N. Overall the measured fragmentation patterns contain sufficient information to permit at least partial separation and identification of the isomeric GLVs by looking at differences in their fragmentation patterns at high and low E/N. Copyright © 2015. Published by Elsevier B.V.

  9. Economic Evaluation of Apixaban for the Prevention of Stroke in Non-Valvular Atrial Fibrillation in the Netherlands

    PubMed Central

    Stevanović, Jelena; Pompen, Marjolein; Le, Hoa H.; Rozenbaum, Mark H.; Tieleman, Robert G.; Postma, Maarten J.

    2014-01-01

    Background: Stroke prevention is the main goal of treating patients with atrial fibrillation (AF). Vitamin-K antagonists (VKAs) present an effective treatment in stroke prevention; however, the risk of bleeding and the requirement for regular coagulation monitoring are limiting their use. Apixaban is a novel oral anticoagulant associated with significantly lower hazard rates for stroke, major bleedings and treatment discontinuations, compared to VKAs. Objective: To estimate the cost-effectiveness of apixaban compared to VKAs in non-valvular AF patients in the Netherlands. Methods: A previously published lifetime Markov model using efficacy data from the ARISTOTLE and the AVERROES trials was modified to reflect the use of oral anticoagulants in the Netherlands. Dutch specific costs, baseline population stroke risk and coagulation monitoring levels were incorporated. Univariate, probabilistic sensitivity and scenario analyses on the impact of different coagulation monitoring levels were performed on the incremental cost-effectiveness ratio (ICER). Results: Treatment with apixaban compared to VKAs resulted in an ICER of €10,576 per quality adjusted life year (QALY). Those findings correspond with the lower number of strokes and bleedings associated with the use of apixaban compared to VKAs. Univariate sensitivity analyses revealed model sensitivity to the absolute stroke risk with apixaban and treatment discontinuation risks with apixaban and VKAs. The probability that apixaban is cost-effective at a willingness-to-pay threshold of €20,000/QALY was 68%. Results of the scenario analyses on the impact of different coagulation monitoring levels were quite robust. Conclusions: In patients with non-valvular AF, apixaban is likely to be a cost-effective alternative to VKAs in the Netherlands. PMID:25093723

  10. Cost-effectiveness of pharmacist-participated warfarin therapy management in Thailand.

    PubMed

    Saokaew, Surasak; Permsuwan, Unchalee; Chaiyakunapruk, Nathorn; Nathisuwan, Surakit; Sukonthasarn, Apichard; Jeanpeerapong, Napawan

    2013-10-01

    Although pharmacist-participated warfarin therapy management (PWTM) is well established, economic evaluation of PWTM is still lacking, particularly in the Asia-Pacific region. The objective of this study was to estimate the cost-effectiveness of PWTM in Thailand using local data where available. A Markov model was used to compare lifetime costs and quality-adjusted life years (QALYs) accrued to patients receiving warfarin therapy through PWTM or usual care (UC). The model was populated with relevant information from both health care system and societal perspectives. Input data were obtained from the literature and database analyses. Incremental cost-effectiveness ratios (ICERs) were presented as year 2012 values. A base-case analysis was performed for patients aged 45 years. One-way and probabilistic sensitivity analyses were conducted to determine the robustness of the findings. From the societal perspective, PWTM and UC result in 39.5 and 38.7 QALYs, respectively. Thus, PWTM increases QALYs by 0.79 and increases costs by 92,491 THB (3,083 USD) compared with UC (ICER 116,468 THB [3,882.3 USD] per QALY gained). From the health care system perspective, PWTM also gains 0.79 QALYs and increases costs by 92,788 THB (3,093 USD) compared with UC (ICER 116,842 THB [3,894.7 USD] per QALY gained). Thus, PWTM was cost-effective compared with usual care, assuming a willingness-to-pay (WTP) threshold of 150,000 THB/QALY. Results were sensitive to the discount rate and the cost of clinic set-up. Our finding suggests that PWTM is a cost-effective intervention. Policy-makers may consider our finding as part of the information in their decision-making for implementing this strategy into the healthcare benefit package. Further updates are needed when additional data become available. © 2013.

  11. Financial analysis of cardiovascular wellness program provided to self-insured company from pharmaceutical care provider's perspective.

    PubMed

    Wilson, Justin B; Osterhaus, Matt C; Farris, Karen B; Doucette, William R; Currie, Jay D; Bullock, Tammy; Kumbera, Patty

    2005-01-01

    To perform a retrospective financial analysis on the implementation of a self-insured company's wellness program from the pharmaceutical care provider's perspective and conduct sensitivity analyses to estimate costs versus revenues for pharmacies without resident pharmacists, program implementation for a second employer, the second year of the program, and a range of pharmacist wages. Cost-benefit and sensitivity analyses. Self-insured employer with headquarters in Canton, N.C. 36 employees at facility in Clinton, Iowa. Pharmacist-provided cardiovascular wellness program. Costs and revenues collected from pharmacy records, including pharmacy purchasing records, billing records, and pharmacists' time estimates. All costs and revenues were calculated for the development and first year of the intervention program. Costs included initial and follow-up screening supplies, office supplies, screening/group presentation time, service provision time, documentation/preparation time, travel expenses, claims submission time, and administrative fees. Revenues included initial screening revenues, follow-up screening revenues, group session revenues, and Heart Smart program revenues. For the development and first year of Heart Smart, net benefit to the pharmacy (revenues minus costs) amounted to $2,413. All sensitivity analyses showed a net benefit. For pharmacies without a resident pharmacist, the net benefit was $106; for Heart Smart in a second employer, the net benefit was $6,024; for the second year, the projected net benefit was $6,844; factoring in a lower pharmacist salary, the net benefit was $2,905; and for a higher pharmacist salary, the net benefit was $1,265. For the development and first year of Heart Smart, the revenues of the wellness program in a self-insured company outweighed the costs.

  12. Cost Effectiveness of Influenza Vaccine Choices in Children Aged 2–8 Years in the U.S.

    PubMed Central

    Smith, Kenneth J.; Raviotta, Jonathan M.; DePasse, Jay V.; Brown, Shawn T.; Shim, Eunha; Nowalk, Mary Patricia; Zimmerman, Richard K.

    2015-01-01

    Introduction: Prior evidence found live attenuated influenza vaccine (LAIV) more effective than inactivated influenza vaccine (IIV) in children aged 2–8 years, leading CDC in 2014 to prefer LAIV use in this group. However, since 2013, LAIV has not proven superior, leading CDC in 2015 to rescind their LAIV preference statement. Here, the cost effectiveness of preferred LAIV use compared with IIV in children aged 2–8 years is estimated. Methods: A Markov model estimated vaccination strategy cost effectiveness in terms of cost per quality-adjusted life year gained. Base case assumptions were: equal vaccine uptake, IIV use when LAIV was not indicated (in 11.7% of the cohort), and no indirect vaccination effects. Sensitivity analyses included estimates of indirect effects from both equation- and agent-based models. Analyses were performed in 2014–2015. Results: Using prior effectiveness data in children aged 2–8 years (LAIV=83%, IIV=64%), preferred LAIV use was less costly and more effective than IIV (dominant), with results sensitive only to LAIV and IIV effectiveness variation. Using 2014–2015 U.S. effectiveness data (LAIV=0%, IIV=15%), IIV was dominant. In two-way sensitivity analyses, LAIV use was cost saving over the entire range of IIV effectiveness (0%–81%) when absolute LAIV effectiveness was >7.1% higher than IIV, but never cost saving when absolute LAIV effectiveness was <3.5% higher than IIV. Conclusions: Results support CDC’s decision to no longer prefer LAIV use and provide guidance on effectiveness differences between influenza vaccines that might lead to preferential LAIV recommendation for children aged 2–8 years. PMID:26868283
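
    A two-way sensitivity analysis of this kind scans two inputs jointly and maps the region where one strategy wins. The sketch below uses the abstract's reported 7.1-percentage-point threshold as a stand-in decision rule rather than re-running the underlying Markov model.

        import numpy as np

        # Scan LAIV and IIV effectiveness jointly and flag where preferred
        # LAIV use would be cost saving under the stand-in threshold rule.
        laiv_eff = np.linspace(0.0, 1.0, 101)
        iiv_eff = np.linspace(0.0, 0.81, 82)
        L, I = np.meshgrid(laiv_eff, iiv_eff, indexing="ij")

        # Reported pattern: LAIV cost saving when its absolute effectiveness
        # exceeds IIV by more than 7.1 percentage points.
        laiv_cost_saving = (L - I) > 0.071
        print(f"LAIV cost saving in {laiv_cost_saving.mean():.0%} of the grid")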

  13. VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry

    PubMed Central

    Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.

    2015-01-01

    Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
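
    A volumetric index is, in essence, an integral of the sensitivity surface over visual-field area. A minimal sketch, assuming a synthetic Gaussian hill of vision sampled on a uniform grid (VFMA works from measured perimetry locations instead):

        import numpy as np

        # Synthetic hill-of-vision surface: sensitivity (dB) on a uniform
        # grid of visual-field locations (degrees); the shape is a toy model.
        x = np.linspace(-30, 30, 61)
        y = np.linspace(-30, 30, 61)
        X, Y = np.meshgrid(x, y, indexing="ij")
        sens = 35.0 * np.exp(-(X**2 + Y**2) / (2 * 15.0**2))

        # Volumetric index: integrate sensitivity over field area (dB*deg^2)
        # with a simple Riemann sum over the uniform grid cells.
        dx, dy = x[1] - x[0], y[1] - y[0]
        volume = sens.sum() * dx * dy
        print(f"HOV volume ~ {volume:,.0f} dB*deg^2")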

  14. Sensitivity analysis of a coupled hydrodynamic-vegetation model using the effectively subsampled quadratures method

    USGS Publications Warehouse

    Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis

    2017-01-01

    Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
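
    First-order Sobol' indices admit a compact Monte Carlo sketch via the pick-freeze estimator, shown below with a toy response standing in for COAWST. The Effective Quadratures approach replaces this brute-force sampling with a least-squares polynomial surrogate precisely to avoid such large simulation counts.

        import numpy as np

        rng = np.random.default_rng(1)

        def model(x):
            """Toy stand-in response (e.g., wave dissipation) as a function
            of scaled plant density, height and diameter; not the actual
            COAWST vegetation module."""
            return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

        d, n = 3, 20000
        A = rng.uniform(0, 1, (n, d))
        B = rng.uniform(0, 1, (n, d))

        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))

        # Saltelli pick-freeze estimator of first-order Sobol' indices.
        for i, name in enumerate(["density", "height", "diameter"]):
            ABi = A.copy()
            ABi[:, i] = B[:, i]
            s1 = np.mean(fB * (model(ABi) - fA)) / var
            print(f"S1[{name}] ~ {s1:.3f}")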

  15. When does mass screening for open neural tube defects in low-risk pregnancies result in cost savings?

    PubMed Central

    Tosi, L L; Detsky, A S; Roye, D P; Morden, M L

    1987-01-01

    Using a decision analysis model, we estimated the savings that might be derived from a mass prenatal screening program aimed at detecting open neural tube defects (NTDs) in low-risk pregnancies. Our baseline analysis showed that screening versus no screening could be expected to save approximately $8 per pregnancy given a cost of $7.50 for the maternal serum alpha-fetoprotein (MSAFP) test and a cost of $42,507 for hospital and rehabilitation services for the first 10 years of life for a child with spina bifida. When a more liberal estimate of the costs of caring for such a child was used, the savings with the screening program were more substantial. We performed extensive sensitivity analyses, which showed that the savings were somewhat sensitive to the cost of the MSAFP test and highly sensitive to the specificity (but not the sensitivity) of the test. A screening program for NTDs in low-risk pregnancies may result in substantial savings in direct health care costs if the screening protocol is followed rigorously and efficiently. PMID:2433011

  16. Sensitivity of Rayleigh wave ellipticity and implications for surface wave inversion

    NASA Astrophysics Data System (ADS)

    Cercato, Michele

    2018-04-01

    The use of Rayleigh wave ellipticity has gained increasing popularity in recent years for investigating earth structures, especially for near-surface soil characterization. In spite of its widespread application, the sensitivity of the ellipticity function to the soil structure has rarely been explored in a comprehensive and systematic manner. To this end, a new analytical method is presented for computing the sensitivity of Rayleigh wave ellipticity with respect to the structural parameters of a layered elastic half-space. This method takes advantage of the minor decomposition of the surface wave eigenproblem and is numerically stable at high frequency. This numerical procedure made it possible to retrieve the sensitivity for typical near-surface and crustal geological scenarios, pointing out the key parameters for ellipticity interpretation under different circumstances. On this basis, a thorough analysis is performed to assess how ellipticity data can efficiently complement surface wave dispersion information in a joint inversion algorithm. The results of synthetic and real-world examples are illustrated to analyse quantitatively the diagnostic potential of the ellipticity data with respect to the soil structure, focusing on the possible sources of misinterpretation in data inversion.

  17. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal polynomial kernel function with a Gaussian radial basis kernel function, so that it possesses both the global characteristics of the polynomial kernel and the local characteristics of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
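
    A mixed-kernel SVR meta-model can be sketched with scikit-learn, which accepts a callable kernel. The mixing weight rho, the toy data and the kernel parameters are assumptions for illustration, and the paper's analytic post-processing of SVR coefficients into Sobol indices is not reproduced here.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

        # Mixed kernel: convex combination of a (global) polynomial kernel
        # and a (local) Gaussian RBF kernel; rho is a hypothetical weight.
        rho = 0.3
        def mixed_kernel(X, Y):
            return (rho * polynomial_kernel(X, Y, degree=3)
                    + (1 - rho) * rbf_kernel(X, Y, gamma=0.5))

        # Toy samples standing in for an expensive model's inputs/outputs.
        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, (200, 3))
        y = X[:, 0] ** 2 + np.sin(np.pi * X[:, 1]) + 0.1 * X[:, 2]

        meta = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
        print("training R^2 =", round(meta.score(X, y), 3))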

  18. A survey of design methods for failure detection in dynamic systems

    NASA Technical Reports Server (NTRS)

    Willsky, A. S.

    1975-01-01

    A number of methods for the detection of abrupt changes (such as failures) in stochastic dynamical systems were surveyed. The class of linear systems was emphasized, but the basic concepts, if not the detailed analyses, carry over to other classes of systems. The methods surveyed range from the design of specific failure-sensitive filters, to the use of statistical tests on filter innovations, to the development of jump process formulations. Tradeoffs in complexity versus performance are discussed.

  19. Computer Program for Assessing the Economic Feasibility of Solar Energy for Single Family Residences and Light Commercial Applications

    NASA Technical Reports Server (NTRS)

    Forney, J. A.; Walker, D.; Lanier, M.

    1979-01-01

    The computer program SHCOST was used to perform economic analyses of operational test sites. The program allows consideration of the economic parameters which are important to the solar system user. A life cycle cost and cash flow comparison is made between a solar heating system and a conventional system. The program assists in sizing the solar heating system. A sensitivity study and plot capability allow the user to select the most cost effective system configuration.
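
    The life cycle cost comparison at the heart of such a program reduces to discounting two cash-flow streams. The sketch below uses illustrative figures, not SHCOST defaults.

        # Discounted life-cycle cost for a solar versus conventional heating
        # system; all figures are illustrative assumptions.
        def npv(cash_flows, rate):
            """Net present value of a stream of annual costs (t = 0, 1, ...)."""
            return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

        years, rate = 20, 0.06
        fuel_cost, escalation = 900.0, 0.08        # conventional annual fuel bill
        solar_capital, solar_aux = 8000.0, 250.0   # up-front cost + auxiliary energy

        conventional = [fuel_cost * (1 + escalation) ** t for t in range(years)]
        solar = [solar_capital] + [solar_aux * (1 + escalation) ** t
                                   for t in range(1, years)]

        print(f"conventional life-cycle cost: ${npv(conventional, rate):,.0f}")
        print(f"solar life-cycle cost:        ${npv(solar, rate):,.0f}")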

  20. Optics of retinal oil droplets: a model of light collection and polarization detection in the avian retina.

    PubMed

    Young, S R; Martin, G R

    1984-01-01

    A wave optical model was used to analyse the scattering properties of avian retinal oil droplets. Computations for the near field region showed that oil droplets perform significant light collection in cone photoreceptors and so enhance outer segment photon capture rates. Scattering by the oil droplet of the principal cone of a double cone pair, combined with accessory cone dichroic absorption under conditions of transverse illumination, may mediate avian polarization sensitivity.

  1. Wide field/planetary camera optics study. [for the large space telescope

    NASA Technical Reports Server (NTRS)

    1979-01-01

    Design feasibility of the baseline optical design concept was established for the wide field/planetary camera (WF/PC) and will be used with the space telescope (ST) to obtain high angular resolution astronomical information over a wide field. The design concept employs internal optics to relay the ST image to a CCD detector system. Optical design performance predictions, sensitivity and tolerance analyses, manufacturability of the optical components, and acceptance testing of the two mirror Cassegrain relays are discussed.

  2. Space shuttle navigation analysis

    NASA Technical Reports Server (NTRS)

    Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.

    1976-01-01

    A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.

  3. Easily constructed spectroelectrochemical cell for batch and flow injection analyses.

    PubMed

    Flowers, Paul A; Maynor, Margaret A; Owens, Donald E

    2002-02-01

    The design and performance of an easily constructed spectroelectrochemical cell suitable for batch and flow injection measurements are described. The cell is fabricated from a commercially available 5-mm quartz cuvette and employs 60 ppi reticulated vitreous carbon as the working electrode, resulting in a reasonable compromise between optical sensitivity and thin-layer electrochemical behavior. The spectroelectrochemical traits of the cell in both batch and flow modes were evaluated using aqueous ferricyanide and compare favorably to those reported previously for similar cells.

  4. MAG-EPA resolves lung inflammation in an allergic model of asthma.

    PubMed

    Morin, C; Fortin, S; Cantin, A M; Rousseau, É

    2013-09-01

    Asthma is a chronic disease characterized by airways hyperresponsiveness, inflammation and airways remodelling involving reversible bronchial obstruction. Omega-3 fatty acids and their derivatives are known to reduce inflammation in several tissues including lung. The effects of eicosapentaenoic acid monoacylglyceride (MAG-EPA), a newly synthesized EPA derivative, were determined on the resolution of lung inflammation and airway hyperresponsiveness in an in vivo model of allergic asthma. Ovalbumin (OVA)-sensitized guinea-pigs were treated or not with MAG-EPA administered per os. Isometric tension measurements, histological analyses, homogenate preparation for Western blot experiments or total RNA extraction for RT-PCR were performed to assess the effect of MAG-EPA treatments. Mechanical tension measurements revealed that oral MAG-EPA treatments reduced methacholine (MCh)-induced bronchial hyperresponsiveness in OVA-sensitized guinea-pigs. Moreover, MAG-EPA treatments also decreased Ca(2+) hypersensitivity of bronchial smooth muscle. Histological analyses and leucocyte counts in bronchoalveolar lavages revealed that oral MAG-EPA treatments led to less inflammatory cell recruitment in the lung of OVA-sensitized guinea-pigs when compared with lungs from control animals. Results also revealed a reduction in mucin production and MUC5AC expression level in OVA-sensitized animals treated with MAG-EPA. Following MAG-EPA treatments, the transcript levels of pro-inflammatory markers such as IL-5, eotaxin, IL-13 and IL-4 were markedly reduced. Moreover, per os MAG-EPA administrations reduced COX2 over-expression in OVA-sensitized animals. We demonstrate that MAG-EPA reduces airway hyperresponsiveness and lung inflammation in OVA-sensitized animals, a finding consistent with a decrease in IL-4, IL-5, IL-13, COX-2 and MUC5AC expression levels in the lung. The present data suggest that MAG-EPA represents a new potential therapeutic strategy for resolving inflammation in allergic asthma. © 2013 John Wiley & Sons Ltd.

  5. Self-perceived weather sensitivity and joint pain in older people with osteoarthritis in six European countries: results from the European Project on OSteoArthritis (EPOSA)

    PubMed Central

    2014-01-01

    Background: People with osteoarthritis (OA) frequently report that their joint pain is influenced by weather conditions. This study aimed to examine whether there are differences in perceived joint pain between older people with OA who reported to be weather-sensitive versus those who did not in six European countries with different climates and to identify characteristics of older persons with OA that are most predictive of perceived weather sensitivity. Methods: Baseline data from the European Project on OSteoArthritis (EPOSA) were used. ACR classification criteria were used to determine OA. Participants with OA were asked about their perception of weather as influencing their pain. Using a two-week follow-up pain calendar, average self-reported joint pain was assessed (range: 0 (no pain)-10 (greatest pain intensity)). Linear regression analyses, logistic regression analyses and an independent t-test were used. Analyses were adjusted for several confounders. Results: The majority of participants with OA (67.2%) perceived the weather as affecting their pain. Weather-sensitive participants reported more pain than non-weather-sensitive participants (M = 4.1, SD = 2.4 versus M = 3.1, SD = 2.4; p < 0.001). After adjusting for several confounding factors, the association between self-perceived weather sensitivity and joint pain remained present (B = 0.37, p = 0.03). Logistic regression analyses revealed that women and more anxious people were more likely to report weather sensitivity. Older people with OA from Southern Europe were more likely to indicate themselves as weather-sensitive persons than those from Northern Europe. Conclusions: Weather (in)stability may have a greater impact on joint structures and pain perception in people from Southern Europe. The results emphasize the importance of considering weather sensitivity in daily life of older people with OA and may help to identify weather-sensitive older people with OA. PMID:24597710

  6. Self-perceived weather sensitivity and joint pain in older people with osteoarthritis in six European countries: results from the European Project on OSteoArthritis (EPOSA).

    PubMed

    Timmermans, Erik J; van der Pas, Suzan; Schaap, Laura A; Sánchez-Martínez, Mercedes; Zambon, Sabina; Peter, Richard; Pedersen, Nancy L; Dennison, Elaine M; Denkinger, Michael; Castell, Maria Victoria; Siviero, Paola; Herbolsheimer, Florian; Edwards, Mark H; Otero, Angel; Deeg, Dorly J H

    2014-03-05

    People with osteoarthritis (OA) frequently report that their joint pain is influenced by weather conditions. This study aimed to examine whether there are differences in perceived joint pain between older people with OA who reported being weather-sensitive versus those who did not in six European countries with different climates, and to identify characteristics of older persons with OA that are most predictive of perceived weather sensitivity. Baseline data from the European Project on OSteoArthritis (EPOSA) were used. ACR classification criteria were used to determine OA. Participants with OA were asked about their perception of weather as influencing their pain. Using a two-week follow-up pain calendar, average self-reported joint pain was assessed (range: 0 (no pain) to 10 (greatest pain intensity)). Linear regression analyses, logistic regression analyses and an independent t-test were used. Analyses were adjusted for several confounders. The majority of participants with OA (67.2%) perceived the weather as affecting their pain. Weather-sensitive participants reported more pain than non-weather-sensitive participants (M = 4.1, SD = 2.4 versus M = 3.1, SD = 2.4; p < 0.001). After adjusting for several confounding factors, the association between self-perceived weather sensitivity and joint pain remained present (B = 0.37, p = 0.03). Logistic regression analyses revealed that women and more anxious people were more likely to report weather sensitivity. Older people with OA from Southern Europe were more likely to indicate themselves as weather-sensitive persons than those from Northern Europe. Weather (in)stability may have a greater impact on joint structures and pain perception in people from Southern Europe. The results emphasize the importance of considering weather sensitivity in the daily life of older people with OA and may help to identify weather-sensitive older people with OA.
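
    The adjusted association reported here (B = 0.37 after confounding adjustment) is the output of a linear model with confounders as covariates. A minimal sketch of that kind of analysis, assuming hypothetical variable names (pain, weather_sensitive, age, sex, anxiety) and simulated data rather than the EPOSA dataset:

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 500
      df = pd.DataFrame({
          "weather_sensitive": rng.integers(0, 2, n),
          "age": rng.normal(74, 5, n),
          "sex": rng.integers(0, 2, n),
          "anxiety": rng.normal(0, 1, n),
      })
      # simulated 0-10 pain score with a modest weather-sensitivity effect
      df["pain"] = (3.1 + 0.37 * df["weather_sensitive"] + 0.2 * df["anxiety"]
                    + rng.normal(0, 2.4, n)).clip(0, 10)

      # confounder-adjusted association between weather sensitivity and pain
      model = smf.ols("pain ~ weather_sensitive + age + sex + anxiety", data=df).fit()
      print(model.params["weather_sensitive"], model.pvalues["weather_sensitive"])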

  7. Sensitivity analysis of conservative and reactive stream transient storage models applied to field data from multiple-reach experiments

    USGS Publications Warehouse

    Gooseff, M.N.; Bencala, K.E.; Scott, D.T.; Runkel, R.L.; McKnight, Diane M.

    2005-01-01

    The transient storage model (TSM) has been widely used in studies of stream solute transport and fate, with an increasing emphasis on reactive solute transport. In this study we perform sensitivity analyses of a conservative TSM and two different reactive solute transport models (RSTM), one that includes first-order decay in the stream and the storage zone, and a second that considers sorption of a reactive solute on streambed sediments. Two previously analyzed data sets are examined with a focus on the reliability of these RSTMs in characterizing stream and storage zone solute reactions. Sensitivities of simulations to parameters within and among reaches, parameter coefficients of variation, and correlation coefficients are computed and analyzed. Our results indicate that (1) simulated values have the greatest sensitivity to parameters within the same reach, (2) simulated values are also sensitive to parameters in reaches immediately upstream and downstream (inter-reach sensitivity), (3) simulated values have decreasing sensitivity to parameters in reaches farther downstream, and (4) in-stream reactive solute data provide adequate data to resolve effective storage zone reaction parameters, given the model formulations. Simulations of reactive solutes are shown to be equally sensitive to transport parameters and effective reaction parameters of the model, evidence of the control of physical transport on reactive solute dynamics. Similar to conservative transport analysis, reactive solute simulations appear to be most sensitive to data collected during the rising and falling limb of the concentration breakthrough curve. © 2005 Elsevier Ltd. All rights reserved.
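
    A minimal sketch of how such parameter sensitivities can be computed by central finite differences, using a deliberately simplified single-cell storage-zone model (first-order decay in both zones) as a stand-in for the full advection-dispersion TSM; the parameter values are illustrative and this is not the study's simulation code:

      import numpy as np
      from scipy.integrate import solve_ivp

      def tsm(t, y, q, alpha, ratio, lam, lam_s):
          # q: flushing rate (1/s); alpha: storage exchange rate (1/s)
          # ratio: A/As cross-section ratio; lam, lam_s: decay rates (1/s)
          c, cs = y
          c_in = 1.0 if t < 600 else 0.0            # 10-minute pulse injection
          dc = q * (c_in - c) + alpha * (cs - c) - lam * c
          dcs = alpha * ratio * (c - cs) - lam_s * cs
          return [dc, dcs]

      def simulate(theta, t_eval):
          sol = solve_ivp(tsm, (0, t_eval[-1]), [0.0, 0.0], args=tuple(theta),
                          t_eval=t_eval, rtol=1e-8, atol=1e-10)
          return sol.y[0]                            # in-stream concentration

      theta = np.array([1/300, 1/500, 2.0, 1e-4, 5e-5])
      t = np.linspace(0, 3600, 200)

      for i, name in enumerate(["q", "alpha", "A/As", "lambda", "lambda_s"]):
          h = 0.01 * theta[i]                        # 1% central perturbation
          up, dn = theta.copy(), theta.copy()
          up[i] += h; dn[i] -= h
          sens = (simulate(up, t) - simulate(dn, t)) / (2 * h) * theta[i]
          print(f"{name:8s} peak |normalized sensitivity| = {np.abs(sens).max():.3f}")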

  8. Performance of loop-mediated isothermal amplification (LAMP) for the diagnosis of malaria among malaria suspected pregnant women in Northwest Ethiopia.

    PubMed

    Tegegne, Banchamlak; Getie, Sisay; Lemma, Wossenseged; Mohon, Abu Naser; Pillai, Dylan R

    2017-01-19

    Malaria is a major public health problem and an important cause of maternal and infant morbidity in sub-Saharan Africa, including Ethiopia. Early and accurate diagnosis of malaria with effective treatment is the best strategy for prevention and control of complications during pregnancy and infant morbidity and mortality. However, laboratory diagnosis has relied on the identification of malaria parasites and parasite antigens in peripheral blood using Giemsa-stained microscopy or rapid diagnostic tests (RDTs), which lack analytical and clinical sensitivity. The aim of this study was to evaluate the performance of loop-mediated isothermal amplification (LAMP) for the diagnosis of malaria among malaria-suspected pregnant women in Northwest Ethiopia. A cross-sectional study was conducted from January to April 2016. Pregnant women (n = 87) suspected of having malaria at six health centres were enrolled. A venous blood sample was collected from each study subject, and analysed for Plasmodium parasites by microscopy, RDT, and LAMP. Diagnostic accuracy outcome measures (sensitivity, specificity, predictive values, and Kappa scores) of microscopy, RDT and LAMP were compared to nested polymerase chain reaction (nPCR) as the gold standard. Specimen processing and reporting times were documented. Using nPCR as the gold standard technique, the sensitivity of microscopy and RDT was 90% and 70%, and the specificity was 98.7% and 97.4%, respectively. The LAMP assay was 100% sensitive and 93.5% specific compared to nPCR. This study showed higher sensitivity of LAMP compared to microscopy and RDT for the detection of malaria in pregnancy. Increased sensitivity and ease of use with LAMP in point-of-care testing for malaria in pregnancy were noted. LAMP warrants further evaluation in intermittent screening and treatment programmes in pregnancy.
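
    The reported accuracy measures all derive from a 2x2 table against the nPCR gold standard. A small sketch of that computation, with cell counts chosen to be consistent with the reported 100% sensitivity and 93.5% specificity for LAMP in the cohort of 87 (the abstract does not give the actual cells, so these are illustrative):

      def diagnostic_metrics(tp, fp, fn, tn):
          n = tp + fp + fn + tn
          sens = tp / (tp + fn)
          spec = tn / (tn + fp)
          ppv = tp / (tp + fp)
          npv = tn / (tn + fn)
          # Cohen's kappa: observed vs. chance-expected agreement
          po = (tp + tn) / n
          pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
          kappa = (po - pe) / (1 - pe)
          return sens, spec, ppv, npv, kappa

      # LAMP vs nPCR: 10 true positives, 5 false positives, 0 false negatives,
      # 72 true negatives (illustrative counts summing to n = 87)
      print(diagnostic_metrics(tp=10, fp=5, fn=0, tn=72))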

  9. Sources of sensitization, cross-reactions, and occupational sensitization to topical anaesthetics among general dermatology patients.

    PubMed

    Liippo, Jussi; Lammintausta, Kaija

    2009-03-01

    Contact sensitization to local anaesthetics is often from topical medicaments. Occupational sensitization to topical anaesthetics may occur in certain occupations. The aim of the study was to analyse the occurrence of contact sensitization to topical anaesthetics in general dermatology patients. Patch testing with topical anaesthetics was carried out in 620 patients. Possible sources of sensitization and the clinical histories of the patients were analysed. Positive patch test reactions to one or more topical anaesthetics were seen in 25/620 patients. Dibucaine reactions were most common (20/25), and lidocaine sensitization was seen in two patients. Six patients had reactions to ester-type and/or amide-type anaesthetics concurrently. Local preparations for perianal conditions were the most common sensitizers. One patient had developed occupational sensitization to procaine with multiple cross-reactions and with concurrent penicillin sensitization from procaine penicillin. Dibucaine-containing perianal medicaments are the major source of contact sensitization to topical anaesthetics. Sensitization to multiple anaesthetics can be seen, and cross-reactions are possible. Contact sensitization to lidocaine is not common, and possible cross-reactions should be determined when reactions to lidocaine are seen. Occupational procaine sensitization from veterinary medicaments is a risk among animal workers.

  10. Rice tolerance to suboptimal low temperatures relies on the maintenance of the photosynthetic capacity.

    PubMed

    Gazquez, Ayelén; Vilas, Juan Manuel; Colman Lerner, Jorge Esteban; Maiale, Santiago Javier; Calzadilla, Pablo Ignacio; Menéndez, Ana Bernardina; Rodríguez, Andrés Alberto

    2018-06-01

    The purpose of this research was to identify differences between two contrasting rice cultivars in their response to suboptimal low-temperature stress. A transcriptomic analysis of the seedlings was performed and the results were complemented with biochemical and physiological analyses. The microarray analysis showed downregulation of many genes related to PSII, and particularly to the oxygen evolving complex, in the sensitive cultivar IR50. Complementary studies indicated that the PSII performance, the degree of oxygen evolving complex coupling with the PSII core, and the net photosynthetic rate diminished in this cultivar in response to the stress. However, the tolerant cultivar Koshihikari was able to maintain its energy equilibrium by sustaining the photosynthetic capacity. The increase of oleic acid in Koshihikari could be related to membrane remodelling of the chloroplasts and hence contribute to tolerance. Overall, these results provide a basis for future analyses aiming to characterize possible mechanisms of tolerance to this stress. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  11. Bearing tester data compilation, analysis, and reporting and bearing math modeling

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A test condition data base was developed for the Bearing and Seal Materials Tester (BSMT) program, which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space Shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.

  12. Comparison of solar-thermal and fossil total-energy systems for selected industrial applications

    NASA Astrophysics Data System (ADS)

    Pine, G. D.

    1980-06-01

    Economic analyses of a conventional system and total energy systems based on phosphoric acid fuel cells, diesel piston engines, and central receiver solar thermal systems were performed for each of four industrial applications: a concrete block plant in Arizona, a fluid milk processing plant in California, a sugar beet processing plant in Colorado, and a meat packing plant in Texas. A series of sensitivity analyses was performed to show the effects of variations in fuel price, system size, cost of capital, and system initial cost. Solar total energy systems (STES) are more capital intensive than the other systems, and significant economies of scale are associated with the STES. If DOE solar system cost goals are met, STES can compete with the other systems for facilities with electrical demands greater than two or three megawatts, but STES are not competitive for smaller facilities. Significant energy resource savings, especially of oil and gas, resulted from STES implementation in the four industries.

  13. Detection of martian amino acids by chemical derivatization coupled to gas chromatography: in situ and laboratory analysis.

    PubMed

    Rodier, C; Vandenabeele-Trambouze, O; Sternberg, R; Coscia, D; Coll, P; Szopa, C; Raulin, F; Vidal-Madjar, C; Cabane, M; Israel, G; Grenier-Loustalot, M F; Dobrijevic, M; Despois, D

    2001-01-01

    If there is, or ever was, life in our solar system beyond the Earth, Mars is the most likely place to search. Future space missions will then have to take into account the detection of prebiotic molecules or molecules of biological significance such as amino acids. Techniques of analysis used for returned samples have to be very sensitive and avoid any chemical or biological contamination, whereas in situ techniques have to be automated, fast, and low in energy consumption. Several possible methods could be used for in situ amino acid analyses on Mars, but gas chromatography would likely be the most suitable. Returned samples could be analyzed by any method in routine laboratory use, such as gas chromatography, already successfully performed for analyses of organic matter including amino acids from martian meteorites. The derivatization step, which volatilizes amino acids for both in situ and laboratory analysis by gas chromatography, is discussed here. © 2001 COSPAR. Published by Elsevier Science Ltd. All rights reserved.

  14. Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations

    NASA Technical Reports Server (NTRS)

    Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.

    1998-01-01

    This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
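
    As a toy illustration of why automatic differentiation is attractive here: forward-mode AD propagates exact derivatives through a computation, avoiding the step-size tuning of finite differences. The sketch below differentiates a made-up surrogate "drag" objective with dual numbers; a real aerodynamic sensitivity code would differentiate the discretized flow solver itself.

      import math

      class Dual:
          """Number a + b*eps with eps**2 = 0; b carries the derivative."""
          def __init__(self, a, b=0.0):
              self.a, self.b = a, b
          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.a + o.a, self.b + o.b)
          __radd__ = __add__
          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o)
              return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
          __rmul__ = __mul__

      def sin(x):  # chain rule for an elementary function
          return Dual(math.sin(x.a), math.cos(x.a) * x.b)

      def drag(thickness):
          # surrogate objective: invented smooth function of a shape parameter
          return 0.02 + 0.5 * thickness * thickness + 0.1 * sin(thickness)

      t = Dual(0.12, 1.0)      # seed d(thickness)/d(thickness) = 1
      d = drag(t)
      print("drag =", d.a, " d(drag)/d(thickness) =", d.b)
      # analytic check: 1.0*0.12 + 0.1*cos(0.12)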

  15. Association between response rates and survival outcomes in patients with newly diagnosed multiple myeloma. A systematic review and meta-regression analysis.

    PubMed

    Mainou, Maria; Madenidou, Anastasia-Vasiliki; Liakos, Aris; Paschos, Paschalis; Karagiannis, Thomas; Bekiari, Eleni; Vlachaki, Efthymia; Wang, Zhen; Murad, Mohammad Hassan; Kumar, Shaji; Tsapas, Apostolos

    2017-06-01

    We performed a systematic review and meta-regression analysis of randomized control trials to investigate the association between response to initial treatment and survival outcomes in patients with newly diagnosed multiple myeloma (MM). Response outcomes included complete response (CR) and the combined outcome of CR or very good partial response (VGPR), while survival outcomes were overall survival (OS) and progression-free survival (PFS). We used random-effects meta-regression models and conducted sensitivity analyses based on definition of CR and study quality. Seventy-two trials were included in the systematic review, 63 of which contributed data to meta-regression analyses. There was no association between OS and CR in patients without autologous stem cell transplant (ASCT) (regression coefficient: .02, 95% confidence interval [CI] -0.06, 0.10), in patients undergoing ASCT (-.11, 95% CI -0.44, 0.22) and in trials comparing ASCT with non-ASCT patients (.04, 95% CI -0.29, 0.38). Similarly, OS did not correlate with the combined metric of CR or VGPR, and no association was evident between response outcomes and PFS. Sensitivity analyses yielded similar results. This meta-regression analysis suggests that there is no association between conventional response outcomes and survival in patients with newly diagnosed MM. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
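
    A compact sketch of the core computation in a random-effects meta-regression: estimate the between-study variance with a method-of-moments (DerSimonian-Laird-type) estimator, then regress effect sizes on a trial-level covariate by weighted least squares. The data here are simulated under a null slope, mirroring the "no association" finding; this is not the authors' dataset.

      import numpy as np

      rng = np.random.default_rng(1)
      k = 30
      x = rng.uniform(0.1, 0.9, k)           # e.g. trial-level CR proportion
      v = rng.uniform(0.002, 0.02, k)         # within-study variances
      y = 0.0 * x + rng.normal(0, np.sqrt(v + 0.005))   # true slope = 0

      X = np.column_stack([np.ones(k), x])
      w = 1 / v

      # fixed-effect fit, residual Q, and the moment estimator of tau^2
      beta_fe = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
      Q = np.sum(w * (y - X @ beta_fe) ** 2)
      trP = np.sum(w) - np.trace(np.linalg.solve(X.T @ (w[:, None] * X),
                                                 X.T @ (w[:, None] ** 2 * X)))
      tau2 = max(0.0, (Q - (k - X.shape[1])) / trP)

      # random-effects weights and the meta-regression slope with 95% CI
      w_re = 1 / (v + tau2)
      beta = np.linalg.solve(X.T @ (w_re[:, None] * X), X.T @ (w_re * y))
      se = np.sqrt(np.diag(np.linalg.inv(X.T @ (w_re[:, None] * X))))
      print("slope =", beta[1], "+/-", 1.96 * se[1])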

  16. Cost-effectiveness of chronic hepatitis C treatment with thymosin alpha-1.

    PubMed

    García-Contreras, Fernando; Nevárez-Sida, Armando; Constantino-Casas, Patricia; Abud-Bastida, Fernando; Garduño-Espinosa, Juan

    2006-07-01

    More than one million individuals in Mexico are infected with hepatitis C virus (HCV), and 80% are at risk for developing a chronic infection that could lead to hepatic cirrhosis and other complications that impact quality of life and institutional costs. The objective of the study was to determine the most cost-effective treatment against HCV among the following: peginterferon, peginterferon plus ribavirin, peginterferon plus ribavirin plus thymosin, and no treatment. We carried out a cost-effectiveness analysis from the institutional perspective, with a 45-year time frame and a 3% discount rate for costs and effectiveness. We employed a Bayesian-focused decision tree and a Markov model. One- and two-way sensitivity analyses were performed, as well as threshold-oriented and probabilistic analyses, and we obtained acceptability curves and net health benefits. Triple therapy (peginterferon plus ribavirin plus thymosin alpha-1) was dominant, with lower cost and higher utility relative to the peginterferon plus ribavirin, peginterferon alone, and no-treatment options. With triple therapy, the cost per unit of success was US$1,908 per quality-adjusted life year (QALY), compared with peginterferon plus ribavirin at 2,277/QALY, peginterferon alone at 2,929/QALY, and no treatment at 4,204/QALY. Sensitivity analyses confirmed the robustness of the base case. The peginterferon plus ribavirin plus thymosin alpha-1 option was dominant (lowest cost and highest effectiveness). Using no drug was the most expensive and least effective option.
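
    The dominance logic used above is easy to make concrete: a strategy is dominated when another costs less and yields more QALYs. In the sketch below, the costs and QALYs are placeholders chosen only so that the cost-per-QALY ratios reproduce the figures quoted in the abstract; the study's actual discounted costs and utilities are not given.

      strategies = {
          # name: (discounted cost in USD, QALYs) -- illustrative values only
          "no treatment":               (42000, 9.99),
          "peginterferon":              (41000, 14.00),
          "peg + ribavirin":            (39000, 17.13),
          "peg + ribavirin + thymosin": (38000, 19.92),
      }

      for name, (cost, qaly) in strategies.items():
          dominated = any(c < cost and q > qaly
                          for n, (c, q) in strategies.items() if n != name)
          print(f"{name:28s} cost/QALY = {cost / qaly:7.0f}  dominated = {dominated}")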

  17. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study

    PubMed Central

    Rönnberg, Jerker; Lunner, Thomas; Ng, Elaine Hoi Ning; Lidestam, Björn; Zekveld, Adriana Agatha; Sörqvist, Patrik; Lyxell, Björn; Träff, Ulf; Yumba, Wycliffe; Classon, Elisabet; Hällgren, Mathias; Larsby, Birgitta; Signoret, Carine; Pichora-Fuller, M. Kathleen; Rudner, Mary; Danielsson, Henrik; Stenfelt, Stefan

    2016-01-01

    Objective: The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Study sample: Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. Design: LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. Results: The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only, and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION and all three contributed significantly and independently to especially the NO CONTEXT outcome scores (R² = 0.40). Conclusions: All LEVEL 2 factors are important theoretically as well as for clinical assessment. PMID:27589015

  18. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study.

    PubMed

    Rönnberg, Jerker; Lunner, Thomas; Ng, Elaine Hoi Ning; Lidestam, Björn; Zekveld, Adriana Agatha; Sörqvist, Patrik; Lyxell, Björn; Träff, Ulf; Yumba, Wycliffe; Classon, Elisabet; Hällgren, Mathias; Larsby, Birgitta; Signoret, Carine; Pichora-Fuller, M Kathleen; Rudner, Mary; Danielsson, Henrik; Stenfelt, Stefan

    2016-11-01

    The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only, and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION and all three contributed significantly and independently to especially the NO CONTEXT outcome scores (R² = 0.40). All LEVEL 2 factors are important theoretically as well as for clinical assessment.
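
    A rough sketch of the factor-analytic step described above (extracting a small number of latent factors from a battery of test variables), using scikit-learn with varimax rotation. The data are random placeholders built from two latent "hearing" factors, not the n200 battery.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(42)
      n = 200
      # two latent hearing factors generating six observed test variables
      sens = rng.normal(size=n)        # a SENSITIVITY-like factor
      tfs = rng.normal(size=n)         # a TEMPORAL FINE STRUCTURE-like factor
      X = np.column_stack([
          sens + 0.3 * rng.normal(size=n),
          sens + 0.3 * rng.normal(size=n),
          sens + 0.3 * rng.normal(size=n),
          tfs + 0.3 * rng.normal(size=n),
          tfs + 0.3 * rng.normal(size=n),
          tfs + 0.3 * rng.normal(size=n),
      ])

      fa = FactorAnalysis(n_components=2, rotation="varimax")
      scores = fa.fit_transform(StandardScaler().fit_transform(X))
      print(np.round(fa.components_, 2))   # loadings should split 3/3 by factor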

  19. Tumor-suppressive effects of natural-type interferon-β through CXCL10 in melanoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Hikaru; Nobeyama, Yoshimasa, E-mail: nobederm@jikei.ac.jp; Nakagawa, Hidemi

    2015-08-21

    Introduction: Type 1 interferon is in widespread use as adjuvant therapy to inhibit melanoma progression. Considering the tumor-suppressive effects of local administration of interferon-β (IFN-β) on lymphatic metastasis, the present study was conducted to identify melanoma-suppressive molecules that are up-regulated by IFN-β treatment of lymphatic endothelial cells. Materials and methods: Lymphatic endothelial cells, fibroblasts, and melanoma cells were treated with natural-type IFN-β, and melanoma cells were treated with CXCL10. Genome-wide oligonucleotide microarray analysis was performed using lymphatic endothelial cells with or without IFN-β treatment. Quantitative real-time reverse transcription-PCR and an enzyme-linked immunosorbent assay were performed to examine CXCL10 expression. A proliferation assay was performed to examine the effects of IFN-β and CXCL10 in melanoma cells. Results: Genome-wide microarray analyses detected CXCL10 as a gene encoding a secretory protein that was up-regulated by IFN-β in lymphatic endothelial cells. IFN-β treatment significantly induced CXCL10 in dermal lymphatic endothelial cells and melanoma cells that are highly sensitive to IFN-β. CXCL10 reduced melanoma cell proliferation in IFN-β-sensitive cells as well as resistant cells. Melanoma cells in which CXCL10 was knocked down were sensitive to IFN-β. CXCR3-B, which encodes the CXCL10 receptor, was up-regulated in melanoma cells with high sensitivity to IFN-β and down-regulated in melanoma cells with medium to low sensitivity. Conclusions: Our data suggest that IFN-β suppresses proliferation and metastasis from the local lymphatic system and melanoma cells via CXCL10. Down-regulation of CXCR3-B by IFN-β may be associated with resistance to IFN-β. - Highlights: • We searched for melanoma-suppressive molecules induced by IFN-β. • IFN-β induces a high amount of CXCL10 from lymphatic endothelial cells. • The CXCL10 induction level in melanoma cells is correlated with the sensitivity to IFN-β. • CXCL10 reduces proliferation in IFN-β-sensitive cells as well as resistant cells. • CXCR3-B is down-regulated by IFN-β exclusively in IFN-β-resistant cells.

  20. Balancing data sharing requirements for analyses with data sensitivity

    USGS Publications Warehouse

    Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.

    2007-01-01

    Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions gained from data sharing is critical for the creation of watch lists and an early warning/rapid response system, and for model generation for the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations remain available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
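
    The 'fuzzing' mechanism can be sketched in a few lines: sensitive records are snapped to the centre of their containing grid cell for display and download, while exact coordinates stay in the analysis tables. A quarter-quadrangle is taken here as 3.75 arc-minutes (0.0625°) on a side; the exact grid definition used by the on-line system is an assumption.

      CELL = 3.75 / 60.0   # cell size in decimal degrees (quarter-quad, assumed)

      def fuzz(lat, lon, cell=CELL):
          """Snap a coordinate to the centre of its grid cell."""
          fuzzed_lat = (lat // cell) * cell + cell / 2
          fuzzed_lon = (lon // cell) * cell + cell / 2
          return fuzzed_lat, fuzzed_lon

      # precise location stays in the analysis tables; maps/downloads get this:
      print(fuzz(40.57431, -105.08442))   # -> (40.59375, -105.09375)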

  1. Reliability and performance evaluation of systems containing embedded rule-based expert systems

    NASA Technical Reports Server (NTRS)

    Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.

    1989-01-01

    A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
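
    A toy version of the stage-one Markov model: three states (nominal operation, degraded by an expert-system error, failed), with the expert-system error and recovery rates standing in for the key performance parameters that the later stages would estimate and combine. All rates below are invented for illustration.

      import numpy as np

      dt = 0.01                 # time step (hours)
      lam_err = 1e-3            # expert-system error rate (per hour, assumed)
      lam_fail = 5e-4           # underlying system failure rate (per hour)
      mu_rec = 5e-2             # recovery rate from the degraded state

      # one-step transition matrix over [nominal, degraded, failed]
      P = np.array([
          [1 - (lam_err + lam_fail) * dt, lam_err * dt,                 lam_fail * dt],
          [mu_rec * dt,                   1 - (mu_rec + lam_fail) * dt, lam_fail * dt],
          [0.0,                           0.0,                          1.0],
      ])

      p = np.array([1.0, 0.0, 0.0])
      for _ in range(int(10 / dt)):    # propagate over a 10-hour mission
          p = p @ P
      print("P(failed by t=10h) =", p[2])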

  2. 18F PET with florbetaben for the early diagnosis of Alzheimer's disease dementia and other dementias in people with mild cognitive impairment (MCI).

    PubMed

    Martínez, Gabriel; Vernooij, Robin Wm; Fuentes Padilla, Paulina; Zamora, Javier; Flicker, Leon; Bonfill Cosp, Xavier

    2017-11-22

    ¹⁸F-florbetaben uptake by brain tissue, measured by positron emission tomography (PET), is accepted by regulatory agencies like the Food and Drug Administration (FDA) and the European Medicines Agency (EMA) for assessing amyloid load in people with dementia. Its added value is mainly demonstrated by excluding Alzheimer's pathology in an established dementia diagnosis. However, the National Institute on Aging and Alzheimer's Association (NIA-AA) revised the diagnostic criteria for Alzheimer's disease, and confidence in the diagnosis of mild cognitive impairment (MCI) due to Alzheimer's disease may be increased when using some amyloid biomarker tests like ¹⁸F-florbetaben. These tests, added to the MCI core clinical criteria, might increase the diagnostic test accuracy (DTA) of a testing strategy. However, the DTA of ¹⁸F-florbetaben to predict the progression from MCI to Alzheimer's disease dementia (ADD) or other dementias has not yet been systematically evaluated. To determine the DTA of the ¹⁸F-florbetaben PET scan for detecting people with MCI at the time of performing the test who will clinically progress to ADD, other forms of dementia (non-ADD), or any form of dementia at follow-up. The most recent search for this review was performed in May 2017. We searched MEDLINE (OvidSP), Embase (OvidSP), PsycINFO (OvidSP), BIOSIS Citation Index (Thomson Reuters Web of Science), Web of Science Core Collection, including the Science Citation Index (Thomson Reuters Web of Science) and the Conference Proceedings Citation Index (Thomson Reuters Web of Science), LILACS (BIREME), CINAHL (EBSCOhost), ClinicalTrials.gov (https://clinicaltrials.gov), and the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP) (http://www.who.int/ictrp/search/en/). We also searched ALOIS, the Cochrane Dementia & Cognitive Improvement Group's specialised register of dementia studies (http://www.medicine.ox.ac.uk/alois/). We checked the reference lists of any relevant studies and systematic reviews, and performed citation tracking using the Science Citation Index to identify any additional relevant studies. No language or date restrictions were applied to electronic searches. We included studies that had prospectively defined cohorts with any accepted definition of MCI at the time of performing the test and the use of the ¹⁸F-florbetaben scan to evaluate the DTA of the progression from MCI to ADD or other forms of dementia. In addition, we only selected studies that applied a reference standard for Alzheimer's dementia diagnosis, for example, the National Institute of Neurological and Communicative Disorders and Stroke and the Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) or Diagnostic and Statistical Manual of Mental Disorders-IV (DSM-IV) criteria. We screened all titles and abstracts identified in electronic-database searches. Two review authors independently selected studies for inclusion and extracted data to create two-by-two tables, showing the binary test results cross-classified with the binary reference standard. We used these data to calculate sensitivities, specificities, and their 95% confidence intervals. Two independent assessors performed quality assessment using the QUADAS-2 tool plus some additional items to assess the methodological quality of the included studies. Progression from MCI to ADD, any other form of dementia, and any form of dementia was evaluated in one study (Ong 2015).
It reported data on 45 participants at four years of follow-up; 21 participants met NINCDS-ADRDA criteria for Alzheimer's disease dementia at four years of follow-up (47% of the 45 participants), and 11% of the 45 participants met criteria for other types of dementia (three cases of frontotemporal dementia (FTD), one of dementia with Lewy bodies (DLB), and one of progressive supranuclear palsy (PSP)). We considered the study to be at high risk of bias in the domains of the reference standard, flow, and timing (QUADAS-2). MCI to ADD; ¹⁸F-florbetaben PET scan analysed visually: the sensitivity was 100% (95% confidence interval (CI) 84% to 100%) and the specificity was 83% (95% CI 63% to 98%) (n = 45, 1 study). Analysed quantitatively: the sensitivity was 100% (95% CI 84% to 100%) and the specificity was 88% (95% CI 68% to 97%) for the diagnosis of ADD at follow-up (n = 45, 1 study). MCI to any other form of dementia (non-ADD); ¹⁸F-florbetaben PET scan analysed visually: the sensitivity was 0% (95% CI 0% to 52%) and the specificity was 38% (95% CI 23% to 54%) (n = 45, 1 study). Analysed quantitatively: the sensitivity was 0% (95% CI 0% to 52%) and the specificity was 40% (95% CI 25% to 57%) for the diagnosis of any other form of dementia at follow-up (n = 45, 1 study). MCI to any form of dementia; ¹⁸F-florbetaben PET scan analysed visually: the sensitivity was 81% (95% CI 61% to 93%) and the specificity was 79% (95% CI 54% to 94%) (n = 45, 1 study). Analysed quantitatively: the sensitivity was 81% (95% CI 61% to 93%) and the specificity was 84% (95% CI 60% to 97%) for the diagnosis of any form of dementia at follow-up (n = 45, 1 study). Although we were able to calculate one estimate of DTA in, especially, the prediction of progression from MCI to ADD at four years of follow-up, the small number of participants implies imprecision of the sensitivity and specificity estimates. We cannot make any recommendation regarding the routine use of ¹⁸F-florbetaben in clinical practice based on a single study with 45 participants. ¹⁸F-florbetaben has high financial costs; therefore, clearly demonstrating its DTA and standardising the process of the ¹⁸F-florbetaben modality are important prior to its wider use.
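
    The sensitivity and specificity figures above are binomial proportions from a 2x2 table with exact confidence intervals. A small sketch of that computation, assuming cell counts consistent with the reported visual-read point estimates for MCI to ADD (21 converters, 24 non-converters; the abstract does not give the cells, and the review's own CI method may differ slightly):

      from statsmodels.stats.proportion import proportion_confint

      def prop_ci(k, n):
          # exact (Clopper-Pearson) 95% interval
          lo, hi = proportion_confint(k, n, alpha=0.05, method="beta")
          return k / n, lo, hi

      tp, fn = 21, 0    # all 21 ADD converters florbetaben-positive (100% sens)
      tn, fp = 20, 4    # 20 of 24 non-converters negative (83% specificity)

      print("sensitivity %.2f (%.2f-%.2f)" % prop_ci(tp, tp + fn))
      print("specificity %.2f (%.2f-%.2f)" % prop_ci(tn, tn + fp))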

  3. Screening for alcohol use disorders and at-risk drinking in the general population: psychometric performance of three questionnaires.

    PubMed

    Rumpf, Hans-Jürgen; Hapke, Ulfert; Meyer, Christian; John, Ulrich

    2002-01-01

    Most screening questionnaires are developed in clinical settings and there are few data on their performance in the general population. This study provides data on the area under the receiver-operating characteristic (ROC) curve, sensitivity, specificity, and internal consistency of the Alcohol Use Disorders Identification Test (AUDIT), the consumption questions of the AUDIT (AUDIT-C) and the Lübeck Alcohol Dependence and Abuse Screening Test (LAST) among current drinkers (n = 3551) of a general population sample in northern Germany. Alcohol dependence and misuse according to DSM-IV and at-risk drinking served as gold standards for sensitivity and specificity and were assessed with the Munich Composite International Diagnostic Interview (M-CIDI). AUDIT and LAST showed insufficient sensitivity for at-risk drinking and alcohol misuse using standard cut-off scores, but satisfactory detection rates for alcohol dependence. The AUDIT-C showed low specificity in all criterion groups at the standard cut-off. Adjusted cut-points are recommended. Among a subsample of individuals with a general hospital admission in the last year, all questionnaires showed higher internal consistency, suggesting lower reliability in non-clinical samples. In logistic regression analyses, having had a hospital admission increased the sensitivity of the LAST in detecting any criterion group, and the number of recent general practice visits increased the sensitivity of the AUDIT in detecting alcohol misuse. Women showed lower scores and larger areas under the ROC curves. It is concluded that setting-specific instruments (e.g. primary care or general population) or adjusted cut-offs should be used.
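
    The ROC quantities reported above are straightforward to compute once questionnaire scores and a gold-standard diagnosis are in hand. A minimal sketch with simulated scores (the cut-off of 8 is the conventional AUDIT threshold; the score distributions below are invented):

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      n_pos, n_neg = 150, 3400                 # low prevalence, as in the field
      scores = np.concatenate([rng.normal(12, 4, n_pos),   # e.g. AUDIT totals
                               rng.normal(5, 3, n_neg)])
      truth = np.concatenate([np.ones(n_pos), np.zeros(n_neg)])

      print("AUC =", round(roc_auc_score(truth, scores), 3))
      for cut in (8, 5):                       # standard vs. adjusted cut-off
          pred = scores >= cut
          print(f"cut-off {cut}: sensitivity {pred[truth == 1].mean():.2f}, "
                f"specificity {(~pred)[truth == 0].mean():.2f}")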

  4. The Sensitivity of Genetic Connectivity Measures to Unsampled and Under-Sampled Sites

    PubMed Central

    Koen, Erin L.; Bowman, Jeff; Garroway, Colin J.; Wilson, Paul J.

    2013-01-01

    Landscape genetic analyses assess the influence of landscape structure on genetic differentiation. It is rarely possible to collect genetic samples from all individuals on the landscape and thus it is important to assess the sensitivity of landscape genetic analyses to the effects of unsampled and under-sampled sites. Network-based measures of genetic distance, such as conditional genetic distance (cGD), might be particularly sensitive to sampling intensity because pairwise estimates are relative to the entire network. We addressed this question by subsampling microsatellite data from two empirical datasets. We found that pairwise estimates of cGD were sensitive to both unsampled and under-sampled sites, and FST, Dest, and deucl were more sensitive to under-sampled than unsampled sites. We found that the rank order of cGD was also sensitive to unsampled and under-sampled sites, but not enough to affect the outcome of Mantel tests for isolation by distance. We simulated isolation by resistance and found that although cGD estimates were sensitive to unsampled sites, by increasing the number of sites sampled the accuracy of conclusions drawn from landscape genetic analyses increased, a feature that is not possible with pairwise estimates of genetic differentiation such as FST, Dest, and deucl. We suggest that users of cGD assess the sensitivity of this measure by subsampling within their own network and use caution when making extrapolations beyond their sampled network. PMID:23409155

  5. Differences in Performance Among Test Statistics for Assessing Phylogenomic Model Adequacy.

    PubMed

    Duchêne, David A; Duchêne, Sebastian; Ho, Simon Y W

    2018-05-18

    Statistical phylogenetic analyses of genomic data depend on models of nucleotide or amino acid substitution. The adequacy of these substitution models can be assessed using a number of test statistics, allowing the model to be rejected when it is found to provide a poor description of the evolutionary process. A potentially valuable use of model-adequacy test statistics is to identify when data sets are likely to produce unreliable phylogenetic estimates, but their differences in performance are rarely explored. We performed a comprehensive simulation study to identify test statistics that are sensitive to some of the most commonly cited sources of phylogenetic estimation error. Our results show that, for many test statistics, traditional thresholds for assessing model adequacy can fail to reject the model when the phylogenetic inferences are inaccurate and imprecise. This is particularly problematic when analysing loci that have few variable informative sites. We propose new thresholds for assessing substitution model adequacy and demonstrate their effectiveness in analyses of three phylogenomic data sets. These thresholds lead to frequent rejection of the model for loci that yield topological inferences that are imprecise and are likely to be inaccurate. We also propose the use of a summary statistic that provides a practical assessment of overall model adequacy. Our approach offers a promising means of enhancing model choice in genome-scale data sets, potentially leading to improvements in the reliability of phylogenomic inference.

  6. Highly sensitive biological sensor based on photonic crystal fiber

    NASA Astrophysics Data System (ADS)

    Azzam, Shaimaa I. H.; Hameed, Mohamed F.; Obayya, S. S. A.

    2014-05-01

    A photonic crystal fiber (PCF) surface plasmon resonance (SPR) based sensor is proposed and analysed. The proposed sensor consists of microfluidic slots enclosing a dodecagonal layer of air holes cladding and a central air hole. The sensor can perform analyte detection using both the HEx11 and HEy11 modes, with relatively high sensitivities of up to 4000 nm/RIU and 3000 nm/RIU and resolutions of 2.5×10⁻⁵ and 3.33×10⁻⁵ RIU for the HEx11 and HEy11 modes, respectively, under spectral interrogation; to our knowledge, these are higher than those reported in the literature. Moreover, the structure of the suggested sensor is simple, without fabrication complexities, which makes it easy to fabricate with standard PCF fabrication technologies.
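
    For spectral interrogation, sensitivity is the resonance-wavelength shift per unit refractive-index change, and resolution follows from the smallest detectable wavelength shift. A quick check with illustrative numbers (the underlying peak shifts are not given in the abstract):

      dn = 0.01            # analyte refractive-index change (RIU), assumed
      dlambda_peak = 40.0  # resonance shift produced by dn (nm), assumed
      dlambda_min = 0.1    # assumed spectrometer resolution (nm)

      S = dlambda_peak / dn                # sensitivity: 4000 nm/RIU
      R = dn * dlambda_min / dlambda_peak  # resolution: 2.5e-5 RIU
      print(S, R)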

  7. A dynamic growth model of vegetative soya bean plants: model structure and behaviour under varying root temperature and nitrogen concentration

    NASA Technical Reports Server (NTRS)

    Lim, J. T.; Wilkerson, G. G.; Raper, C. D. Jr; Gold, H. J.

    1990-01-01

    A differential equation model of vegetative growth of the soya bean plant (Glycine max (L.) Merrill cv. 'Ransom') was developed to account for plant growth in a phytotron system under variation of root temperature and nitrogen concentration in nutrient solution. The model was tested by comparing model outputs with data from four different experiments. Model predictions agreed fairly well with measured plant performance over a wide range of root temperatures and over a range of nitrogen concentrations in nutrient solution between 0.5 and 10.0 mmol NO₃⁻ in the phytotron environment. Sensitivity analyses revealed that the model was most sensitive to changes in parameters relating to carbohydrate concentration in the plant and nitrogen uptake rate.

  8. A fully battery-powered inexpensive spectrophotometric system for high-sensitivity point-of-care analysis on a microfluidic chip

    PubMed Central

    Dou, Maowei; Lopez, Juan; Rios, Misael; Garcia, Oscar; Xiao, Chuan; Eastman, Michael

    2016-01-01

    A cost-effective battery-powered spectrophotometric system (BASS) was developed for quantitative point-of-care (POC) analysis on a microfluidic chip. Using methylene blue as a model analyte, we first compared the performance of the BASS with a commercial spectrophotometric system, and further applied the BASS to loop-mediated isothermal amplification (LAMP) detection and subsequent quantitative nucleic acid analysis, which exhibited a limit of detection comparable to that of the NanoDrop. Compared to the commercial spectrophotometric system, our spectrophotometric system is lower in cost, consumes fewer reagents, and has a higher detection sensitivity. Most importantly, it does not rely on external power supplies. All these features make our spectrophotometric system highly suitable for a variety of POC analyses, such as field detection. PMID:27143408

  9. The impact of Medicaid payer status on hospitalizations in nursing homes

    PubMed Central

    Cai, Shubing; Miller, Susan C.; Nelson, Dallas L.; Mukamel, Dana B.

    2015-01-01

    Objectives To examine the association between payer status (Medicaid versus private-pay) and the risk of hospitalizations among long-term stay nursing home (NH) residents who reside in the same facility. Data and study population The 2007–2010 national Medicare Claims and the Minimum Data Set were linked. We identified newly admitted NH residents who became long-stayers and then followed them for 180 days. Analyses Three dichotomous outcomes – all-cause, discretionary and nondiscretionary hospitalizations during the follow-up period – were defined. A linear probability model with facility fixed effects and robust standard errors was used to examine the within-facility difference in hospitalizations between Medicaid and private-pay residents. A set of sensitivity analyses was performed to examine the robustness of the findings. Results The prevalence of all-cause hospitalization during the 180-day follow-up period was 23.3% among Medicaid residents compared to 21.6% among private-pay residents. After accounting for individual characteristics and facility effects, the probability of any all-cause hospitalization was 1.8 percentage points (P<0.01) higher for Medicaid residents than for private-pay residents within the same facility. We also found that Medicaid residents were more likely to be hospitalized for discretionary conditions (5% increase in the likelihood of discretionary hospitalizations), but not for non-discretionary conditions. The findings from the sensitivity analyses were consistent with the main analyses. Conclusion Observed higher hospitalization rates for Medicaid NH residents are at least in part driven by the financial incentive NHs have to hospitalize Medicaid residents. PMID:26067881

  10. Enhanced Lipidome Coverage in Shotgun Analyses by using Gas-Phase Fractionation

    NASA Astrophysics Data System (ADS)

    Nazari, Milad; Muddiman, David C.

    2016-11-01

    A high resolving power shotgun lipidomics strategy using gas-phase fractionation and data-dependent acquisition (DDA) was applied toward comprehensive characterization of lipids in hen ovarian tissue in an untargeted fashion. Using this approach, a total of 822 unique lipids across a diverse range of lipid categories and classes were identified based on their MS/MS fragmentation patterns. Classes of glycerophospholipids and glycerolipids, such as glycerophosphocholines (PC), glycerophosphoethanolamines (PE), and triglycerides (TG), are often the most abundant peaks observed in shotgun lipidomics analyses. These ions suppress the signal from low-abundance ions and hinder the chances of characterizing low-abundance lipids when DDA is used. These issues were circumvented by utilizing gas-phase fractionation, where DDA was performed on narrow m/z ranges instead of a broad m/z range. Employing gas-phase fractionation resulted in an increase in sensitivity by more than an order of magnitude in both positive- and negative-ion modes. Furthermore, the enhanced sensitivity increased the number of lipids identified by a factor of ≈4, and facilitated identification of low-abundance lipids from classes such as cardiolipins, which are often difficult to observe in untargeted shotgun analyses and require sample-specific preparation steps prior to analysis. This method serves as a resource for comprehensive profiling of lipids from many different categories and classes in an untargeted manner, as well as for targeted and quantitative analyses of individual lipids. Furthermore, this comprehensive analysis of the lipidome can serve as a species- and tissue-specific database for confident identification of other MS-based datasets, such as mass spectrometry imaging.

  11. Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase

    NASA Technical Reports Server (NTRS)

    Lagas, J. J.; Peterka, J. J.; Becker, D. A.

    1977-01-01

    Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.

  12. Performance of FLT-PET for pulmonary lesion diagnosis compared with traditional FDG-PET: A meta-analysis.

    PubMed

    Wang, Zixing; Wang, Yuyan; Sui, Xin; Zhang, Wei; Shi, Ruihong; Zhang, Yingqiang; Dang, Yonghong; Qiao, Zhen; Zhang, Biao; Song, Wei; Jiang, Jingmei

    2015-07-01

    Widely used ¹⁸F 2'-deoxy-2'-fluoro-D-glucose (FDG) positron emission tomography (PET) can be problematic with false positives in cancer imaging. This study aims to investigate the diagnostic accuracy of a candidate PET tracer, ¹⁸F 2',3'-dideoxy-3'-fluoro-2-thiothymidine (FLT), in diagnosing pulmonary lesions compared with FDG. After comprehensive search and study selection, a meta-analysis was performed on data from 548 patients pooled from 17 studies for evaluating FLT accuracy, in which data from 351 patients pooled from ten double-tracer studies was used for direct comparison with FDG. Weighted sensitivity and specificity were used as main indicators of test performance. Individual data was extracted and patient subgroup analyses were performed. Overall, direct comparisons showed lower sensitivity (0.80 vs. 0.89) yet higher specificity (0.82 vs. 0.66) for FLT compared with FDG (both p<0.01). Patient subgroup analysis showed FLT was less sensitive than FDG in detecting lung cancers staged as T1 or T2, and those ≤2.0 cm in diameter (0.81 vs. 0.93, and 0.53 vs. 0.78, respectively, both p<0.05), but was comparable for cancers staged as T3 or T4, and those >2.0 cm in diameter (0.95 vs. 1.00, 0.96 vs. 0.88, both p>0.05). For benignities, FLT performed better compared with FDG in ruling out inflammation-based lesions (0.57 vs. 0.32, p<0.05), and demonstrated greater specificity regardless of lesion sizes. Although FLT cannot replace FDG in detecting small and early lung cancers, it may help to prevent patients with larger or inflammatory lesions from cancer misdiagnosis or even over-treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  13. Sensitivity of the Yo-Yo Intermittent Recovery Test and cardiac autonomic responses to training in futsal players.

    PubMed

    de Freitas, Victor H; Pereira, Lucas A; de Souza, Eberton A; Leicht, Anthony S; Bertollo, Maurizio; Nakamura, Fábio Y

    2015-07-01

    This study examined the sensitivity of maximal (Yo-Yo Intermittent Recovery [IR] 1 and 2) and submaximal (5'-5') tests to identify training adaptations in futsal players, along with the suitability of heart-rate (HR) and HR-variability (HRV) measures to identify these adaptations. Eleven male professional futsal players were assessed before (pretraining) and after (posttraining) a 5-wk period. Assessments included 5'-5' and Yo-Yo IR1 and IR2 performances and HR and HRV at rest and during the IR and 5'-5' tests. Magnitude-based inference analyses examined the differences between pre- and posttraining, while relationships between changes in variables were determined via correlation. Posttraining, Yo-Yo IR1 performance likely increased while Yo-Yo IR2 performance almost certainly increased. Submaximal HR during the Yo-Yo IR1 and Yo-Yo IR2 almost certainly and likely, respectively, decreased with training. HR during the 5'-5' was very likely decreased, while HRV at rest and during the 5'-5' was likely increased after training. Changes in both Yo-Yo IR performances were negatively correlated with changes in HR during the Yo-Yo IR1 test and positively correlated with the change in HRV during the 5'-5'. The current study has identified the Yo-Yo IR2 as more responsive for monitoring training-induced changes in futsal players than the Yo-Yo IR1. Changes in submaximal HR during the Yo-Yo IR and HRV during the 5'-5' were highly sensitive to changes in maximal performance and are recommended for monitoring training. The 5'-5' was recommended as a time-efficient method to assess training adaptations for futsal players.

  14. Discrimination of soft tissues using laser-induced breakdown spectroscopy in combination with k nearest neighbors (kNN) and support vector machine (SVM) classifiers

    NASA Astrophysics Data System (ADS)

    Li, Xiaohui; Yang, Sibo; Fan, Rongwei; Yu, Xin; Chen, Deying

    2018-06-01

    In this paper, discrimination of soft tissues using laser-induced breakdown spectroscopy (LIBS) in combination with multivariate statistical methods is presented. Fresh pork fat, skin, ham, loin and tenderloin muscle tissues are manually cut into slices and ablated using a 1064 nm pulsed Nd:YAG laser. Discrimination analyses between fat, skin and muscle tissues, and further between highly similar ham, loin and tenderloin muscle tissues, are performed based on the LIBS spectra in combination with multivariate statistical methods, including principal component analysis (PCA), k nearest neighbors (kNN) classification, and support vector machine (SVM) classification. Performances of the discrimination models, including accuracy, sensitivity and specificity, are evaluated using 10-fold cross validation. The classification models are optimized to achieve best discrimination performances. The fat, skin and muscle tissues can be definitely discriminated using both kNN and SVM classifiers, with accuracy of over 99.83%, sensitivity of over 0.995 and specificity of over 0.998. The highly similar ham, loin and tenderloin muscle tissues can also be discriminated with acceptable performances. The best performances are achieved with SVM classifier using Gaussian kernel function, with accuracy of 76.84%, sensitivity of over 0.742 and specificity of over 0.869. The results show that the LIBS technique assisted with multivariate statistical methods could be a powerful tool for online discrimination of soft tissues, even for tissues of high similarity, such as muscles from different parts of the animal body. This technique could be used for discrimination of tissues suffering minor clinical changes, thus may advance the diagnosis of early lesions and abnormalities.
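
    A sketch of the classification workflow described above: standardize the spectra, reduce dimensionality with PCA, classify with an RBF-kernel SVM, and score by 10-fold cross-validation. The "spectra" below are random placeholders, and the hyperparameters are illustrative, not the study's tuned values.

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_per_class, n_channels = 60, 500
      X = np.vstack([rng.normal(mu, 1.0, (n_per_class, n_channels))
                     for mu in (0.0, 0.3, 0.6)])     # fat / skin / muscle stand-ins
      y = np.repeat([0, 1, 2], n_per_class)

      clf = make_pipeline(StandardScaler(),
                          PCA(n_components=10),
                          SVC(kernel="rbf", C=10, gamma="scale"))
      scores = cross_val_score(clf, X, y, cv=10)
      print("10-fold accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))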

  15. Utility of DWI with quantitative ADC values in ovarian tumors: a meta-analysis of diagnostic test performance.

    PubMed

    Pi, Shan; Cao, Rong; Qiang, Jin Wei; Guo, Yan Hui

    2018-01-01

    Background Diffusion-weighted imaging (DWI) and quantitative apparent diffusion coefficient (ADC) values are widely used in the differential diagnosis of ovarian tumors. Purpose To assess the diagnostic performance of quantitative ADC values in ovarian tumors. Material and Methods PubMed, Embase, the Cochrane Library, and local databases were searched for studies assessing ovarian tumors using quantitative ADC values. We quantitatively analyzed the diagnostic performances for two clinical problems: benign vs. malignant tumors and borderline vs. malignant tumors. We evaluated diagnostic performances by the pooled sensitivity and specificity values and by summary receiver operating characteristic (SROC) curves. Subgroup analyses were used to analyze study heterogeneity. Results From the 742 studies identified in the search results, 16 studies met our inclusion criteria. A total of ten studies evaluated malignant vs. benign ovarian tumors and six studies assessed malignant vs. borderline ovarian tumors. Regarding the diagnostic accuracy of quantitative ADC values for distinguishing between malignant and benign ovarian tumors, the pooled sensitivity and specificity values were 0.91 and 0.91, respectively. The area under the SROC curve (AUC) was 0.96. For differentiating borderline from malignant tumors, the pooled sensitivity and specificity values were 0.89 and 0.79, and the AUC was 0.91. The methodological quality of the included studies was moderate. Conclusion Quantitative ADC values could serve as useful preoperative markers for predicting the nature of ovarian tumors. Nevertheless, prospective trials focused on standardized imaging parameters are needed to evaluate the clinical value of quantitative ADC values in ovarian tumors.

  16. Performance evaluation of the new fully automated urine particle analyser UF-5000 compared to the reference method of the Fuchs-Rosenthal chamber.

    PubMed

    Previtali, Giulia; Ravasio, Rudi; Seghezzi, Michela; Buoro, Sabrina; Alessio, Maria Grazia

    2017-09-01

    UF-5000 is the new fully automated urine particle analyser. We validated its performance: 736 urine samples were analysed, and the results were compared with counts obtained by two pathologists on uncentrifuged samples using a Fuchs-Rosenthal chamber. The AUC of the ROC curves ranged between 0.86 and 0.99. Sensitivity was >0.90 for all the elements and similar for RBC and yeasts. Specificity ranged between 0.74 and 0.89 for total casts, epithelial/non-squamous/renal-tubular cells and RBC. For all the other parameters specificity was >0.90. Comparison with the Fuchs-Rosenthal chamber was very good for all the parameters; r ranged between 0.52 and 0.99, except for pathological casts, owing to the lack of pathological samples in the medium and higher ranges. Linearity performance (R²) was 1.00, 1.00 and 0.99 for RBC, WBC and epithelial cells (EC), respectively. No carry-over was observed. The within-run imprecision was 25.42%, 13.81% and 1.36% for RBC; 37.50%, 10.16% and 1.41% for WBC; and 35.25%, 17.85% and 6.30% for EC at low, near-cut-off and high concentrations, respectively. The between-run imprecision was 6.90% and 1.60% for RBC, 4.10% and 1.90% for WBC, and 7.60% and 7.30% for EC, using low and high positive quality controls, respectively. UF-5000 is an analyser of great interest for detecting urine particles related to pathological processes of the kidney and urinary tract. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. The Negative Affect Hypothesis of Noise Sensitivity

    PubMed Central

    Shepherd, Daniel; Heinonen-Guzejev, Marja; Heikkilä, Kauko; Dirks, Kim N.; Hautus, Michael J.; Welch, David; McBride, David

    2015-01-01

    Some studies indicate that noise sensitivity is explained by negative affect, a dispositional tendency to negatively evaluate situations and the self. Individuals high in such traits may report a greater sensitivity to other sensory stimuli, such as smell, bright light and pain. However, research investigating the relationship between noise sensitivity and sensitivity to stimuli associated with other sensory modalities has not always supported the notion of a common underlying trait, such as negative affect, driving them. Additionally, other explanations of noise sensitivity based on cognitive processes have existed in the clinical literature for over 50 years. Here, we report on secondary analyses of pre-existing laboratory (n = 74) and epidemiological (n = 1005) data focusing on the relationship between noise sensitivity to and annoyance with a variety of olfactory-related stimuli. In the first study a correlational design examined the relationships between noise sensitivity, noise annoyance, and perceptual ratings of 16 odors. The second study sought differences between mean noise and air pollution annoyance scores across noise sensitivity categories. Results from both analyses failed to support the notion that, by itself, negative affectivity explains sensitivity to noise. PMID:25993104

  18. Modeling the atmospheric chemistry of TICs

    NASA Astrophysics Data System (ADS)

    Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John

    2009-05-01

    An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (k_eff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirical k_eff functions describing the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
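    A minimal sketch of the empirical-k_eff idea: fit ln(k_eff) to synthetic "detailed mechanism" output as a function of solar flux and temperature, then run a one-at-a-time sensitivity scan. The functional form and all coefficients are assumptions for illustration, not the study's mechanism.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    solar_flux = rng.uniform(0, 1000, 200)    # W/m^2
    temperature = rng.uniform(260, 310, 200)  # K
    # Synthetic "truth" standing in for detailed-mechanism output
    k_true = 1e-4 * (1 + 0.002 * solar_flux) * np.exp(0.03 * (temperature - 285))
    log_k = np.log(k_true) + rng.normal(0, 0.05, 200)  # add fitting noise

    # Regress ln(k_eff) on the meteorological drivers
    X = np.column_stack([np.ones_like(solar_flux), solar_flux, temperature])
    coef, *_ = np.linalg.lstsq(X, log_k, rcond=None)

    def k_eff(flux, temp):
        return np.exp(coef[0] + coef[1] * flux + coef[2] * temp)

    # One-at-a-time sensitivity: perturb each input by +10% around a base case
    base = k_eff(500.0, 285.0)
    print("dk/k for +10% flux:", (k_eff(550.0, 285.0) - base) / base)
    print("dk/k for +10% temp:", (k_eff(500.0, 313.5) - base) / base)
    ```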

  19. Development of a rapid, simple and sensitive HPLC-FLD method for determination of rhodamine B in chili-containing products.

    PubMed

    Qi, Ping; Lin, Zhihao; Li, Jiaxu; Wang, ChengLong; Meng, WeiWei; Hong, Hong; Zhang, Xuewu

    2014-12-01

    In this work, a simple, rapid and sensitive analytical method for the determination of rhodamine B in chili-containing foodstuffs is described. The dye is extracted from samples with methanol and analysed without a further cleanup procedure by high-performance liquid chromatography (HPLC) coupled to fluorescence detection (FLD). The influence of matrix fluorescent compounds (capsaicin and dihydrocapsaicin) on the analysis was overcome by optimisation of the mobile-phase composition. The limit of detection (LOD) and limit of quantification (LOQ) were 3.7 and 10 μg/kg, respectively. Validation data show good repeatability and within-lab reproducibility, with relative standard deviations <10%. The overall recoveries are in the range of 98-103% in chili powder and 87-100% in chili oil, depending on the concentration of rhodamine B in the foodstuff. This method is suitable for the routine analysis of rhodamine B due to its sensitivity, simplicity, and reasonable time and cost. Copyright © 2014 Elsevier Ltd. All rights reserved.
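    LOD/LOQ figures like those quoted above are often derived from the calibration curve; a sketch using the common 3.3σ/slope and 10σ/slope conventions, with invented calibration data rather than the paper's:

    ```python
    import numpy as np

    conc = np.array([0, 5, 10, 20, 40, 80])                 # micrograms/kg
    signal = np.array([0.8, 5.9, 11.2, 21.8, 44.1, 87.6])   # fluorescence units

    slope, intercept = np.polyfit(conc, signal, 1)          # linear calibration
    residuals = signal - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)                           # residual SD of the fit

    print(f"LOD ~ {3.3 * sigma / slope:.1f} ug/kg")
    print(f"LOQ ~ {10 * sigma / slope:.1f} ug/kg")
    ```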

  20. Mind matters: A meta-analysis on parental mentalization and sensitivity as predictors of infant-parent attachment.

    PubMed

    Zeegers, Moniek A J; Colonnesi, Cristina; Stams, Geert-Jan J M; Meins, Elizabeth

    2017-12-01

    Major developments in attachment research over the past 2 decades have introduced parental mentalization as a predictor of infant-parent attachment security. Parental mentalization is the degree to which parents show frequent, coherent, or appropriate appreciation of their infants' internal states. The present study examined the triangular relations between parental mentalization, parental sensitivity, and attachment security. A total of 20 effect sizes (N = 974) on the relation between parental mentalization and attachment, 82 effect sizes (N = 6,664) on the relation between sensitivity and attachment, and 24 effect sizes (N = 2,029) on the relation between mentalization and sensitivity were subjected to multilevel meta-analyses. The results showed a pooled correlation of r = .30 between parental mentalization and infant attachment security, and rs of .25 for the correlations between sensitivity and attachment security, and between parental mentalization and sensitivity. A meta-analytic structural equation model was performed to examine the combined effects of mentalization and sensitivity as predictors of infant attachment. Together, the predictors explained 12% of the variance in attachment security. After controlling for the effect of sensitivity, the relation between parental mentalization and attachment remained, r = .24; the relation between sensitivity and attachment remained after controlling for parental mentalization, r = .19. Sensitivity also mediated the relation between parental mentalization and attachment security, r = .07, suggesting that mentalization exerts both direct and indirect influences on attachment security. The results imply that parental mentalization should be incorporated into existing models that map the predictors of infant-parent attachment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
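    To make the pooling step concrete, the sketch below combines per-study correlations via Fisher's r-to-z transform with inverse-variance weights (a fixed-effect simplification of the multilevel random-effects models the study used); all effect sizes and sample sizes are invented.

    ```python
    import numpy as np

    r = np.array([0.25, 0.35, 0.28, 0.31])  # per-study correlations (invented)
    n = np.array([120, 80, 150, 60])        # per-study sample sizes (invented)

    z = np.arctanh(r)          # Fisher r-to-z
    w = n - 3                  # inverse of var(z) = 1/(n - 3)
    z_pooled = np.sum(w * z) / np.sum(w)
    print("pooled r =", round(np.tanh(z_pooled), 3))
    ```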

  1. Assessment of Nutrition Competency of Graduating Agriculture Students in Ethiopia: A Cross-sectional Study.

    PubMed

    Abebe, Mesfin G; Tariku, Mebit K; Yitaferu, Tadele B; Shiferaw, Ephrem D; Desta, Firew A; Yimer, Endris M; Akassa, Kefyalew M; Thompson, Elizabeth C

    2017-04-01

    To assess the level of nutrition-sensitive agriculture competencies of graduating midlevel animal and plant sciences students in Ethiopia and identify factors associated with the attainment of competencies. A cross-sectional study design using structured skills observation checklists, objective written questions, and structured questionnaires was employed. Two agriculture technical vocational education and training colleges in two regions of Ethiopia. A total of 145 students were selected using stratified random sampling techniques from a population of 808 students, with a response rate of 93%. Nutrition-sensitive agriculture competency (knowledge and skills attributes) of graduating students. Bivariate and multivariable statistical analyses were used to examine the association between the variables of students' gender, age, department, institutional ownership, and perception of the learning environment and their performance in nutrition competency. Combined scores showed that 49% of students demonstrated mastery of nutrition competencies. Gender and institutional ownership were associated with the performance of students (P < .001); male students and students at a federal institution performed better. The study showed low performance of students in nutrition competency and suggested the need for strengthening the curriculum, building tutors' capacity, and providing additional support to female students and regional colleges. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  2. Development of techniques for the analysis of isoflavones in soy foods and nutraceuticals.

    PubMed

    Dentith, Susan; Lockwood, Brian

    2008-05-01

    For over 20 years, soy isoflavones have been investigated for their ability to prevent a wide range of cancers and cardiovascular problems, and numerous other disease states. This research is underpinned by the ability of researchers to analyse isoflavones in various forms in a range of raw materials and biological fluids. This review summarizes the techniques recently used in their analysis. The speed of high-performance liquid chromatography analysis has been improved, allowing analysis of more samples, and the increasing sensitivity of detection techniques allows quantification of isoflavones down to nanomoles-per-litre levels in biological fluids. The combination of high-performance liquid chromatography with immunoassay has allowed identification and estimation of low-level soy isoflavones. The use of soy isoflavone supplements has been shown to increase their circulating levels in plasma and urine, aiding investigation of their biological effects. The significance of the metabolite equol has spurred research into new areas, and recently the specific enantiomers have been studied. High-performance liquid chromatography, capillary electrophoresis and gas chromatography are widely used with a range of detection systems. Increasingly, immunoassay is being used because of its high sensitivity and low cost.

  3. Sensorimotor and postural control factors associated with driving safety in a community-dwelling older driver population.

    PubMed

    Lacherez, Philippe; Wood, Joanne M; Anstey, Kaarin J; Lord, Stephen R

    2014-02-01

    To establish whether sensorimotor function and balance are associated with on-road driving performance in older adults. The performance of 270 community-living adults aged 70-88 years, recruited via the electoral roll, was measured on a battery of peripheral sensation, strength, flexibility, reaction time, and balance tests and on a standardized measure of on-road driving performance. Forty-seven participants (17.4%) were classified as unsafe based on their driving assessment. Unsafe driving was associated with reduced peripheral sensation, lower limb weakness, reduced neck range of motion, slow reaction time, and poor balance in univariate analyses. Multivariate logistic regression analysis identified poor vibration sensitivity, reduced quadriceps strength, and increased sway on a foam surface with eyes closed as significant and independent risk factors for unsafe driving. These variables classified participants into safe and unsafe drivers with a sensitivity of 74% and a specificity of 70%. A number of sensorimotor and balance measures were associated with driver safety, and the multivariate model comprising measures of sensation, strength, and balance was highly predictive of unsafe driving in this sample. These findings highlight important determinants of driver safety and may assist in developing efficacious driver safety strategies for older drivers.
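    A sketch of the multivariate logistic-regression step on simulated data; the three predictors stand in for vibration sensitivity, quadriceps strength, and postural sway, and all coefficients and data are invented:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(3)
    n = 270
    X = rng.normal(size=(n, 3))  # standardized vibration sense, strength, sway
    true_logit = -1.6 + 0.8 * X[:, 0] - 0.7 * X[:, 1] + 0.9 * X[:, 2]
    y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)  # 1 = unsafe

    model = LogisticRegression().fit(X, y)
    pred = model.predict(X)  # in-sample classification at the 0.5 threshold
    print(f"sensitivity {(pred[y == 1] == 1).mean():.2f}, "
          f"specificity {(pred[y == 0] == 0).mean():.2f}")
    ```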

  4. Mapping the structure of perceptual and visual-motor abilities in healthy young adults.

    PubMed

    Wang, Lingling; Krasich, Kristina; Bel-Bahar, Tarik; Hughes, Lauren; Mitroff, Stephen R; Appelbaum, L Gregory

    2015-05-01

    The ability to quickly detect and respond to visual stimuli in the environment is critical to many human activities. While such perceptual and visual-motor skills are important in a myriad of contexts, considerable variability exists between individuals in these abilities. To better understand the sources of this variability, we assessed perceptual and visual-motor skills in a large sample of 230 healthy individuals via the Nike SPARQ Sensory Station, and compared variability in their behavioral performance to demographic, state, sleep and consumption characteristics. Dimension reduction and regression analyses indicated three underlying factors: Visual-Motor Control, Visual Sensitivity, and Eye Quickness, which accounted for roughly half of the overall population variance in performance on this battery. Inter-individual variability in Visual-Motor Control was correlated with gender and circadian patterns, such that performance on this factor was better for males and for those who had been awake for a longer period of time before assessment. The current findings indicate that abilities involving coordinated hand movements in response to stimuli are subject to greater individual variability, while visual sensitivity and oculomotor control are largely stable across individuals. Copyright © 2015 Elsevier B.V. All rights reserved.
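    The dimension-reduction step can be illustrated with a plain PCA on standardized task scores; the simulated data below assume three latent factors driving nine observed tasks, an illustration rather than the Sensory Station battery itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    latent = rng.normal(size=(230, 3))             # three hypothetical factors
    loadings = rng.normal(size=(3, 9))             # nine observed tasks
    scores = latent @ loadings + 0.5 * rng.normal(size=(230, 9))  # add noise

    Z = (scores - scores.mean(0)) / scores.std(0)  # standardize each task
    eigvals = np.linalg.eigvalsh(np.cov(Z.T))[::-1]  # PCA via covariance eigvals
    explained = eigvals / eigvals.sum()
    print("variance explained by first 3 PCs:", explained[:3].round(2),
          "total:", round(float(explained[:3].sum()), 2))
    ```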

  5. Economic Evaluation of Lipid-Lowering Therapy in the Secondary Prevention Setting in the Philippines.

    PubMed

    Tumanan-Mendoza, Bernadette A; Mendoza, Victor L

    2013-05-01

    To determine the cost-effectiveness of lipid-lowering therapy in the secondary prevention of cardiovascular events in the Philippines. A cost-utility analysis was performed by using Markov modeling in the secondary prevention setting. The models incorporated efficacy of lipid-lowering therapy demonstrated in randomized controlled trials and mortality rates obtained from local life tables. Average and incremental cost-effectiveness ratios were obtained for simvastatin, atorvastatin, pravastatin, and gemfibrozil. The costs of the following were included: medications, laboratory examinations, consultation and related expenses, and production losses. The costs were expressed in current or nominal prices as of the first quarter of 2010 (Philippine peso). Utility was expressed in quality-adjusted life-years gained. Sensitivity analyses were performed by using variations in the cost centers, discount rates, starting age, and differences in utility weights for stroke. In the analysis using the lower-priced generic counterparts, therapy using 40 mg simvastatin daily was the most cost-effective option compared with the other therapies, while pravastatin 40 mg daily was the most cost-effective alternative if the higher-priced innovator drugs were used. In all sensitivity analyses, gemfibrozil was strongly dominated by the statins. In secondary prevention, simvastatin or pravastatin were the most cost-effective options compared with atorvastatin and gemfibrozil in the Philippines. Gemfibrozil was strongly dominated by the statins. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
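    A toy version of the Markov modeling described above: a three-state cohort model (Well, Post-event, Dead) accumulating discounted costs and QALYs. All transition probabilities, costs, and utilities below are invented for illustration, not values from the study.

    ```python
    import numpy as np

    P = np.array([[0.90, 0.07, 0.03],   # from Well
                  [0.00, 0.85, 0.15],   # from Post-event
                  [0.00, 0.00, 1.00]])  # Dead is absorbing
    cost = np.array([300.0, 1500.0, 0.0])   # annual cost per state (invented)
    utility = np.array([0.85, 0.60, 0.0])   # QALY weight per state (invented)
    discount = 0.05

    state = np.array([1.0, 0.0, 0.0])  # cohort starts in Well
    total_cost = total_qaly = 0.0
    for year in range(40):
        d = 1 / (1 + discount) ** year  # discount factor for this cycle
        total_cost += d * state @ cost
        total_qaly += d * state @ utility
        state = state @ P               # advance the cohort one cycle

    print(f"discounted cost: {total_cost:.0f}, QALYs: {total_qaly:.2f}")
    ```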

  6. Benign and malignant skull-involved lesions: discriminative value of conventional CT and MRI combined with diffusion-weighted MRI.

    PubMed

    Tu, Zhanhai; Xiao, Zebin; Zheng, Yingyan; Huang, Hongjie; Yang, Libin; Cao, Dairong

    2018-01-01

    Background Little is known about the value of computed tomography (CT) and magnetic resonance imaging (MRI) combined with diffusion-weighted imaging (DWI) in distinguishing malignant from benign skull-involved lesions. Purpose To evaluate the discriminative value of DWI combined with conventional CT and MRI for differentiating between benign and malignant skull-involved lesions. Material and Methods CT and MRI findings of 58 patients with pathologically proven skull-involved lesions (43 benign and 15 malignant) were retrospectively reviewed. Conventional CT and MRI characteristics and apparent diffusion coefficient (ADC) value of the two groups were evaluated and compared. Multivariate logistic regression and receiver operating characteristic (ROC) curve analyses were performed to assess the differential performance of each parameter separately and together. Results The presence of cortical defects or break-through and ill-defined margins were associated with malignant skull-involved lesions (both P < 0.05). Malignant skull-involved lesions demonstrated a significantly lower ADC ( P = 0.016) than benign lesions. ROC curve analyses indicated that a combination of CT, MRI, and DWI with an ADC ≤ 0.703 × 10 -3 mm 2 /s showed optimal sensitivity, while DWI along showed optimal specificity of 88.4% in differentiating between benign and malignant skull-involved lesions. Conclusion The combination of CT, MRI, and DWI can help to differentiate malignant from benign skull-involved lesions. CT + MRI + DWI offers optimal sensitivity, while DWI offers optimal specificity.

  7. Population-based cohort study on comparative effectiveness and safety of biologics in inflammatory bowel disease

    PubMed Central

    Trotta, Francesco; Cascini, Silvia; Agabiti, Nera; Kohn, Anna; Gasbarrini, Antonio; Davoli, Marina; Addis, Antonio

    2018-01-01

    Background The comparison of effectiveness and safety of anti-tumor necrosis factor-alpha agents for the treatment of inflammatory bowel disease (IBD) is relevant for clinical practice and stakeholders. Objective The objective of this study was to compare the risk of abdominal surgery, steroid utilization, and hospitalization for infection in Crohn’s disease (CD) or ulcerative colitis (UC) patients newly treated with infliximab (IFX) or adalimumab (ADA). Methods A retrospective population-based cohort study was performed using health information systems data from Lazio region, Italy. Patients with CD or UC diagnosis were enrolled at first prescription of IFX or ADA during 2008–2014 (index date). Only new drug users were followed for 2 years from the index date. IFX versus ADA adjusted hazard ratios were calculated applying “intention-to-treat” approach, controlling for several characteristics and stratifying the analysis on steroid use according to previous drug utilization. Sensitivity analyses were performed according to “as-treated” approach, adjusting for propensity score, censoring at switching or discontinuation, and evaluating different lengths of follow-up periods. Results We enrolled 1,432 IBD patients (42% and 83% exposed to IFX for CD and UC, respectively). In both diseases, treatment effects did not differ in any outcome considered, and sensitivity analyses confirmed the results from the main analysis. Conclusion In our population-based cohort study, effectiveness and safety data in new users of ADA or IFX with CD or UC were comparable for the outcomes we tested. PMID:29440933

  8. Predicted effect size of lisdexamfetamine treatment of attention deficit/hyperactivity disorder (ADHD) in European adults: Estimates based on indirect analysis using a systematic review and meta-regression analysis.

    PubMed

    Fridman, M; Hodgkins, P S; Kahle, J S; Erder, M H

    2015-06-01

    There are few approved therapies for adults with attention-deficit/hyperactivity disorder (ADHD) in Europe. Lisdexamfetamine (LDX) is an effective treatment for ADHD; however, no clinical trials examining the efficacy of LDX specifically in European adults have been conducted. Therefore, to estimate the efficacy of LDX in European adults, we performed a meta-regression of existing clinical data. A systematic review identified US- and Europe-based randomized efficacy trials of LDX, atomoxetine (ATX), or osmotic-release oral system methylphenidate (OROS-MPH) in children/adolescents and adults. A meta-regression model was then fitted to the published/calculated effect sizes (Cohen's d) using medication, geographical location, and age group as predictors. The LDX effect size in European adults was extrapolated from the fitted model. Sensitivity analyses included restricting the model to adult-only studies and adding studies with placebo designs other than a standard pill-placebo design. Twenty-two of 2832 identified articles met the inclusion criteria. The model-estimated effect size of LDX for European adults was 1.070 (95% confidence interval: 0.738, 1.401), larger than the 0.8 threshold for large effect sizes. The overall model fit was adequate (80%) and stable in the sensitivity analyses. This model predicts that LDX may have a large treatment effect size in European adults with ADHD. Copyright © 2015 Elsevier Masson SAS. All rights reserved.
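    The extrapolation logic can be sketched as a weighted least-squares meta-regression of effect size on indicator covariates, predicting the unobserved LDX-in-European-adults cell; all trial-level numbers below are invented.

    ```python
    import numpy as np

    d = np.array([0.9, 0.7, 0.6, 1.0, 0.5, 0.8])         # Cohen's d per trial
    var = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03]) # effect-size variances
    is_ldx = np.array([1, 0, 0, 1, 0, 1])                # 1 = LDX trial
    is_europe = np.array([0, 1, 0, 0, 1, 1])             # 1 = Europe-based
    is_adult = np.array([0, 0, 1, 1, 1, 0])              # 1 = adult sample

    # Inverse-variance weighted least squares
    X = np.column_stack([np.ones(6), is_ldx, is_europe, is_adult])
    W = np.diag(1 / var)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ d)

    # Predicted effect size for an LDX trial in European adults
    x_new = np.array([1, 1, 1, 1])
    print("predicted d:", round(float(x_new @ beta), 3))
    ```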

  9. Correlating Mineralogy and Amino Acid Contents of Milligram-Scale Murchison Carbonaceous Chondrite Samples

    NASA Technical Reports Server (NTRS)

    Burton, Aaron S.; Berger, Eve L.; Locke, Darren R.; Elsila, Jamie E.; Glavin, Daniel P.; Dworkin, Jason P.

    2015-01-01

    Amino acids, the building blocks of proteins, have been found to be indigenous in most of the carbonaceous chondrite groups. The abundances of amino acids, as well as their structural, enantiomeric and isotopic compositions, differ significantly among meteorites of different groups and petrologic types. This suggests that there is a link between parent-body conditions, mineralogy and the synthesis and preservation of amino acids (and likely other organic molecules). However, elucidating specific causes for the observed differences in amino acid composition has proven extremely challenging because samples analyzed for amino acids are typically much larger (approximately 100-mg powders) than the scale at which meteorite heterogeneity is observed (sub-mm-scale differences; approximately 1-mg or smaller samples). Thus, the effects of differences in mineralogy on amino acid abundances could not be easily discerned. Recent advances in the sensitivity of instrumentation have made possible the analysis of smaller samples for amino acids, enabling a new approach to investigate the link between mineralogical context and amino acid compositions/abundances in meteorites. Through coordinated mineral separation, mineral characterization and highly sensitive amino acid analyses, we have performed preliminary investigations into the relationship between meteorite mineralogy and amino acid composition. By linking amino acid data to mineralogy, we have started to identify amino acid-bearing mineral phases in different carbonaceous meteorites. The methodology and results of analyses performed on the Murchison meteorite are presented here.

  10. A cost-effectiveness analysis of propofol versus midazolam for procedural sedation in the emergency department.

    PubMed

    Hohl, Corinne Michèle; Nosyk, Bohdan; Sadatsafavi, Mohsen; Anis, Aslam Hayat

    2008-01-01

    To determine the incremental cost-effectiveness of using propofol versus midazolam for procedural sedation (PS) in adults in the emergency department (ED). The authors conducted a cost-effectiveness analysis from the perspective of the health care provider. The primary outcome was the incremental cost (or savings) to achieve one additional successful sedation with propofol compared to midazolam. A decision model was developed in which the clinical effectiveness and cost of a PS strategy using either agent was estimated. The authors derived estimates of clinical effectiveness and risk of adverse events (AEs) from a systematic review. The cost of each clinical outcome was determined by incorporating the baseline cost of the ED visit, the cost of the drug, the cost of labor of physicians and nurses, the cost and probability of an AE, and the cost and probability of a PS failure. A standard meta-analytic technique was used to calculate the weighted mean difference in recovery times and obtain mean drug doses from patient-level data from a randomized controlled trial. Probabilistic sensitivity analyses were conducted to examine the uncertainty around the estimated incremental cost-effectiveness ratio using Monte Carlo simulation. Choosing a sedation strategy with propofol resulted in average savings of $17.33 (95% confidence interval [CI] = $10.44 to $24.13) per sedation performed. This resulted in an incremental cost-effectiveness ratio of -$597.03 (95% credibility interval -$6,434.03 to $6,113.57) indicating savings of $597.03 per additional successful sedation performed with propofol. This result was driven by shorter recovery times and was robust to all sensitivity analyses performed. These results indicate that using propofol for PS in the ED is a cost-saving strategy.
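    A minimal sketch of the Monte Carlo probabilistic sensitivity analysis: draw costs and success probabilities from assumed distributions and summarize the resulting incremental cost and ICER distributions. Every distribution parameter here is an assumption for illustration, not a value from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    cost_prop = rng.gamma(shape=100, scale=1.5, size=n)  # propofol cost/sedation
    cost_mida = rng.gamma(shape=100, scale=1.7, size=n)  # midazolam cost/sedation
    p_prop = rng.beta(95, 5, size=n)     # P(successful sedation), propofol
    p_mida = rng.beta(90, 10, size=n)    # P(successful sedation), midazolam

    delta_cost = cost_prop - cost_mida
    delta_eff = p_prop - p_mida
    icer = delta_cost / delta_eff        # cost per additional success

    print("mean incremental cost:", round(delta_cost.mean(), 2))
    print("median ICER:", round(float(np.median(icer)), 2))
    print("P(propofol saves money):", (delta_cost < 0).mean())
    ```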

  11. Developmental fluoride neurotoxicity: a systematic review and meta-analysis.

    PubMed

    Choi, Anna L; Sun, Guifan; Zhang, Ying; Grandjean, Philippe

    2012-10-01

    Although fluoride may cause neurotoxicity in animal models and acute fluoride poisoning causes neurotoxicity in adults, very little is known of its effects on children's neurodevelopment. We performed a systematic review and meta-analysis of published studies to investigate the effects of increased fluoride exposure and delayed neurobehavioral development. We searched the MEDLINE, EMBASE, Water Resources Abstracts, and TOXNET databases through 2011 for eligible studies. We also searched the China National Knowledge Infrastructure (CNKI) database, because many studies on fluoride neurotoxicity have been published in Chinese journals only. In total, we identified 27 eligible epidemiological studies with high and reference exposures, end points of IQ scores, or related cognitive function measures with means and variances for the two exposure groups. Using random-effects models, we estimated the standardized mean difference between exposed and reference groups across all studies. We conducted sensitivity analyses restricted to studies using the same outcome assessment and having drinking-water fluoride as the only exposure. We performed the Cochran test for heterogeneity between studies, used Begg's funnel plot and the Egger test to assess publication bias, and conducted meta-regressions to explore sources of variation in mean differences among the studies. The standardized weighted mean difference in IQ score between exposed and reference populations was -0.45 (95% confidence interval: -0.56, -0.35) using a random-effects model. Thus, children in high-fluoride areas had significantly lower IQ scores than those who lived in low-fluoride areas. Subgroup and sensitivity analyses also indicated inverse associations, although the substantial heterogeneity did not appear to decrease. The results support the possibility of an adverse effect of high fluoride exposure on children's neurodevelopment. Future research should include detailed individual-level information on prenatal exposure, neurobehavioral performance, and covariates for adjustment.
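    The random-effects pooling described above is commonly done with the DerSimonian-Laird estimator; a sketch with invented per-study standardized mean differences and variances:

    ```python
    import numpy as np

    smd = np.array([-0.50, -0.30, -0.60, -0.45, -0.35])  # per-study SMDs (invented)
    var = np.array([0.02, 0.03, 0.05, 0.02, 0.04])       # their variances (invented)

    w = 1 / var                                 # fixed-effect weights
    mu_fe = np.sum(w * smd) / np.sum(w)
    Q = np.sum(w * (smd - mu_fe) ** 2)          # Cochran's Q
    df = len(smd) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)               # between-study variance

    w_re = 1 / (var + tau2)                     # random-effects weights
    mu_re = np.sum(w_re * smd) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    print(f"pooled SMD = {mu_re:.2f} "
          f"(95% CI {mu_re - 1.96 * se:.2f}, {mu_re + 1.96 * se:.2f})")
    ```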

  12. Estimating the cost of unclaimed electronic prescriptions at an independent pharmacy.

    PubMed

    Doucette, William R; Connolly, Connie; Al-Jumaili, Ali Azeez

    2016-01-01

    The increasing rate of e-prescribing is associated with a significant number of unclaimed prescriptions. The costs of unclaimed e-prescriptions could create an unwanted burden on community pharmacy practices. The objective of this study was to calculate the rate and costs of filled but unclaimed e-prescriptions at an independent pharmacy. This study was performed at a rural independent pharmacy in a Midwestern state. The rate and costs of the unclaimed e-prescriptions were determined by collecting information about all unclaimed e-prescriptions for a 6-month period from August 2013 to January 2014. The costs of unclaimed prescriptions included those expenses incurred to prepare the prescription, contact the patient, and return the unclaimed prescription to inventory. Two sensitivity analyses were conducted. The total cost of 147 unclaimed e-prescriptions equaled $3,677.70 for the study period. Thus, the monthly cost of unclaimed e-prescriptions was $612.92 and the average cost of each unclaimed prescription was $25.02. The sensitivity analyses showed that using a technician to perform prescription return tasks reduced average costs to $19.33 and that using a state Medicaid cost of dispensing resulted in average costs of $18.54 per prescription. The rate of unclaimed e-prescriptions was 0.82%. The percentage of unclaimed e-prescriptions in this pharmacy was less than 1%. In addition to increased cost, unclaimed e-prescriptions add inefficiency to the work flow of the pharmacy staff, which can limit the time that they are available for performing revenue-generating activities. Adjustments to work flow and insurer policies could help to reduce the burden of unclaimed e-prescriptions. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  13. Variation in learning curves and competence for ERCP among advanced endoscopy trainees by using cumulative sum analysis.

    PubMed

    Wani, Sachin; Hall, Matthew; Wang, Andrew Y; DiMaio, Christopher J; Muthusamy, V Raman; Keswani, Rajesh N; Brauer, Brian C; Easler, Jeffrey J; Yen, Roy D; El Hajj, Ihab; Fukami, Norio; Ghassemi, Kourosh F; Gonzalez, Susana; Hosford, Lindsay; Hollander, Thomas G; Wilson, Robert; Kushnir, Vladimir M; Ahmad, Jawad; Murad, Faris; Prabhu, Anoop; Watson, Rabindra R; Strand, Daniel S; Amateau, Stuart K; Attwell, Augustin; Shah, Raj J; Early, Dayna; Edmundowicz, Steven A; Mullady, Daniel

    2016-04-01

    There are limited data on learning curves and competence in ERCP. By using a standardized data collection tool, we aimed to prospectively define learning curves and measure competence among advanced endoscopy trainees (AETs) by using cumulative sum (CUSUM) analysis. AETs were evaluated by attending endoscopists starting with the 26th hands-on ERCP examination and then every ERCP examination during the 12-month training period. A standardized ERCP competency assessment tool (using a 4-point scoring system) was used to grade the examination. CUSUM analysis was applied to produce learning curves for individual technical and cognitive components of ERCP performance (success defined as a score of 1; acceptable and unacceptable failure rates [p0 and p1] of 10% and 20%, respectively). Sensitivity analyses varying p1 and using a less-stringent definition of success were performed. Five AETs were included with a total of 1049 graded ERCPs (mean ± SD, 209.8 ± 91.6/AET). The majority of cases were performed for a biliary indication (80%). The overall and native papilla allowed cannulation times were 3.1 ± 3.6 and 5.7 ± 4.0, respectively. Overall learning curves demonstrated substantial variability for individual technical and cognitive endpoints. Although nearly all AETs achieved competence in overall cannulation, none achieved competence for cannulation in cases with a native papilla. Sensitivity analyses increased the proportion of AETs who achieved competence. This study demonstrates that there is substantial variability in ERCP learning curves among AETs. A specific case volume does not ensure competence, especially for native papilla cannulation. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
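    A sketch of a resetting binomial CUSUM for procedural success, using the acceptable and unacceptable failure rates quoted above (p0 = 0.10, p1 = 0.20); the attempt sequence is simulated and the decision limit h is an assumed value, not the study's.

    ```python
    import math
    import random

    p0, p1 = 0.10, 0.20
    P = math.log(p1 / p0)
    Q = math.log((1 - p0) / (1 - p1))
    s = Q / (P + Q)    # per-attempt decrement on success
    h = 2.0            # assumed decision limit (study-specific in practice)

    random.seed(1)
    cusum, curve = 0.0, []
    for attempt in range(200):
        failed = random.random() < 0.12  # simulated true failure rate
        # failure adds (1 - s), success subtracts s; reset at zero
        cusum = max(0.0, cusum + ((1 - s) if failed else -s))
        curve.append(cusum)

    print("first attempts where CUSUM crossed h:",
          [i for i, c in enumerate(curve) if c > h][:5])
    ```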

  14. Conceptual Launch Vehicle and Spacecraft Design for Risk Assessment

    NASA Technical Reports Server (NTRS)

    Motiwala, Samira A.; Mathias, Donovan L.; Mattenberger, Christopher J.

    2014-01-01

    One of the most challenging aspects of developing human space launch and exploration systems is minimizing and mitigating the many potential risk factors to ensure the safest possible design while also meeting the required cost, weight, and performance criteria. In order to accomplish this, effective risk analyses and trade studies are needed to identify key risk drivers, dependencies, and sensitivities as the design evolves. The Engineering Risk Assessment (ERA) team at NASA Ames Research Center (ARC) develops advanced risk analysis approaches, models, and tools to provide such meaningful risk and reliability data throughout vehicle development. The goal of the project presented in this memorandum is to design a generic launch vehicle and spacecraft architecture that can be used to develop and demonstrate these new risk analysis techniques without relying on other proprietary or sensitive vehicle designs. To accomplish this, initial spacecraft and launch vehicle (LV) designs were established using historical sizing relationships for a mission delivering four crewmembers and equipment to the International Space Station (ISS). Mass-estimating relationships (MERs) were used to size the crew capsule and launch vehicle, and a combination of optimization techniques and iterative design processes were employed to determine a possible two-stage-to-orbit (TSTO) launch trajectory into a 350-kilometer orbit. Primary subsystems were also designed for the crewed capsule architecture, based on a 24-hour on-orbit mission with a 7-day contingency. Safety analysis was also performed to identify major risks to crew survivability and assess the system's overall reliability. These procedures and analyses validate that the architecture's basic design and performance are reasonable to be used for risk trade studies. While the vehicle designs presented are not intended to represent a viable architecture, they will provide a valuable initial platform for developing and demonstrating innovative risk assessment capabilities.

  15. Validation of the Mass-Extraction-Window for Quantitative Methods Using Liquid Chromatography High Resolution Mass Spectrometry.

    PubMed

    Glauser, Gaétan; Grund, Baptiste; Gassner, Anne-Laure; Menin, Laure; Henry, Hugues; Bromirski, Maciej; Schütz, Frédéric; McMullen, Justin; Rochat, Bertrand

    2016-03-15

    A paradigm shift is underway in the field of quantitative liquid chromatography-mass spectrometry (LC-MS) analysis thanks to the arrival of recent high-resolution mass spectrometers (HRMS). The capability of HRMS to perform sensitive and reliable quantifications of a large variety of analytes in HR-full scan mode is showing that it is now realistic to perform quantitative and qualitative analysis with the same instrument. Moreover, HR-full scan acquisition offers a global view of sample extracts and allows retrospective investigations as virtually all ionized compounds are detected with a high sensitivity. In time, the versatility of HRMS together with the increasing need for relative quantification of hundreds of endogenous metabolites should promote a shift from triple-quadrupole MS to HRMS. However, a current "pitfall" in quantitative LC-HRMS analysis is the lack of HRMS-specific guidance for validated quantitative analyses. Indeed, false positive and false negative HRMS detections are rare, albeit possible, if inadequate parameters are used. Here, we investigated two key parameters for the validation of LC-HRMS quantitative analyses: the mass accuracy (MA) and the mass-extraction-window (MEW) that is used to construct the extracted-ion-chromatograms. We propose MA-parameters, graphs, and equations to calculate rational MEW width for the validation of quantitative LC-HRMS methods. MA measurements were performed on four different LC-HRMS platforms. Experimentally determined MEW values ranged between 5.6 and 16.5 ppm and depended on the HRMS platform, its working environment, the calibration procedure, and the analyte considered. The proposed procedure provides a fit-for-purpose MEW determination and prevents false detections.
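    Constructing an extracted-ion chromatogram from a MEW expressed in ppm reduces to a symmetric m/z window; a minimal sketch (the target m/z is hypothetical):

    ```python
    def mew_bounds(mz: float, ppm: float) -> tuple[float, float]:
        """Return (low, high) m/z bounds for a symmetric +/- ppm window."""
        half_width = mz * ppm / 1e6
        return mz - half_width, mz + half_width

    # Hypothetical analyte m/z with a 10 ppm window (the paper reports
    # experimentally determined MEWs of 5.6-16.5 ppm across platforms)
    lo, hi = mew_bounds(445.1200, 10.0)
    print(f"extract m/z {lo:.4f} - {hi:.4f}")
    ```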

  16. Sensitivity analysis of the near-road dispersion model RLINE - An evaluation at Detroit, Michigan

    NASA Astrophysics Data System (ADS)

    Milando, Chad W.; Batterman, Stuart A.

    2018-05-01

    The development of accurate and appropriate exposure metrics for health effect studies of traffic-related air pollutants (TRAPs) remains challenging and important given that traffic has become the dominant urban exposure source and that exposure estimates can affect estimates of associated health risk. Exposure estimates obtained using dispersion models can overcome many of the limitations of monitoring data, and such estimates have been used in several recent health studies. This study examines the sensitivity of exposure estimates produced by dispersion models to meteorological, emission and traffic allocation inputs, focusing on applications to health studies examining near-road exposures to TRAP. Daily average concentrations of CO and NOx predicted using the Research Line source model (RLINE) and a spatially and temporally resolved mobile source emissions inventory are compared to ambient measurements at near-road monitoring sites in Detroit, MI, and are used to assess the potential for exposure measurement error in cohort and population-based studies. Sensitivity of exposure estimates is assessed by comparing nominal and alternative model inputs using statistical performance evaluation metrics and three sets of receptors. The analysis shows considerable sensitivity to meteorological inputs; generally the best performance was obtained using data specific to each monitoring site. An updated emission factor database provided some improvement, particularly at near-road sites, while the use of site-specific diurnal traffic allocations did not improve performance compared to simpler default profiles. Overall, this study highlights the need for appropriate inputs, especially meteorological inputs, to dispersion models aimed at estimating near-road concentrations of TRAPs. It also highlights the potential for systematic biases that might affect analyses that use concentration predictions as exposure measures in health studies.
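    Statistical performance evaluations of dispersion models typically report metrics such as fractional bias (FB), normalized mean-square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2); a sketch with invented paired values:

    ```python
    import numpy as np

    obs = np.array([12.0, 8.5, 15.2, 9.9, 11.3])   # measured daily averages (illustrative)
    mod = np.array([10.1, 9.2, 13.8, 12.5, 10.0])  # model predictions (illustrative)

    fb = 2 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())
    nmse = np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())
    fac2 = np.mean((mod / obs >= 0.5) & (mod / obs <= 2.0))

    print(f"FB = {fb:.3f}, NMSE = {nmse:.3f}, FAC2 = {fac2:.2f}")
    ```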

  17. Refractive index-based detection of gradient elution liquid chromatography using chip-integrated microring resonator arrays.

    PubMed

    Wade, James H; Bailey, Ryan C

    2014-01-07

    Refractive index-based sensors offer attractive characteristics as nondestructive and universal detectors for liquid chromatographic separations, but a small dynamic range and sensitivity to minor thermal perturbations limit the utility of commercial RI detectors for many potential applications, especially those requiring the use of gradient elutions. As such, RI detectors find use almost exclusively in sample-abundant, isocratic separations when interfaced with high-performance liquid chromatography. Silicon photonic microring resonators are refractive index-sensitive optical devices that feature good sensitivity and tremendous dynamic range. The large dynamic range of microring resonators allows the sensors to function across a wide spectrum of refractive indices, such as that encountered when moving from an aqueous to an organic mobile phase during a gradient elution, a key analytical advantage not supported in commercial RI detectors. Microrings are easily configured into sensor arrays, and chip-integrated control microrings enable real-time corrections of thermal drift. Thermal controls allow for analyses at any temperature and, in the absence of rigorous temperature control, obviate extended detector equilibration wait times. Herein, proof-of-concept isocratic and gradient elution separations were performed using well-characterized model analytes (e.g., caffeine, ibuprofen) in both neat buffer and more complex sample matrices. These experiments demonstrate the ability of microring arrays to perform isocratic and gradient elutions under ambient conditions, avoiding two major limitations of commercial RI-based detectors and maintaining comparable bulk RI sensitivity. Further benefit may be realized in the future through selective surface functionalization to impart degrees of postcolumn (bio)molecular specificity at the detection phase of a separation. The chip-based, microscale nature of microring resonators also makes them an attractive potential detection technology that could be integrated within lab-on-a-chip and microfluidic separation devices.

  18. The performance characteristics of prostate-specific antigen and prostate-specific antigen density in Chinese men.

    PubMed

    Teoh, Jeremy Yc; Yuen, Steffi Kk; Tsu, James Hl; Wong, Charles Kw; Ho, Brian Sh; Ng, Ada Tl; Ma, Wai-Kit; Ho, Kwan-Lun; Yiu, Ming-Kwong

    2017-01-01

    We investigated the performance characteristics of prostate-specific antigen (PSA) and PSA density (PSAD) in Chinese men. All Chinese men who underwent transrectal ultrasound-guided prostate biopsy (TRUS-PB) from year 2000 to 2013 were included. The receiver operating characteristic (ROC) curves for both PSA and PSAD were analyzed. The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) at different cut-off levels were calculated. A total of 2606 Chinese men were included. For the ROC, the area under the curve was 0.770 for PSA (P < 0.001) and 0.823 for PSAD (P < 0.001). PSA of 4.5 ng ml^-1 had a sensitivity of 94.4%, specificity of 14.1%, PPV of 29.5%, and NPV of 86.9%; PSAD of 0.12 ng ml^-1 cc^-1 had a sensitivity of 94.5%, specificity of 26.6%, PPV of 32.8%, and NPV of 92.7%. On multivariate logistic regression analyses, a PSA cut-off of 4.5 ng ml^-1 (OR 1.61, 95% CI 1.05-2.45, P = 0.029) and a PSAD cut-off of 0.12 ng ml^-1 cc^-1 (OR 6.22, 95% CI 4.20-9.22, P < 0.001) were significant predictors of prostate cancer detection on TRUS-PB. In conclusion, the performances of PSA and PSAD at different cut-off levels in Chinese men were very different from those in Caucasians. PSA of 4.5 ng ml^-1 and PSAD of 0.12 ng ml^-1 cc^-1 had nearly 95% sensitivity and were significant predictors of prostate cancer detection in Chinese men.
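    The sensitivity/specificity/PPV/NPV arithmetic at a given cut-off comes straight from the 2x2 table; the counts below are back-calculated to be consistent with the percentages quoted above (N = 2606), not the study's actual table.

    ```python
    def diagnostics(tp, fp, fn, tn):
        """Standard 2x2 diagnostic accuracy measures."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Illustrative counts at a PSA cut-off of 4.5 ng/ml (back-calculated)
    for name, value in diagnostics(tp=680, fp=1620, fn=40, tn=266).items():
        print(f"{name}: {value:.3f}")
    ```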

  19. Competing risk models in reliability systems, a weibull distribution model with bayesian analysis approach

    NASA Astrophysics Data System (ADS)

    Iskandar, Ismed; Satria Gondokaryono, Yudi

    2016-02-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that the systems are described and explained as simply functioning or failed. In many real situations, the failures may be from many causes depending upon the age and the environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems, the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we have investigated the application of Bayesian estimation to competing risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using both Bayesian and maximum-likelihood analyses. The simulation results show that changing the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimates outperform the maximum-likelihood estimates. The sensitivity analyses show some sensitivity to shifts in the prior location, and also show that the Bayesian analysis is robust within the range between the true value and the maximum-likelihood estimate.
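    A minimal sketch of the comparison described above: maximum-likelihood versus a simple grid-approximation Bayesian estimate of a Weibull shape parameter from simulated failure times, with the scale fixed at its true value to keep the posterior one-dimensional. The prior and all settings are assumptions for illustration.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    true_shape, scale = 1.8, 100.0
    data = scale * rng.weibull(true_shape, size=30)  # simulated failure times

    # MLE via scipy, with location and scale pinned so only shape is fitted
    shape_mle, _, _ = stats.weibull_min.fit(data, floc=0, fscale=scale)

    # Grid-approximation posterior with a mildly informative Gamma prior
    grid = np.linspace(0.5, 4.0, 500)
    log_post = np.array([
        stats.weibull_min.logpdf(data, c, scale=scale).sum()
        + stats.gamma.logpdf(c, a=2.0, scale=1.0)
        for c in grid
    ])
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    shape_bayes = np.sum(grid * post)  # posterior mean

    print(f"true shape {true_shape}, MLE {shape_mle:.2f}, "
          f"Bayes mean {shape_bayes:.2f}")
    ```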

  20. Experimental study on structural, optoelectronic and room temperature sensing performance of Nickel doped ZnO based ethanol sensors

    NASA Astrophysics Data System (ADS)

    Sudha, M.; Radha, S.; Kirubaveni, S.; Kiruthika, R.; Govindaraj, R.; Santhosh, N.

    2018-04-01

    Nanocrystalline undoped ZnO (1Z) and 5, 10 and 15 wt.% nickel-doped ZnO (1ZN, 2ZN and 3ZN) sensors were fabricated using a hydrothermal approach on fluorine-doped tin oxide (FTO) glass substrates. X-ray diffraction (XRD) analysis confirmed the hexagonal wurtzite structure of ZnO. Parametric variations in dislocation density, bond length, lattice parameters and microstrain with respect to dopant concentration were analysed. Prominent variations in the crystallite size, optical band gap and photoluminescence peak ratio of the fabricated devices were observed. Field emission scanning electron microscope (FESEM) images showed a change in the diameter and density of the nanorods. The effects of operating temperature, ethanol concentration and doping level on sensitivity, response time and recovery time were investigated. The 3ZN sensor showed a sensitivity of 376% with quick response and recovery times of <5 s and 10 s, respectively, at 150 °C, outperforming the other three sensors. The 3ZN sensor also showed a sensitivity of 114% even at room temperature, with response and recovery times of 35 s and 45 s, respectively.
