Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
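The Monte Carlo/bootstrap procedure described above can be sketched with standard tools. The patient-level cost and effect data, the willingness-to-pay threshold, and the use of net monetary benefit as the decision rule are all illustrative assumptions here, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical patient-level (cost, effect) samples for two strategies.
cost_a, eff_a = rng.normal(1000, 200, 50), rng.normal(0.70, 0.05, 50)
cost_b, eff_b = rng.normal(1400, 250, 50), rng.normal(0.78, 0.05, 50)

def bootstrap_psa(n_boot=2000, wtp=10000.0):
    """Resample patients with replacement and count how often strategy B
    is cost-effective at willingness-to-pay `wtp` per unit of effect."""
    wins = 0
    for _ in range(n_boot):
        ia = rng.integers(0, len(cost_a), len(cost_a))
        ib = rng.integers(0, len(cost_b), len(cost_b))
        d_cost = cost_b[ib].mean() - cost_a[ia].mean()
        d_eff = eff_b[ib].mean() - eff_a[ia].mean()
        # Net monetary benefit avoids dividing by a near-zero effect difference.
        if wtp * d_eff - d_cost > 0:
            wins += 1
    return wins / n_boot

print(bootstrap_psa())
```

Because the bootstrap resamples the observed data directly, no theoretical distribution has to be assumed for costs or effects, which is the difficulty the authors note with parametric approaches.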
DOT National Transportation Integrated Search
2012-01-01
OVERVIEW OF PRESENTATION : Evaluation Parameters : EPA's Sensitivity Analysis : Comparison to Baseline Case : MOVES Sensitivity Run Specification : MOVES Sensitivity Input Parameters : Results : Uses of Study
NASA Astrophysics Data System (ADS)
Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.
2018-04-01
Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, various environmental resource information for a tourist region can be obtained from remote sensing imagery; combined with GIS spatial analysis and landscape pattern analysis, this information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, the factors human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope, and normalized water body index were used to construct an eco-sensitivity evaluation index based on AHP and overlay analysis. According to the value of this index, and using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive, and insensitive areas. The results of the eco-sensitivity analysis show that the very sensitive area covered 4577.4378 km2 (about 33.12 %), the sensitive area 5130.0522 km2 (about 37.12 %), the sub-sensitive area 3729.9312 km2 (26.99 %), and the insensitive area 382.4399 km2 (about 2.77 %). At the same time, spatial differences in ecological sensitivity were found across the Chaohu Lake basin: the most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed in gently sloping platform areas; and the sensitive and sub-sensitive areas were mainly agricultural land and woodland.
Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and sustainable development of the Chaohu Lake tourist area.
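The core workflow above — a weighted overlay of normalized factor rasters followed by equal-interval reclassification into four grades — can be sketched as follows. The factor rasters and AHP weights here are invented for illustration and are not the study's values:

```python
import numpy as np

# Hypothetical normalized factor rasters (0..1), e.g. slope, human
# disturbance, vegetation coverage, on a small 4x4 grid.
rng = np.random.default_rng(1)
factors = rng.random((3, 4, 4))
weights = np.array([0.5, 0.3, 0.2])   # assumed AHP weights (sum to 1)

# Weighted overlay: eco-sensitivity index per cell.
index = np.tensordot(weights, factors, axes=1)

# Equal-interval reclassification into four grades
# (1 = insensitive ... 4 = very sensitive).
edges = np.linspace(index.min(), index.max(), 5)
grades = np.clip(np.digitize(index, edges[1:-1]) + 1, 1, 4)
print(grades)
```

In a real GIS workflow the same reclassification would be applied to full-resolution rasters, but the overlay arithmetic is identical.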
Analysis and design of optical systems by use of sensitivity analysis of skew ray tracing
NASA Astrophysics Data System (ADS)
Lin, Psang Dain; Lu, Chia-Hung
2004-02-01
Optical systems are conventionally evaluated by ray-tracing techniques that extract performance quantities such as aberration and spot size. Current optical analysis software does not provide satisfactory analytical evaluation functions for the sensitivity of an optical system. Furthermore, when functions oscillate strongly, the results are of low accuracy. Thus this work extends our earlier research on an advanced treatment of reflected or refracted rays, referred to as sensitivity analysis, in which differential changes of reflected or refracted rays are expressed in terms of differential changes of incident rays. The proposed sensitivity analysis methodology for skew ray tracing of reflected or refracted rays that cross spherical or flat boundaries is demonstrated and validated by the application of a cat's eye retroreflector to the design and by the image orientation of a system with noncoplanar optical axes. The proposed sensitivity analysis is projected as the nucleus of other geometrical optical computations.
Netlist Oriented Sensitivity Evaluation (NOSE)
2017-03-01
The goal of the Netlist-Oriented Sensitivity Evaluation (NOSE) project was to develop methodologies to assess the sensitivities of alternative chip-design netlist implementations. The research is somewhat foundational in that such ... analysis to devise a methodology for scoring the sensitivity of circuit nodes in a netlist, thus providing the raw data for any meaningful ...
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how well the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient for evaluating the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
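A first-order Sobol' index of the kind compared above can be estimated with a simple pick-freeze (Saltelli-style) scheme. This sketch uses the standard Ishigami test function rather than SAC-SMA, purely to illustrate the estimator:

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function; analytic first-order indices are
    S1 ~ 0.314, S2 ~ 0.442, S3 = 0."""
    return (np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2
            + b * x[:, 2] ** 4 * np.sin(x[:, 0]))

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))   # total output variance

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # swap in column i from B
    fABi = ishigami(ABi)
    # Pick-freeze estimator for the first-order index S_i
    S.append(np.mean(fB * (fABi - fA)) / var)
print([round(s, 2) for s in S])
```

Variance-based methods like this require many model runs per input, which is why the study's comparison of method reliability across simulation lengths matters in practice.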
Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.
2009-01-01
We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective-function nonlinear regression provides quantitative evaluation of the sensitivity of the 35 model parameters to the data, identification of the data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest the calibrated model has predictive ability typical of hydrologic models.
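The composite scaled sensitivities mentioned above can be computed cheaply from a finite-difference Jacobian: css_j = sqrt(mean_i (dy_i/db_j · b_j · sqrt(w_i))²). The two-parameter exponential "model" below is a toy stand-in, not TOPKAPI:

```python
import numpy as np

def model(b):
    """Toy model mapping 2 parameters to 4 simulated observations."""
    t = np.array([1.0, 2.0, 3.0, 4.0])
    return b[0] * np.exp(-b[1] * t)

b = np.array([10.0, 0.3])   # current parameter values
w = np.ones(4)              # observation weights (1 / error variance)

# Central finite-difference Jacobian dy_i/db_j
eps = 1e-6
J = np.empty((4, 2))
for j in range(2):
    db = np.zeros(2)
    db[j] = eps
    J[:, j] = (model(b + db) - model(b - db)) / (2 * eps)

# Composite scaled sensitivity per parameter:
# css_j = sqrt(mean_i (J_ij * b_j * sqrt(w_i))^2)
css = np.sqrt(np.mean((J * b * np.sqrt(w)[:, None]) ** 2, axis=0))
print(css)
```

Each parameter costs only two model runs here, which is why this local approach needs so few runs (71 for 35 parameters in the study) compared with global methods.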
Estimating Sobol Sensitivity Indices Using Correlations
Sensitivity analysis is a crucial tool in the development and evaluation of complex mathematical models. Sobol's method is a variance-based global sensitivity analysis technique that has been applied to computational models to assess the relative importance of input parameters on...
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
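The screening step recommended above — a cheap sampling-based ranking before any variance-based analysis — can be done with Spearman rank correlations from a single Monte Carlo sample. The inputs and output function below are invented for illustration:

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks
    (no tie handling needed for continuous random draws)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
n = 5000
x1, x2, x3 = rng.random(n), rng.random(n), rng.random(n)
y = 5 * x1 + np.exp(x2) + 0.1 * x3   # x3 contributes almost nothing

scores = {name: abs(spearman(x, y))
          for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]}
# Inputs ranked from most to least influential:
print(sorted(scores, key=scores.get, reverse=True))
```

Because the rank transform handles monotone non-linearity, this screen flags unimportant inputs reliably even when the model is non-linear, matching the paper's suggested two-step workflow.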
ERD's Supercomputer for Model Uncertainty and Sensitivity Evaluation (SuperMUSE) is a key to enhancing quality assurance in environmental models and applications. Uncertainty analysis and sensitivity analysis remain critical, though often overlooked, steps in the development and e...
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provide unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of the IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Antón, Alfonso; Pazos, Marta; Martín, Belén; Navero, José Manuel; Ayala, Miriam Eleonora; Castany, Marta; Martínez, Patricia; Bardavío, Javier
2013-01-01
To assess sensitivity, specificity, and agreement among automated event analysis, automated trend analysis, and expert evaluation in detecting glaucoma progression. This prospective study included 37 eyes with a follow-up of 36 months. All had glaucomatous discs and fields and completed reliable visual field tests every 6 months. Each series of fields was assessed with 3 different methods: subjective assessment by 2 independent teams of glaucoma experts, Guided Progression Analysis (GPA) event analysis, and GPA (visual field index-based) trend analysis. The kappa agreement coefficient between methods, and the sensitivity and specificity of each method using expert opinion as the gold standard, were calculated. The incidence of glaucoma progression was 16% to 18% in 3 years, but only 3 cases showed progression with all 3 methods. The kappa agreement coefficient was high (k=0.82) between subjective expert assessment and GPA event analysis, and only moderate between these two and GPA trend analysis (k=0.57). Sensitivity and specificity for GPA event and GPA trend analysis were 71% and 96%, and 57% and 93%, respectively. The 3 methods detected similar numbers of progressing cases. GPA event analysis and expert subjective assessment showed high agreement with each other and moderate agreement with GPA trend analysis. Over a period of 3 years, both methods of GPA analysis offered high specificity, event analysis showed 83% sensitivity, and trend analysis had a 66% sensitivity.
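The kappa agreement coefficient used above corrects raw agreement for chance. A minimal sketch for two binary raters follows; the progression calls are invented data, not the study's:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (1 = progression, 0 = stable)."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                                  # observed agreement
    pe = (np.mean(a) * np.mean(b)                         # chance 'yes'
          + (1 - np.mean(a)) * (1 - np.mean(b)))          # chance 'no'
    return (po - pe) / (1 - pe)

# Hypothetical progression calls for 37 eyes by two methods,
# disagreeing on a single borderline eye.
method1 = np.array([1] * 6 + [0] * 31)
method2 = np.array([1] * 5 + [0] * 32)
print(round(cohens_kappa(method1, method2), 2))  # → 0.89
```

Note how a single disagreement out of 37 still yields kappa well below 1: with progression rare, much of the raw agreement is expected by chance, which is exactly what kappa discounts.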
Hoffmann, Max J.; Engelmann, Felix; Matera, Sebastian
2017-01-31
Lattice kinetic Monte Carlo simulations have become a vital tool for predictive-quality atomistic understanding of complex surface chemical reaction kinetics over a wide range of reaction conditions. In order to expand their practical value in terms of giving guidelines for the atomic-level design of catalytic systems, it is very desirable to readily evaluate a sensitivity analysis for a given model. The result of such a sensitivity analysis quantitatively expresses the dependency of the turnover frequency, the main output variable, on the rate constants entering the model. In the past, the application of sensitivity analysis, such as the degree of rate control, has been hampered by the exuberant computational effort required to accurately sample numerical derivatives of a property obtained from a stochastic simulation method. In this study we present an efficient and robust three-stage approach that is capable of reliably evaluating the sensitivity measures for stiff microkinetic models, as we demonstrate using CO oxidation on RuO2(110) as a prototypical reaction. In the first step, we utilize the Fisher information matrix for filtering out elementary processes which yield only negligible sensitivity. We then employ an estimator based on linear response theory for calculating the sensitivity measure for non-critical conditions, which covers the majority of cases. Finally, we adapt a method for sampling coupled finite differences for evaluating the sensitivity measure of lattice-based models. This allows efficient evaluation even in critical regions near a second-order phase transition that are hitherto difficult to control. The combined approach leads to significant computational savings over straightforward numerical derivatives and should aid in accelerating the nano-scale design of heterogeneous catalysts.
NASA Technical Reports Server (NTRS)
1971-01-01
The findings, conclusions, and recommendations relative to the investigations conducted to evaluate tests for classifying pyrotechnic materials and end items as to their hazard potential are presented. Information required to establish an applicable means of determining the potential hazards of pyrotechnics is described. Hazard evaluations are based on the peak overpressure or impulse resulting from the explosion as a function of distance from the source. Other hazard classification tests include dust ignition sensitivity, impact ignition sensitivity, spark ignition sensitivity, and differential thermal analysis.
Andronis, L; Barton, P; Bryan, S
2009-06-01
To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. 
Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
Zhai, Rong-Lin; Xu, Fei; Zhang, Pei; Zhang, Wan-Li; Wang, Hui; Wang, Ji-Liang; Cai, Kai-Lin; Long, Yue-Ping; Lu, Xiao-Ming; Tao, Kai-Xiong; Wang, Guo-Bin
2016-02-01
This meta-analysis was designed to evaluate the diagnostic performance of stool DNA testing for colorectal cancer (CRC) and compare the performance between single-gene and multiple-gene tests. MEDLINE, Cochrane, and EMBASE databases were searched using the keywords colorectal cancers, stool/fecal, sensitivity, specificity, DNA, and screening. Sensitivity analysis, quality assessments, and performance bias were performed for the included studies. Fifty-three studies were included in the analysis with a total sample size of 7524 patients. The studies were heterogeneous with regard to the genes being analyzed for fecal genetic biomarkers of CRC, as well as the laboratory methods being used for each assay. The sensitivity of the different assays ranged from 2% to 100% and the specificity ranged from 81% to 100%. The meta-analysis found that the pooled sensitivities for single- and multigene assays were 48.0% and 77.8%, respectively, while the pooled specificities were 97.0% and 92.7%. Receiver operator curves and diagnostic odds ratios showed no significant difference between both tests with regard to sensitivity or specificity. This meta-analysis revealed that using assays that evaluated multiple genes compared with single-gene assays did not increase the sensitivity or specificity of stool DNA testing in detecting CRC.
Sensitivity-Based Guided Model Calibration
NASA Astrophysics Data System (ADS)
Semnani, M.; Asadzadeh, M.
2017-12-01
A common practice in automatic calibration of hydrologic models is to apply sensitivity analysis prior to global optimization to reduce the number of decision variables (DVs) by identifying the most sensitive ones. This two-stage process aims to improve optimization efficiency. However, parameter sensitivity information can also be used to enhance the ability of optimization algorithms to find good-quality solutions in fewer solution evaluations. This improvement can be achieved by focusing the optimization on sampling the most sensitive parameters in each iteration. In this study, the selection process of the dynamically dimensioned search (DDS) optimization algorithm is enhanced by utilizing a sensitivity analysis method to put more emphasis on the most sensitive decision variables during perturbation. The performance of DDS with sensitivity information is compared to the original version of DDS on different mathematical test functions and a model calibration case study. Overall, the results show that DDS with sensitivity information finds solutions of nearly the same quality as the original DDS, but in significantly fewer solution evaluations.
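A minimal sketch of DDS with sensitivity-weighted variable selection follows. The shrinking selection probability is standard DDS; the `weights` bias is an interpretation of the enhancement described above, and the test function and weights are invented for illustration:

```python
import numpy as np

def dds(objective, lo, hi, max_iter=1000, r=0.2, weights=None, seed=0):
    """Dynamically dimensioned search with optional sensitivity weights
    that bias which decision variables (DVs) get perturbed."""
    rng = np.random.default_rng(seed)
    d = len(lo)
    w = np.ones(d) / d if weights is None else np.asarray(weights) / np.sum(weights)
    x_best = lo + rng.random(d) * (hi - lo)
    f_best = objective(x_best)
    for i in range(1, max_iter):
        # Probability of perturbing each DV shrinks as the search matures.
        p = 1.0 - np.log(i) / np.log(max_iter)
        mask = rng.random(d) < p * w * d        # sensitivity-weighted selection
        if not mask.any():
            mask[rng.choice(d, p=w)] = True     # always perturb at least one DV
        x = x_best.copy()
        x[mask] += r * (hi - lo)[mask] * rng.standard_normal(mask.sum())
        x = np.clip(x, lo, hi)
        f = objective(x)
        if f < f_best:                          # greedy acceptance
            x_best, f_best = x, f
    return x_best, f_best

# Usage: minimize the sphere function; weights mark x0 as most sensitive.
lo, hi = np.full(5, -5.0), np.full(5, 5.0)
x, f = dds(lambda v: np.sum(v ** 2), lo, hi,
           weights=[0.4, 0.15, 0.15, 0.15, 0.15])
print(round(f, 3))
```

With uniform weights the selection probability reduces to the original DDS rule, so the weighted version is a strict generalization.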
Horita, Nobuyuki; Miyazawa, Naoki; Kojima, Ryota; Kimura, Naoko; Inoue, Miyo; Ishigatsubo, Yoshiaki; Kaneko, Takeshi
2013-11-01
Studies on the sensitivity and specificity of the Binax NOW Streptococcus pneumoniae urinary antigen test (index test) show considerable variance in results. We included studies written in English that provided sufficient original data to evaluate the sensitivity and specificity of the index test using unconcentrated urine to identify S. pneumoniae infection in adults with pneumonia. Reference tests were conducted with at least one culture and/or smear. We estimated sensitivity and two specificities: one evaluated using only patients with pneumonia of identified other aetiologies ('specificity (other)'), and the other evaluated on both patients with pneumonia of unknown aetiology and those with pneumonia of other aetiologies ('specificity (unknown and other)'), using a fixed model for meta-analysis. We found 10 articles involving 2315 patients. The analysis of 10 studies involving 399 patients yielded a pooled sensitivity of 0.75 (95% confidence interval: 0.71-0.79) without heterogeneity or publication bias. The analysis of six studies involving 258 patients yielded a pooled specificity (other) of 0.95 (95% confidence interval: 0.92-0.98) with no heterogeneity or publication bias. We attempted to conduct a meta-analysis with the 10 studies involving 1916 patients to estimate specificity (unknown and other), but it remained unclear due to moderate heterogeneity and possible publication bias. In our meta-analysis, the sensitivity of the index test was moderate and specificity (other) was high; however, specificity (unknown and other) remained unclear. © 2013 The Authors. Respirology © 2013 Asian Pacific Society of Respirology.
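Fixed-effect pooling of per-study sensitivities, as used above, amounts to an inverse-variance weighted average. This is a simplified sketch with invented per-study counts (real meta-analyses typically work on a transformed scale, e.g. logit, before pooling):

```python
import numpy as np

def pooled_proportion(events, totals):
    """Fixed-effect (inverse-variance) pooled proportion with a 95% CI."""
    p = np.asarray(events) / np.asarray(totals)
    var = p * (1 - p) / np.asarray(totals)
    var = np.clip(var, 1e-8, None)        # guard against zero variance
    w = 1.0 / var                         # inverse-variance weights
    pooled = np.sum(w * p) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study true positives / diseased patients
tp = [30, 45, 22, 61, 18]
n = [40, 60, 30, 80, 25]
pooled, ci = pooled_proportion(tp, n)
print(round(pooled, 2), [round(c, 2) for c in ci])
```

A fixed-effect model like this is only appropriate when heterogeneity is low, which is why the paper reports the specificity (unknown and other) estimate as unclear once moderate heterogeneity appeared.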
USDA-ARS?s Scientific Manuscript database
This paper provides an overview of the Model Optimization, Uncertainty, and SEnsitivity Analysis (MOUSE) software application, an open-source, Java-based toolbox of visual and numerical analysis components for the evaluation of environmental models. MOUSE is based on the OPTAS model calibration syst...
Marquezin, Maria Carolina Salomé; Pedroni-Pereira, Aline; Araujo, Darlle Santos; Rosar, João Vicente; Barbosa, Taís S; Castelo, Paula Midori
2016-08-01
To better understand salivary and masticatory characteristics, this study evaluated the relationship among salivary parameters, bite force (BF), masticatory performance (MP) and gustatory sensitivity in healthy children. The secondary outcome was to evaluate possible gender differences. One hundred and sixteen eutrophic subjects aged 7-11 years, caries-free and with no definite need of orthodontic treatment, were evaluated. Salivary flow rate and pH, and total protein (TP), alpha-amylase (AMY), calcium (CA) and phosphate (PHO) concentrations, were determined in stimulated (SS) and unstimulated saliva (US). BF and MP were evaluated using a digital gnathodynamometer and the fractional sieving method, respectively. Gustatory sensitivity was determined by detecting the four primary tastes (sweet, salty, sour and bitter) at three different concentrations. Data were evaluated using descriptive statistics, Mann-Whitney/t-test, Spearman correlation and multiple regression analysis, considering α = 0.05. A significant positive correlation between taste and age was observed. CA and PHO concentrations correlated negatively with salivary flow and pH; sweet taste scores correlated with AMY concentrations, and bitter taste sensitivity correlated with US flow rate (p < 0.05). No significant difference between genders in salivary or masticatory characteristics or gustatory sensitivity was observed. The regression analysis showed a weak relationship between the distribution of chewed particles among the different sieves and BF. The concentration of some analytes was influenced by salivary flow and pH. Age, salivary flow and AMY concentrations influenced gustatory sensitivity. In addition, salivary, masticatory and taste characteristics did not differ between genders, and only a weak relation between MP and BF was observed.
Kageyama, Shinji; Shinmura, Kazuya; Yamamoto, Hiroko; Goto, Masanori; Suzuki, Koichi; Tanioka, Fumihiko; Tsuneyoshi, Toshihiro; Sugimura, Haruhiko
2008-04-01
The PCR-based DNA fingerprinting method called the methylation-sensitive amplified fragment length polymorphism (MS-AFLP) analysis is used for genome-wide scanning of methylation status. In this study, we developed a method of fluorescence-labeled MS-AFLP (FL-MS-AFLP) analysis by applying a fluorescence-labeled primer and fluorescence-detecting electrophoresis apparatus to the existing method of MS-AFLP analysis. The FL-MS-AFLP analysis enables quantitative evaluation of more than 350 random CpG loci per run. It was shown to allow evaluation of the differences in methylation level of blood DNA of gastric cancer patients and evaluation of hypermethylation and hypomethylation in DNA from gastric cancer tissue in comparison with adjacent non-cancerous tissue.
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system through simulations of a benchmark rotorcraft test model and comparison of the solutions with established analyses and experimental data. The complex-variable implementation of sensitivity analysis for DYMORE and the coupled FUN3D/DYMORE system is verified by comparison with real-valued analysis and sensitivities. The correctness of the adjoint formulations for the FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and the FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
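The complex-variable approach used for the DYMORE sensitivities rests on the complex-step derivative approximation, which avoids the subtractive cancellation of finite differences; a generic one-variable sketch (not the FUN3D/DYMORE code):

```python
def complex_step(f, x, h=1e-30):
    """Complex-step derivative: df/dx ~ Im(f(x + ih)) / h.
    Accurate to near machine precision because no difference of
    two nearly equal numbers is ever taken."""
    return f(complex(x, h)).imag / h
```

The step size h can be made extremely small because the derivative is read off the imaginary part rather than a subtraction, which is why the method verifies adjoint sensitivities so well.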
Labanca, Ludimila; Alves, Cláudia Regina Lindgren; Bragança, Lidia Lourenço Cunha; Dorim, Diego Dias Ramos; Alvim, Cristina Gonçalves; Lemos, Stela Maris Aguiar
2015-01-01
To establish cutoff points for the analysis of the Behavior Observation Form (BOF) for children aged 2 to 23 months and to evaluate its sensitivity and specificity by age group and domain (Emission, Reception, and Cognitive Aspects of Language). The sample consisted of 752 children who underwent the BOF. Each child was classified as having appropriate language development for the age or as being at possible risk of language impairment. Performance Indicators (PI) were calculated in each domain as well as the overall PI across all domains. The values for sensitivity and specificity were also calculated. The cutoff points for possible risk of language impairment for each domain and each age group were obtained using the receiver operating characteristic curve. The results of the study revealed that one-third of the assessed children were at risk of language impairment in the first two years of life. The analysis of the BOF showed high sensitivity (>90%) in all categories and in all age groups; however, the chance of false-positive results was higher than 20% in the majority of aspects evaluated. It was possible to establish the cutoff points for all categories and age groups with good correlation between sensitivity and specificity, except for the age group of 2 to 6 months. This study provides important contributions to the discussion on the evaluation of the language development of children younger than 2 years.
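Cutoff selection from a receiver operating characteristic curve is commonly done by maximizing Youden's J = sensitivity + specificity - 1; a toy sketch of that step (not the BOF scoring itself), where `labels` is 1 for children at risk:

```python
def youden_cutoff(scores, labels):
    """Pick the score cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    best_cut, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < c and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < c and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        sens = tp / (tp + fn) if (tp + fn) else 0.0
        spec = tn / (tn + fp) if (tn + fp) else 0.0
        j = sens + spec - 1.0
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j
```

Other criteria (e.g. fixing a minimum sensitivity) lead to different cutoffs; Youden's J weights sensitivity and specificity equally.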
DOT National Transportation Integrated Search
2017-02-08
The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, Yan; Vyas, Anant D.; Guo, Zhaomiao
This report summarizes our evaluation of the potential energy-use and GHG-emissions reduction achieved by shifting freight from truck to rail under a most-likely scenario. A sensitivity analysis is also included. The sensitivity analysis shows changes in energy use and GHG emissions when key parameters are varied. The major contribution and distinction from previous studies is that this study considers the rail level of service (LOS) and commodity movements at the origin-destination (O-D) level. In addition, this study considers the fragility and time sensitivity of each commodity type.
Graves, Gabrielle S; Adam, Murtaza K; Stepien, Kimberly E; Han, Dennis P
2014-08-01
To evaluate sensitivity, specificity and reproducibility of colour difference plot analysis (CDPA) of 103 hexagon multifocal electroretinogram (mfERG) in detecting established hydroxychloroquine (HCQ) retinal toxicity. Twenty-three patients taking HCQ were divided into those with and without retinal toxicity and were compared with a control group without retinal disease and not taking HCQ. CDPA with two masked examiners was performed using age-corrected mfERG responses in the central ring (Rc ; 0-5.5 degrees from fixation) and paracentral ring (Rp ; 5.5-11 degrees from fixation). An abnormal ring was defined as containing any hexagons differing by two or more standard deviations from normal (colour blue or black). Categorical analysis (ring involvement or not) showed Rc had 83% sensitivity and 93% specificity. Rp had 89% sensitivity and 82% specificity. Requiring abnormal hexagons in both Rc and Rp yielded sensitivity and specificity of 83% and 95%, respectively. If required in only one ring, they were 89% and 80%, respectively. In this population, there was complete agreement in identifying toxicity when comparing CDPA using Rp with ring ratio analysis using R5/R4 P1 ring responses (89% sensitivity and 95% specificity). Continuous analysis of CDPA with receiver operating characteristic analysis showed optimized detection (83% sensitivity and 96% specificity) when ≥4 abnormal hexagons were present anywhere within the Rp ring outline. Intergrader agreement and reproducibility were good. Colour difference plot analysis had sensitivity and specificity that approached those of ring ratio analysis of R5/R4 P1 responses. Ease of implementation and reproducibility are notable advantages of CDPA. © 2014 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
Rico, Andreu; Van den Brink, Paul J
2015-08-01
In the present study, the authors evaluated the vulnerability of aquatic invertebrates to insecticides based on their intrinsic sensitivity and their population-level recovery potential. The relative sensitivity of invertebrates to 5 different classes of insecticides was calculated at the genus, family, and order levels using the acute toxicity data available in the US Environmental Protection Agency ECOTOX database. Biological trait information was linked to the calculated relative sensitivity to evaluate correlations between traits and sensitivity and to calculate a vulnerability index, which combines intrinsic sensitivity and traits describing the recovery potential of populations partially exposed to insecticides (e.g., voltinism, flying strength, occurrence in drift). The analysis shows that the relative sensitivity of arthropods depends on the insecticide mode of action. Traits such as degree of sclerotization, size, and respiration type showed good correlation to sensitivity and can be used to make predictions for invertebrate taxa without a priori sensitivity knowledge. The vulnerability analysis revealed that some of the Ephemeroptera, Plecoptera, and Trichoptera taxa were vulnerable to all insecticide classes and indicated that particular gastropod and bivalve species were potentially vulnerable. Microcrustaceans (e.g., daphnids, copepods) showed low potential vulnerability, particularly in lentic ecosystems. The methods described in the present study can be used for the selection of focal species to be included as part of ecological scenarios and higher tier risk assessments. © 2015 SETAC.
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed-signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when a particle strikes their critical areas, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effect influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation, and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
Zhong, Lin-sheng; Tang, Cheng-cai; Guo, Hua
2010-07-01
Based on the statistical data of natural ecology and social economy in Jinyintan Grassland Scenic Area in Qinghai Province in 2008, an evaluation index system for the ecological sensitivity of this area was established from the aspects of protected area rank, vegetation type, slope, and land use type. The ecological sensitivity of the sub-areas with higher tourism value and ecological function was evaluated, and tourism function zoning of these sub-areas was carried out using GIS technology, based on analysis of the eco-environmental characteristics and ecological sensitivity of each sensitive sub-area. It was suggested that the Jinyintan Grassland Scenic Area could be divided into three ecological sensitivity sub-areas (high, moderate, and low), three tourism functional sub-areas (restricted development ecotourism, moderate development ecotourism, and mass tourism), and six tourism functional sub-areas (wetland protection, primitive ecological sightseeing, agriculture and pasture tourism, grassland tourism, town tourism, and rural tourism).
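Index-based eco-sensitivity evaluations of this kind reduce to a weighted sum of normalized factor scores followed by equal-interval reclassification into grades; a schematic sketch (the weights and class count below are illustrative, not those of the study):

```python
def overlay_index(factors, weights):
    """Weighted overlay: sensitivity index for one map cell as the
    weighted sum of its normalized factor scores (weights e.g. from AHP)."""
    return sum(w * f for w, f in zip(weights, factors))

def reclassify(value, vmin, vmax, n_classes=4):
    """Equal-interval reclassification of an index value into
    n_classes sensitivity grades (0 = least sensitive)."""
    step = (vmax - vmin) / n_classes
    return min(int((value - vmin) / step), n_classes - 1)
```

Applied cell by cell over a raster, this yields the graded sensitivity map that is then intersected with land-use data for zoning.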
JUPITER PROJECT - JOINT UNIVERSAL PARAMETER IDENTIFICATION AND EVALUATION OF RELIABILITY
The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project builds on the technology of two widely used codes for sensitivity analysis, data assessment, calibration, and uncertainty analysis of environmental models: PEST and UCODE.
Zur, RM; Roy, LM; Ito, S; Beyene, J; Carew, C; Ungar, WJ
2016-01-01
Thiopurine S-methyltransferase (TPMT) deficiency increases the risk of serious adverse events in persons receiving thiopurines. The objective was to synthesize reported sensitivity and specificity of TPMT phenotyping and genotyping using a latent class hierarchical summary receiver operating characteristic meta-analysis. In 27 studies, pooled sensitivity and specificity of phenotyping for deficient individuals was 75.9% (95% credible interval (CrI), 58.3–87.0%) and 98.9% (96.3–100%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 90.4% (79.1–99.4%) and 100.0% (99.9–100%), respectively. For individuals with deficient or intermediate activity, phenotype sensitivity and specificity was 91.3% (86.4–95.5%) and 92.6% (86.5–96.6%), respectively. For genotype tests evaluating TPMT*2 and TPMT*3, sensitivity and specificity was 88.9% (81.6–97.5%) and 99.2% (98.4–99.9%), respectively. Genotyping has higher sensitivity as long as TPMT*2 and TPMT*3 are tested. Both approaches display high specificity. Latent class meta-analysis is a useful method for synthesizing diagnostic test performance data for clinical practice guidelines. PMID:27217052
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures for structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and case (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given demonstrating the effectiveness of the proposed methods.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing-data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) may depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We call models subject to this uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and make evidence-based analysis. In this paper we propose a novel approach to investigate the plausibility of each missing-data mechanism model assumption, by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
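The key comparison step, measuring how close data simulated under a candidate missingness mechanism sit to the observed data via nearest-neighbour distances, can be sketched for univariate samples as follows (a simplification of the paper's multivariate K-nearest-neighbour procedure):

```python
import statistics

def mean_nn_distance(observed, simulated, k=3):
    """Mean distance from each observed point to its k nearest simulated points.
    Smaller values suggest a more plausible sensitivity-parameter value."""
    dists = []
    for o in observed:
        nearest = sorted(abs(o - s) for s in simulated)[:k]
        dists.append(sum(nearest) / k)
    return statistics.mean(dists)
```

Evaluating this distance over a grid of sensitivity-parameter values and retaining only the values with small distances implements the plausibility screening described above.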
Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha
2013-01-01
Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of the overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, longer follow-up periods, larger sample sizes, and multiple centres and countries for more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167
Omitted Variable Sensitivity Analysis with the Annotated Love Plot
ERIC Educational Resources Information Center
Hansen, Ben B.; Fredrickson, Mark M.
2014-01-01
The goal of this research is to make sensitivity analysis accessible not only to empirical researchers but also to the various stakeholders for whom educational evaluations are conducted. To do this it derives anchors for the omitted variable (OV)-program participation association intrinsically, using the Love plot to present a wide range of…
NASA Astrophysics Data System (ADS)
García-Moreno, Angel-Iván; González-Barbosa, José-Joel; Ramírez-Pedraza, Alfonso; Hurtado-Ramos, Juan B.; Ornelas-Rodriguez, Francisco-Javier
2016-04-01
Computer-based reconstruction models can be used to approximate urban environments. These models are usually based on several mathematical approximations and the use of different sensors, which implies dependency on many variables. The sensitivity analysis presented in this paper is used to weigh the relative importance of each uncertainty contributor to the calibration of a panoramic camera-LiDAR system. Both sensors are used for three-dimensional urban reconstruction. Simulated and experimental tests were conducted. For the simulated tests we analyze and compare the calibration parameters using the Monte Carlo and Latin hypercube sampling techniques. Sensitivity for each variable involved in the calibration was computed by the Sobol method, which is based on the decomposition of variance, and by the Fourier amplitude sensitivity test method, which is based on Fourier analysis. Sensitivity analysis is an essential tool in simulation modeling and for performing error propagation assessments.
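The first-order Sobol indices mentioned above come from a decomposition of the output variance; a plain Monte Carlo (pick-freeze) sketch for inputs uniform on [0,1]^d (simple random sampling here, not the Latin hypercube design of the study):

```python
import random

def sobol_first_order(f, d, n=20000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    for a model f taking a list of d inputs uniform on [0, 1]."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(d)] for _ in range(n)]
    B = [[rng.random() for _ in range(d)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mu = sum(yA + yB) / (2 * n)
    var = sum((y - mu) ** 2 for y in yA + yB) / (2 * n)
    indices = []
    for i in range(d):
        ABi = [a[:i] + [b[i]] + a[i + 1:] for a, b in zip(A, B)]  # A with column i from B
        yABi = [f(x) for x in ABi]
        # Saltelli-style estimator of V_i / V
        si = sum(yb * (yabi - ya) for yb, yabi, ya in zip(yB, yABi, yA)) / n / var
        indices.append(si)
    return indices
```

For an additive model such as f(x) = x1 + 2*x2, the indices converge to the analytic variance shares (0.2 and 0.8).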
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
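At each sampled parameter set, DELSA computes first-order variance-based indices from local derivatives scaled by prior parameter variances; a minimal sketch of that per-point calculation (forward finite differences stand in for whatever derivative computation the model provides):

```python
def delsa_indices(f, theta, prior_var, eps=1e-6):
    """First-order DELSA indices at one parameter set theta: squared local
    derivatives scaled by prior parameter variances, normalized to sum to 1."""
    base = f(theta)
    grads = []
    for j in range(len(theta)):
        pert = list(theta)
        pert[j] += eps
        grads.append((f(pert) - base) / eps)
    contrib = [g * g * v for g, v in zip(grads, prior_var)]
    total = sum(contrib)
    return [c / total for c in contrib]
```

Repeating this at many parameter sets (e.g. a Latin hypercube sample) yields the distribution of parameter importance across the parameter space that distinguishes DELSA from a single global summary.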
NASA Astrophysics Data System (ADS)
Razavi, S.; Gupta, H. V.
2014-12-01
Sensitivity analysis (SA) is an important paradigm in the context of Earth System model development and application, and provides a powerful tool that serves several essential functions in modelling practice, including 1) Uncertainty Apportionment - attribution of total uncertainty to different uncertainty sources, 2) Assessment of Similarity - diagnostic testing and evaluation of similarities between the functioning of the model and the real system, 3) Factor and Model Reduction - identification of non-influential factors and/or insensitive components of model structure, and 4) Factor Interdependence - investigation of the nature and strength of interactions between the factors, and the degree to which factors intensify, cancel, or compensate for the effects of each other. A variety of sensitivity analysis approaches have been proposed, each of which formally characterizes a different "intuitive" understanding of what is meant by the "sensitivity" of one or more model responses to its dependent factors (such as model parameters or forcings). These approaches are based on different philosophies and theoretical definitions of sensitivity, and range from simple local derivatives and one-factor-at-a-time procedures to rigorous variance-based (Sobol-type) approaches. In general, each approach focuses on, and identifies, different features and properties of the model response and may therefore lead to different (even conflicting) conclusions about the underlying sensitivity. This presentation revisits the theoretical basis for sensitivity analysis, and critically evaluates existing approaches so as to demonstrate their flaws and shortcomings. With this background, we discuss several important properties of response surfaces that are associated with the understanding and interpretation of sensitivity. Finally, a new approach towards global sensitivity assessment is developed that is consistent with important properties of Earth System model response surfaces.
DOT National Transportation Integrated Search
2013-08-01
The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...
Nuclear Data Needs for the Neutronic Design of MYRRHA Fast Spectrum Research Reactor
NASA Astrophysics Data System (ADS)
Stankovskiy, A.; Malambu, E.; Van den Eynde, G.; Díez, C. J.
2014-04-01
A global sensitivity analysis of the effective neutron multiplication factor to changes in the nuclear data library has been performed. It revealed that the test version of the JEFF-3.2 neutron-induced evaluated data library produces results closer to ENDF/B-VII.1 than JEFF-3.1.2 does. The analysis of the contributions of individual evaluations to the keff sensitivity resulted in a priority list of nuclides whose cross-section and fission-neutron-multiplicity uncertainties have to be improved by setting up dedicated differential and integral experiments.
On the Validity and Sensitivity of the Phonics Screening Check: Erratum and Further Analysis
ERIC Educational Resources Information Center
Gilchrist, James M.; Snowling, Margaret J.
2018-01-01
Duff, Mengoni, Bailey and Snowling ("Journal of Research in Reading," 38: 109-123; 2015) evaluated the sensitivity and specificity of the phonics screening check against two reference standards. This report aims to correct a minor data error in the original article and to present further analysis of the data. The methods used are…
Anxiety Sensitivity and the Anxiety Disorders: A Meta-Analytic Review and Synthesis
ERIC Educational Resources Information Center
Olatunji, Bunmi O.; Wolitzky-Taylor, Kate B.
2009-01-01
There has been significant interest in the role of anxiety sensitivity (AS) in the anxiety disorders. In this meta-analysis, we empirically evaluate differences in AS between anxiety disorders, mood disorders, and nonclinical controls. A total of 38 published studies (N = 20,146) were included in the analysis. The results yielded a large effect…
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Hansen, James E. (Technical Monitor)
2001-01-01
A new approach is presented for the analysis of feedback processes in a nonlinear dynamical system by observing its variations. The new methodology consists of statistical estimates of the sensitivities between all pairs of variables in the system based on a neural network model of the dynamical system. The model can then be used to estimate the instantaneous, multivariate and nonlinear sensitivities, which are shown to be essential for the analysis of the feedback processes involved in the dynamical system. The method is described and tested on synthetic data from the low-order Lorenz circulation model, where the correct sensitivities can be evaluated analytically.
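Once a model of the dynamical system is fitted, pairwise sensitivities reduce to entries of its input-output Jacobian, which can be estimated by perturbing inputs and averaging over observed states; a generic sketch (`model` here is any fitted input-to-output map, not the paper's neural network):

```python
def mean_jacobian_entry(model, states, i, j, eps=1e-5):
    """Average local sensitivity d y_i / d x_j of a fitted model,
    estimated by finite differences over a sample of system states."""
    total = 0.0
    for x in states:
        xp = list(x)
        xp[j] += eps
        total += (model(xp)[i] - model(x)[i]) / eps
    return total / len(states)
```

Averaging over states matters for nonlinear systems: unlike a global regression coefficient, the local Jacobian varies with the state, which is exactly the multivariate, instantaneous sensitivity the abstract refers to.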
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulation was observed. Moreover, the bias depended on the underlying mRNA concentration; low concentrations resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. 
If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
Spike shape analysis of electromyography for parkinsonian tremor evaluation.
Marusiak, Jarosław; Andrzejewska, Renata; Świercz, Dominika; Kisiel-Sajewicz, Katarzyna; Jaskólska, Anna; Jaskólski, Artur
2015-12-01
Standard electromyography (EMG) parameters have limited utility for evaluation of Parkinson disease (PD) tremor. EMG parameters derived from spike shape analysis (SSA) are more sensitive than standard EMG parameters for studying motor control mechanisms in healthy subjects, but SSA of EMG has not been used to assess parkinsonian tremor. This study assessed the utility of SSA and of standard time and frequency analysis for electromyographic evaluation of PD-related resting tremor. We analyzed 1-s periods of EMG recordings to detect nontremor and tremor signals in the relaxed biceps brachii muscle of seven mild to moderate PD patients. SSA revealed higher mean spike amplitude, duration, and slope and lower mean spike frequency in tremor signals than in nontremor signals. Standard EMG parameters (root mean square, median frequency, and mean frequency) did not show differences between the tremor and nontremor signals. SSA of EMG data is a sensitive method for parkinsonian tremor evaluation. © 2015 Wiley Periodicals, Inc.
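A minimal version of the spike-shape step, computing mean spike amplitude, duration, slope and frequency from one analysis window by threshold-based peak picking, might look as follows (a toy sketch; the paper's exact SSA definitions may differ):

```python
def spike_features(x, fs, thresh):
    """Toy spike-shape analysis of one EMG window x sampled at fs Hz:
    detect supra-threshold peaks and report mean amplitude, duration (s),
    mean rising slope, and spike frequency (spikes per second)."""
    spikes = []
    i = 1
    while i < len(x) - 1:
        if x[i] > thresh and x[i] >= x[i - 1] and x[i] > x[i + 1]:
            l = i                                   # walk left/right to the
            while l > 0 and x[l - 1] > thresh:      # threshold crossings that
                l -= 1                              # bound this spike
            r = i
            while r < len(x) - 1 and x[r + 1] > thresh:
                r += 1
            dur = (r - l + 1) / fs
            spikes.append((x[i], dur, x[i] / (dur / 2.0)))
            i = r + 1
        else:
            i += 1
    if not spikes:
        return None
    n = len(spikes)
    return {
        "amplitude": sum(s[0] for s in spikes) / n,
        "duration": sum(s[1] for s in spikes) / n,
        "slope": sum(s[2] for s in spikes) / n,
        "frequency": n / (len(x) / fs),
    }
```

On a 1-s window these four summaries are directly comparable between tremor and nontremor segments, which is the comparison the study reports.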
Kataoka, K; Nakamura, K; Mizusawa, J; Kato, K; Eba, J; Katayama, H; Shibata, T; Fukuda, H
2017-10-01
There have been no reports evaluating progression-free survival (PFS) as a surrogate endpoint in resectable esophageal cancer. This study was conducted to evaluate the trial-level correlations between PFS and overall survival (OS) in resectable esophageal cancer with preoperative therapy and to explore the potential benefit of PFS as a surrogate endpoint for OS. A systematic literature search of randomized trials with preoperative chemotherapy or preoperative chemoradiotherapy for esophageal cancer reported from January 1990 to September 2014 was conducted using PubMed and the Cochrane Library. Weighted linear regression, using the sample size of each trial as a weight, was used to estimate the coefficient of determination (R²) between PFS and OS. The primary analysis included trials in which the HR for both PFS and OS was reported. The sensitivity analysis included trials in which either the HR or the median survival time of PFS and OS was reported. In the sensitivity analysis, the HR was estimated from the median survival times of PFS and OS, assuming an exponential distribution. Of 614 articles, 10 trials were selected for the primary analysis and 15 for the sensitivity analysis. The primary analysis did not show a correlation between treatment effects on PFS and OS (R² = 0.283, 95% CI [0.00-0.90]). The sensitivity analysis did not show an association between PFS and OS (R² = 0.084, 95% CI [0.00-0.70]). Although the number of randomized controlled trials evaluating preoperative therapy for esophageal cancer is limited at the moment, PFS is not suitable as a surrogate primary endpoint for OS. Copyright © 2017 Elsevier Ltd, BASO ~ The Association for Cancer Surgery, and the European Society of Surgical Oncology. All rights reserved.
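The surrogacy assessment above regresses trial-level treatment effects on each other with trial sample sizes as weights; a minimal sketch of the weighted R² computation (the data in the test are illustrative, not the study's):

```python
def weighted_r2(x, y, w):
    """Weighted linear regression of y on x (weights w, e.g. trial sample
    sizes), returning the weighted coefficient of determination R^2."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
    b = sxy / sxx                      # weighted least-squares slope
    a = my - b * mx                    # intercept
    ss_res = sum(wi * (yi - (a + b * xi)) ** 2 for wi, xi, yi in zip(w, x, y))
    ss_tot = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))
    return 1.0 - ss_res / ss_tot
```

In a surrogacy analysis, x and y would be the per-trial log hazard ratios for PFS and OS; an R² near 1 would support surrogacy, whereas values like the 0.283 reported above do not.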
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.; Korivi, Vamshi M.
1991-01-01
A gradient-based design optimization strategy for practical aerodynamic design applications is presented, which uses the 2D thin-layer Navier-Stokes equations. The strategy is based on the classic idea of constructing different modules for performing the major tasks such as function evaluation, function approximation and sensitivity analysis, mesh regeneration, and grid sensitivity analysis, all driven and controlled by a general-purpose design optimization program. The accuracy of aerodynamic shape sensitivity derivatives is validated on two viscous test problems: internal flow through a double-throat nozzle and external flow over a NACA 4-digit airfoil. A significant improvement in aerodynamic performance has been achieved in both cases. Particular attention is given to a consistent treatment of the boundary conditions in the calculation of the aerodynamic sensitivity derivatives for the classic problems of external flow over an isolated lifting airfoil on 'C' or 'O' meshes.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module with special emphasis on aerodynamic applications appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
Zhao, Yueyuan; Zhang, Xuefeng; Zhu, Fengcai; Jin, Hui; Wang, Bei
2016-08-02
Objective To estimate the cost-effectiveness of hepatitis E vaccination among pregnant women in epidemic regions. Methods A decision tree model was constructed to evaluate the cost-effectiveness of 3 hepatitis E virus vaccination strategies from a societal perspective. The model parameters were estimated on the basis of published studies and experts' experience. Sensitivity analysis was used to evaluate the uncertainties of the model. Results Vaccination was more economically effective on the basis of the incremental cost-effectiveness ratio (ICER < 3 times China's per capita gross domestic product per quality-adjusted life year); moreover, screening and vaccination had higher QALYs and lower costs compared with universal vaccination. No parameters significantly impacted the ICER in one-way sensitivity analysis, and probabilistic sensitivity analysis also showed screening and vaccination to be the dominant strategy. Conclusion Screening and vaccination is the most economical strategy for pregnant women in epidemic regions; however, further studies are necessary to confirm the efficacy and safety of the hepatitis E vaccines.
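The ICER and dominance comparisons described above reduce to simple arithmetic; a minimal sketch with entirely hypothetical costs and QALYs (not the paper's estimates):

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost-effectiveness ratio of strategy A vs. B: extra cost per QALY gained."""
    return (cost_a - cost_b) / (qaly_a - qaly_b)

# Hypothetical (cost in USD, effectiveness in QALYs) per pregnant woman:
no_vaccination = (1200.0, 20.0)
screen_and_vaccinate = (900.0, 20.6)
universal_vaccination = (1500.0, 20.5)

ratio = icer(*screen_and_vaccinate, *no_vaccination)   # negative: cheaper AND better

# "Dominant" in the abstract's sense: lower cost and higher QALYs than the alternative.
dominant = (screen_and_vaccinate[0] < universal_vaccination[0]
            and screen_and_vaccinate[1] > universal_vaccination[1])
```

A negative ICER with positive incremental QALYs means the comparator is dominated, which is the situation the abstract reports for screening-and-vaccination versus universal vaccination.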
Applying geologic sensitivity analysis to environmental risk management: The financial implications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rogers, D.T.
The financial risks associated with environmental contamination can be staggering and are often difficult to identify and accurately assess. Geologic sensitivity analysis is gaining recognition as a significant and useful tool that can empower the user with crucial information concerning environmental risk management and brownfield redevelopment. It is particularly useful when (1) evaluating the potential risks associated with redevelopment of historical industrial facilities (brownfields) and (2) planning for future development, especially in areas of rapid development, because the number of potential contaminating sources often increases with an increase in economic development. An examination of the financial implications relating to geologic sensitivity analysis in southeastern Michigan from numerous case studies indicates that the environmental cost of contamination may be 100 to 1,000 times greater at a geologically sensitive location than at the least sensitive location. Geologic sensitivity analysis has demonstrated that near-surface geology may influence the environmental impact of a contaminated site to a greater extent than the amount and type of industrial development.
Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis
2017-01-01
Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. 
Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
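A sketch of how a simple instrument like OST is applied; the index is commonly given as 0.2 × (weight in kg − age in years), but the truncation rule and the example values here are assumptions of this sketch, not from the review:

```python
import math

def ost_index(weight_kg, age_years):
    """OST index, commonly given as 0.2*(weight in kg - age in years),
    truncated to an integer (truncation rule assumed for this sketch)."""
    return math.trunc(0.2 * (weight_kg - age_years))

def flag_for_dxa(weight_kg, age_years, cutoff):
    """Refer for DXA when the index falls below the screening cutoff
    (e.g., a cutoff of 1 for the US postmenopausal '<1' threshold above)."""
    return ost_index(weight_kg, age_years) < cutoff

low_risk = flag_for_dxa(85, 60, cutoff=1)    # index 5  -> not flagged
high_risk = flag_for_dxa(58, 72, cutoff=1)   # index -2 -> flagged
```

Raising the cutoff flags more people (higher sensitivity, lower specificity), which is exactly the trade-off the meta-analysis quantifies.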
The impact of missing trauma data on predicting massive transfusion
Trickey, Amber W.; Fox, Erin E.; del Junco, Deborah J.; Ning, Jing; Holcomb, John B.; Brasel, Karen J.; Cohen, Mitchell J.; Schreiber, Martin A.; Bulger, Eileen M.; Phelan, Herb A.; Alarcon, Louis H.; Myers, John G.; Muskat, Peter; Cotton, Bryan A.; Wade, Charles E.; Rahbar, Mohammad H.
2013-01-01
INTRODUCTION Missing data are inherent in clinical research and may be especially problematic for trauma studies. This study describes a sensitivity analysis to evaluate the impact of missing data on clinical risk prediction algorithms. Three blood transfusion prediction models were evaluated utilizing an observational trauma dataset with valid missing data. METHODS The PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study included patients requiring ≥ 1 unit of red blood cells (RBC) at 10 participating U.S. Level I trauma centers from July 2009 – October 2010. Physiologic, laboratory, and treatment data were collected prospectively up to 24h after hospital admission. Subjects who received ≥ 10 RBC units within 24h of admission were classified as massive transfusion (MT) patients. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation. A sensitivity analysis for missing data was conducted to determine the upper and lower bounds for correct classification percentages. RESULTS PROMMTT enrolled 1,245 subjects. MT was received by 297 patients (24%). Missing-data percentages ranged from 2.2% (heart rate) to 45% (respiratory rate). Proportions of complete cases utilized in the MT prediction models ranged from 41% to 88%. All models demonstrated similar correct classification percentages using complete case analysis and multiple imputation. In the sensitivity analysis, correct classification upper-lower bound ranges per model were 4%, 10%, and 12%. Predictive accuracy for all models using PROMMTT data was lower than reported in the original datasets. CONCLUSIONS Evaluating the accuracy of clinical prediction models with missing data can be misleading, especially with many predictor variables and moderate levels of missingness per variable. The proposed sensitivity analysis describes the influence of missing data on risk prediction algorithms.
Reporting upper/lower bounds for percent correct classification may be more informative than multiple imputation, which provided similar results to complete case analysis in this study. PMID:23778514
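The upper/lower-bound idea can be sketched directly: cases the model could not classify (because a predictor was missing) are counted as all-wrong for the lower bound and all-right for the upper bound. The data below are invented for illustration:

```python
def classification_bounds(y_true, y_pred):
    """y_pred entries are 0/1 predictions, or None when a predictor required
    by the model was missing. Lower bound: every missing case misclassified;
    upper bound: every missing case classified correctly."""
    n = len(y_true)
    correct = sum(1 for t, p in zip(y_true, y_pred) if p is not None and p == t)
    missing = sum(1 for p in y_pred if p is None)
    lower = 100.0 * correct / n                 # missing cases all wrong
    upper = 100.0 * (correct + missing) / n     # missing cases all right
    return lower, upper

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, None, 1, 0, None, 0, 0]
lo, hi = classification_bounds(y_true, y_pred)  # 62.5, 87.5
```

A wide lower–upper gap signals that conclusions about model accuracy hinge on the missing data, which is the abstract's argument for reporting bounds alongside imputation.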
Analysis of the sensitivity properties of a model of vector-borne bubonic plague.
Buzby, Megan; Neckels, David; Antolin, Michael F; Estep, Donald
2008-09-06
Model sensitivity is a key to evaluation of mathematical models in ecology and evolution, especially in complex models with numerous parameters. In this paper, we use some recently developed methods for sensitivity analysis to study the parameter sensitivity of a model of vector-borne bubonic plague in a rodent population proposed by Keeling & Gilligan. The new sensitivity tools are based on a variational analysis involving the adjoint equation. The new approach provides a relatively inexpensive way to obtain derivative information about model output with respect to parameters. We use this approach to determine the sensitivity of a quantity of interest (the force of infection from rats and their fleas to humans) to various model parameters, determine a region over which linearization at a specific parameter reference point is valid, develop a global picture of the output surface, and search for maxima and minima in a given region in the parameter space.
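The adjoint idea, one backward solve yielding derivative information cheaply, can be shown on a toy scalar ODE standing in for the plague model (this sketch is not the Keeling & Gilligan system):

```python
import numpy as np

# Toy problem (stand-in, not the plague model): dx/dt = -p*x on [0, T],
# quantity of interest J = x(T); exact gradient dJ/dp = -T*x0*exp(-p*T).
x0, p, T, n = 1.0, 0.7, 2.0, 20000
dt = T / n

def forward(p):
    """Forward Euler solution of dx/dt = -p*x."""
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + dt * (-p * x[k])
    return x

x = forward(p)

# Adjoint equation: dlam/dt = -(df/dx)*lam = p*lam, integrated backward
# from the terminal condition lam(T) = dJ/dx(T) = 1.
lam = np.empty(n + 1)
lam[n] = 1.0
for k in range(n, 0, -1):
    lam[k - 1] = lam[k] - dt * p * lam[k]

# Gradient: dJ/dp = integral of lam * (df/dp) dt, with df/dp = -x here.
dJdp = -dt * float(np.sum(lam[:-1] * x[:-1]))
```

The key economy is that one forward solve plus one backward adjoint solve gives the derivative with respect to a parameter, whereas finite differences would need a new forward solve per parameter.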
Gordon, H R; Du, T; Zhang, T
1997-09-20
We provide an analysis of the influence of instrument polarization sensitivity on the radiance measured by spaceborne ocean color sensors. Simulated examples demonstrate the influence of polarization sensitivity on the retrieval of the water-leaving reflectance rho(w). A simple method for partially correcting for polarization sensitivity--replacing the linear polarization properties of the top-of-atmosphere reflectance with those from a Rayleigh-scattering atmosphere--is provided and its efficacy is evaluated. It is shown that this scheme improves rho(w) retrievals as long as the polarization sensitivity of the instrument does not vary strongly from band to band. Of course, a complete polarization-sensitivity characterization of the ocean color sensor is required to implement the correction.
ERIC Educational Resources Information Center
Crosland, Kimberly A.; Zarcone, Jennifer R.; Schroeder, Stephen; Zarcone, Troy; Fowler, Stephen
2005-01-01
Stereotyped movements displayed by 6 participants and tics displayed by 6 children were evaluated using an antecedent behavioral analysis and a force sensitive platform. We found that tics occurred more often in an alone condition when compared to high preference toy and play conditions, whereas stereotyped movements were more variable across…
A new sensitivity analysis for structural optimization of composite rotor blades
NASA Technical Reports Server (NTRS)
Venkatesan, C.; Friedmann, P. P.; Yuan, Kuo-An
1993-01-01
This paper presents a detailed mathematical derivation of the sensitivity derivatives for the structural dynamic, aeroelastic stability, and response characteristics of a rotor blade in hover and forward flight. The formulation is termed semi-analytical because certain derivatives have to be evaluated by a finite difference scheme. Using the present formulation, sensitivity derivatives for the structural dynamic and aeroelastic stability characteristics were evaluated for both isotropic and composite rotor blades. Based on the results, useful conclusions are obtained regarding the relative merits of the semi-analytical approach for calculating sensitivity derivatives compared to a pure finite difference approach.
Identification of stochastic interactions in nonlinear models of structural mechanics
NASA Astrophysics Data System (ADS)
Kala, Zdeněk
2017-07-01
In the paper, a polynomial approximation is presented by which Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated. The input domain is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of simulation runs of the Latin Hypercube Sampling method can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not otherwise be identified for the nonlinear FEM model.
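A minimal sketch of the ingredients named above, Latin Hypercube Sampling plus first-order Sobol indices, using a small polynomial as a stand-in for the FEM surrogate; the model and sample sizes are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def lhs(n, d):
    """Basic Latin Hypercube Sample on [0,1]^d: one point per stratum per column."""
    u = (np.arange(n)[:, None] + rng.random((n, d))) / n
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])
    return u

def model(x):
    """Stand-in for the nonlinear FEM response: polynomial with an interaction."""
    return 2.0 * x[:, 0] + 0.5 * x[:, 1] + x[:, 0] * x[:, 1]

def first_order_sobol(f, d, n=20000):
    """Pick-freeze (Saltelli-type) estimate of first-order Sobol indices."""
    a, b = lhs(n, d), lhs(n, d)
    ya, yb = f(a), f(b)
    var = ya.var()
    s = np.empty(d)
    for i in range(d):
        ab = b.copy()
        ab[:, i] = a[:, i]            # freeze factor i at the 'a' sample
        s[i] = np.mean(ya * (f(ab) - yb)) / var
    return s

S = first_order_sobol(model, d=2)     # analytically ~ [0.85, 0.14] for this model
```

In the paper's setting the expensive FEM runs would be replaced by evaluations of the fitted approximation polynomial, which is what makes the large sample sizes affordable.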
Development and evaluation of a microdevice for amino acid biomarker detection and analysis on Mars
Skelley, Alison M.; Scherer, James R.; Aubrey, Andrew D.; Grover, William H.; Ivester, Robin H. C.; Ehrenfreund, Pascale; Grunthaner, Frank J.; Bada, Jeffrey L.; Mathies, Richard A.
2005-01-01
The Mars Organic Analyzer (MOA), a microfabricated capillary electrophoresis (CE) instrument for sensitive amino acid biomarker analysis, has been developed and evaluated. The microdevice consists of a four-wafer sandwich combining glass CE separation channels, microfabricated pneumatic membrane valves and pumps, and a nanoliter fluidic network. The portable MOA instrument integrates high voltage CE power supplies, pneumatic controls, and fluorescence detection optics necessary for field operation. The amino acid concentration sensitivities range from micromolar to 0.1 nM, corresponding to part-per-trillion sensitivity. The MOA was first used in the lab to analyze soil extracts from the Atacama Desert, Chile, detecting amino acids ranging from 10–600 parts per billion. Field tests of the MOA in the Panoche Valley, CA, successfully detected amino acids at 70 parts per trillion to 100 parts per billion in jarosite, a sulfate-rich mineral associated with liquid water that was recently detected on Mars. These results demonstrate the feasibility of using the MOA to perform sensitive in situ amino acid biomarker analysis on soil samples representative of a Mars-like environment. PMID:15657130
Metrics for evaluating performance and uncertainty of Bayesian network models
Bruce G. Marcot
2012-01-01
This paper presents a selected set of existing and new metrics for gauging Bayesian network model performance and uncertainty. Selected existing and new metrics are discussed for conducting model sensitivity analysis (variance reduction, entropy reduction, case file simulation); evaluating scenarios (influence analysis); depicting model complexity (numbers of model...
Kortink, Elise D; Weeda, Wouter D; Crowley, Michael J; Gunther Moor, Bregtje; van der Molen, Melle J W
2018-06-01
Monitoring social threat is essential for maintaining healthy social relationships, and recent studies suggest a neural alarm system that governs our response to social rejection. Frontal-midline theta (4-8 Hz) oscillatory power might act as a neural correlate of this system by being sensitive to unexpected social rejection. Here, we examined whether frontal-midline theta is modulated by individual differences in personality constructs sensitive to social disconnection. In addition, we examined the sensitivity of feedback-related brain potentials (i.e., the feedback-related negativity and P3) to social feedback. Sixty-five undergraduate female participants (mean age = 19.69 years) participated in the Social Judgment Paradigm, a fictitious peer-evaluation task in which participants provided expectancies about being liked/disliked by peer strangers. Thereafter, they received feedback signaling social acceptance/rejection. A community structure analysis was employed to delineate personality profiles in our data. Results provided evidence of two subgroups: one group scored high on attachment-related anxiety and fear of negative evaluation, whereas the other group scored high on attachment-related avoidance and low on fear of negative evaluation. In both groups, unexpected rejection feedback yielded a significant increase in theta power. The feedback-related negativity was sensitive to unexpected feedback, regardless of valence, and was largest for unexpected rejection feedback. The feedback-related P3 was significantly enhanced in response to expected social acceptance feedback. Together, these findings confirm the sensitivity of frontal midline theta oscillations to the processing of social threat, and suggest that this alleged neural alarm system behaves similarly in individuals that differ in personality constructs relevant to social evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Estep, Donald
2015-11-30
This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2015-05-01
Sensitivity analysis is an essential paradigm in Earth and Environmental Systems modeling. However, the term "sensitivity" has a clear definition, based on partial derivatives, only when specified locally around a particular point (e.g., optimal solution) in the problem space. Accordingly, no unique definition exists for "global sensitivity" across the problem space, when considering one or more model responses to different factors such as model parameters or forcings. A variety of approaches have been proposed for global sensitivity analysis, based on different philosophies and theories, and each of these formally characterizes a different "intuitive" understanding of sensitivity. These approaches focus on different properties of the model response at a fundamental level and may therefore lead to different (even conflicting) conclusions about the underlying sensitivities. Here we revisit the theoretical basis for sensitivity analysis, summarize and critically evaluate existing approaches in the literature, and demonstrate their flaws and shortcomings through conceptual examples. We also demonstrate the difficulty involved in interpreting "global" interaction effects, which may undermine the value of existing interpretive approaches. With this background, we identify several important properties of response surfaces that are associated with the understanding and interpretation of sensitivities in the context of Earth and Environmental System models. Finally, we highlight the need for a new, comprehensive framework for sensitivity analysis that effectively characterizes all of the important sensitivity-related properties of model response surfaces.
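The local/global distinction drawn above can be made concrete with a toy response surface (an illustration of the general point, not an example from the paper): a factor can have a near-zero partial derivative at a particular point yet dominate the output variance globally.

```python
import numpy as np

def f(x1, x2):
    """Toy response: additive in x1, with a sine term in x2."""
    return x1 + 5.0 * np.sin(np.pi * x2)

# "Local" sensitivity: partial derivatives at the point (0.5, 0.5).
h = 1e-6
d1 = (f(0.5 + h, 0.5) - f(0.5 - h, 0.5)) / (2 * h)   # = 1 (linear term)
d2 = (f(0.5, 0.5 + h) - f(0.5, 0.5 - h)) / (2 * h)   # ~ 0: the sine peaks here

# "Global" variance-based view over x1, x2 ~ U(0,1): the model is additive,
# so first-order indices follow from the variance of each term.
rng = np.random.default_rng(1)
x1, x2 = rng.random(100_000), rng.random(100_000)
v1 = np.var(x1)
v2 = np.var(5.0 * np.sin(np.pi * x2))
S1, S2 = v1 / (v1 + v2), v2 / (v1 + v2)
# Locally x2 looks irrelevant (d2 ~ 0); globally it dominates (S2 ~ 0.97).
```

This is the kind of disagreement between "intuitive" sensitivity notions that the paper argues must be confronted explicitly.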
Global Sensitivity Analysis for Process Identification under Model Uncertainty
NASA Astrophysics Data System (ADS)
Ye, M.; Dai, H.; Walker, A. P.; Shi, L.; Yang, J.
2015-12-01
The environmental system consists of various physical, chemical, and biological processes, and environmental models are built to simulate these processes and their interactions. For model building, improvement, and validation, it is necessary to identify important processes so that limited resources can be used to better characterize them. While global sensitivity analysis has been widely used to identify important processes, the identification is typically based on a deterministic process conceptualization that uses a single model to represent each process. However, environmental systems are complex, and it often happens that a single process may be simulated by multiple alternative models. Ignoring model uncertainty in process identification may lead to biased identification, in that processes identified as important may not be so in the real world. This study addresses this problem by developing a new method of global sensitivity analysis for process identification. The new method is based on the concepts of Sobol sensitivity analysis and model averaging. Similar to Sobol sensitivity analysis for identifying important parameters, our new method evaluates the variance change when a process is fixed at its different conceptualizations. The variance accounts for both parametric and model uncertainty using the method of model averaging. The method is demonstrated using a synthetic groundwater modeling study that considers a recharge process and a parameterization process, each with two alternative models. Important processes of groundwater flow and transport are evaluated using our new method. The method is mathematically general and can be applied to a wide range of environmental problems.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not known. The STO Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic problems of interest to NATO. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper presents an overview of the AVT-191 program content.
NASA Technical Reports Server (NTRS)
Benek, John A.; Luckring, James M.
2017-01-01
A NATO symposium held in Greece in 2008 identified many promising sensitivity analysis and uncertainty quantification technologies, but the maturity and suitability of these methods for realistic applications was not clear. The NATO Science and Technology Organization, Task Group AVT-191 was established to evaluate the maturity and suitability of various sensitivity analysis and uncertainty quantification methods for application to realistic vehicle development problems. The program ran from 2011 to 2015, and the work was organized into four discipline-centric teams: external aerodynamics, internal aerodynamics, aeroelasticity, and hydrodynamics. This paper summarizes findings and lessons learned from the task group.
PC-BASED SUPERCOMPUTING FOR UNCERTAINTY AND SENSITIVITY ANALYSIS OF MODELS
Evaluating uncertainty and sensitivity of multimedia environmental models that integrate assessments of air, soil, sediments, groundwater, and surface water is a difficult task. It can be an enormous undertaking even for simple, single-medium models (i.e. groundwater only) descr...
Are LOD and LOQ Reliable Parameters for Sensitivity Evaluation of Spectroscopic Methods?
Ershadi, Saba; Shayanfar, Ali
2018-03-22
The limit of detection (LOD) and the limit of quantification (LOQ) are common parameters to assess the sensitivity of analytical methods. In this study, the LOD and LOQ of previously reported terbium sensitized analysis methods were calculated by different methods, and the results were compared with sensitivity parameters [lower limit of quantification (LLOQ)] of U.S. Food and Drug Administration guidelines. The details of the calibration curve and standard deviation of blank samples of three different terbium-sensitized luminescence methods for the quantification of mycophenolic acid, enrofloxacin, and silibinin were used for the calculation of LOD and LOQ. A comparison of LOD and LOQ values calculated by various methods and LLOQ shows a considerable difference. The significant difference of the calculated LOD and LOQ with various methods and LLOQ should be considered in the sensitivity evaluation of spectroscopic methods.
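One of the common ICH-style calculations the paper compares uses the standard deviation of blank responses and the calibration slope (LOD = 3.3σ/S, LOQ = 10σ/S); a minimal sketch with invented blank readings and slope:

```python
import statistics

def lod_loq(blank_signals, slope):
    """ICH-style estimates: LOD = 3.3*sigma/S, LOQ = 10*sigma/S, with sigma the
    standard deviation of blank responses and S the calibration-curve slope.
    Other sigma choices (residual SD, SD of the intercept) give different
    values, which is precisely the discrepancy the paper examines."""
    sigma = statistics.stdev(blank_signals)
    return 3.3 * sigma / slope, 10.0 * sigma / slope

blanks = [0.010, 0.012, 0.011, 0.009, 0.013]   # hypothetical blank readings
slope = 0.50                                    # hypothetical signal per unit conc.
lod, loq = lod_loq(blanks, slope)
```

By construction LOQ/LOD = 10/3.3 with this formula, whereas an experimentally determined LLOQ need not bear any fixed ratio to LOD, which is the mismatch the study highlights.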
Herrera, Melina E; Mobilia, Liliana N; Posse, Graciela R
2011-01-01
The objective of this study is to perform a comparative evaluation of the prediffusion and minimum inhibitory concentration (MIC) methods for the detection of sensitivity to colistin, and to detect Acinetobacter baumannii-calcoaceticus complex (ABC) heteroresistant isolates to colistin. We studied 75 isolates of ABC recovered from clinically significant samples obtained from various centers. Sensitivity to colistin was determined by prediffusion as well as by MIC. All the isolates were sensitive to colistin, with MIC = 2 µg/ml. The results were analyzed by dispersion graph and linear regression analysis, revealing that the prediffusion method did not correlate with the MIC values for isolates sensitive to colistin (r² = 0.2017). Detection of heteroresistance to colistin was determined by plaque efficiency of all the isolates with the same initial MICs of 2, 1, and 0.5 µg/ml, which resulted in 14 of them with a greater than 8-fold increase in the MIC in some cases. When the sensitivity of these resistant colonies was determined by prediffusion, the resulting dispersion graph and linear regression analysis yielded an r² = 0.604, which revealed a correlation between the methodologies used.
Sensitivity Analysis for Some Water Pollution Problems
NASA Astrophysics Data System (ADS)
Le Dimet, François-Xavier; Tran Thu, Ha; Hussaini, Yousuff
2014-05-01
Sensitivity analysis employs a response function and the variable with respect to which its sensitivity is evaluated. If the state of the system is retrieved through a variational data assimilation process, then the observations appear only in the Optimality System (OS). In many cases, observations have errors, and it is important to estimate their impact. Therefore, sensitivity analysis has to be carried out on the OS, and in that sense sensitivity analysis is a second-order property. The OS can be considered a generalized model because it contains all the available information. This presentation proposes a general method for carrying out such sensitivity analysis. The method is demonstrated with an application to a water pollution problem. The model involves the shallow water equations and an equation for the pollutant concentration. These equations are discretized using a finite volume method. The response function depends on the pollutant source, and its sensitivity with respect to the source term of the pollutant is studied. Specifically, we consider: • identification of unknown parameters, and • identification of sources of pollution and sensitivity with respect to the sources. We also use a Singular Evolutive Interpolated Kalman Filter to study this problem. The presentation includes a comparison of the results from these two methods.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
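The contrast between the two technique families above can be illustrated on a static analog: an overall finite-difference method repeats the analysis for a perturbed design, while a semi-analytical method differentiates the governing equation directly. The two-spring system and parameter below are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical 2-DOF spring chain: K(p) u = f, where p scales the first spring.

def stiffness(p):
    k1, k2 = p, 5.0
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

f = np.array([0.0, 1.0])  # unit load at the tip
p0 = 2.0

def displacement(p):
    return np.linalg.solve(stiffness(p), f)

# Overall finite difference: repeat the analysis for perturbed designs
h = 1e-6
du_fd = (displacement(p0 + h) - displacement(p0 - h)) / (2 * h)

# Semi-analytical: differentiate K u = f directly, du/dp = -K^{-1} (dK/dp) u
dK_dp = np.array([[1.0, 0.0],
                  [0.0, 0.0]])   # only the first spring depends on p
u0 = displacement(p0)
du_sa = -np.linalg.solve(stiffness(p0), dK_dp @ u0)

print(np.allclose(du_fd, du_sa, atol=1e-4))  # the two estimates agree
```

For this simple system both routes give the exact derivative; the abstract's point is that in reduced-basis transient analyses the finite-difference route degrades when the approximation vectors are held fixed.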
NASA Astrophysics Data System (ADS)
Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis
2017-12-01
Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
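First-order Sobol' indices like those evaluated above can be sketched with a plain Monte Carlo (Saltelli-type) estimator. The toy three-parameter linear model below merely stands in for the vegetation parameters; it is not the Effective Quadratures method, which achieves the same indices with far fewer model runs.

```python
import numpy as np

rng = np.random.default_rng(42)

def model(x):
    # Toy response: the first parameter matters most, the third least
    return 4.0 * x[:, 0] + 2.0 * x[:, 1] + 1.0 * x[:, 2]

N, d = 20000, 3
A = rng.random((N, d))           # two independent sample matrices
B = rng.random((N, d))
fA, fB = model(A), model(B)
V = np.var(np.concatenate([fA, fB]))   # total output variance

S = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]          # matrix A with column i taken from B
    S[i] = np.mean(fB * (model(ABi) - fA)) / V   # first-order Sobol' index

print(np.round(S, 2))  # theory for this model: [16/21, 4/21, 1/21]
```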
Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.
2011-01-01
Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
NASA Astrophysics Data System (ADS)
Graham, Eleanor; Cuore Collaboration
2017-09-01
The CUORE experiment is a large-scale bolometric detector seeking to observe the never-before-seen process of neutrinoless double beta decay. Predictions for CUORE's sensitivity to neutrinoless double beta decay allow for an understanding of the half-life ranges that the detector can probe, and also allow evaluation of the relative importance of different detector parameters. Currently, CUORE uses a Bayesian analysis based on BAT, which uses Metropolis-Hastings Markov Chain Monte Carlo, for its sensitivity studies. My work evaluates the viability and potential improvements of switching the Bayesian analysis to Hamiltonian Monte Carlo, realized through the program Stan and its Morpho interface. I demonstrate that the BAT study can be successfully recreated in Stan, and perform a detailed comparison between the results and computation times of the two methods.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
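The starting point for such eigenvalue sensitivity methods is the classical derivative formula using left and right eigenvectors: with A(p)x = λx and yᴴA(p) = λyᴴ, one has dλ/dp = yᴴ(dA/dp)x / (yᴴx). A minimal sketch for a non-hermitian 2×2 matrix, with the matrix and parameter invented for illustration:

```python
import numpy as np

def A(p):
    return np.array([[2.0, p],
                     [1.0, 3.0]])

dA_dp = np.array([[0.0, 1.0],
                  [0.0, 0.0]])   # only the (1,2) entry depends on p
p0 = 0.5

w, X = np.linalg.eig(A(p0))
k = np.argmax(w.real)                      # track the dominant eigenvalue
x = X[:, k]                                # right eigenvector
wl, Y = np.linalg.eig(A(p0).conj().T)      # eigenvectors of A^H give left vectors
y = Y[:, np.argmin(np.abs(wl - w[k]))]     # left eigenvector matching lambda

dlam = (y.conj() @ dA_dp @ x) / (y.conj() @ x)   # d(lambda)/dp

# Finite-difference check against the analytical derivative
h = 1e-7
lam_max = lambda p: np.max(np.linalg.eigvals(A(p)).real)
dlam_fd = (lam_max(p0 + h) - lam_max(p0 - h)) / (2 * h)
print(abs(dlam.real - dlam_fd) < 1e-5)
```

For this matrix the dominant eigenvalue is (5 + √(1 + 4p))/2, so the exact derivative at p = 0.5 is 1/√3, which both estimates recover.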
Reconciling uncertain costs and benefits in bayes nets for invasive species management
Burgman, M.A.; Wintle, B.A.; Thompson, C.A.; Moilanen, A.; Runge, M.C.; Ben-Haim, Y.
2010-01-01
Bayes nets are used increasingly to characterize environmental systems and formalize probabilistic reasoning to support decision making. These networks treat probabilities as exact quantities. Sensitivity analysis can be used to evaluate the importance of assumptions and parameter estimates. Here, we outline an application of info-gap theory to Bayes nets that evaluates the sensitivity of decisions to possibly large errors in the underlying probability estimates and utilities. We apply it to an example of management and eradication of Red Imported Fire Ants in Southern Queensland, Australia and show how changes in management decisions can be justified when uncertainty is considered. © 2009 Society for Risk Analysis.
Bayesian Sensitivity Analysis of Statistical Models with Missing Data
ZHU, HONGTU; IBRAHIM, JOSEPH G.; TANG, NIANSHENG
2013-01-01
Methods for handling missing data depend strongly on the mechanism that generated the missing values, such as missing completely at random (MCAR) or missing at random (MAR), as well as other distributional and modeling assumptions at various stages. It is well known that the resulting estimates and tests may be sensitive to these assumptions as well as to outlying observations. In this paper, we introduce various perturbations to modeling assumptions and individual observations, and then develop a formal sensitivity analysis to assess these perturbations in the Bayesian analysis of statistical models with missing data. We develop a geometric framework, called the Bayesian perturbation manifold, to characterize the intrinsic structure of these perturbations. We propose several intrinsic influence measures to perform sensitivity analysis and quantify the effect of various perturbations to statistical models. We use the proposed sensitivity analysis procedure to systematically investigate the tenability of the non-ignorable missing at random (NMAR) assumption. Simulation studies are conducted to evaluate our methods, and a dataset is analyzed to illustrate the use of our diagnostic measures. PMID:24753718
Xi, Qing; Li, Zhao-Fu; Luo, Chuan
2014-05-01
Sensitivity analysis of hydrology and water quality parameters is of great significance for an integrated model's construction and application. Based on the AnnAGNPS model's mechanism, 31 parameters in four major categories (terrain, hydrometeorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was then used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that: among the 11 terrain parameters, LS was sensitive to all the model results, while RMN, RS and RVC were generally or less sensitive to the sediment output but insensitive to the remaining results. Among the hydrometeorological parameters, CN was more sensitive to runoff and sediment and relatively sensitive for the remaining results. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were less sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all the results except the runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were less sensitive to the corresponding results. The simulation and verification results for runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10%, during 2005-2010. These results provide a direct reference for AnnAGNPS parameter selection and calibration adjustment. The runoff simulation results for the study area also show that the sensitivity analysis is practicable for parameter adjustment, demonstrate the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
Space tug economic analysis study. Volume 2: Tug concepts analysis. Part 2: Economic analysis
NASA Technical Reports Server (NTRS)
1972-01-01
An economic analysis of space tug operations is presented. The subjects discussed are: (1) cost uncertainties, (2) scenario analysis, (3) economic sensitivities, (4) mixed integer programming formulation of the space tug problem, and (5) critical parameters in the evaluation of a public expenditure.
Gan, Yanjun; Duan, Qingyun; Gong, Wei; ...
2014-01-01
Sensitivity analysis (SA) is a commonly used approach for identifying important parameters that dominate model behaviors. We use a newly developed software package, a Problem Solving environment for Uncertainty Analysis and Design Exploration (PSUADE), to evaluate the effectiveness and efficiency of ten widely used SA methods, including seven qualitative and three quantitative ones. All SA methods are tested using a variety of sampling techniques to screen out the most sensitive (i.e., important) parameters from the insensitive ones. The Sacramento Soil Moisture Accounting (SAC-SMA) model, which has thirteen tunable parameters, is used for illustration. The South Branch Potomac River basin near Springfield, West Virginia in the U.S. is chosen as the study area. The key findings from this study are: (1) For qualitative SA methods, Correlation Analysis (CA), Regression Analysis (RA), and Gaussian Process (GP) screening methods are shown to be not effective in this example. Morris One-At-a-Time (MOAT) screening is the most efficient, needing only 280 samples to identify the most important parameters, but it is the least robust method. Multivariate Adaptive Regression Splines (MARS), Delta Test (DT) and Sum-Of-Trees (SOT) screening methods need about 400–600 samples for the same purpose. Monte Carlo (MC), Orthogonal Array (OA) and Orthogonal Array based Latin Hypercube (OALH) are appropriate sampling techniques for them; (2) For quantitative SA methods, at least 2777 samples are needed for the Fourier Amplitude Sensitivity Test (FAST) to identify parameter main effects. The McKay method needs about 360 samples to evaluate the main effect, and more than 1000 samples to assess the two-way interaction effect. OALH and LPτ (LPTAU) sampling techniques are more appropriate for the McKay method. For the Sobol' method, the minimum number of samples needed is 1050 to compute the first-order and total sensitivity indices correctly.
These comparisons show that qualitative SA methods are more efficient but less accurate and robust than quantitative ones.
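As one concrete example of the qualitative methods compared above, a bare-bones Morris One-At-a-Time (MOAT) elementary-effects screening can be sketched as follows; the toy model and settings are invented for illustration and are not PSUADE's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Toy model: x0 strong linear effect, x1 and x2 interact, x3 inert
    return 10 * x[0] + 5 * x[1] * x[2] + 0 * x[3]

d, r, delta = 4, 50, 0.25        # dimensions, trajectories, step size
ee = np.zeros((r, d))
for t in range(r):
    x = rng.random(d) * (1 - delta)   # keep x + delta inside [0, 1]
    for i in rng.permutation(d):      # vary one factor at a time
        x_new = x.copy()
        x_new[i] += delta
        ee[t, i] = (f(x_new) - f(x)) / delta   # elementary effect of factor i
        x = x_new

mu_star = np.abs(ee).mean(axis=0)   # mean |elementary effect|: importance
sigma = ee.std(axis=0)              # spread: nonlinearity / interactions
print(np.round(mu_star, 1))
```

For this model mu_star flags x0 as dominant and x3 as inert, while a nonzero sigma for x1 and x2 signals their interaction, which is exactly the screening information MOAT provides cheaply.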
Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design
NASA Technical Reports Server (NTRS)
Kuguoglu, Latife; Ludwiczak, Damian
2006-01-01
The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. A probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. The stress and displacement contours from the deterministic structural analysis at mean probability were computed and the results are presented. This is followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as the maximum displacement, the maximum tensile and compressive stresses of the facesheet in the x and y directions, and the maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
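The sensitivity and elasticity quantities discussed above have standard closed forms from the dominant eigenvectors of the projection matrix: s_ij = v_i w_j / ⟨v, w⟩ and e_ij = (a_ij/λ) s_ij. A sketch with an invented two-stage matrix (not the killer whale data):

```python
import numpy as np

A = np.array([[0.0, 2.0],     # top row: stage-specific fecundities
              [0.5, 0.8]])    # survival / transition rates

w_all, W = np.linalg.eig(A)
k = np.argmax(w_all.real)
lam = w_all[k].real                        # finite rate of increase, lambda
w = np.abs(W[:, k].real)                   # right eigenvector: stable stage structure
wl, V = np.linalg.eig(A.T)
v = np.abs(V[:, np.argmax(wl.real)].real)  # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)   # sensitivities d(lambda)/d(a_ij), absolute scale
E = A * S / lam                # elasticities: proportional-scale comparison

print(round(lam, 3))
print(np.round(E, 3))
```

The elasticities of a nonnegative projection matrix sum to 1, which is why they are often read as proportional contributions to λ; the paper's point is that this proportional scaling is itself a choice that can mislead.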
Simplified methods for evaluating road prism stability
William J. Elliot; Mark Ballerini; David Hall
2003-01-01
Mass failure is one of the most common failures of low-volume roads in mountainous terrain. Current methods for evaluating stability of these roads require a geotechnical specialist. A stability analysis program, XSTABL, was used to estimate the stability of 3,696 combinations of road geometry, soil, and groundwater conditions. A sensitivity analysis was carried out to...
A comparison of analysis methods to estimate contingency strength.
Lloyd, Blair P; Staubitz, Johanna L; Tapp, Jon T
2018-05-09
To date, several data analysis methods have been used to estimate contingency strength, yet few studies have compared these methods directly. To compare the relative precision and sensitivity of four analysis methods (i.e., exhaustive event-based, nonexhaustive event-based, concurrent interval, concurrent+lag interval), we applied all methods to a simulated data set in which several response-dependent and response-independent schedules of reinforcement were programmed. We evaluated the degree to which contingency strength estimates produced from each method (a) corresponded with expected values for response-dependent schedules and (b) showed sensitivity to parametric manipulations of response-independent reinforcement. Results indicated both event-based methods produced contingency strength estimates that aligned with expected values for response-dependent schedules, but differed in sensitivity to response-independent reinforcement. The precision of interval-based methods varied by analysis method (concurrent vs. concurrent+lag) and schedule type (continuous vs. partial), and showed similar sensitivities to response-independent reinforcement. Recommendations and considerations for measuring contingencies are identified. © 2018 Society for the Experimental Analysis of Behavior.
NASA Technical Reports Server (NTRS)
Park, Nohpill; Reagan, Shawn; Franks, Greg; Jones, William G.
1999-01-01
This paper discusses analytical approaches to evaluating the performance of spacecraft on-board computing systems, with the ultimate aim of achieving a reliable spacecraft data communications system. The sensitivity analysis approach applied to the memory system of ProSEDS (Propulsive Small Expendable Deployer System), as a part of its data communication system, will be investigated. Also, general issues and possible approaches to a reliable spacecraft on-board interconnection network and processor array will be shown. The performance issues of spacecraft on-board computing systems, such as sensitivity, throughput, delay and reliability, will be introduced and discussed.
NASA Astrophysics Data System (ADS)
Harshan, Suraj
The main objective of the present thesis is the improvement of the TEB/ISBA (SURFEX) urban land surface model (ULSM) through comprehensive evaluation, sensitivity analysis, and optimization experiments using energy balance and radiative and air temperature data observed during 11 months at a tropical suburban site in Singapore. Overall the performance of the model is satisfactory, with a small underestimation of net radiation and an overestimation of sensible heat flux. Weaknesses in predicting the latent heat flux are apparent, with smaller model values during daytime, and the model also significantly underpredicts both the daytime peak and nighttime storage heat. Surface temperatures of all facets are generally overpredicted. Significant variation exists in the model behaviour between dry and wet seasons. The vegetation parametrization used in the model is inadequate to represent the moisture dynamics, producing unrealistically low latent heat fluxes during a particularly dry period. The comprehensive evaluation of the ULSM shows the need for accurate estimation of input parameter values for the present site. Since obtaining many of these parameters through empirical methods is not feasible, the present study employed a two-step approach aimed at providing information about the most sensitive parameters and an optimized parameter set from model calibration. Two well-established sensitivity analysis methods (global: Sobol and local: Morris) and a state-of-the-art multiobjective evolutionary algorithm (Borg) were employed for sensitivity analysis and parameter estimation. Experiments were carried out for three different weather periods. The analysis indicates that roof-related parameters are the most important ones in controlling the behaviour of the sensible heat flux and net radiation flux, with roof and road albedo as the most influential parameters. Soil moisture initialization parameters are important in controlling the latent heat flux.
The built (town) fraction has a significant influence on all fluxes considered. Comparison between the Sobol and Morris methods shows similar sensitivities, indicating the robustness of the present analysis and that the Morris method can be employed as a computationally cheaper alternative to Sobol's method. The optimization as well as the sensitivity experiments for the three periods (dry, wet and mixed) show a noticeable difference in parameter sensitivity and parameter convergence, indicating inadequacies in the model formulation. The existence of a significant proportion of less sensitive parameters might indicate an over-parametrized model. The Borg MOEA showed great promise in optimizing the input parameter set. The optimized model, modified using site-specific values for the thermal roughness length parametrization, shows an improvement in the performance of outgoing longwave radiation flux, overall surface temperature, heat storage flux and sensible heat flux.
Multiobjective Sensitivity Analysis Of Sediment And Nitrogen Processes With A Watershed Model
This paper presents a computational analysis for evaluating critical non-point-source sediment and nutrient (specifically nitrogen) processes and management actions at the watershed scale. In the analysis, model parameters that bear key uncertainties were presumed to reflect the ...
Zhang, Lifan; Shi, Xiaochun; Zhang, Yueqiu; Zhang, Yao; Huo, Feifei; Zhou, Baotong; Deng, Guohua; Liu, Xiaoqing
2017-08-10
T-SPOT.TB does not provide a perfect diagnosis of active tuberculosis (ATB), and several factors may influence its results. We conducted this study to evaluate possible factors associated with the sensitivity and specificity of T-SPOT.TB, and the diagnostic parameters under varied conditions. Patients with suspected ATB were enrolled prospectively. Influencing factors of the sensitivity and specificity of T-SPOT.TB were evaluated using logistic regression models. Sensitivity, specificity, predictive values (PV), and likelihood ratios (LR) were calculated with consideration of relevant factors. Of the 865 participants, 205 (23.7%) had ATB, including 58 (28.3%) with microbiologically confirmed TB and 147 (71.7%) with clinically diagnosed TB; 615 (71.1%) were non-TB. 45 (5.2%) cases were clinically indeterminate and excluded from the final analysis. In multivariate analysis, serous effusion was the only independent risk factor related to lower sensitivity (OR = 0.39, 95% CI: 0.18-0.81) among patients with ATB. Among non-TB patients, age, TB history, immunosuppressive agents/glucocorticoid treatment and lymphocyte count were the independent risk factors related to the specificity of T-SPOT.TB. The sensitivity, specificity, PV+, PV-, LR+ and LR- of T-SPOT.TB for diagnosis of ATB were 78.5%, 74.1%, 50.3%, 91.2%, 3.0 and 0.3, respectively. This study suggests that factors influencing the sensitivity and specificity of T-SPOT.TB should be considered when interpreting T-SPOT.TB results.
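All of the reported diagnostic parameters follow from a single 2×2 table. The cell counts below are hypothetical, chosen only to be consistent with the abstract's totals and reported values; the study's actual table is not given here.

```python
def diagnostics(tp, fn, fp, tn):
    """Standard diagnostic-accuracy measures from 2x2 table counts."""
    sens = tp / (tp + fn)           # sensitivity
    spec = tn / (tn + fp)           # specificity
    ppv = tp / (tp + fp)            # PV+ (positive predictive value)
    npv = tn / (tn + fn)            # PV- (negative predictive value)
    lr_pos = sens / (1 - spec)      # LR+
    lr_neg = (1 - sens) / spec      # LR-
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Hypothetical counts: 161 of 205 ATB patients test positive,
# 456 of 615 non-TB patients test negative
sens, spec, ppv, npv, lrp, lrn = diagnostics(tp=161, fn=44, fp=159, tn=456)
print(f"sens={sens:.1%} spec={spec:.1%} PV+={ppv:.1%} "
      f"PV-={npv:.1%} LR+={lrp:.1f} LR-={lrn:.1f}")
```

Note that PV+ and PV- depend on the prevalence in the study sample (here about 25%), whereas the likelihood ratios do not, which is why LRs transfer better across settings.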
ERIC Educational Resources Information Center
Metzger, Isha; Cooper, Shauna M.; Zarrett, Nicole; Flory, Kate
2013-01-01
The current review conducted a systematic assessment of culturally sensitive risk prevention programs for African American adolescents. Prevention programs meeting the inclusion and exclusion criteria were evaluated across several domains: (1) theoretical orientation and foundation; (2) methodological rigor; (3) level of cultural integration; (4)…
Matrix population models are often used to extrapolate from life stage-specific stressor effects on survival and reproduction to population-level effects. Demographic elasticity analysis of a matrix model allows an evaluation of the relative sensitivity of population growth rate ...
NASA Astrophysics Data System (ADS)
Ye, M.; Chen, Z.; Shi, L.; Zhu, Y.; Yang, J.
2017-12-01
Nitrogen reactive transport modeling is subject to uncertainty in model parameters, structures, and scenarios. While global sensitivity analysis is a vital tool for identifying the parameters important to nitrogen reactive transport, conventional global sensitivity analysis only considers parametric uncertainty. This may result in inaccurate selection of important parameters, because parameter importance may vary under different models and modeling scenarios. By using a recently developed variance-based global sensitivity analysis method, this paper identifies important parameters with simultaneous consideration of parametric uncertainty, model uncertainty, and scenario uncertainty. In a numerical example of nitrogen reactive transport modeling, a combination of three scenarios of soil temperature and two scenarios of soil moisture leads to a total of six scenarios. Four alternative models are used to evaluate reduction functions used for calculating actual rates of nitrification and denitrification. The model uncertainty is tangled with scenario uncertainty, as the reduction functions depend on soil temperature and moisture content. The results of sensitivity analysis show that parameter importance varies substantially between different models and modeling scenarios, which may lead to inaccurate selection of important parameters if model and scenario uncertainties are not considered. This problem is avoided by using the new method of sensitivity analysis in the context of model averaging and scenario averaging. The new method of sensitivity analysis can be applied to other problems of contaminant transport modeling when model uncertainty and/or scenario uncertainty are present.
Fish oil supplementation and insulin sensitivity: a systematic review and meta-analysis.
Gao, Huanqing; Geng, Tingting; Huang, Tao; Zhao, Qinghua
2017-07-03
Fish oil supplementation has been shown to be associated with a lower risk of metabolic syndrome and to benefit a wide range of chronic diseases, such as cardiovascular disease, type 2 diabetes and several types of cancer. However, the evidence on the effects of fish oil supplementation on glucose metabolism and insulin sensitivity is still controversial. This meta-analysis summarized the existing evidence on the relationship between fish oil supplementation and insulin sensitivity and aimed to evaluate whether fish oil supplementation could improve insulin sensitivity. We searched the Cochrane Library, PubMed and Embase databases for relevant studies up to Dec 2016. Two researchers screened the literature independently according to the selection and exclusion criteria. Studies were pooled using random-effects models to estimate a pooled SMD and corresponding 95% CI. The meta-analysis was performed with Stata 13.1 software. A total of 17 studies with 672 participants were included in this meta-analysis after screening of the 498 published articles found in the initial search. In the pooled analysis, fish oil supplementation had no effect on insulin sensitivity compared with placebo (SMD 0.17, 95% CI -0.15 to 0.48, p = 0.292). In subgroup analysis, fish oil supplementation benefited insulin sensitivity among people who were experiencing at least one symptom of metabolic disorders (SMD 0.53, 95% CI 0.17 to 0.88, p < 0.001). Similarly, there were no significant differences between subgroups by method of measuring insulin sensitivity, dose of omega-3 polyunsaturated fatty acids (n-3 PUFA) or duration of the intervention. The sensitivity analysis indicated that the results were robust. Short-term fish oil supplementation is associated with increased insulin sensitivity among people with metabolic disorders.
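The random-effects pooling behind a pooled SMD can be sketched with the DerSimonian-Laird estimator; the study-level SMDs and standard errors below are invented for illustration and are not the trials from this meta-analysis.

```python
import numpy as np

# Hypothetical study-level standardized mean differences and standard errors
smd = np.array([0.40, -0.10, 0.25, 0.05, 0.60])
se  = np.array([0.20,  0.15, 0.30, 0.25, 0.35])

w_fixed = 1.0 / se**2                                 # inverse-variance weights
mu_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)    # fixed-effect pooled SMD

# Between-study variance tau^2 (DerSimonian-Laird moment estimator)
Q = np.sum(w_fixed * (smd - mu_fixed) ** 2)           # heterogeneity statistic
df = len(smd) - 1
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)

w = 1.0 / (se**2 + tau2)            # random-effects weights
mu = np.sum(w * smd) / np.sum(w)    # random-effects pooled SMD
se_mu = np.sqrt(1.0 / np.sum(w))
lo, hi = mu - 1.96 * se_mu, mu + 1.96 * se_mu
print(f"pooled SMD {mu:.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```

Because tau² inflates every study's variance equally, random-effects weights are more even than fixed-effect weights, which is why heterogeneous trials widen the confidence interval around the pooled SMD.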
NASA Astrophysics Data System (ADS)
Chu, J.; Zhang, C.; Fu, G.; Li, Y.; Zhou, H.
2015-08-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed method dramatically reduces the computational demands required for attaining high-quality approximations of optimal trade-off relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed dimension reduction and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform dimension reduction of optimization problems when solving complex multi-objective reservoir operation problems.
NASA Astrophysics Data System (ADS)
Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.
2015-04-01
This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
Probabilistic structural analysis of a truss typical for space station
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.
1990-01-01
A three-bay space cantilever truss is probabilistically evaluated using the computer code NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) to identify and quantify the uncertainties, and respective sensitivities, associated with corresponding uncertainties in the primitive variables (structural, material, and loads parameters) that define the truss. The distribution of each of these primitive variables is described in terms of one of several available distributions, such as the Weibull, exponential, normal, and log-normal. The cumulative distribution functions (CDFs) for the response functions considered, and the sensitivities associated with the primitive variables for a given response, are investigated. These sensitivities help in determining the dominant primitive variables for that response.
Field Evaluation of the Pedostructure-Based Model (Kamel®)
USDA-ARS?s Scientific Manuscript database
This study involves a field evaluation of the pedostructure-based model Kamel and comparisons between Kamel and the Hydrus-1D model for predicting profile soil moisture. This paper also presents a sensitivity analysis of Kamel with an evaluation field site used as the base scenario. The field site u...
Levitt, Jacob Oren; Levitt, Barrie H.; Akhavan, Arash; Yanofsky, Howard
2010-01-01
Background. There are relatively few studies published examining the sensitivity and specificity of potassium hydroxide (KOH) smear and fungal culture examination of tinea pedis. Objective. To evaluate the sensitivity and specificity of KOH smear and fungal culture for diagnosing tinea pedis. Methods. A pooled analysis of data from five similarly conducted bioequivalence trials for antifungal drugs was performed. Data from 460 patients enrolled in the vehicle arms of these studies with clinical diagnosis of tinea pedis supported by positive fungal culture were analyzed 6 weeks after initiation of the study to determine the sensitivity and specificity of KOH smear and fungal culture. Results. Using clinical assessment as the gold standard, the sensitivities for KOH smear and culture were 73.3% (95% CI: 66.3 to 79.5%) and 41.7% (34.6 to 49.1%), respectively. The respective specificities for culture and KOH smear were 77.7% (72.2 to 82.5%) and 42.5% (36.6 to 48.6%). Conclusion. KOH smear and fungal culture are complementary diagnostic tests for tinea pedis, with the former being the more sensitive test of the two, and the latter being more specific. PMID:20672004
Shuttle cryogenic supply system optimization study. Volume 1: Management supply, sections 1 - 3
NASA Technical Reports Server (NTRS)
1973-01-01
An analysis of the cryogenic supply system for use on space shuttle vehicles was conducted. The major outputs of the analysis are: (1) evaluations of subsystem and integrated system concepts, (2) selection of representative designs, (3) parametric data and sensitivity studies, (4) evaluation of cryogenic cooling in environmental control subsystems, and (5) development of a mathematical model.
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time-accurate sensitivity analysis of the stage performance of hot-gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high-temperature environment typical of the hot section of jet engines. A real-gas model and film-cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial-flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, enabling it to accurately evaluate the derivatives of the time-varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed, and two applications of it are used to demonstrate the accuracy and time dependence of the differentiation process. The results are compared with finite-difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high-fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time-dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
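The CTSE mentioned above is better known as the complex-step derivative. A minimal sketch, with a generic smooth function standing in for a turbine output functional, shows why it avoids the subtractive cancellation that limits finite differences:

```python
import numpy as np

def complex_step(f, x, h=1e-30):
    # CTSE: f(x + ih) = f(x) + ih f'(x) + O(h^2), so Im(f(x + ih)) / h
    # recovers f'(x) with no subtraction of nearly equal numbers.
    return np.imag(f(x + 1j * h)) / h

f = lambda x: np.exp(x) * np.sin(x)   # stand-in for a smooth performance metric
x0 = 1.5
exact = np.exp(x0) * (np.sin(x0) + np.cos(x0))

cs = complex_step(f, x0)                          # accurate to machine precision
fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6         # central FD for comparison
```

Because the step h can be made arbitrarily small without round-off loss, the complex-step result matches the analytic derivative to machine precision, while the FD result is limited by the usual truncation/round-off trade-off.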
NASA Astrophysics Data System (ADS)
Safaei, S.; Haghnegahdar, A.; Razavi, S.
2016-12-01
Complex environmental models are now the primary tool to inform decision makers on the current and future management of environmental resources under climate and environmental change. These complex models often contain a large number of parameters that must be determined by a computationally intensive calibration procedure. Sensitivity analysis (SA) is a very useful tool that not only allows for understanding model behavior, but also helps reduce the number of calibration parameters by identifying unimportant ones. The issue is that most global sensitivity techniques are themselves highly computationally demanding if robust and stable sensitivity metrics are to be generated over the entire model response surface. Recently, a novel global sensitivity analysis method, Variogram Analysis of Response Surfaces (VARS), was introduced that can efficiently provide a comprehensive assessment of global sensitivity using the variogram concept. In this work, we aim to evaluate the effectiveness of this highly efficient GSA method in reducing computational burden when applied to systems with a very large number of input factors (on the order of 100). We use a test function and a hydrological modelling case study to demonstrate the capability of the VARS method to reduce problem dimensionality by identifying important vs. unimportant input factors.
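The variogram concept underlying VARS can be illustrated with a toy directional variogram of the model response. The two-factor model here is hypothetical, chosen only to show how the variogram separates influential from near-inert factors:

```python
import numpy as np

def gamma(f, d, i, h, n=5000, seed=1):
    """Directional variogram of the response along factor i at lag h:
    gamma_i(h) = 0.5 * E[(f(x + h e_i) - f(x))^2], the quantity VARS builds on."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, d)) * (1.0 - h)   # keep x_i + h inside the unit cube
    Xh = X.copy()
    Xh[:, i] += h
    return 0.5 * np.mean((f(Xh) - f(X)) ** 2)

# Hypothetical model with one influential and one near-inert factor.
model = lambda X: 3.0 * X[:, 0] ** 2 + 0.05 * X[:, 1]

g0 = gamma(model, d=2, i=0, h=0.1)   # large: factor 0 drives the response
g1 = gamma(model, d=2, i=1, h=0.1)   # tiny: factor 1 barely matters
```

Factors whose variograms stay near zero across lags are candidates for exclusion from calibration, which is how VARS supports dimensionality reduction.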
Garway-Heath, David F; Quartilho, Ana; Prah, Philip; Crabb, David P; Cheng, Qian; Zhu, Haogang
2017-08-01
To evaluate the ability of various visual field (VF) analysis methods to discriminate treatment groups in glaucoma clinical trials and establish the value of time-domain optical coherence tomography (TD OCT) imaging as an additional outcome. VFs and retinal nerve fibre layer thickness (RNFLT) measurements (acquired by TD OCT) from 373 glaucoma patients in the UK Glaucoma Treatment Study (UKGTS) at up to 11 scheduled visits over a 2-year interval formed the cohort to assess the sensitivity of progression analysis methods. Specificity was assessed in 78 glaucoma patients with up to 11 repeated VF and OCT RNFLT measurements over a 3-month interval. Growth curve models assessed the difference in VF and RNFLT rates of change between treatment groups. Incident progression was identified by three VF-based methods: Guided Progression Analysis (GPA), 'ANSWERS' and 'PoPLR', and one based on VFs and RNFLT: 'sANSWERS'. Sensitivity, specificity and discrimination between treatment groups were evaluated. The rate of VF change was significantly faster in the placebo group than in the active-treatment group (-0.29 vs +0.03 dB/year, P <.001); the rate of RNFLT change was not different (-1.7 vs -1.1 dB/year, P =.14). After 18 months and at 95% specificity, the sensitivity of ANSWERS and PoPLR was similar (35%); sANSWERS achieved a sensitivity of 70%. GPA, ANSWERS and PoPLR discriminated treatment groups with similar statistical significance; sANSWERS did not discriminate treatment groups. Although the VF progression-detection method incorporating VF and RNFLT measurements is more sensitive, it does not improve discrimination between treatment arms.
Automatic burst detection for the EEG of the preterm infant.
Jennekens, Ward; Ruijs, Loes S; Lommen, Charlotte M L; Niemarkt, Hendrik J; Pasman, Jaco W; van Kranen-Mastenbroek, Vivianne H J M; Wijn, Pieter F F; van Pul, Carola; Andriessen, Peter
2011-10-01
To aid with prognosis and stratification of clinical treatment for preterm infants, a method for automated detection of bursts, interburst-intervals (IBIs) and continuous patterns in the electroencephalogram (EEG) is developed. Results are evaluated for preterm infants with normal neurological follow-up at 2 years. The detection algorithm (MATLAB®) for burst, IBI and continuous patterns is based on selection by amplitude, time span, number of channels and number of active electrodes. Annotations by two neurophysiologists were used to determine threshold values. The training set consisted of EEG recordings of four preterm infants with postmenstrual age (PMA, gestational age + postnatal age) of 29-34 weeks. Optimal threshold values were based on overall highest sensitivity. For evaluation, both observers verified detections in an independent dataset of four EEG recordings with comparable PMA. Algorithm performance was assessed by calculation of sensitivity and positive predictive value. The algorithm evaluation yielded sensitivity values of 90% ± 6%, 80% ± 9% and 97% ± 5% for burst, IBI and continuous patterns, respectively. Corresponding positive predictive values were 88% ± 8%, 96% ± 3% and 85% ± 15%, respectively. In conclusion, the algorithm showed high sensitivity and positive predictive values for bursts, IBIs and continuous patterns in preterm EEG. Computer-assisted analysis of EEG may allow objective and reproducible analysis for clinical treatment.
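A single-channel sketch of the amplitude-and-duration logic described above. The published algorithm additionally checks the number of channels and active electrodes, and its thresholds were trained on annotated data; the values here are illustrative only:

```python
import numpy as np

def detect_bursts(x, fs, amp_thresh, min_dur):
    """Return (start_s, end_s) pairs where |x| stays above amp_thresh
    for at least min_dur seconds."""
    above = np.append(np.abs(x) > amp_thresh, False)  # sentinel closes a final run
    bursts, start = [], None
    for k, flag in enumerate(above):
        if flag and start is None:
            start = k
        elif not flag and start is not None:
            if (k - start) / fs >= min_dur:
                bursts.append((start / fs, k / fs))
            start = None
    return bursts

fs = 100.0
t = np.arange(0, 5, 1 / fs)
eeg = np.random.default_rng(4).normal(0, 2, t.size)  # low-amplitude background
eeg[100:250] += 60.0                                 # high-amplitude segment, 1.0-2.5 s
bursts = detect_bursts(eeg, fs, amp_thresh=25.0, min_dur=0.5)
```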
Evaluation of Visual Field Progression in Glaucoma: Quasar Regression Program and Event Analysis.
Díaz-Alemán, Valentín T; González-Hernández, Marta; Perera-Sanz, Daniel; Armas-Domínguez, Karintia
2016-01-01
To determine the sensitivity, specificity and agreement between the Quasar program, glaucoma progression analysis (GPA II) event analysis and expert opinion in the detection of glaucomatous progression. The Quasar program is based on linear regression analysis of both mean defect (MD) and pattern standard deviation (PSD). Each series of visual fields was evaluated by three methods: Quasar, GPA II and four experts. The sensitivity, specificity and agreement (kappa) for each method were calculated, using expert opinion as the reference standard. The study included 439 SITA Standard visual fields from 56 eyes of 42 patients, with a mean of 7.8 ± 0.8 visual fields per eye. When suspected cases of progression were considered stable, the sensitivity and specificity of Quasar, GPA II and the experts were 86.6% and 70.7%, 26.6% and 95.1%, and 86.6% and 92.6%, respectively. When suspected cases of progression were considered as progressing, the sensitivity and specificity of Quasar, GPA II and the experts were 79.1% and 81.2%, 45.8% and 90.6%, and 85.4% and 90.6%, respectively. The agreement between Quasar and GPA II when suspected cases were considered stable or progressing was 0.03 and 0.28, respectively. The agreement between Quasar and the experts when suspected cases were considered stable or progressing was 0.472 and 0.507. The agreement between GPA II and the experts when suspected cases were considered stable or progressing was 0.262 and 0.342. The combination of MD and PSD regression analysis in the Quasar program showed better agreement with the experts and higher sensitivity than GPA II.
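The regression idea behind Quasar can be sketched as an ordinary least-squares slope of MD over time. The progression cut-off below is hypothetical, since the abstract does not give Quasar's actual criteria:

```python
import numpy as np

def md_slope(years, md_db):
    """Least-squares rate of mean-defect change, in dB/year."""
    slope, _ = np.polyfit(years, md_db, 1)
    return slope

# Roughly 8 fields per eye, as in the study cohort (values invented).
years = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5])
progressing = np.array([-2.0, -2.6, -3.1, -3.9, -4.4, -5.2, -5.8, -6.3])
stable = np.array([-2.0, -2.1, -1.9, -2.2, -2.0, -2.1, -1.9, -2.0])

# Hypothetical criterion: flag progression when MD worsens faster than -1 dB/year.
flag_prog = md_slope(years, progressing) < -1.0
flag_stable = md_slope(years, stable) < -1.0
```

Quasar combines MD and PSD trends; this sketch shows only the MD half of that idea.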
Sensitivity analysis of a ground-water-flow model
Torak, Lynn J.; ,
1991-01-01
A sensitivity analysis was performed on 18 hydrological factors affecting steady-state groundwater flow in the Upper Floridan aquifer near Albany, southwestern Georgia. Computations were based on a calibrated, two-dimensional, finite-element digital model of the stream-aquifer system and the corresponding data inputs. Flow-system sensitivity was analyzed by computing water-level residuals obtained from simulations involving individual changes to each hydrological factor. Hydrological factors to which computed water levels were most sensitive were those that produced the largest change in the sum-of-squares of residuals for the smallest change in factor value. Plots of the sum-of-squares of residuals against multiplier or additive values that effect change in the hydrological factors are used to evaluate the influence of each factor on the simulated flow system. The shapes of these 'sensitivity curves' indicate the importance of each hydrological factor to the flow system. Because the sensitivity analysis can be performed during the preliminary phase of a water-resource investigation, it can be used to identify the types of hydrological data required to accurately characterize the flow system prior to collecting additional data or making management decisions.
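The "sensitivity curve" construction, sum-of-squares of residuals versus a factor multiplier, can be sketched with a hypothetical one-factor model (not the Upper Floridan aquifer model):

```python
import numpy as np

def sensitivity_curve(simulate, observed, multipliers):
    """Sum of squared water-level residuals as one hydrological factor is scaled;
    a steep curve marks a factor the flow system is sensitive to."""
    return np.array([np.sum((simulate(m) - observed) ** 2) for m in multipliers])

# Hypothetical toy: simulated heads fall as a transmissivity-like factor is scaled up.
base = np.array([100.0, 95.0, 90.0, 85.0])
simulate = lambda m: base / m
observed = simulate(1.0)        # the calibrated model reproduces the observations
mults = np.array([0.5, 0.75, 1.0, 1.25, 1.5])
ssr = sensitivity_curve(simulate, observed, mults)
```

The curve has its minimum at the calibrated value (multiplier 1.0) and rises on either side; how sharply it rises is the measure of influence the abstract describes.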
Noninvasive and cost-effective trapping method for monitoring sensitive mammal populations
Stephanie E. Trapp; Elizabeth A. Flaherty
2017-01-01
Noninvasive sampling methods provide a means to monitor endangered, threatened, or sensitive species or populations while increasing the efficacy of personnel effort and time. We developed a monitoring protocol that utilizes single-capture hair snares and analysis of morphological features of hair for evaluating populations. During 2015, we used the West Virginia...
Iguchi, Hiroyoshi; Wada, Tadashi; Matsushita, Naoki; Oishi, Masahiro; Teranishi, Yuichi; Yamane, Hideo
2014-07-01
The accuracy and sensitivity of fine-needle aspiration cytology (FNAC) in this analysis were not satisfactory, and the false-negative rate seemed to be higher than for parotid tumours. The possibility of low-grade malignancy should be considered in the surgical treatment of accessory parotid gland (APG) tumours, even if the preoperative results of FNAC suggest that the tumour is benign. Little is known about the usefulness of FNAC in the preoperative evaluation of APG tumours, probably due to the paucity of APG tumour cases. We examined the usefulness of FNAC in the detection of malignant APG tumours. We conducted a retrospective analysis of 3 cases from our hospital, along with 18 previously reported Japanese cases. We compared the preoperative FNAC results with postoperative histopathological diagnoses of APG tumours and evaluated the accuracy, sensitivity, specificity and false-negative rates of FNAC in detecting malignant APG tumours. There were four false-negative cases (19.0%), three of mucoepidermoid carcinomas and one of malignant lymphoma. One false-positive result was noted in the case of a myoepithelioma, which was cytologically diagnosed as suspected adenoid cystic carcinoma. The accuracy, sensitivity and specificity of FNAC in detecting malignant tumours were 76.2%, 60.0% and 90.9%, respectively.
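The reported rates are mutually consistent with a 2x2 table reconstructed from the stated 21 cases, 4 false negatives, and 1 false positive. Treating those counts as an assumption, the metrics follow directly:

```python
# 2x2 counts reconstructed from the reported rates (21 tumours, 4 false
# negatives, 1 false positive); illustrative, not the source data.
TP, FN, FP, TN = 6, 4, 1, 10

sensitivity = TP / (TP + FN)                   # proportion of malignancies detected
specificity = TN / (TN + FP)                   # proportion of benign tumours cleared
accuracy = (TP + TN) / (TP + FN + FP + TN)
fn_rate = FN / (TP + FN + FP + TN)             # the abstract's 19.0% is FN / all cases
```

With these counts the formulas reproduce the reported 60.0% sensitivity, 90.9% specificity, 76.2% accuracy, and 19.0% false-negative rate.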
A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application
NASA Astrophysics Data System (ADS)
Razavi, Saman; Gupta, Hoshin V.
2016-01-01
Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS) for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence-level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches is provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
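The bootstrap used to attach confidence levels to sensitivity metrics can be sketched as a percentile bootstrap; the replicate estimates below are synthetic, not STAR-VARS output:

```python
import numpy as np

def bootstrap_ci(metric, samples, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a sensitivity metric,
    in the spirit of the resampling used to qualify VARS factor rankings."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    stats = np.array([metric(samples[rng.integers(0, n, n)])
                      for _ in range(n_boot)])
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

# Hypothetical replicate estimates of one factor's sensitivity index.
rng = np.random.default_rng(1)
estimates = rng.normal(0.30, 0.05, 50)
lo, hi = bootstrap_ci(np.mean, estimates, seed=2)
```

Rankings whose bootstrap intervals overlap heavily should be treated as unresolved rather than as a reliable ordering of factors.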
Passiglia, Francesco; Rizzo, Sergio; Rolfo, Christian; Galvano, Antonio; Bronte, Enrico; Incorvaia, Lorena; Listi, Angela; Barraco, Nadia; Castiglia, Marta; Calo, Valentina; Bazan, Viviana; Russo, Antonio
2018-03-08
Recent studies evaluated the diagnostic accuracy of circulating tumor DNA (ctDNA) in the detection of epidermal growth factor receptor (EGFR) mutations from plasma of NSCLC patients, overall showing a high concordance as compared to standard tissue genotyping. However, it is less clear whether the location of the metastatic site may influence the ability to identify EGFR mutations in plasma. This pooled analysis aims to evaluate the association between the metastatic site location and the sensitivity of ctDNA analysis in detecting EGFR mutations in NSCLC patients. Data from all published studies evaluating the sensitivity of plasma-based EGFR-mutation testing, stratified by metastatic site location (extrathoracic (M1b) vs intrathoracic (M1a)), were collected by searching PubMed, the Cochrane Library, and American Society of Clinical Oncology and World Conference on Lung Cancer meeting proceedings. Pooled odds ratios (ORs) and 95% confidence intervals (95% CIs) were calculated for the sensitivity of ctDNA analysis, according to metastatic site location. A total of ten studies, with 1425 patients, were eligible. Pooled analysis showed that the sensitivity of ctDNA-based EGFR-mutation testing is significantly higher in patients with M1b vs M1a disease (OR: 5.09; 95% CIs: 2.93 - 8.84). A significant association was observed for both EGFR-activating (OR: 4.30, 95% CI: 2.35-7.88) and resistant T790M mutations (OR: 11.89, 95% CI: 1.45-97.22), regardless of the use of digital-PCR (OR: 5.85, 95% CI: 3.56-9.60) or non-digital PCR technologies (OR: 2.96, 95% CI: 2.24-3.91). These data suggest that the location of metastatic sites significantly influences the diagnostic accuracy of ctDNA analysis in detecting EGFR mutations in NSCLC patients. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
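A pooled OR of the kind reported can be computed with the Mantel-Haenszel estimator; the per-study counts below are invented for illustration, not the data of the ten pooled studies:

```python
def mh_pooled_or(tables):
    """Mantel-Haenszel pooled odds ratio over per-study 2x2 tables, each
    given as ((detected, missed) for one group, (detected, missed) for the other)."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# Hypothetical counts: plasma testing detects the mutation more often in
# M1b patients (first row) than in M1a patients (second row) in each study.
studies = [((90, 10), (60, 40)),   # study 1: OR = 6.0
           ((45, 15), (30, 30))]   # study 2: OR = 3.0
pooled = mh_pooled_or(studies)
```

The pooled estimate lands between the per-study ORs, weighted toward the larger study, which is the behaviour the MH weights are designed to give.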
A wideband FMBEM for 2D acoustic design sensitivity analysis based on direct differentiation method
NASA Astrophysics Data System (ADS)
Chen, Leilei; Zheng, Changjun; Chen, Haibo
2013-09-01
This paper presents a wideband fast multipole boundary element method (FMBEM) for two-dimensional acoustic design sensitivity analysis based on the direct differentiation method. The wideband fast multipole method (FMM), formed by combining the original FMM and the diagonal-form FMM, is used to accelerate the matrix-vector products in the boundary element analysis. The Burton-Miller formulation is used to overcome the fictitious frequency problem when using a single Helmholtz boundary integral equation for exterior boundary-value problems. The strongly singular and hypersingular integrals in the sensitivity equations can be evaluated explicitly and directly by using the piecewise constant discretization. The iterative solver GMRES is applied to accelerate the solution of the linear system of equations. A set of optimal parameters for wideband FMBEM design sensitivity analysis is obtained by observing the performance of the wideband FMM algorithm in terms of computing time and memory usage. Numerical examples are presented to demonstrate the efficiency and validity of the proposed algorithm.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often arise in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples suggest the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
JUPITER PROJECT - MERGING INVERSE PROBLEM FORMULATION TECHNOLOGIES
The JUPITER (Joint Universal Parameter IdenTification and Evaluation of Reliability) project seeks to enhance and build on the technology and momentum behind two of the most popular sensitivity analysis, data assessment, calibration, and uncertainty analysis programs used in envi...
Xie, Feng; O'Reilly, Daria; Ferrusi, Ilia L; Blackhouse, Gord; Bowen, James M; Tarride, Jean-Eric; Goeree, Ron
2009-05-01
The aim of this paper is to present an economic evaluation of diagnostic technologies using Helicobacter pylori screening strategies for the prevention of gastric cancer as an illustration. A Markov model was constructed to compare the lifetime cost and effectiveness of 4 potential strategies: no screening, the serology test by enzyme-linked immunosorbent assay (ELISA), the stool antigen test (SAT), and the (13)C-urea breath test (UBT) for the detection of H. pylori among a hypothetical cohort of 10,000 Canadian men aged 35 years. Special parameter consideration included the sensitivity and specificity of each screening strategy, which determined the model structure and treatment regimen. The primary outcome measured was the incremental cost-effectiveness ratio between the screening strategies and the no-screening strategy. Base-case analysis and probabilistic sensitivity analysis were performed using the point estimates of the parameters and Monte Carlo simulations, respectively. Compared with the no-screening strategy in the base-case analysis, the incremental cost-effectiveness ratio was $33,000 per quality-adjusted life-year (QALY) for the ELISA, $29,800 per QALY for the SAT, and $50,400 per QALY for the UBT. The probabilistic sensitivity analysis revealed that the no-screening strategy was more cost effective if the willingness to pay (WTP) was <$20,000 per QALY, while the SAT had the highest probability of being cost effective if the WTP was >$30,000 per QALY. Both the ELISA and the UBT were not cost-effective strategies over a wide range of WTP values. Although the UBT had the highest sensitivity and specificity, either no screening or the SAT could be the most cost-effective strategy depending on the WTP threshold values from an economic perspective. This highlights the importance of economic evaluations of diagnostic technologies.
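The probabilistic sensitivity analysis step can be sketched as a net-monetary-benefit calculation over Monte Carlo draws. The incremental cost and QALY distributions below are hypothetical, not the Markov model's outputs:

```python
import numpy as np

# Hypothetical incremental cost ($) and effect (QALY) draws for one screening
# strategy vs no screening; a real PSA draws these from the Markov model.
rng = np.random.default_rng(0)
n = 20000
d_cost = rng.normal(600.0, 150.0, n)
d_qaly = rng.normal(0.02, 0.008, n)

def prob_cost_effective(wtp):
    """Fraction of simulations with positive incremental net monetary benefit."""
    return np.mean(wtp * d_qaly - d_cost > 0.0)

p_low = prob_cost_effective(10_000)    # strategy rarely worthwhile at a low WTP
p_high = prob_cost_effective(60_000)   # usually worthwhile at a higher WTP
```

Evaluating this probability over a grid of WTP values yields the cost-effectiveness acceptability curve that underlies statements like "SAT had the highest probability of being cost effective if the WTP was >$30,000 per QALY."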
Review-of-systems questionnaire as a predictive tool for psychogenic nonepileptic seizures.
Robles, Liliana; Chiang, Sharon; Haneef, Zulfi
2015-04-01
Patients with refractory epilepsy undergo video-electroencephalography for seizure characterization, among whom approximately 10-30% will be discharged with the diagnosis of psychogenic nonepileptic seizures (PNESs). Clinical PNES predictors have been described but in general are not sensitive or specific. We evaluated whether multiple complaints on a routine review-of-systems (ROS) questionnaire could serve as a sensitive and specific marker of PNESs. We performed a retrospective analysis of a standardized ROS questionnaire completed by patients with definite PNESs and epileptic seizures (ESs) diagnosed in our adult epilepsy monitoring unit. A multivariate analysis of covariance (MANCOVA) was used to determine whether the PNES and ES groups differed with respect to the percentage of complaints on the ROS questionnaire. Tenfold cross-validation was used to evaluate the predictive error of a logistic regression classifier for PNES status based on the percentage of positive complaints on the ROS questionnaire. A total of 44 patients were included for analysis. Patients with PNESs had a significantly higher number of complaints on the ROS questionnaire compared to patients with epilepsy. A threshold of 17% positive complaints achieved a 78% specificity and 85% sensitivity for discriminating between PNESs and ESs. We conclude that the routine ROS questionnaire may be a sensitive and specific predictive tool for discriminating between PNESs and ESs. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Muldoon, Timothy J.; Thekkek, Nadhi; Roblyer, Darren; Maru, Dipen; Harpaz, Noam; Potack, Jonathan; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca
2010-03-01
Early detection of neoplasia in patients with Barrett's esophagus is essential to improve outcomes. The aim of this ex vivo study was to evaluate the ability of high-resolution microendoscopic imaging and quantitative image analysis to identify neoplastic lesions in patients with Barrett's esophagus. Nine patients with pathologically confirmed Barrett's esophagus underwent endoscopic examination with biopsies or endoscopic mucosal resection. Resected fresh tissue was imaged with fiber bundle microendoscopy; images were analyzed by visual interpretation or by quantitative image analysis to predict whether the imaged sites were non-neoplastic or neoplastic. The best-performing pair of quantitative features was chosen based on its ability to correctly classify the data into the two groups. Predictions were compared to the gold standard of histopathology. Subjective analysis of the images by expert clinicians achieved average sensitivity and specificity of 87% and 61%, respectively. The best-performing quantitative classification algorithm relied on two image textural features and achieved a sensitivity and specificity of 87% and 85%, respectively. This ex vivo pilot trial demonstrates that quantitative analysis of images obtained with a simple microendoscope system can distinguish neoplasia in Barrett's esophagus with good sensitivity and specificity when compared to histopathology and to subjective image interpretation.
NASA Astrophysics Data System (ADS)
Paul, M.; Negahban-Azar, M.
2017-12-01
Hydrologic models usually need to be calibrated against observed streamflow at the outlet of a particular drainage area through a careful model calibration. However, a large number of parameters must be fitted in the model because field measurements of them are unavailable. It is therefore difficult to calibrate the model over a large number of potentially uncertain model parameters. This becomes even more challenging if the model covers a large watershed with multiple land uses and varied geophysical characteristics. Sensitivity analysis (SA) can be used as a tool to identify the most sensitive model parameters, which affect calibrated model performance. Many different calibration and uncertainty analysis algorithms can be run with different objective functions. By incorporating sensitive parameters in streamflow simulation, the effect of a suitable algorithm in improving model performance can be demonstrated with Soil and Water Assessment Tool (SWAT) modeling. In this study, SWAT was applied to the San Joaquin Watershed in California, covering 19,704 km2, to calibrate daily streamflow. Recently, severe water stress has been escalating in this watershed due to intensified climate variability, prolonged drought, and groundwater depletion for agricultural irrigation. It is therefore important to perform a proper uncertainty analysis, given the uncertainties inherent in hydrologic modeling, to predict the spatial and temporal variation of the hydrologic processes and to evaluate the impacts of different hydrologic variables. The purpose of this study was to evaluate the sensitivity and uncertainty of the calibrated parameters for predicting streamflow.
To evaluate the sensitivity of the calibrated parameters, three different optimization algorithms (Sequential Uncertainty Fitting, SUFI-2; Generalized Likelihood Uncertainty Estimation, GLUE; and Parameter Solution, ParaSol) were used with four different objective functions (coefficient of determination, r2; Nash-Sutcliffe efficiency, NSE; percent bias, PBIAS; and Kling-Gupta efficiency, KGE). The preliminary results showed that using the SUFI-2 algorithm with the NSE and KGE objective functions significantly improved the calibration (e.g., r2 of 0.52 and NSE of 0.47 for daily streamflow calibration).
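The NSE and KGE objective functions named above have simple closed forms; a sketch with invented daily flows:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the simulation
    is no better than predicting the mean of the observations."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta efficiency, built from the correlation r, the variability
    ratio alpha, and the bias ratio beta (1 is a perfect fit)."""
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0, 9.0])   # observed daily flows (m3/s)
sim = np.array([10.0, 33.0, 50.0, 44.0, 20.0, 11.0])  # a candidate simulation
```

KGE decomposes model error into correlation, variability, and bias terms, which is why a calibration can rank differently under KGE than under NSE.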
Evaluation and Sensitivity Analysis of an Ocean Model Response to Hurricane Ivan (PREPRINT)
2009-05-18
analysis of upper-limb meridional overturning circulation interior ocean pathways in the tropical/subtropical Atlantic. In: Interhemispheric Water...diminishing returns are encountered when either resolution is increased. 1. Introduction: Coupled ocean-atmosphere general circulation models have become...northwest Caribbean Sea and GOM. Evaluation is difficult because ocean general circulation models incorporate a large suite of numerical algorithms
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Ken
2007-01-01
This viewgraph presentation reviews the selection of the optimum Field Programmable Gate Arrays (FPGA) for space missions. Included in this review is a discussion on differentiating amongst various FPGAs, cost analysis of the various options, the investigation of radiation effects, an expansion of the evaluation criteria, and the application of the evaluation criteria to the selection process.
Analysis of beryllium and depleted uranium: An overview of detection methods in aerosols and soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camins, I.; Shinn, J.H.
We conducted a survey of commercially available methods for analysis of beryllium and depleted uranium in aerosols and soils to find a reliable, cost-effective, and sufficiently precise method for researchers involved in environmental testing at the Yuma Proving Ground, Yuma, Arizona. Criteria used for evaluation include cost, method of analysis, specificity, sensitivity, reproducibility, applicability, and commercial availability. We found that atomic absorption spectrometry with graphite furnace meets these criteria for testing samples for beryllium. We found that this method can also be used to test samples for depleted uranium. However, atomic absorption with graphite furnace is not as sensitive a measurement method for depleted uranium as it is for beryllium, so we recommend that quality control of depleted uranium analysis be maintained by testing 10 of every 1000 samples by neutron activation analysis. We also evaluated 45 companies and institutions that provide analyses of beryllium and depleted uranium. 5 refs., 1 tab.
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
SENSITIVITY ANALYSIS OF THE USEPA WINS PM 2.5 SEPARATOR
Factors affecting the performance of the US EPA WINS PM2.5 separator have been systematically evaluated. In conjunction with the separator's laboratory calibrated penetration curve, analysis of the governing equation that describes conventional impactor performance was used to ...
Validity and consistency assessment of accident analysis methods in the petroleum industry.
Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza
2017-11-17
Accident analysis is the main aspect of accident investigation. It involves connecting different causes in a procedural way. It is therefore important to use valid and reliable methods to investigate the different causal factors of accidents, especially the noteworthy ones. This study aimed to assess the accuracy (sensitivity index [SI]) and consistency of the six accident analysis methods most commonly used in the petroleum industry. To evaluate the methods, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, who had been trained in a workshop on accident analysis methods. The accuracy and consistency of the methods were then evaluated. The systematic cause analysis technique and bowtie methods achieved the greatest SI scores for personal and process safety accidents, respectively. The best average consistency results for a single method (based on 10 independent assessors) were in the region of 70%. This study confirmed that applying methods with pre-defined causes and a logic tree can enhance the sensitivity and consistency of accident analysis.
Health economic assessment: a methodological primer.
Simoens, Steven
2009-12-01
This review article aims to provide an introduction to the methodology of health economic assessment of a health technology. Attention is paid to defining the fundamental concepts and terms that are relevant to health economic assessments. The article describes the methodology underlying a cost study (identification, measurement and valuation of resource use, calculation of costs), an economic evaluation (type of economic evaluation, the cost-effectiveness plane, trial- and model-based economic evaluation, discounting, sensitivity analysis, incremental analysis), and a budget impact analysis. Key references are provided for those readers who wish a more advanced understanding of health economic assessments.
Rainfall-induced fecal indicator organisms transport from manured fields: Model sensitivity analysis
Microbial quality of surface waters attracts attention due to food- and waterborne disease outbreaks. Fecal indicator organisms (FIOs) are commonly used for the microbial pollution level evaluation. Models predicting the fate and transport of FIOs are required to design and evalu...
Sensitivity analysis of static resistance of slender beam under bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valeš, Jan
2016-06-08
The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the program ANSYS. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections were, together with the geometrical characteristics of the cross section and the material characteristics of the steel, considered as random quantities. The Latin Hypercube Sampling method was applied to perform the statistical and sensitivity analyses of the resistance.
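Latin Hypercube Sampling of the kind used above is straightforward to sketch in generic form. The following is an illustrative implementation only, not the authors' ANSYS workflow; the function name `latin_hypercube` and the 100-sample, 3-variable setup are hypothetical.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng):
    """Stratified sampling: one point per equal-probability stratum in each dimension."""
    # Draw one uniform point inside each of n_samples strata of (0, 1)
    u = rng.random((n_samples, n_dims))
    samples = (np.arange(n_samples)[:, None] + u) / n_samples
    # Independently permute the strata in each dimension
    for d in range(n_dims):
        samples[:, d] = samples[rng.permutation(n_samples), d]
    return samples  # map through inverse CDFs of the random quantities as needed

rng = np.random.default_rng(42)
pts = latin_hypercube(100, 3, rng)

# Every dimension contains exactly one point per stratum [k/100, (k+1)/100)
for d in range(3):
    assert sorted(np.floor(pts[:, d] * 100).astype(int)) == list(range(100))
```

Compared with plain Monte Carlo, this guarantees that each input's range is covered evenly even at small sample sizes, which is why the method suits expensive nonlinear finite element runs.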
Wang, Ming; Long, Qi
2016-09-01
Prediction models for disease risk and prognosis play an important role in biomedical research, and evaluating their predictive accuracy in the presence of censored data is of substantial interest. The standard concordance (c) statistic has been extended to provide a summary measure of predictive accuracy for survival models. Motivated by a prostate cancer study, we address several issues associated with evaluating survival prediction models based on c-statistic with a focus on estimators using the technique of inverse probability of censoring weighting (IPCW). Compared to the existing work, we provide complete results on the asymptotic properties of the IPCW estimators under the assumption of coarsening at random (CAR), and propose a sensitivity analysis under the mechanism of noncoarsening at random (NCAR). In addition, we extend the IPCW approach as well as the sensitivity analysis to high-dimensional settings. The predictive accuracy of prediction models for cancer recurrence after prostatectomy is assessed by applying the proposed approaches. We find that the estimated predictive accuracy for the models in consideration is sensitive to NCAR assumption, and thus identify the best predictive model. Finally, we further evaluate the performance of the proposed methods in both settings of low-dimensional and high-dimensional data under CAR and NCAR through simulations. © 2016, The International Biometric Society.
Stochastic sensitivity measure for mistuned high-performance turbines
NASA Technical Reports Server (NTRS)
Murthy, Durbha V.; Pierre, Christophe
1992-01-01
A stochastic measure of sensitivity is developed in order to predict the effects of small random blade mistuning on the dynamic aeroelastic response of turbomachinery blade assemblies. This sensitivity measure is based solely on the nominal system design (i.e., on tuned system information), which makes it extremely easy and inexpensive to calculate. The measure has the potential to become a valuable design tool that will enable designers to evaluate mistuning effects at a preliminary design stage and thus assess the need for a full mistuned rotor analysis. The predictive capability of the sensitivity measure is illustrated by examining the effects of mistuning on the aeroelastic modes of the first stage of the oxidizer turbopump in the Space Shuttle Main Engine. Results from a full analysis of mistuned systems confirm that the simple stochastic sensitivity measure consistently predicts the drastic changes due to mistuning and the localization of aeroelastic vibration to a few blades.
First status report on regional ground-water flow modeling for the Paradox Basin, Utah
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, R.W.
1984-05-01
Regional ground-water flow within the principal hydrogeologic units of the Paradox Basin is evaluated by developing a conceptual model of the flow regime in the shallow aquifers and the deep-basin brine aquifers and testing these models using a three-dimensional, finite-difference flow code. Semiquantitative sensitivity analysis (a limited parametric study) is conducted to define the system response to changes in hydrologic properties or boundary conditions. A direct method for sensitivity analysis using an adjoint form of the flow equation is applied to the conceptualized flow regime in the Leadville limestone aquifer. All steps leading to the final results and conclusions are incorporated in this report. The available data utilized in this study are summarized. The specific conceptual models, defining the areal and vertical averaging of lithologic units, aquifer properties, fluid properties, and hydrologic boundary conditions, are described in detail. Two models were evaluated in this study: a regional model encompassing the hydrogeologic units above and below the Paradox Formation/Hermosa Group and a refined scale model which incorporated only the post-Paradox strata. The results are delineated by the simulated potentiometric surfaces and tables summarizing areal and vertical boundary fluxes, Darcy velocities at specific points, and ground-water travel paths. Results from the adjoint sensitivity analysis include importance functions and sensitivity coefficients, using heads or the average Darcy velocities to represent system response. The reported work is the first stage of an ongoing evaluation of the Gibson Dome area within the Paradox Basin as a potential repository for high-level radioactive wastes.
Estrada, José M; Kraakman, N J R Bart; Lebrero, Raquel; Muñoz, Raúl
2012-01-01
The sensitivity of the economics of the five most commonly applied odour abatement technologies (biofiltration, biotrickling filtration, activated carbon adsorption, chemical scrubbing and a hybrid technology consisting of a biotrickling filter coupled with carbon adsorption) towards design parameters and commodity prices was evaluated. In addition, the influence of geographical location on the Net Present Value calculated over a 20-year lifespan (NPV20) of each technology, and its robustness towards typical process fluctuations and operational upsets, were also assessed. This comparative analysis showed that biological techniques present lower operating costs (up to 6 times lower) and lower sensitivity than their physical/chemical counterparts, with the packing material being the key parameter affecting their operating costs (40-50% of the total operating costs). The use of recycled or partially treated water (e.g. secondary effluent in wastewater treatment plants) offers an opportunity to significantly reduce costs in biological techniques. Physical/chemical technologies present a high sensitivity towards H2S concentration, which is an important drawback given the fluctuating nature of malodorous emissions. The geographical analysis evidenced high NPV20 variations around the world for all the technologies evaluated, but despite the differences in wage and price levels, biofiltration and biotrickling filtration are always the most cost-efficient alternatives (NPV20). When robustness is as relevant as overall cost (NPV20) in an economic evaluation, the hybrid technology moves up alongside biotrickling filtration as one of the preferred technologies. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Smathers, J. B.; Kuykendall, W. E., Jr.; Wright, R. E., Jr.; Marshall, J. R.
1973-01-01
Radioisotope measurement techniques and neutron activation analysis are evaluated for use in identifying and locating contamination sources in space environment simulation chambers. The alpha range method allows the determination of total contaminant concentration in vapor state and condensate state. A Cf-252 neutron activation analysis system for detecting oils and greases tagged with stable elements is described. While neutron activation analysis of tagged contaminants offers specificity, an on-site system is extremely costly to implement and provides only marginal detection sensitivity under even the most favorable conditions.
Source apportionment and sensitivity analysis: two methodologies with two different purposes
NASA Astrophysics Data System (ADS)
Clappier, Alain; Belis, Claudio A.; Pernigotti, Denise; Thunis, Philippe
2017-11-01
This work reviews the existing methodologies for source apportionment and sensitivity analysis to identify key differences and stress their implicit limitations. The emphasis is laid on the differences between source impacts (sensitivity analysis) and contributions (source apportionment) obtained by using four different methodologies: brute-force top-down, brute-force bottom-up, tagged species and the decoupled direct method (DDM). A simple theoretical example is used to compare these approaches, highlighting differences and potential implications for policy. When the relationships between concentrations and emissions are linear, impacts and contributions are equivalent concepts. In this case, source apportionment and sensitivity analysis may be used interchangeably for both air quality planning purposes and quantifying source contributions. However, this study demonstrates that when the relationship between emissions and concentrations is nonlinear, sensitivity approaches are not suitable to retrieve source contributions and source apportionment methods are not appropriate to evaluate the impact of abatement strategies. A quantification of the potential nonlinearities should therefore be the first step prior to source apportionment or planning applications, to prevent any limitations in their use. When nonlinearity is mild, these limitations may, however, be acceptable in the context of the other uncertainties inherent to complex models. Moreover, when using sensitivity analysis for planning, it is important to note that, under nonlinear circumstances, the calculated impacts will only provide information for the exact conditions (e.g. emission reduction share) that are simulated.
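The linear/nonlinear distinction above can be verified with a toy model. The two concentration functions and their coefficients below are invented for illustration; `impact` implements a brute-force 100% emission-removal impact, not any of the cited modelling systems.

```python
def impact(C, e1, e2, source):
    """Brute-force top-down impact: concentration change when one source is removed."""
    if source == 1:
        return C(e1, e2) - C(0.0, e2)
    return C(e1, e2) - C(e1, 0.0)

linear = lambda e1, e2: 2.0 * e1 + 3.0 * e2
nonlinear = lambda e1, e2: 2.0 * e1 + 3.0 * e2 + 0.5 * e1 * e2

e1, e2 = 4.0, 2.0

# Linear: impacts sum to the total, so they double as contributions
total_lin = impact(linear, e1, e2, 1) + impact(linear, e1, e2, 2)
print(total_lin, linear(e1, e2))       # 14.0 14.0

# Nonlinear: the interaction term is counted in both impacts,
# so summing them no longer reproduces the total
total_nl = impact(nonlinear, e1, e2, 1) + impact(nonlinear, e1, e2, 2)
print(total_nl, nonlinear(e1, e2))     # 22.0 18.0
```

In the linear case the two impacts can be read directly as contributions; in the nonlinear case they over-count the interaction, which is exactly why sensitivity results should not be used for apportionment there.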
Arkusz, Joanna; Stępnik, Maciej; Sobala, Wojciech; Dastych, Jarosław
2010-11-10
The aim of this study was to find differentially regulated genes in THP-1 monocytic cells exposed to sensitizers and nonsensitizers and to investigate if such genes could be reliable markers for an in vitro predictive method for the identification of skin sensitizing chemicals. Changes in expression of 35 genes in the THP-1 cell line following treatment with chemicals of different sensitizing potential (from nonsensitizers to extreme sensitizers) were assessed using real-time PCR. Verification of 13 candidate genes by testing a large number of chemicals (an additional 22 sensitizers and 8 nonsensitizers) revealed that prediction of contact sensitization potential was possible based on evaluation of changes in three genes: IL8, HMOX1 and PAIMP1. In total, changes in expression of these genes allowed correct detection of sensitization potential of 21 out of 27 (78%) test sensitizers. The gene expression levels inside potency groups varied and did not allow estimation of sensitization potency of test chemicals. Results of this study indicate that evaluation of changes in expression of proposed biomarkers in THP-1 cells could be a valuable model for preliminary screening of chemicals to discriminate an appreciable majority of sensitizers from nonsensitizers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Instrument performance of a radon measuring system with the alpha-track detection technique.
Tokonami, S; Zhuo, W; Ryuo, H; Yonehara, H; Yamada, Y; Shimo, M
2003-01-01
An instrument performance test has been carried out for a radon measuring system made in Hungary. The system measures radon using the alpha-track detection technique. It consists of three parts: the passive detector, the etching unit and the evaluation unit. A CR-39 detector is used as the radiation detector. Alpha-track reading and data analysis are carried out after chemical etching. The following subjects were examined in the present study: (1) radon sensitivity, (2) performance of etching and evaluation processes and (3) thoron sensitivity. The radon sensitivity of 6.9 x 10^{-4} mm^{-2} (Bq m^{-3} d)^{-1} was acceptable for practical application. The thoron sensitivity was estimated to be as low as 3.3 x 10^{-5} mm^{-2} (Bq m^{-3} d)^{-1} from the experimental study.
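For context, a track-etch detector converts net track density into a time-averaged radon concentration by dividing by sensitivity times exposure time. The sketch below uses the radon sensitivity reported above; the track density and the 90-day exposure are hypothetical.

```python
def radon_concentration(track_density_mm2, sensitivity, exposure_days):
    """Time-averaged radon concentration (Bq m^-3) from net alpha-track density.

    sensitivity is in mm^-2 per (Bq m^-3 d), as reported for the CR-39 system.
    """
    return track_density_mm2 / (sensitivity * exposure_days)

# Hypothetical 90-day exposure producing 0.1 net tracks/mm^2
print(round(radon_concentration(0.1, 6.9e-4, 90), 1))  # 1.6
```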
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-09-02
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were more sensitive to TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poor. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds.
Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
Wahlström, Helene; Comin, Arianna; Isaksson, Mats; Deplazes, Peter
2016-01-01
A semi-automated magnetic capture probe-based DNA extraction and real-time PCR method (MC-PCR), allowing for a more efficient large-scale surveillance of Echinococcus multilocularis occurrence, has been developed. The test sensitivity has previously been evaluated using the sedimentation and counting technique (SCT) as a gold standard. However, as the sensitivity of the SCT is not 1, the test characteristics of the MC-PCR were also evaluated using latent class analysis, a methodology that does not require a gold standard. Test results (MC-PCR and SCT) from a previous evaluation of the MC-PCR, using 177 foxes shot in the spring (n=108) and autumn (n=69) of 2012 in high prevalence areas in Switzerland, were used. Latent class analysis was used to estimate the test characteristics of the MC-PCR. Although it was not the primary aim of this study, estimates of the test characteristics of the SCT were also obtained. This study showed that the sensitivity of the MC-PCR was 0.88 [95% posterior credible interval (PCI) 0.80-0.93], which was not significantly different from that of the SCT, 0.83 (95% PCI 0.76-0.88), which is currently considered the gold standard. The specificity of both tests was high: 0.98 (95% PCI 0.94-0.99) for the MC-PCR and 0.99 (95% PCI 0.99-1) for the SCT. In a previous study, using fox scats from a low prevalence area, the specificity of the MC-PCR was higher, 0.999 (95% PCI 0.997-1). One reason for the lower estimate of the specificity in this study could be that the MC-PCR detects DNA from infected but non-infectious rodents eaten by foxes. When using MC-PCR in low prevalence areas or areas free from the parasite, a positive result in the MC-PCR should be regarded as a true positive. The sensitivity of the MC-PCR (0.88) was comparable to the sensitivity of the SCT (0.83).
NASA Astrophysics Data System (ADS)
Demaria, Eleonora M.; Nijssen, Bart; Wagener, Thorsten
2007-06-01
Current land surface models use increasingly complex descriptions of the processes that they represent. Increase in complexity is accompanied by an increase in the number of model parameters, many of which cannot be measured directly at large spatial scales. A Monte Carlo framework was used to evaluate the sensitivity and identifiability of ten parameters controlling surface and subsurface runoff generation in the Variable Infiltration Capacity model (VIC). Using the Monte Carlo Analysis Toolbox (MCAT), parameter sensitivities were studied for four U.S. watersheds along a hydroclimatic gradient, based on a 20-year data set developed for the Model Parameter Estimation Experiment (MOPEX). Results showed that simulated streamflows are sensitive to three parameters when evaluated with different objective functions. Sensitivity of the infiltration parameter (b) and the drainage parameter (exp) were strongly related to the hydroclimatic gradient. The placement of vegetation roots played an important role in the sensitivity of model simulations to the thickness of the second soil layer (thick2). Overparameterization was found in the base flow formulation indicating that a simplified version could be implemented. Parameter sensitivity was more strongly dictated by climatic gradients than by changes in soil properties. Results showed how a complex model can be reduced to a more parsimonious form, leading to a more identifiable model with an increased chance of successful regionalization to ungauged basins. Although parameter sensitivities are strictly valid for VIC, this model is representative of a wider class of macroscale hydrological models. Consequently, the results and methodology will have applicability to other hydrological models.
Lee, Ho-Won; Muniyappa, Ranganath; Yan, Xu; Yue, Lilly Q.; Linden, Ellen H.; Chen, Hui; Hansen, Barbara C.
2011-01-01
The euglycemic glucose clamp is the reference method for assessing insulin sensitivity in humans and animals. However, clamps are ill-suited for large studies because of extensive requirements for cost, time, labor, and technical expertise. Simple surrogate indexes of insulin sensitivity/resistance including quantitative insulin-sensitivity check index (QUICKI) and homeostasis model assessment (HOMA) have been developed and validated in humans. However, validation studies of QUICKI and HOMA in both rats and mice suggest that differences in metabolic physiology between rodents and humans limit their value in rodents. Rhesus monkeys are a species more similar to humans than rodents. Therefore, in the present study, we evaluated data from 199 glucose clamp studies obtained from a large cohort of 86 monkeys with a broad range of insulin sensitivity. Data were used to evaluate simple surrogate indexes of insulin sensitivity/resistance (QUICKI, HOMA, Log HOMA, 1/HOMA, and 1/Fasting insulin) with respect to linear regression, predictive accuracy using a calibration model, and diagnostic performance using receiver operating characteristic. Most surrogates had modest linear correlations with SIClamp (r ≈ 0.4–0.64) with comparable correlation coefficients. Predictive accuracy determined by calibration model analysis demonstrated better predictive accuracy of QUICKI than HOMA and Log HOMA. Receiver operating characteristic analysis showed equivalent sensitivity and specificity of most surrogate indexes to detect insulin resistance. Thus, unlike in rodents but similar to humans, surrogate indexes of insulin sensitivity/resistance including QUICKI and log HOMA may be reasonable to use in large studies of rhesus monkeys where it may be impractical to conduct glucose clamp studies. PMID:21209021
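The surrogate indexes evaluated above have simple closed forms, given here in the common mg/dL and µU/mL units. This is a generic sketch rather than the study's calibration code, and the example fasting values are hypothetical.

```python
import math

def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """Homeostasis model assessment of insulin resistance (mg/dL form)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def quicki(glucose_mg_dl, insulin_uU_ml):
    """Quantitative insulin-sensitivity check index."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Hypothetical fasting values: glucose 90 mg/dL, insulin 10 uU/mL
print(round(homa_ir(90, 10), 2))  # 2.22
print(round(quicki(90, 10), 3))   # 0.338
```

Log HOMA and 1/HOMA, also compared in the study, are direct transforms of `homa_ir`, which is why their linear correlations with the clamp differ while their rank-based diagnostic performance is similar.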
Forreryd, Andy; Johansson, Henrik; Albrekt, Ann-Sofie; Lindstedt, Malin
2014-05-16
Allergic contact dermatitis (ACD) develops upon exposure to certain chemical compounds termed skin sensitizers. To reduce the occurrence of skin sensitizers, chemicals are regularly screened for their capacity to induce sensitization. The recently developed Genomic Allergen Rapid Detection (GARD) assay is an in vitro alternative to animal testing for identification of skin sensitizers, classifying chemicals by evaluating transcriptional levels of a genomic biomarker signature. During assay development and biomarker identification, genome-wide expression analysis was applied using microarrays covering approximately 30,000 transcripts. However, the microarray platform suffers from drawbacks in terms of low sample throughput, high cost per sample and time-consuming protocols, and is a limiting factor for adoption of GARD as a routine assay for screening of potential sensitizers. With the purpose of simplifying assay procedures, improving technical parameters and increasing sample throughput, we assessed the performance of three high-throughput gene expression platforms--nCounter®, BioMark HD™ and OpenArray®--and correlated their performance metrics against our previously generated microarray data. We measured the levels of 30 transcripts from the GARD biomarker signature across 48 samples. Detection sensitivity, reproducibility, correlations and overall structure of gene expression measurements were compared across platforms. Gene expression data from all of the evaluated platforms could be used to classify most of the sensitizers from non-sensitizers in the GARD assay. Results also showed high data quality and acceptable reproducibility for all platforms but only medium to poor correlations of expression measurements across platforms. In addition, the evaluated platforms were superior to the microarray platform in terms of cost efficiency, simplicity of protocols and sample throughput.
We evaluated the performance of three non-array-based platforms using a limited set of transcripts from the GARD biomarker signature. We demonstrated that it was possible to achieve acceptable discriminatory power, in terms of separation between sensitizers and non-sensitizers in the GARD assay, while reducing assay costs, simplifying assay procedures and increasing sample throughput by using an alternative platform, providing a first step towards the goal of preparing GARD for formal validation and adoption of the assay for industrial screening of potential sensitizers.
Wegelin, Olivier; Bartels, Diny W M; Tromp, Ellen; Kuypers, Karel C; van Melick, Harm H E
2015-10-01
To evaluate the effects of cystoscopy on urine cytology and additional cytokeratin-20 (CK-20) staining in patients presenting with gross hematuria. For 83 patients presenting with gross hematuria, spontaneous and instrumented paired urine samples were analyzed. Three patients were excluded. Spontaneous samples were collected within 1 hour before cystoscopy, and the instrumented samples were tapped through the cystoscope. Subsequently, patients underwent cystoscopic evaluation and imaging of the urinary tract. If tumor suspicious lesions were found on cystoscopy or imaging, subjects underwent transurethral resection or ureterorenoscopy. Two blinded uropathological reviewers (DB, KK) evaluated 160 urine samples. Reference standards were results of cystoscopy, imaging, or histopathology. Thirty-seven patients (46.3%) underwent transurethral resection or ureterorenoscopy procedures. In 30 patients (37.5%) tumor presence was confirmed by histopathology. The specificity of urine analysis was significantly higher for spontaneous samples than instrumented samples for both cytology alone (94% vs 72%, P = .01) and for cytology combined with CK-20 analysis (98% vs 84%, P = .02). The difference in sensitivity between spontaneous and instrumented samples was not significant for both cytology alone (40% vs 53%) and combined with CK-20 analysis (67% vs 67%). The addition of CK-20 analysis to cytology significantly increases test sensitivity in spontaneous urine cytology (67% vs 40%, P = .03). Instrumentation significantly decreases specificity of urine cytology. This may lead to unnecessary diagnostic procedures. Additional CK-20 staining in spontaneous urine cytology significantly increases sensitivity but did not improve the already high specificity. We suggest performing urine cytology and CK-20 analysis on spontaneously voided urine. Copyright © 2015 Elsevier Inc. All rights reserved.
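The sensitivities and specificities compared above are ordinary 2x2 diagnostic quantities. A minimal sketch follows; the counts are hypothetical, chosen only to reproduce rates of the same order as those reported.

```python
def sensitivity(tp, fn):
    """True-positive rate: fraction of diseased subjects the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: fraction of disease-free subjects the test clears."""
    return tn / (tn + fp)

# Hypothetical counts consistent with ~67% sensitivity and ~98% specificity
print(round(sensitivity(20, 10), 2))  # 0.67
print(round(specificity(49, 1), 2))   # 0.98
```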
Initial evaluation of rectal bleeding in young persons: a cost-effectiveness analysis.
Lewis, James D; Brown, Alphonso; Localio, A Russell; Schwartz, J Sanford
2002-01-15
Evaluation of rectal bleeding in young patients is a frequent diagnostic challenge. To determine the relative cost-effectiveness of alternative diagnostic strategies for young patients with rectal bleeding. Cost-effectiveness analysis using a Markov model. Probability estimates were based on published medical literature. Cost estimates were based on Medicare reimbursement rates and published medical literature. Persons 25 to 45 years of age with otherwise asymptomatic rectal bleeding. The patient's lifetime. Modified societal perspective. Diagnostic strategies included no evaluation, colonoscopy, flexible sigmoidoscopy, barium enema, anoscopy, or any feasible combination of these procedures. Life expectancy and costs. For 35-year-old patients, the no-evaluation strategy yielded the least life expectancy. The incremental cost-effectiveness of flexible sigmoidoscopy compared with no evaluation or with any strategy incorporating anoscopy (followed by further evaluation if no anal disease was found on anoscopy) was less than $5300 per year of life gained. A strategy of flexible sigmoidoscopy plus barium enema yielded the greatest life expectancy, with an incremental cost of $23 918 per additional life-year gained compared with flexible sigmoidoscopy alone. As patient age at presentation of rectal bleeding increased, evaluation of the entire colon became more cost-effective. The incremental cost-effectiveness of flexible sigmoidoscopy plus barium enema compared with colonoscopy was sensitive to estimates of the sensitivity of the tests. In a probabilistic sensitivity analysis comparing flexible sigmoidoscopy with anoscopy followed by flexible sigmoidoscopy if needed, the middle 95% of the distribution of the incremental cost-effectiveness ratios ranged from flexible sigmoidoscopy yielding increased life expectancy at reduced cost to $52 158 per year of life gained (mean, $11 461 per year of life saved).
Evaluation of the colon of persons 25 to 45 years of age with otherwise asymptomatic rectal bleeding increases the life expectancy at a cost comparable to that of colon cancer screening.
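The incremental cost-effectiveness ratios quoted above all follow from a single formula. A minimal sketch with invented costs and life expectancies (not the study's estimates):

```python
def icer(cost_a, effect_a, cost_b, effect_b):
    """Incremental cost-effectiveness ratio of strategy B relative to strategy A."""
    return (cost_b - cost_a) / (effect_b - effect_a)

# Hypothetical: B costs $1200 more than A and adds 0.5 life-years
print(icer(500.0, 20.0, 1700.0, 20.5))  # 2400.0 dollars per life-year gained
```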
NASA Astrophysics Data System (ADS)
Graevskaya, E. E.; Antal, T. K.; Matorin, D. N.; Voronova, E. N.; Pogosyan, S. I.; Rubin, A. B.
2003-05-01
Measurement of chlorophyll fluorescence has been shown to be a rapid, non-invasive, and reliable method to assess photosynthetic performance in a changing environment. In our study, the pulse-amplitude-modulation (PAM) fluorometric method was used to evaluate the sensitivity of the diatom microalga Thalassiosira weissflogii to mercury chloride and methylmercury chloride. We found that 10^{-6} and 10^{-7} M MeHg led to a slow decrease in PS II activity following a prolonged lag phase, whereas the algae were not sensitive to the same concentrations of HgCl2. However, the observed PS II inactivation by methylmercury was not complete, and about 10 percent of the cells kept a high level of PS II activity, as shown by microfluorometric analysis. These cells could underlie the adaptation of the algae to the methylmercury effect. Both toxicants decreased the rate of PS II repair and increased the heat pathway of excitation dissipation in the PS II antenna complex.
An evaluation of computer-aided disproportionality analysis for post-marketing signal detection.
Lehman, H P; Chen, J; Gould, A L; Kassekert, R; Beninger, P R; Carney, R; Goldberg, M; Goss, M A; Kidos, K; Sharrar, R G; Shields, K; Sweet, A; Wiholm, B E; Honig, P K
2007-08-01
To understand the value of computer-aided disproportionality analysis (DA) in relation to current pharmacovigilance signal detection methods, four products were retrospectively evaluated by applying an empirical Bayes method to Merck's post-marketing safety database. Findings were compared with the prior detection of labeled post-marketing adverse events. Disproportionality ratios, the empirical Bayes geometric mean lower 95% bounds of the posterior distribution (EBGM05), were generated for product-event pairs. Overall (1993-2004 data, EBGM05 ≥ 2, individual terms) results of signal detection using DA compared to standard methods were sensitivity, 31.1%; specificity, 95.3%; and positive predictive value, 19.9%. Using groupings of synonymous labeled terms, sensitivity improved (40.9%). More of the adverse events detected by both methods were detected earlier using DA and grouped (versus individual) terms. With 1939-2004 data, diagnostic properties were similar to those from 1993 to 2004. DA methods using Merck's safety database demonstrate sufficient sensitivity and specificity to be considered for use as an adjunct to conventional signal detection methods.
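The sensitivity, specificity, and positive predictive value quoted above are simple functions of a 2×2 confusion matrix of signal-detection outcomes. A sketch, with hypothetical counts chosen only so the rates land near the reported figures (the actual contingency counts are not given in the abstract):

```python
# Diagnostic metrics from a 2x2 confusion matrix. Counts are invented.

def diagnostics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # fraction of true signals the method flags
    specificity = tn / (tn + fp)   # fraction of non-signals it leaves alone
    ppv = tp / (tp + fp)           # fraction of flagged pairs that are real
    return sensitivity, specificity, ppv

sens, spec, ppv = diagnostics(tp=28, fp=113, fn=62, tn=2290)
print(round(sens, 3), round(spec, 3), round(ppv, 3))
```

The low PPV relative to specificity illustrates why DA is positioned as an adjunct: with rare true signals, even a specific screen flags many false positives.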
Coherence Motion Perception in Developmental Dyslexia: A Meta-Analysis of Behavioral Studies
ERIC Educational Resources Information Center
Benassi, Mariagrazia; Simonelli, Letizia; Giovagnoli, Sara; Bolzani, Roberto
2010-01-01
The magnitude of the association between developmental dyslexia (DD) and motion sensitivity is evaluated in 35 studies, which investigated coherence motion perception in DD. A first analysis is conducted on the differences between DD groups and age-matched control (C) groups. In a second analysis, the relationship between motion coherence…
Erickson, Dana; Singh, Ravinder J; Sathananthan, Airani; Vella, Adrian; Bryant, Sandra C
2012-04-01
Late-night salivary cortisol (LNSC) measurement is increasingly used by physicians as an initial diagnostic test for evaluation of patients with clinical suspicion of Cushing's syndrome (CS). Published studies include varying numbers of cases and controls and, importantly, a variety of assay methods (the vast majority immunoassays) and of methods used to generate cut-points. This retrospective study evaluated the diagnostic utility of LNSC measured by a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method in 249 patients evaluated for possible CS because of various clinical conditions. At the time of analysis, CS was confirmed in 47 patients (18.9%) and excluded in 202 (81.1%). LNSC was abnormal (>2.8 nmol/l) in 35 of 47 patients with CS (sensitivity, 74.5%) and elevated in 20 of 202 patients who were found not to have CS (specificity, 90.1%). Using receiver operating characteristic statistics to calculate the optimal trade-off between sensitivity and specificity, the cut-off based on these data was LNSC > 2.1 nmol/l, with a sensitivity of 83.0% and a specificity of 84.2%. Analysis of data at one referral institution showed somewhat limited sensitivity of LNSC for diagnosis of CS using current reference ranges. © 2012 Blackwell Publishing Ltd.
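A common way to derive such an "optimal" cut-off from ROC statistics is to maximise Youden's J = sensitivity + specificity - 1 over candidate cut-points. A sketch with invented cortisol values, chosen so that a cut-off near the reported 2.1 nmol/l happens to emerge:

```python
# Scan candidate cut-points and keep the one maximising Youden's J.
# Values are hypothetical, not the study's measurements.

def youden_cutoff(cases, controls):
    best = None
    for c in sorted(set(cases) | set(controls)):
        sens = sum(v > c for v in cases) / len(cases)      # cases above cut-off
        spec = sum(v <= c for v in controls) / len(controls)  # controls at/below
        j = sens + spec - 1
        if best is None or j > best[1]:
            best = (c, j)
    return best

cases = [2.2, 3.0, 3.5, 4.1, 1.8, 5.0]      # nmol/l, hypothetical CS patients
controls = [0.8, 1.2, 2.0, 1.5, 2.1, 0.9]   # hypothetical non-CS patients
cutoff, j = youden_cutoff(cases, controls)
print(cutoff, j)
```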
Schuff, M M; Gore, J P; Nauman, E A
2013-12-01
The treatment of cancerous tumors is dependent upon the delivery of therapeutics through the blood by means of the microcirculation. Differences in the vasculature of normal and malignant tissues have been recognized, but it is not fully understood how these differences affect transport, and the applicability of existing mathematical models has been questioned at the microscale due to the complex rheology of blood and fluid exchange with the tissue. In addition to determining an appropriate set of governing equations, it is necessary to specify appropriate model parameters based on physiological data. To this end, a two-stage sensitivity analysis is described which makes it possible to determine the set of parameters most important to the model's calibration. In the first stage, the fluid flow equations are examined and a sensitivity analysis is used to evaluate the importance of 11 different model parameters. Of these, only four substantially influence the intravascular axial flow, providing a tractable set that could be calibrated using red blood cell velocity data from the literature. The second stage also utilizes a sensitivity analysis to evaluate the importance of 14 model parameters on extravascular flux. Of these, six exhibit high sensitivity and are integrated into the model calibration using a response surface methodology and experimental intra- and extravascular accumulation data from the literature (Dreher et al. in J Natl Cancer Inst 98(5):335-344, 2006). The model exhibits good agreement with the experimental results for both the mean extravascular concentration and the penetration depth as a function of time for inert dextran over a wide range of molecular weights.
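A generic one-at-a-time screening pass of the kind used in each stage can be sketched as follows: perturb each parameter by a fixed fraction and rank parameters by the induced change in the output. The toy flow model and parameter values below are stand-ins, not the paper's governing equations:

```python
# One-at-a-time parameter screening: bump each parameter by `eps` and rank
# by the absolute change in the model output. Model and values are invented.

def screen(model, params, eps=0.10):
    base = model(params)
    ranking = {}
    for name, value in params.items():
        bumped = dict(params, **{name: value * (1 + eps)})
        ranking[name] = abs(model(bumped) - base)
    return sorted(ranking, key=ranking.get, reverse=True)

# toy output: axial flow ~ pressure_drop * radius**4 / (viscosity * length)
toy = lambda p: p["dp"] * p["r"] ** 4 / (p["mu"] * p["L"])
order = screen(toy, {"dp": 100.0, "r": 5e-6, "mu": 3e-3, "L": 1e-4})
print(order)  # radius dominates through the fourth-power dependence
```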
Tang, Rongying; Prosser, Debra O.; Love, Donald R.
2016-01-01
The increasing diagnostic use of gene sequencing has led to an expanding dataset of novel variants that lie within consensus splice junctions. The challenge for diagnostic laboratories is the evaluation of these variants in order to determine if they affect splicing or are merely benign. A common evaluation strategy is to use in silico analysis, and a number of programmes are available online; however, there are currently no consensus guidelines on the selection of programmes or protocols to interpret the prediction results. Using a collection of 222 pathogenic mutations and 50 benign polymorphisms, we evaluated the sensitivity and specificity of four in silico programmes in predicting the effect of each variant on splicing. The programmes comprised Human Splice Finder (HSF), Max Entropy Scan (MES), NNSplice, and ASSP. The MES and ASSP programmes gave the highest performance based on receiver operating characteristic (ROC) analysis, with an optimal cut-off of a 10% score reduction. The study also showed that the sensitivity of prediction is affected by the level of conservation of individual positions, with in silico predictions for variants at positions −4 and +7 within consensus splice sites being largely uninformative. PMID:27313609
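The 10% score-reduction cut-off can be applied as a simple decision rule: flag a variant when the mutant splice-site score falls at least 10% below the wild-type score. A sketch with invented scores (real scores would come from tools such as MES or ASSP):

```python
# Flag a variant as splice-affecting when the relative score drop between
# wild-type and mutant sequences reaches the threshold. Scores are invented.

def affects_splicing(wt_score, mut_score, threshold=0.10):
    return (wt_score - mut_score) / wt_score >= threshold

print(affects_splicing(8.2, 5.1))   # large drop -> predicted splice-affecting
print(affects_splicing(8.2, 7.9))   # small drop -> predicted benign
```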
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1996-01-01
Our first activity is based on delivery of code to Bob Evans (University of Miami) for integration and eventual delivery to the MODIS Science Data Support Team. As we noted in our previous semi-annual report, coding required the development and analysis of an end-to-end model of fluorescence line height (FLH) errors and sensitivity. This model is described in a paper in press at Remote Sensing of Environment. Since the code was delivered to Miami, we have continued to use this error analysis to evaluate proposed changes in MODIS sensor specifications and performance. Simply evaluating such changes on a band-by-band basis may obscure the true impacts of changes in sensor performance that are manifested in the complete algorithm. This is especially true for FLH, which is sensitive to band placement and width. The error model will be used by Howard Gordon (Miami) to evaluate the effects of absorbing aerosols on FLH algorithm performance. Presently, FLH relies only on simple corrections for atmospheric effects (viewing geometry, Rayleigh scattering) without correcting for aerosols. Our analysis suggests that aerosols should have a small impact relative to changes in the quantum yield of fluorescence in phytoplankton. However, absorbing aerosols are a newly considered process and will be evaluated by Gordon.
Briso, André Luiz Fraga; Rahal, Vanessa; Azevedo, Fernanda Almeida de; Gallinari, Marjorie de Oliveira; Gonçalves, Rafael Simões; Santos, Paulo Henrique Dos; Cintra, Luciano Tavares Angelo
2018-01-01
Objective The objective of this study was to evaluate dental sensitivity using a visual analogue scale, a Computerized Visual Analogue Scale (CoVAS), and a neurosensory analyzer (TSA II) during at-home bleaching with 10% carbamide peroxide, with and without potassium oxalate. Materials and Methods Power Bleaching 10% containing potassium oxalate was used on one maxillary hemi-arch of the 25 volunteers, and Opalescence 10% was used on the opposite hemi-arch. Bleaching agents were used daily for 3 weeks. Analysis was performed before treatment, 24 hours later, 7, 14, and 21 days after the start of the treatment, and 7 days after its conclusion. Spontaneous tooth sensitivity was evaluated using the visual analogue scale, and the sensitivity caused by a continuous 0°C stimulus was analyzed using CoVAS. The cold sensation threshold was also analyzed using the TSA II. The temperatures obtained were statistically analyzed using ANOVA and Tukey's test (α=5%); the data obtained with the other methods were also analyzed. Results At 24 hours and at 7 and 14 days after the beginning of the treatment, over 20% of the teeth presented spontaneous sensitivity; the normal condition was restored after the end of the treatment. Regarding the cold sensation temperatures, both products sensitized the teeth (p<0.05) and no differences were detected between the products in each period (p>0.05). In addition, when the products were compared using CoVAS, Power Bleaching caused the highest levels of sensitivity in all study periods, with the exception of the 14th day of treatment. Conclusion The bleaching treatment sensitized the teeth, and the product with potassium oxalate was not able to modulate tooth sensitivity.
Analysis of 238Pu and 56Fe Evaluated Data for Use in MYRRHA
NASA Astrophysics Data System (ADS)
Díez, C. J.; Cabellos, O.; Martínez, J. S.; Stankovskiy, A.; Van den Eynde, G.; Schillebeeckx, P.; Heyse, J.
2014-04-01
A sensitivity analysis of the multiplication factor, keff, to the cross section data has been carried out for the MYRRHA critical configuration in order to identify the most relevant reactions. With these results, a further analysis of the 238Pu and 56Fe cross sections has been performed, comparing the evaluations provided in the JEFF-3.1.2 and ENDF/B-VII.1 libraries for these nuclides. The effect in MYRRHA of the differences between the evaluations is then analysed, and the source of the differences is presented. On this basis, recommendations for the 56Fe and 238Pu evaluations are suggested. These calculations have been performed with SCALE6.1 and MCNPX-2.7e.
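Sensitivity coefficients of keff to a cross section are conventionally defined as S = (dk/k)/(dσ/σ), the relative change in keff per relative change in the cross section. A finite-difference sketch; the toy k(σ) below merely stands in for a transport calculation such as SCALE or MCNPX:

```python
# Finite-difference estimate of S = (dk/k)/(dσ/σ). The model is invented.

def sensitivity_coefficient(k_of_sigma, sigma, rel_step=0.01):
    k0 = k_of_sigma(sigma)
    k1 = k_of_sigma(sigma * (1 + rel_step))
    return ((k1 - k0) / k0) / rel_step

# toy model: a capture cross section depresses multiplication as 1/(1 + 0.05*sigma)
toy_k = lambda sigma: 1.0 / (1.0 + 0.05 * sigma)
print(sensitivity_coefficient(toy_k, 2.0))  # negative: more capture, lower k_eff
```

Ranking reactions by |S| is what singles out the cross sections (here 238Pu and 56Fe) whose evaluation differences matter most.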
Measuring Road Network Vulnerability with Sensitivity Analysis
Jun-qiang, Leng; Long-hai, Yang; Liu, Wei-yi; Zhao, Lin
2017-01-01
This paper develops a method for road network vulnerability analysis from the perspective of capacity degradation, which seeks to identify the critical infrastructures in the road network and the operational performance of the whole traffic system. The research involves defining a traffic utility index and modeling the vulnerability of road segments, routes, OD (origin-destination) pairs, and the road network. A sensitivity analysis method is used to calculate the change in the traffic utility index due to capacity degradation. Compared with traditional traffic assignment, this method improves calculation efficiency and makes vulnerability analysis feasible for large real-world road networks. Finally, the models and calculation method are applied to the evaluation of an actual road network to verify their efficiency and utility. This approach can be used as a decision-support tool for evaluating road network performance and identifying critical infrastructures in transportation planning and management, especially in resource allocation for mitigation and recovery. PMID:28125706
Formulary evaluation of second-generation cephamycin derivatives using decision analysis.
Barriere, S L
1991-10-01
Use of decision analysis in the formulary evaluation of the second-generation cephamycin derivatives cefoxitin, cefotetan, and cefmetazole is described. The rating system used was adapted from one used for the third-generation cephalosporins. Data on spectrum of activity, pharmacokinetics, adverse reactions, cost, and stability were taken from the published literature and the FDA-approved product labeling. The weighting scheme used for the third-generation cephalosporins was altered somewhat to reflect the more important aspects of the cephamycin derivatives and their potential role in surgical prophylaxis. Sensitivity analysis was done to assess the variability of the final scores when the assigned weights were varied within a reasonable range. Scores for cefmetazole and cefotetan were similar and did not differ significantly after sensitivity analysis. Cefoxitin scored significantly lower than the other two drugs. In the absence of data suggesting that the N-methyl thiotetrazole side chains of cefmetazole and cefotetan cause substantial toxicity, these two drugs can be considered the most cost-efficient members of the second-generation cephamycins.
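The rating-system arithmetic is a weighted sum of attribute scores, and the sensitivity analysis amounts to re-ranking the drugs while the weights vary within a reasonable range. A sketch with invented scores and weights; the published rating system is not reproduced here:

```python
# Weighted-score formulary comparison with a simple weight-sensitivity check.
# Attribute scores (0-10) and weights are hypothetical.

def total(weights, scores):
    return sum(w * scores[k] for k, w in weights.items())

weights = {"spectrum": 0.35, "pk": 0.15, "adverse": 0.20, "cost": 0.30}
drugs = {
    "cefoxitin":   {"spectrum": 6, "pk": 5, "adverse": 7, "cost": 4},
    "cefotetan":   {"spectrum": 7, "pk": 8, "adverse": 7, "cost": 7},
    "cefmetazole": {"spectrum": 7, "pk": 7, "adverse": 7, "cost": 8},
}

def rank(ws):
    return sorted(drugs, key=lambda d: total(ws, drugs[d]), reverse=True)

# vary the cost weight +/-50% and see whether the ranking changes
for w_cost in (0.15, 0.30, 0.45):
    print(w_cost, rank(dict(weights, cost=w_cost)))
```

With these invented inputs the lowest-scoring drug stays last across the weight range, which is the kind of robustness the published sensitivity analysis checks for.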
Görtelmeyer, Roman; Schmidt, Jürgen; Suckfüll, Markus; Jastreboff, Pawel; Gebauer, Alexander; Krüger, Hagen; Wittmann, Werner
2011-08-01
To evaluate the reliability, dimensionality, predictive validity, construct validity, and sensitivity to change of the THI-12 total and sub-scales as diagnostic aids to describe and quantify tinnitus-evoked reactions and evaluate treatment efficacy. Explorative analysis of the German tinnitus handicap inventory (THI-12) to assess potential sensitivity to tinnitus therapy in placebo-controlled randomized studies. Correlation analysis, including Cronbach's coefficient α and explorative common factor analysis (EFA), was conducted within and between assessments to demonstrate the construct validity, dimensionality, and factorial structure of the THI-12. N = 618 patients suffering from subjective tinnitus who were to be screened to participate in a randomized, placebo-controlled, 16-week, longitudinal study. The THI-12 can reliably diagnose tinnitus-related impairments and disabilities and assess changes over time. The test-retest coefficient for neighbouring visits was r > 0.69; the internal consistency of the THI-12 total score was α ≤ 0.79 and α ≤ 0.89 at subsequent visits. Predictability of THI-12 total score and overall variance increased with successive measurements. The three-factorial structure allowed for evaluation of factors that affect aspects of patients' health-related quality of life. The THI-12, with its three-factorial structure, is a simple, reliable, and valid instrument for the diagnosis and assessment of tinnitus and associated impairment over time.
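Cronbach's coefficient α, the internal-consistency statistic quoted for the THI-12, is computed as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A pure-Python sketch with invented item responses:

```python
# Cronbach's alpha from per-item response lists. Responses are invented.
from statistics import pvariance

def cronbach_alpha(items):
    """items: list of per-item response lists, one inner list per item."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]      # total score per respondent
    item_var = sum(pvariance(v) for v in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# 4 items x 5 respondents, hypothetical 0-3 ratings
items = [
    [2, 1, 3, 0, 2],
    [2, 2, 3, 1, 2],
    [1, 1, 2, 0, 3],
    [2, 1, 3, 0, 2],
]
print(round(cronbach_alpha(items), 2))
```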
Ruilong, Zong; Daohai, Xie; Li, Geng; Xiaohong, Wang; Chunjie, Wang; Lei, Tian
2017-01-01
To carry out a meta-analysis of the performance of fluorine-18-fluorodeoxyglucose (18F-FDG) PET/computed tomography (PET/CT) for the evaluation of solitary pulmonary nodules. For the meta-analysis, we searched several electronic databases for relevant studies, including Google Scholar, PubMed, the Cochrane Library, and several Chinese databases. The quality of all included studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2). Two observers independently extracted data from eligible articles. The total sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratios were pooled, and a summary receiver operating characteristic curve was constructed. The I² test was performed to assess the impact of study heterogeneity on the results of the meta-analysis. Meta-regression and subgroup analysis were carried out to investigate potential covariates that might have considerable impacts on heterogeneity. Overall, 12 studies were included in this meta-analysis, comprising a total of 1297 patients and 1301 pulmonary nodules. The pooled sensitivity, specificity, positive likelihood ratio, and negative likelihood ratio with corresponding 95% confidence intervals (CIs) were 0.82 (95% CI, 0.76-0.87), 0.81 (95% CI, 0.66-0.90), 4.3 (95% CI, 2.3-7.9), and 0.22 (95% CI, 0.16-0.30), respectively. Significant heterogeneity was observed in sensitivity (I² = 81.1%) and specificity (I² = 89.6%). Subgroup analysis showed that the best results for sensitivity (0.90; 95% CI, 0.68-0.86) and accuracy (0.93; 95% CI, 0.90-0.95) were found in a prospective study. The results of our analysis suggest that PET/CT is a useful tool for detecting malignant pulmonary nodules qualitatively. Although current evidence showed moderate accuracy for PET/CT in differentiating malignant from benign solitary pulmonary nodules, further work is needed to improve its reliability.
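The heterogeneity figures reported in such meta-analyses refer to Higgins' I² statistic, I² = max(0, (Q - df)/Q), where Q is Cochran's Q computed from the study-level estimates and their weights. A sketch with invented effect sizes and inverse-variance weights:

```python
# Higgins' I^2 from study effects and inverse-variance weights. Data invented.

def i_squared(effects, weights):
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))  # Cochran's Q
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

effects = [1.2, 0.8, 1.9, 0.5, 1.6]   # hypothetical logit-sensitivities
weights = [30, 25, 40, 20, 35]        # hypothetical inverse variances
print(round(i_squared(effects, weights), 1))
```

Values above roughly 75% are conventionally read as high heterogeneity, which is what motivates the meta-regression and subgroup analyses described.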
ERIC Educational Resources Information Center
Vujanovic, Anka A.; Arrindell, Willem A.; Bernstein, Amit; Norton, Peter J.; Zvolensky, Michael J.
2007-01-01
The present investigation examined the factor structure, internal consistency, and construct validity of the 16-item Anxiety Sensitivity Index (ASI; Reiss, Peterson, Gursky, & McNally, 1986) in a young adult sample (n = 420) from the Netherlands. Confirmatory factor analysis was used to comparatively evaluate two-factor, three-factor, and…
Ackerman, L K; Noonan, G O; Begley, T H
2009-12-01
The ambient ionization technique direct analysis in real time (DART) was characterized and evaluated for the screening of food packaging for the presence of packaging additives using a benchtop mass spectrometer (MS). Approximate optimum conditions were determined for 13 common food-packaging additives, including plasticizers, anti-oxidants, colorants, grease-proofers, and ultraviolet light stabilizers. Method sensitivity and linearity were evaluated using solutions and characterized polymer samples. Additionally, the response of a model additive (di-ethyl-hexyl-phthalate) was examined across a range of sample positions and DART and MS conditions (temperature, voltage and helium flow). Under optimal conditions, the protonated molecule ([M+H]+) was the major ion for most additives. Additive responses were highly sensitive to sample and DART source orientation, as well as to DART flow rates, temperatures, and MS inlet voltages. DART-MS response was neither consistently linear nor quantitative in this setting, and sensitivity varied by additive. All additives studied were rapidly identified in multiple food-packaging materials by DART-MS/MS, suggesting this technique can be used to screen food packaging rapidly. However, method sensitivity and quantitation require further study and improvement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cumberland, Riley M.; Williams, Kent Alan; Jarrell, Joshua J.
This report evaluates how the economic environment (i.e., discount rate, inflation rate, escalation rate) can impact previously estimated differences in lifecycle costs between an integrated waste management system with an interim storage facility (ISF) and a similar system without an ISF.
Compliance and stress sensitivity of spur gear teeth
NASA Technical Reports Server (NTRS)
Cornell, R. W.
1983-01-01
The magnitude and variation of tooth pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time-history, interactive, closed-form solution for the dynamic tooth loads for both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity of three involute tooth forms as a function of load position. The compliance analysis incorporates an improved fillet/foundation model. The stress sensitivity analysis is a modified version of the Heywood method, with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation against test, finite element, and analytic transformation results, which showed good agreement.
Combustor liner durability analysis
NASA Technical Reports Server (NTRS)
Moreno, V.
1981-01-01
An 18-month combustor liner durability analysis program was conducted to evaluate the use of advanced three-dimensional transient heat transfer and nonlinear stress-strain analyses for modeling the cyclic thermomechanical response of a simulated combustor liner specimen. Cyclic life prediction technology for creep/fatigue interaction is evaluated using a variety of state-of-the-art tools for crack initiation and propagation. The sensitivity of the initiation models to a change in the operating conditions is also assessed.
Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J
2014-12-01
This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost savings, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, being cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared to an untrained provider. Results remained consistent in one-way sensitivity analysis; in probabilistic sensitivity analyses, training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness.
It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.
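The "cost-effective in 70% of simulations" figure reflects a standard probabilistic sensitivity analysis: sample the uncertain parameters from distributions, compute the incremental cost and QALYs per draw, and report the fraction of draws that are cost-effective at the willingness-to-pay threshold. A sketch using the net-monetary-benefit form; the distributions below are invented, with only the 0.054-QALY mean taken from the abstract:

```python
# Monte Carlo probabilistic sensitivity analysis. A draw is cost-effective
# when its net monetary benefit (WTP * dQALY - dCost) is positive.
import random

def psa_fraction_cost_effective(n=10_000, wtp=50_000, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        d_cost = rng.gauss(0, 400)        # incremental cost, $ (hypothetical)
        d_qaly = rng.gauss(0.054, 0.10)   # incremental QALYs (hypothetical spread)
        if wtp * d_qaly - d_cost > 0:
            wins += 1
    return wins / n

print(psa_fraction_cost_effective())  # fraction of draws under the threshold
```

The net-monetary-benefit form avoids the sign ambiguity of dividing by a near-zero QALY difference, which is why it is preferred for counting simulations.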
Li, Ke; Zhang, Qiuju; Wang, Kun; Chen, Peng; Wang, Huaqing
2016-01-01
A new fault diagnosis method for rotating machinery based on an adaptive statistic test filter (ASTF) and a Diagnostic Bayesian Network (DBN) is presented in this paper. The ASTF is designed to extract weak fault features under background noise: it applies statistical hypothesis testing in the frequency domain to evaluate the similarity between a reference (noise) signal and the original signal and removes the components of high similarity. The optimal level of significance α is obtained using particle swarm optimization (PSO). To evaluate the performance of the ASTF, an evaluation factor Ipq is also defined, and a simulation experiment is designed to verify the effectiveness and robustness of the ASTF. A sensitivity evaluation method using principal component analysis (PCA) is proposed to assess the sensitivity of symptom parameters (SPs) for condition diagnosis; in this way, SPs with high sensitivity for condition diagnosis can be selected. A three-layer DBN is developed to identify the condition of rotating machinery based on Bayesian Belief Network (BBN) theory. A condition diagnosis experiment on rolling element bearings demonstrates the effectiveness of the proposed method. PMID:26761006
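The PCA-based selection of sensitive symptom parameters can be illustrated generically: rank the SPs by the magnitude of their loadings on the first principal component. A pure-Python sketch using power iteration, with invented data; this is not the authors' exact procedure:

```python
# First principal component via power iteration on the covariance matrix,
# then rank variables by |loading|. All data are invented.

def first_pc(rows, iters=200):
    """First principal-component direction of mean-centred data."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    x = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / n for b in range(d)]
           for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):                       # power iteration
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v

# rows: observations of 3 candidate SPs; SP0 varies most with condition
data = [[1.0, 0.2, 0.5], [2.0, 0.3, 0.5], [3.0, 0.1, 0.6], [4.0, 0.2, 0.4]]
v = first_pc(data)
best_sp = max(range(3), key=lambda j: abs(v[j]))
print(best_sp)  # index of the SP with the largest first-PC loading
```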
Economic evaluation of DNA ploidy analysis vs liquid-based cytology for cervical screening.
Nghiem, V T; Davies, K R; Beck, J R; Follen, M; MacAulay, C; Guillaud, M; Cantor, S B
2015-06-09
DNA ploidy analysis involves automated quantification of chromosomal aneuploidy, a potential marker of progression toward cervical carcinoma. We evaluated the cost-effectiveness of this method for cervical screening, comparing five ploidy strategies (using different numbers of aneuploid cells as cut points) with liquid-based Papanicolaou smear and no screening. A state-transition Markov model simulated the natural history of HPV infection and possible progression into cervical neoplasia in a cohort of 12-year-old females. The analysis evaluated cost in 2012 US$ and effectiveness in quality-adjusted life-years (QALYs) from a health-system perspective throughout a lifetime horizon in the US setting. We calculated incremental cost-effectiveness ratios (ICERs) to determine the best strategy. The robustness of optimal choices was examined in deterministic and probabilistic sensitivity analyses. In the base-case analysis, the ploidy 4 cell strategy was cost-effective, yielding an increase of 0.032 QALY and an ICER of $18 264/QALY compared to no screening. For most scenarios in the deterministic sensitivity analysis, the ploidy 4 cell strategy was the only cost-effective strategy. Cost-effectiveness acceptability curves showed that this strategy was more likely to be cost-effective than the Papanicolaou smear. Compared to the liquid-based Papanicolaou smear, screening with a DNA ploidy strategy appeared less costly and comparably effective.
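Choosing among several screening strategies by ICER follows a standard incremental analysis: order strategies by cost, discard dominated options, and compute each survivor's ICER against the previous survivor. A sketch with invented costs and QALYs (extended dominance is omitted for brevity):

```python
# Incremental analysis over candidate strategies. Numbers are hypothetical,
# not the study's values.

def incremental_analysis(strategies):
    """strategies: dict name -> (cost, qaly). Returns [(name, icer)] on the frontier."""
    ordered = sorted(strategies.items(), key=lambda kv: kv[1][0])
    frontier = [ordered[0]]
    for name, (cost, qaly) in ordered[1:]:
        if qaly > frontier[-1][1][1]:            # drop strongly dominated options
            frontier.append((name, (cost, qaly)))
    return [(n1, (c1 - c0) / (q1 - q0))
            for (n0, (c0, q0)), (n1, (c1, q1)) in zip(frontier, frontier[1:])]

strategies = {
    "no screening": (0.0, 20.000),
    "ploidy-4":     (600.0, 20.032),
    "comparator":   (900.0, 20.030),   # costs more, gains less: dominated
}
print(incremental_analysis(strategies))
```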
Pressure sensitivity analysis of fiber Bragg grating sensors
NASA Astrophysics Data System (ADS)
Mrad, Nezih; Sridharan, Vasant; Kazemi, Alex
2014-09-01
Recent development in fiber optic sensing technology has mainly focused on discrete sensing, particularly sensing systems with potential multiplexing and multi-parameter capabilities. Bragg grating fiber optic sensors have emerged as the undisputed champion for multiplexing and simultaneous multi-parameter sensing for emerging high-value structural components, advanced processing and manufacturing capabilities, and increased critical infrastructure resilience applications. Although the number of potential applications for this sensing technology is large and spans the domains of medicine, manufacturing, aerospace, and public safety, critical issues such as fatigue life, sensitivity, accuracy, embeddability, material/sensor interface integrity, and universal demodulation systems still need to be addressed. The purpose of this paper is primarily to evaluate the pressure sensitivity of Commercial-Off-The-Shelf (COTS) Fiber Bragg Grating (FBG) sensors, a property often neglected in applications. The COTS fiber sensitivity to pressure is further evaluated for two types of coatings (polyimide and acrylate) and different arrangements (arrayed and single).
Intelligent interface design and evaluation
NASA Technical Reports Server (NTRS)
Greitzer, Frank L.
1988-01-01
Intelligent interface concepts and systematic approaches to assessing their functionality are discussed. Four general features of intelligent interfaces are described: interaction efficiency, subtask automation, context sensitivity, and use of an appropriate design metaphor. Three evaluation methods are discussed: Functional Analysis, Part-Task Evaluation, and Operational Testing. Design and evaluation concepts are illustrated with examples from a prototype expert system interface for environmental control and life support systems for manned space platforms.
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
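The FOSM step named above combines input covariance with model sensitivity: for an output h with sensitivity (Jacobian) row J, Var(h) = J C J^T. A sketch for a scalar output and two inputs, with invented numbers:

```python
# First-Order Second Moment propagation of input covariance through a model
# sensitivity vector. Sensitivities and covariances are hypothetical.

def fosm_variance(jacobian, cov):
    """jacobian: list of dh/dx_i; cov: input covariance matrix (list of rows)."""
    n = len(jacobian)
    return sum(
        jacobian[i] * cov[i][j] * jacobian[j]
        for i in range(n) for j in range(n)
    )

# sensitivities of head h to two inputs, e.g. log-conductivity and recharge
J = [0.8, -1.5]
C = [[0.04, 0.01],
     [0.01, 0.25]]
print(fosm_variance(J, C))  # output variance of h
```

Sampling where this propagated variance is largest is exactly the "output covariance" criterion that performed best in the study.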
Fraysse, Bodvaël; Barthélémy, Inès; Qannari, El Mostafa; Rouger, Karl; Thorin, Chantal; Blot, Stéphane; Le Guiner, Caroline; Chérel, Yan; Hogrel, Jean-Yves
2017-04-12
Accelerometric analysis of gait abnormalities in golden retriever muscular dystrophy (GRMD) dogs is of limited sensitivity, and produces highly complex data. The use of discriminant analysis may enable simpler and more sensitive evaluation of treatment benefits in this important preclinical model. Accelerometry was performed twice monthly between the ages of 2 and 12 months on 8 healthy and 20 GRMD dogs. Seven accelerometric parameters were analysed using linear discriminant analysis (LDA). Manipulation of the dependent and independent variables produced three distinct models. The ability of each model to detect gait alterations and their pattern change with age was tested using a leave-one-out cross-validation approach. Selecting genotype (healthy or GRMD) as the dependent variable resulted in a model (Model 1) allowing a good discrimination between the gait phenotype of GRMD and healthy dogs. However, this model was not sufficiently representative of the disease progression. In Model 2, age in months was added as a supplementary dependent variable (GRMD_2 to GRMD_12 and Healthy_2 to Healthy_9.5), resulting in a high overall misclassification rate (83.2%). To improve accuracy, a third model (Model 3) was created in which age was also included as an explanatory variable. This resulted in an overall misclassification rate lower than 12%. Model 3 was evaluated using blinded data pertaining to 81 healthy and GRMD dogs. In all but one case, the model correctly matched gait phenotype to the actual genotype. Finally, we used Model 3 to reanalyse data from a previous study regarding the effects of immunosuppressive treatments on muscular dystrophy in GRMD dogs. Our model identified a significant effect of immunosuppressive treatments on gait quality, corroborating the original findings, with the added advantages of direct statistical analysis with greater sensitivity and more comprehensible data representation.
Gait analysis using LDA allows for improved analysis of accelerometry data by applying a decision-making analysis approach to the evaluation of preclinical treatment benefits in GRMD dogs.
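The discriminant step above can be illustrated with a two-class Fisher LDA written from scratch. This is a generic sketch, not the authors' model: the synthetic point clouds stand in for the accelerometric parameters and are not study data.

```python
import numpy as np

# Minimal two-class Fisher linear discriminant (LDA).

def fit_lda(X0, X1):
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = (X0 - m0).T @ (X0 - m0)              # within-class scatter, class 0
    S1 = (X1 - m1).T @ (X1 - m1)              # within-class scatter, class 1
    w = np.linalg.solve(S0 + S1, m1 - m0)     # discriminant direction
    threshold = w @ (m0 + m1) / 2.0           # midpoint of projected means
    return w, threshold

def predict(X, w, threshold):
    return (X @ w > threshold).astype(int)    # 1 = "dystrophic-like" side

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(40, 3))      # synthetic "healthy" cloud
X1 = rng.normal(3.0, 1.0, size=(40, 3))      # synthetic "dystrophic" cloud
w, t = fit_lda(X0, X1)
accuracy = ((predict(X0, w, t) == 0).mean() + (predict(X1, w, t) == 1).mean()) / 2
```

In practice a library implementation with leave-one-out cross-validation (as in the study) would replace this hand-rolled version; the sketch only shows the projection-and-threshold idea.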
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has been applied to the study and analysis of complex biological systems for a long time. Some processes in biological systems, such as the gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of model and sensitivity equations with time-delays. The major effort is the computation of the Jacobian matrix when computing the solution of sensitivity equations. The computation of partial derivatives of complex equations either by the analytic method or by symbolic manipulation is time consuming, inconvenient, and prone to introduce human errors. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with an adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended to compute the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human errors in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time-delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis on DDE models with less user intervention. By comparing with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without programming background to do dynamic sensitivity analysis on complex biological systems with time-delays.
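The automatic differentiation idea embedded above, obtaining exact derivatives without symbolic manipulation or finite differences, can be illustrated with forward-mode dual numbers. This is a generic sketch, not the authors' implementation, and the function f is an arbitrary example:

```python
import math

class Dual:
    """Dual number a + b*eps for forward-mode automatic differentiation."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def dexp(x):
    e = math.exp(x.val)
    return Dual(e, e * x.der)       # chain rule for exp

# One Jacobian entry, df/dx of f(x, y) = x*y + exp(x), evaluated at (2, 3):
def f(x, y):
    return x * y + dexp(x)

dfdx = f(Dual(2.0, 1.0), Dual(3.0, 0.0)).der   # seed dx = 1, dy = 0
# exact derivative: y + exp(x) = 3 + e^2
```

Seeding each input in turn yields each column of the Jacobian, which is the role automatic differentiation plays in the sensitivity-equation solver described above.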
Thiele, Elizabeth A; Cama, Vitaliano A; Lakwo, Thomson; Mekasha, Sindeaw; Abanyie, Francisca; Sleshi, Markos; Kebede, Amha; Cantey, Paul T
2016-04-01
Microscopic evaluation of skin biopsies is the monitoring and evaluation (M and E) method currently used by multiple onchocerciasis elimination programs in Africa. However, as repeated mass drug administration suppresses microfilarial loads, the sensitivity and programmatic utility of skin snip microscopy is expected to decrease. Using a pan-filarial real-time polymerase chain reaction with melt curve analysis (qPCR-MCA), we evaluated 1) the use of a single-step molecular assay for detecting and identifying Onchocerca volvulus microfilariae in residual skin snips and 2) the sensitivity of skin snip microscopy relative to qPCR-MCA. Skin snips were collected and examined with routine microscopy in hyperendemic regions of Uganda and Ethiopia (N = 500 each) and "residual" skin snips (tissue remaining after induced microfilarial emergence) were tested with qPCR-MCA. qPCR-MCA detected Onchocerca DNA in 223 residual snips: 139 of 147 microscopy(+) and 84 among microscopy(-) snips, suggesting that the overall sensitivity of microscopy was 62.3% (139/223) relative to qPCR-MCA (75.6% in Uganda and 28.6% in Ethiopia). These findings demonstrate the insufficient sensitivity of skin snip microscopy for reliable programmatic monitoring. Molecular tools such as qPCR-MCA can augment sensitivity and provide diagnostic confirmation of skin biopsies and will be useful for evaluation or validation of new onchocerciasis M and E tools. © The American Society of Tropical Medicine and Hygiene.
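The relative-sensitivity figure above is simply the fraction of reference-positive samples that the test under evaluation also detects; the counts below are taken from the abstract:

```python
# Sensitivity of skin-snip microscopy relative to qPCR-MCA:
# microscopy-positive snips among all qPCR-positive snips.

def relative_sensitivity(test_positive, reference_positive):
    return test_positive / reference_positive

overall = relative_sensitivity(139, 223)   # 139 microscopy(+) of 223 qPCR(+)
# 139/223 ≈ 0.623, i.e. the 62.3% reported above
```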
Casadebaig, Pierre; Zheng, Bangyou; Chapman, Scott; Huth, Neil; Faivre, Robert; Chenu, Karine
2016-01-01
A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiology, pedo-climatic and management information. Application of numerical methods for model exploration assist in evaluating the major most influential inputs, providing the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites × 125 years), management practices (3 sowing dates × 3 nitrogen fertilization levels) and CO2 (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait × environment × management landscape (∼ 82 million individual simulations in total). The patterns of parameter × environment × management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference cultivar. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. PMID:26799483
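The Morris screening used above can be sketched with a simple radial one-at-a-time variant: perturb one factor at a time, record elementary effects, and rank factors by the mean absolute effect (mu*). The toy "yield" function below is illustrative, not APSIM-Wheat:

```python
import numpy as np

# Radial one-at-a-time sketch of Morris elementary-effects screening.

def morris_mu_star(f, k, trajectories=50, delta=0.1, seed=1):
    rng = np.random.default_rng(seed)
    ee = np.zeros((trajectories, k))
    for t in range(trajectories):
        x = rng.uniform(0.0, 1.0 - delta, size=k)   # base point in [0, 1]^k
        fx = f(x)
        for i in range(k):                           # perturb one factor
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (f(xp) - fx) / delta          # elementary effect
    return np.abs(ee).mean(axis=0)                   # mu*: mean |EE|

# Toy "crop model": factor 0 strong, factor 1 weak, factor 2 inert.
def toy_yield(x):
    return 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2] + x[0] * x[1]

mu = morris_mu_star(toy_yield, k=3)
# Expect mu*[0] > mu*[1] > mu*[2] ~ 0
```

A production study such as the one above would use proper Morris trajectories over a grid (e.g. via a sensitivity-analysis library) rather than this minimal radial sampler, but the screening logic is the same.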
Evaluating linguistic equivalence of patient-reported outcomes in a cancer clinical trial.
Hahn, Elizabeth A; Bode, Rita K; Du, Hongyan; Cella, David
2006-01-01
In order to make meaningful cross-cultural or cross-linguistic comparisons of health-related quality of life (HRQL) or to pool international research data, it is essential to create unbiased measures that can detect clinically important differences. When HRQL scores differ between cultural/linguistic groups, it is important to determine whether this reflects real group differences, or is the result of systematic measurement variability. To investigate the linguistic measurement equivalence of a cancer-specific HRQL questionnaire, and to conduct a sensitivity analysis of treatment differences in HRQL in a clinical trial. Patients with newly diagnosed chronic myelogenous leukemia (n = 1049) completed serial HRQL assessments in an international Phase III trial. Two types of differential item functioning (uniform and non-uniform) were evaluated using item response theory and classical test theory approaches. A sensitivity analysis was conducted to compare HRQL between treatment arms using items without evidence of differential functioning. Among 27 items, nine (33%) did not exhibit any evidence of differential functioning in either linguistic comparison (English versus French, English versus German). Although 18 items functioned differently, there was no evidence of systematic bias. In a sensitivity analysis, adjustment for differential functioning affected the magnitude, but not the direction or interpretation of clinical trial treatment arm differences. Sufficient sample sizes were available for only three of the eight language groups. Identification of differential functioning in two-thirds of the items suggests that current psychometric methods may be too sensitive. Enhanced methodologies are needed to differentiate trivial from substantive differential item functioning.
Systematic variability in HRQL across different groups can be evaluated for its effect upon clinical trial results; a practice recommended when data are pooled across cultural or linguistic groups to make conclusions about treatment effects.
Chomsky-Higgins, Kathryn; Seib, Carolyn; Rochefort, Holly; Gosnell, Jessica; Shen, Wen T; Kahn, James G; Duh, Quan-Yang; Suh, Insoo
2018-01-01
Guidelines for management of small adrenal incidentalomas are mutually inconsistent. No cost-effectiveness analysis has been performed to evaluate rigorously the relative merits of these strategies. We constructed a decision-analytic model to evaluate surveillance strategies for <4 cm, nonfunctional, benign-appearing adrenal incidentalomas. We evaluated 4 surveillance strategies: none, one-time, annual for 2 years, and annual for 5 years. Threshold and sensitivity analyses assessed robustness of the model. Costs were represented in 2016 US dollars and health outcomes in quality-adjusted life-years. No surveillance has an expected net cost of $262 and 26.22 quality-adjusted life-years. One-time surveillance costs $158 more and adds 0.2 quality-adjusted life-years for an incremental cost-effectiveness ratio of $778/quality-adjusted life-year. The strategies involving more surveillance, being less effective and more expensive, were dominated by the no-surveillance and one-time surveillance strategies. Above a 0.7% prevalence of adrenocortical carcinoma, one-time surveillance was the most effective strategy. The results were robust to all sensitivity analyses of disease prevalence, sensitivity, and specificity of diagnostic assays and imaging as well as health state utility. For patients with a <4 cm, nonfunctional, benign-appearing mass, one-time follow-up evaluation involving a noncontrast computed tomography and biochemical evaluation is cost-effective. Strategies requiring more surveillance accrue more cost without incremental benefit. Copyright © 2017 Elsevier Inc. All rights reserved.
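The incremental cost-effectiveness logic above can be sketched as follows. The "none" and "one_time" numbers echo the abstract's rounded figures; "annual_5y" is a hypothetical dominated strategy, and the ICER from rounded inputs (about $790/QALY) differs slightly from the reported $778/QALY, which comes from unrounded model outputs:

```python
# ICER and strategy dominance, a minimal sketch of a cost-effectiveness
# comparison (not the study's actual decision-analytic model).

def icer(c0, e0, c1, e1):
    """Incremental cost-effectiveness ratio of strategy 1 versus strategy 0."""
    return (c1 - c0) / (e1 - e0)

def is_dominated(strategy, others):
    """Dominated: some alternative is no more costly AND no less effective."""
    c, e = strategy
    return any(co <= c and eo >= e and (co, eo) != (c, e) for co, eo in others)

strategies = {
    "none":      (262.0, 26.22),   # (cost $, QALYs), from the abstract
    "one_time":  (420.0, 26.42),   # $158 more, +0.2 QALYs (rounded)
    "annual_5y": (900.0, 26.40),   # hypothetical: costlier, less effective
}
r = icer(*strategies["none"], *strategies["one_time"])
dominated = is_dominated(strategies["annual_5y"], strategies.values())
```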
Park, Joo Kyung; Kang, Ki Joo; Oh, Cho Rong; Lee, Jong Kyun; Lee, Kyu Taek; Jang, Kee Taek; Park, Sang-Mo; Lee, Kwang Hyuck
2016-01-01
Endoscopic ultrasound-guided fine-needle aspiration (EUS-FNA) has become one of the most useful diagnostic modalities for the diagnosis of pancreatic mass. The aim of this study was to investigate the role of analyzing the minimal specimens obtained by EUS-FNA for the diagnosis of solid masses of the pancreas. This study consisted of retrospective and prospective analyses. The retrospective study was performed on 116 patients who underwent EUS-FNA of solid masses for cytological smear, histological analysis, and combined analysis including immunohistochemical (IHC) staining. In the prospective study, 79 patients were enrolled to evaluate the quality and accuracy of EUS-FNA histological analysis and feasibility of IHC staining. The final diagnoses of all patients included pancreatic cancer (n = 126), nonpancreatic cancer (n = 21), other neoplasms (n = 27), and benign lesions (n = 21). In our retrospective study, the combined analysis was more sensitive than cytological analysis alone (P < 0.01). The overall sensitivity of cytology, histology, and combined analysis was 69.8%, 67.2%, and 81.8%, respectively. In the prospective analysis, 64.2% of all punctures were helpful for determining the diagnosis and 40.7% provided sufficient tissue for IHC staining. Histological analysis was helpful for diagnosis in 74.7% of patients. IHC staining was necessary for a definite diagnosis in 11.4% of patients, especially in the cases of nonmalignant pancreatic mass. Histological analysis and IHC study of EUS-FNA specimens were useful for the accurate diagnosis of pancreatic and peripancreatic lesions. Combined analysis showed significantly higher sensitivity than cytology alone because IHC staining was helpful for a diagnosis in some patients. PMID:27227937
Methods for comparative evaluation of propulsion system designs for supersonic aircraft
NASA Technical Reports Server (NTRS)
Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.
1976-01-01
The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The method was found to compare well with the more detailed analyses.
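The rapid evaluation method above amounts to a first-order estimate from precomputed sensitivities. A hedged sketch (the sensitivity values and deltas are illustrative placeholders, not the study's coefficients):

```python
# First-order "rapid evaluation": estimate a mission-performance change
# from precomputed aircraft sensitivities, dR ≈ sum_i (dR/dx_i) * dx_i.

def rapid_delta(sensitivities, deltas):
    return sum(s * d for s, d in zip(sensitivities, deltas))

# Hypothetical: range change per nacelle drag count and per lb of weight.
sens  = [-8.0, -0.05]      # nmi per drag count, nmi per lb (illustrative)
delta = [2.0, 300.0]       # +2 drag counts, +300 lb for a candidate engine
d_range = rapid_delta(sens, delta)   # -8*2 + (-0.05)*300 = -31 nmi
```

The appeal of the method, as the abstract notes, is that once the sensitivities are tabulated, each candidate engine costs one inner product rather than a full mission analysis.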
Automatic Target Recognition Classification System Evaluation Methodology
2002-09-01
The report covers topics including a decision analysis process, ROC curve meta-analysis (the estimation of the true ROC curve of a given diagnostic system through ROC analysis across many studies), and sensitivity analysis techniques for determining which data points have the most effect on the solution.
Wojtysiak, Magdalena; Huber, Juliusz; Wiertel-Krawczuk, Agnieszka; Szymankiewicz-Szukała, Agnieszka; Moskal, Jakub; Janicki, Jacek
2014-10-01
The application of complex neurophysiological examination including motor evoked potentials (MEP) for pre- and postoperative evaluation of patients experiencing acute sciatica. The assessment of sensitivity and specificity of needle electromyography, MEP, and H-reflex examinations. The comparative analysis of preoperative and postoperative neurophysiological examination. Although complex neurophysiological diagnostic tools seem to be important for interpretation of incompatible results of neuroimaging and clinical examination, especially in patients qualified for surgical treatment, their application has never been completely analyzed and documented. Pre- and postoperative electromyography, electroneurography, F-waves, H-reflex, and MEP examination were performed in 23 patients with confirmed disc-root conflict at the lumbosacral spine. Clinical evaluation included examination of sensory perception for L5-S1 dermatomes, muscle strength with Lovett's scale, deep tendon reflexes, pain intensity with a visual analogue scale, and the straight leg raising test. Sensitivity of electromyography at rest and MEP examination for evaluation of L5-S1 root injury was 22% to 63% and 31% to 56%, whereas specificity was 71% to 83% and 57% to 86%, respectively. H-reflex sensitivity and specificity for evaluation of S1 root injury were 56% and 67%, respectively. A significant improvement of the root latency parameter in postoperative MEP studies as compared with preoperative was recorded for the L5 (P = 0.039) and S1 root levels (P = 0.05). The analysis of the results from neurophysiological tests together with neuroimaging and clinical examination allows for a precise preoperative indication of lumbosacral root injury and accurate postoperative evaluation of patients experiencing sciatica. Level of evidence: 3.
Value of circulating cell-free DNA analysis as a diagnostic tool for breast cancer: a meta-analysis
Ma, Xuelei; Zhang, Jing; Hu, Xiuying
2017-01-01
Objectives The aim of this study was to systematically evaluate the diagnostic value of cell free DNA (cfDNA) for breast cancer. Results Among 308 candidate articles, 25 with relevant diagnostic screening qualified for final analysis. The mean sensitivity, specificity and area under the curve (AUC) of SROC plots for 24 studies that distinguished breast cancer patients from healthy controls were 0.70, 0.87, and 0.9314, yielding a DOR of 32.31. When analyzed in subgroups, the 14 quantitative studies produced sensitivity, specificity, AUC, and a DOR of 0.78, 0.83, 0.9116, and 24.40. The 10 qualitative studies produced 0.50, 0.98, 0.9919, and 68.45. For 8 studies that distinguished malignant breast cancer from benign diseases, the specificity, sensitivity, AUC and DOR were 0.75, 0.79, 0.8213, and 9.49. No covariate factors had a significant correlation with relative DOR. Deeks' funnel plots indicated an absence of publication bias. Materials and Methods Databases were searched for studies involving the use of cfDNA to diagnose breast cancer. The studies were analyzed to determine sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, diagnostic odds ratio (DOR), and the summary receiver operating characteristic (SROC). Covariates were evaluated for effect on relative DOR. Deeks' funnel plots were generated to measure publication bias. Conclusions Our analysis suggests a promising diagnostic potential of using cfDNA for breast cancer screening, but this diagnostic method is not yet independently sufficient. Further work refining qualitative cfDNA assays will improve the correct diagnosis of breast cancers. PMID:28460452
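The diagnostic odds ratio used above relates sensitivity and specificity through the likelihood ratios. A minimal sketch; note that a DOR pooled across studies (as in the meta-analysis) is not in general equal to the DOR computed from pooled sensitivity and specificity:

```python
# Diagnostic odds ratio (DOR) from sensitivity and specificity:
# DOR = LR+ / LR-  where  LR+ = sens/(1-spec),  LR- = (1-sens)/spec.

def diagnostic_odds_ratio(sensitivity, specificity):
    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    lr_neg = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    return lr_pos / lr_neg

dor = diagnostic_odds_ratio(0.80, 0.80)   # (0.8/0.2)/(0.2/0.8) = 16
```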
Luo, Chuan; Li, Zhaofu; Li, Hengpeng; Chen, Xiaomin
2015-01-01
The application of hydrological and water quality models is an efficient approach to better understand the processes of environmental deterioration. This study evaluated the ability of the Annualized Agricultural Non-Point Source (AnnAGNPS) model to predict runoff, total nitrogen (TN) and total phosphorus (TP) loading in a typical small watershed of a hilly region near Taihu Lake, China. Runoff was calibrated and validated at both an annual and monthly scale, and parameter sensitivity analysis was performed for TN and TP before the two water quality components were calibrated. The results showed that the model satisfactorily simulated runoff at annual and monthly scales, both during calibration and validation processes. Additionally, results of parameter sensitivity analysis showed that the parameters Fertilizer rate, Fertilizer organic, Canopy cover and Fertilizer inorganic were more sensitive to TN output. In terms of TP, the parameters Residue mass ratio, Fertilizer rate, Fertilizer inorganic and Canopy cover were the most sensitive. Based on these sensitive parameters, calibration was performed. TN loading produced satisfactory results for both the calibration and validation processes, whereas the performance of TP loading was slightly poor. The simulation results showed that AnnAGNPS has the potential to be used as a valuable tool for the planning and management of watersheds. PMID:26364642
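The parameter screening described above can be sketched with a one-at-a-time relative sensitivity index; the toy nutrient-export model and parameter values below are illustrative, not AnnAGNPS itself:

```python
# One-at-a-time relative sensitivity index:
# S = (dY/Y) / (dX/X) for a fractional perturbation of one input.

def relative_sensitivity_index(model, params, name, perturb=0.10):
    y0 = model(params)
    p = dict(params)
    p[name] *= (1.0 + perturb)       # perturb a single parameter by +10%
    y1 = model(p)
    return ((y1 - y0) / y0) / perturb

# Toy model: TN load responds strongly to fertilizer, weakly to cover.
def tn_load(p):
    return 2.0 * p["fertilizer_rate"] + 0.5 * p["canopy_cover"]

base = {"fertilizer_rate": 100.0, "canopy_cover": 40.0}
s_fert = relative_sensitivity_index(tn_load, base, "fertilizer_rate")
s_cover = relative_sensitivity_index(tn_load, base, "canopy_cover")
# fertilizer dominates the output, so s_fert > s_cover
```

Ranking parameters by such an index is what identifies, for example, fertilizer rate as more sensitive than canopy cover before calibration effort is spent.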
Solar energy system economic evaluation for IBM System 3, Glendo, Wyoming
NASA Technical Reports Server (NTRS)
1980-01-01
This analysis was based on the technical and economic models in f-chart design procedures, with inputs based on the characteristics of each site. The evaluation reports the following parameters: present worth of system cost over a projected twenty-year life, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables was also investigated.
A flexible, interpretable framework for assessing sensitivity to unmeasured confounding.
Dorie, Vincent; Harada, Masataka; Carnegie, Nicole Bohme; Hill, Jennifer
2016-09-10
When estimating causal effects, unmeasured confounding and model misspecification are both potential sources of bias. We propose a method to simultaneously address both issues in the form of a semi-parametric sensitivity analysis. In particular, our approach incorporates Bayesian Additive Regression Trees into a two-parameter sensitivity analysis strategy that assesses sensitivity of posterior distributions of treatment effects to choices of sensitivity parameters. This results in an easily interpretable framework for testing for the impact of an unmeasured confounder that also limits the number of modeling assumptions. We evaluate our approach in a large-scale simulation setting and with high blood pressure data taken from the Third National Health and Nutrition Examination Survey. The model is implemented as open-source software, integrated into the treatSens package for the R statistical programming language. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Reevaluation of analytical methods for photogenerated singlet oxygen
Nakamura, Keisuke; Ishiyama, Kirika; Ikai, Hiroyo; Kanno, Taro; Sasaki, Keiichi; Niwano, Yoshimi; Kohno, Masahiro
2011-01-01
The aim of the present study is to compare different analytical methods for singlet oxygen and to discuss an appropriate way to evaluate the yield of singlet oxygen photogenerated from photosensitizers. Singlet oxygen photogenerated from rose bengal was evaluated by electron spin resonance analysis using sterically hindered amines, spectrophotometric analysis of 1,3-diphenylisobenzofuran oxidation, and analysis with a fluorescent probe (Singlet Oxygen Sensor Green®). All of the analytical methods could evaluate the relative yield of singlet oxygen. The sensitivity of the analytical methods was 1,3-diphenylisobenzofuran < electron spin resonance < Singlet Oxygen Sensor Green®. However, Singlet Oxygen Sensor Green® could be used only when the concentration of rose bengal was very low (<1 µM). In addition, since the absorption spectrum of 1,3-diphenylisobenzofuran is considerably changed by irradiation with a 405 nm laser, photosensitizers which are excited by light with a wavelength of around 400 nm, such as hematoporphyrin, cannot be used in the 1,3-diphenylisobenzofuran oxidation method. On the other hand, electron spin resonance analysis using a sterically hindered amine, especially 2,2,6,6-tetramethyl-4-piperidinol and 2,2,5,5-tetramethyl-3-pyrroline-3-carboxamide, had proper sensitivity and a wide detectable range for the yield of photogenerated singlet oxygen. Therefore, in photodynamic therapy, it is suggested that the relative yield of singlet oxygen generated by various photosensitizers can be evaluated properly by electron spin resonance analysis. PMID:21980223
Fan, Qin; Liu, Shuliang; Li, Juan; Huang, Tingting
2012-05-01
To analyze the antimicrobial susceptibility of lactic acid bacteria (LAB) from yogurt, and to provide references for evaluating the safety of LAB and screening safe strains. The sensitivity of 43 LAB strains, including 14 strains of Streptococcus thermophilus, 12 strains of Lactobacillus acidophilus, 9 strains of Lactobacillus bulgaricus and 8 strains of Bifidobacterium, to 22 antibiotics was tested by the agar plate dilution method. All 43 LAB strains were resistant to trimethoprim, nalidixic acid, ciprofloxacin, lomefloxacin, danofloxacin and polymyxin E. Their resistance to kanamycin, tetracycline, clindamycin, doxycycline and cephalothin varied. Sensitivity to the other antibiotics was high or moderate. All isolates were multidrug-resistant. The antimicrobial resistance of the tested LAB strains was comparatively serious, and continuous monitoring of their antimicrobial resistance and evaluation of their safety should be strengthened.
Sensitivity analysis of limit state functions for probability-based plastic design
NASA Technical Reports Server (NTRS)
Frangopol, D. M.
1984-01-01
The evaluation of the total probability of plastic collapse failure P_f for a highly redundant structure of random interdependent plastic moments acted on by random interdependent loads is a difficult and computationally very costly process. The evaluation of reasonable bounds to this probability requires the use of second moment algebra, which involves many statistical parameters. A computer program which selects the best strategy for minimizing the interval between upper and lower bounds of P_f is now in its final stage of development. The sensitivity of the resulting bounds of P_f to the various uncertainties involved in the computational process is analyzed. Response sensitivities for both mode and system reliability of an ideal plastic portal frame are shown.
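The classical first-order ("simple") bounds for a series system of collapse modes illustrate the kind of bounding discussed above: perfect dependence among modes gives the lower bound, independence gives the upper. The mode probabilities below are illustrative, not results from the study:

```python
# Simple bounds on the failure probability of a series (weakest-link)
# system of collapse modes with individual probabilities P_i:
#   max_i P_i  <=  P_f  <=  1 - prod_i (1 - P_i)

def simple_bounds(mode_probs):
    lower = max(mode_probs)                  # perfectly dependent modes
    survive = 1.0
    for p in mode_probs:
        survive *= (1.0 - p)                 # independent-modes survival
    return lower, 1.0 - survive

lo, hi = simple_bounds([0.010, 0.020, 0.005])
# lower = 0.020; upper = 1 - 0.99*0.98*0.995 ≈ 0.0347
```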
Estimating costs in the economic evaluation of medical technologies.
Luce, B R; Elixhauser, A
1990-01-01
The complexities and nuances of evaluating the costs associated with providing medical technologies are often underestimated by analysts engaged in economic evaluations. This article describes the theoretical underpinnings of cost estimation, emphasizing the importance of accounting for opportunity costs and marginal costs. The various types of costs that should be considered in an analysis are described; a listing of specific cost elements may provide a helpful guide to analysis. The process of identifying and estimating costs is detailed, and practical recommendations for handling the challenges of cost estimation are provided. The roles of sensitivity analysis and discounting are characterized, as are determinants of the types of costs to include in an analysis. Finally, common problems facing the analyst are enumerated with suggestions for managing these problems.
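The discounting step mentioned above converts future costs to present value. A minimal sketch (the cost stream and discount rate are illustrative, not recommendations from the article):

```python
# Present value of a stream of future costs, discounted at rate r.
# Here costs c_t are assumed incurred at the END of years t = 1..n:
#   PV = sum_t c_t / (1 + r)^t

def present_value(costs, rate):
    return sum(c / (1.0 + rate) ** t for t, c in enumerate(costs, start=1))

pv = present_value([100.0, 100.0], rate=0.05)
# 100/1.05 + 100/1.05^2 ≈ 95.24 + 90.70 = 185.94
```

The timing convention (start- vs end-of-year costs) and the choice of discount rate are exactly the analytic judgments the article flags for sensitivity analysis.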
Performance evaluation of a lossy transmission lines based diode detector at cryogenic temperature.
Villa, E; Aja, B; de la Fuente, L; Artal, E
2016-01-01
This work is focused on the design, fabrication, and performance analysis of a square-law Schottky diode detector based on lossy transmission lines working at cryogenic temperature (15 K). The design analysis of a microwave detector, based on a planar gallium-arsenide low effective Schottky barrier height diode, is reported, aimed at achieving large input return loss as well as flat sensitivity versus frequency. The designed circuit demonstrates good sensitivity as well as good return loss in a wide bandwidth at Ka-band, at both room (300 K) and cryogenic (15 K) temperatures. A good sensitivity of 1000 mV/mW and input return loss better than 12 dB have been achieved when it works as a zero-bias Schottky diode detector at room temperature; at cryogenic temperature the sensitivity increases to a minimum of 2200 mV/mW, though a DC bias current is then required.
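The mV/mW figure of merit quoted above is the detector's voltage sensitivity: output voltage divided by input RF power. A trivial sketch with illustrative measurements chosen to reproduce the quoted values (not the paper's raw data):

```python
# Detector voltage sensitivity (responsivity) in mV/mW:
#   S = V_out / P_in

def sensitivity_mv_per_mw(v_out_mv, p_in_mw):
    return v_out_mv / p_in_mw

room = sensitivity_mv_per_mw(10.0, 0.01)   # 10 mV out for 10 uW in -> 1000 mV/mW
cryo = sensitivity_mv_per_mw(22.0, 0.01)   # hypothetical 15 K reading -> 2200 mV/mW
```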
Dong, Fan; Shen, Yifan; Xu, Tianyuan; Wang, Xianjin; Gao, Fengbin; Zhong, Shan; Chen, Shanwen; Shen, Zhoujun
2018-03-21
Previous studies suggested that the measurement of urine fibronectin (Fn) could be a potential diagnostic test for bladder cancer (BCa). We conducted this meta-analysis to fully assess the diagnostic value of urine Fn for BCa detection. A systematic literature search in PubMed, ISI Web of Science, EMBASE, Cochrane Library, and CBM was carried out to identify eligible studies evaluating urine Fn in diagnosing BCa. Pooled sensitivity, specificity, and diagnostic odds ratio (DOR) with their 95% confidence intervals (CIs) were calculated, and summary receiver operating characteristic (SROC) curves were established. We applied the STATA 13.0, Meta-Disc 1.4, and RevMan 5.3 software to the meta-analysis. Eight separate studies with 744 bladder cancer patients were enrolled in this meta-analysis. The pooled sensitivity, specificity, and DOR were 0.80 (95%CI = 0.77-0.83), 0.79 (95%CI = 0.73-0.84), and 15.18 (95%CI = 10.07-22.87), respectively, and the area under the curve (AUC) of the SROC was 0.83 (95%CI = 0.79-0.86). The diagnostic power of a combined method (urine Fn combined with urine cytology) was also evaluated, and its sensitivity and AUC were significantly higher (0.86 (95%CI = 0.82-0.90) and 0.89 (95%CI = 0.86-0.92), respectively). Meta-regression along with subgroup analysis based on various covariates revealed the potential sources of the heterogeneity and the detailed diagnostic value of each subgroup. Sensitivity analysis indicated that the results were robust. No threshold effect or publication bias was found in this meta-analysis. Urine Fn may become a promising non-invasive biomarker for bladder cancer with relatively satisfactory diagnostic power, and the combination of urine Fn with cytology could be an alternative option for detecting BCa in clinical practice. The potential value of urine Fn still needs to be validated in large, multi-center, prospective studies.
Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs.
Hyytiäinen, Heli K; Mölsä, Sari H; Junnila, Jouni T; Laitinen-Vapaavuori, Outi M; Hielm-Björkman, Anna K
2013-04-08
Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither validity nor sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement in stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher's exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated.
Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems.
Ranking of physiotherapeutic evaluation methods as outcome measures of stifle functionality in dogs
2013-01-01
Background: Various physiotherapeutic evaluation methods are used to assess the functionality of dogs with stifle problems. Neither validity nor sensitivity of these methods has been investigated. This study aimed to determine the most valid and sensitive physiotherapeutic evaluation methods for assessing functional capacity in hind limbs of dogs with stifle problems and to serve as a basis for developing an indexed test for these dogs. A group of 43 dogs with unilateral surgically treated cranial cruciate ligament deficiency and osteoarthritic findings was used to test different physiotherapeutic evaluation methods. Twenty-one healthy dogs served as the control group and were used to determine normal variation in static weight bearing and range of motion. The protocol consisted of 14 different evaluation methods: visual evaluation of lameness, visual evaluation of diagonal movement, visual evaluation of functional active range of motion and difference in thrust of hind limbs via functional tests (sit-to-move and lie-to-move), movement in stairs, evaluation of hind limb muscle atrophy, manual evaluation of hind limb static weight bearing, quantitative measurement of static weight bearing of hind limbs with bathroom scales, and passive range of motion of hind limb stifle (flexion and extension) and tarsal (flexion and extension) joints using a universal goniometer. The results were compared with those from an orthopaedic examination, force plate analysis, radiographic evaluation, and a conclusive assessment. Congruity of the methods was assessed with a combination of three statistical approaches (Fisher’s exact test and two differently calculated proportions of agreeing observations), and the components were ranked from best to worst. Sensitivities of all of the physiotherapeutic evaluation methods against each standard were calculated.
Results: Evaluation of asymmetry in a sitting and lying position, assessment of muscle atrophy, manual and measured static weight bearing, and measurement of stifle passive range of motion were the most valid and sensitive physiotherapeutic evaluation methods. Conclusions: Ranking of the various physiotherapeutic evaluation methods was accomplished. Several of these methods can be considered valid and sensitive when examining the functionality of dogs with stifle problems. PMID:23566355
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldstein, Peter
2014-01-24
This report describes the sensitivity of predicted nuclear fallout to a variety of model input parameters, including yield, height of burst, particle and activity size distribution parameters, wind speed, wind direction, topography, and precipitation. We investigate sensitivity over a wide but plausible range of model input parameters. In addition, we investigate a specific example with a relatively narrow range to illustrate the potential for evaluating uncertainties in predictions when there are more precise constraints on model parameters.
A Fiscal Analysis of Fixed-Amount Federal Grants-in-Aid: The Case of Vocational Education.
ERIC Educational Resources Information Center
Patterson, Philip D., Jr.
A fiscal analysis of fixed-amount Federal grant programs using the criteria of effectiveness, efficiency, and equity is essential to an evaluation of the Federal grant structure. Measures of program need should be current, comparable over time and among states, and subjected to sensitivity analysis so that future grants can be estimated. Income…
Stability and sensitivity of ABR flow control protocols
NASA Astrophysics Data System (ADS)
Tsai, Wie K.; Kim, Yuseok; Chiussi, Fabio; Toh, Chai-Keong
1998-10-01
This tutorial paper surveys the important issues in stability and sensitivity analysis of ABR flow control in ATM networks. The stability and sensitivity issues are formulated in a systematic framework. Four main causes of instability in ABR flow control are identified: unstable control laws, temporal variation of available bandwidth under delayed feedback control, misbehaving components, and interactions between higher-layer protocols and ABR flow control. Popular rate-based ABR flow control protocols are evaluated. Stability and sensitivity are shown to be the fundamental issues when the network has dynamically varying bandwidth. Simulation results confirming the theoretical studies are provided. Open research problems are discussed.
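The delayed-feedback instability surveyed above can be illustrated with a toy discrete-time rate-control loop (the control law, gains, and capacities below are invented for the sketch, not taken from any surveyed protocol); the gain-delay product determines whether the source rate converges to the available bandwidth:

```python
def simulate_rate_control(gain, delay, capacity=100.0, steps=200):
    """Discrete-time explicit-rate control with delayed feedback:
    r[t+1] = r[t] + gain * (capacity - r[t - delay]).
    Returns the final absolute error |capacity - r|."""
    r = [10.0] * (delay + 1)
    for _ in range(steps):
        r.append(r[-1] + gain * (capacity - r[-1 - delay]))
    return abs(capacity - r[-1])

print(simulate_rate_control(0.1, 0))  # converges: modest gain, no feedback delay
print(simulate_rate_control(1.5, 2))  # diverges: same law, large gain plus delay
```

The same control law that is stable with instantaneous feedback oscillates and diverges once feedback is delayed, which is the core stability issue the paper formalizes.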
Dynamic sensitivity analysis of biological systems
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2008-01-01
Background: A mathematical model to understand, predict, control, or even design a real biological system is a central theme in systems biology. A dynamic biological system is always modeled as a nonlinear ordinary differential equation (ODE) system. How to simulate the dynamic behavior and dynamic parameter sensitivities of systems described by ODEs efficiently and accurately is a critical task. In many practical applications, e.g., fed-batch fermentation systems, the admissible system input (corresponding to independent variables of the system) can be time-dependent. The main difficulty in investigating the dynamic log gains of these systems is the infinite dimension due to the time-dependent input. Classical dynamic sensitivity analysis does not cover the dynamic log gains for this case. Results: We present an algorithm with adaptive step size control that can be used for computing the solution and dynamic sensitivities of an autonomous ODE system simultaneously. Although our algorithm is one of the decoupled direct methods for computing dynamic sensitivities of an ODE system, the step size determined by the model equations can be used to compute the time profile and dynamic sensitivities with moderate accuracy even when the sensitivity equations are stiffer than the model equations. To show that this algorithm can perform dynamic sensitivity analysis on very stiff ODE systems with moderate accuracy, it is implemented and applied to two sets of chemical reactions: pyrolysis of ethane and oxidation of formaldehyde. The accuracy of this algorithm is demonstrated by comparing the dynamic parameter sensitivities obtained from this new algorithm with those from the direct method with a Rosenbrock stiff integrator based on the indirect method.
The same dynamic sensitivity analysis was performed on an ethanol fed-batch fermentation system with a time-varying feed rate to evaluate the applicability of the algorithm to realistic models with time-dependent admissible input. Conclusion: By combining this accuracy with the efficiency of a decoupled direct method, our algorithm is an excellent choice for computing dynamic parameter sensitivities in stiff problems. We extend the scope of classical dynamic sensitivity analysis to the investigation of dynamic log gains of models with time-dependent admissible input. PMID:19091016
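The idea of integrating model and sensitivity equations together can be sketched on the simplest ODE, dy/dt = -k*y, whose forward sensitivity s = dy/dk obeys ds/dt = -y - k*s. This is a minimal fixed-step Euler sketch of the forward (direct) approach, not the paper's adaptive decoupled method:

```python
def solve_with_sensitivity(k, y0, t_end, dt=1e-4):
    """Integrate dy/dt = -k*y together with its forward sensitivity
    s = dy/dk, which satisfies ds/dt = -y - k*s (explicit Euler)."""
    y, s = y0, 0.0
    for _ in range(round(t_end / dt)):
        y, s = y + dt * (-k * y), s + dt * (-y - k * s)
    return y, s

y, s = solve_with_sensitivity(k=0.5, y0=1.0, t_end=1.0)
# Analytic check: y(t) = exp(-k*t) and dy/dk = -t*exp(-k*t), so s = -y at t = 1
```

Production codes replace the Euler steps with a stiff integrator and, as in the decoupled direct method, reuse the step size chosen for the model equations when advancing the sensitivity equations.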
Tsyganenko, A Ia; Kon', E V
2007-01-01
The study was conducted to evaluate the sensitivity to 44 antibiotics of pathogens isolated from 183 women with genital inflammatory diseases and to propose schemes of antibacterial treatment. Most of the pathogens (66.8%) occurred in associations. The probability of isolation of the main bacteria and sexually transmitted microorganisms in different associations was estimated. Using cluster analysis, all the tested antibiotics were divided into 3 groups, depending on their antimicrobial activity toward bacteria isolated both in monoculture and in associations. Furagin, cefotaxime, gentamicin, cefoperazone, ceftriaxone, ciprofloxacin, and pefloxacin, as well as cefazolin, zoxan, ofloxacin, and lomefloxacin, were shown to be the most effective antibiotics in vitro. The least activity was displayed by ectericid, chlorophillipt, and ampiox. These data should be considered when choosing the antibacterial treatment of genital inflammatory diseases.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lijuan; Gonder, Jeff; Burton, Evan
This study evaluates the costs and benefits associated with the use of a plug-in hybrid electric bus and determines its cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep analysis was performed over a number of different battery sizes, charging powers, and charging stations. The net present value was calculated for each vehicle design and provided the basis for the design evaluation. In all cases, given present-day economic assumptions, the conventional bus achieved the lowest net present value, while the optimal plug-in hybrid electric bus scenario reached lower lifetime costs than the hybrid electric bus. The study also performed parameter sensitivity analysis under low and high market potential assumptions. The net present value of the plug-in hybrid electric bus is close to that of the conventional bus.
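The net-present-value comparison above amounts to discounting each design's purchase and operating costs; a minimal sketch, with all dollar figures and the discount rate invented rather than taken from the study:

```python
def present_cost(rate, purchase, annual_cost, years):
    """Discounted lifetime cost: purchase price plus the present value
    of a constant annual operating cost (an ordinary annuity)."""
    annuity = (1 - (1 + rate) ** -years) / rate
    return purchase + annual_cost * annuity

# Illustrative 12-year comparison at a 5% discount rate
# (all dollar figures are invented, not from the study).
conventional = present_cost(0.05, 450_000, 60_000, 12)
plug_in_hybrid = present_cost(0.05, 700_000, 35_000, 12)
print(conventional < plug_in_hybrid)  # True: conventional has the lower lifetime cost
```

A sensitivity sweep then repeats this calculation over grids of battery size, charging power, and station cost assumptions.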
NASA Astrophysics Data System (ADS)
Mottyll, S.; Skoda, R.
2015-12-01
A compressible inviscid flow solver with a barotropic cavitation model is applied to two different ultrasonic horn set-ups and compared to hydrophone, shadowgraphy, and erosion test data. The statistical analysis of single collapse events in wall-adjacent flow regions allows the determination of the flow aggressiveness via load collectives (cumulative event rate vs collapse pressure), which show an exponential decrease in agreement with studies on hydrodynamic cavitation [1]. A post-processing projection of event rate and collapse pressure onto a reference grid reduces the grid dependency significantly. In order to evaluate the erosion-sensitive areas, a statistical analysis of transient wall loads is utilised. Predicted erosion-sensitive areas as well as the temporal pressure and vapour volume evolution are in good agreement with the experimental data.
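The load-collective idea (cumulative event rate versus collapse pressure) reduces to counting, for each pressure threshold, the collapse events at or above it per unit time; a minimal sketch with invented pressures and duration:

```python
def load_collective(collapse_pressures, thresholds, duration):
    """Cumulative event rate: for each pressure threshold, the number of
    collapse events with peak pressure >= threshold, per unit time."""
    return [sum(p >= th for p in collapse_pressures) / duration
            for th in thresholds]

rates = load_collective([12.0, 3.5, 8.1, 20.4], thresholds=[0, 5, 10],
                        duration=2.0)
# → [2.0, 1.5, 1.0] events per unit time (all numbers illustrative)
```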
Evaluation of Four Diagnostic Tests for Insulin Dysregulation in Adult Light-Breed Horses.
Dunbar, L K; Mielnicki, K A; Dembek, K A; Toribio, R E; Burns, T A
2016-05-01
Several tests have been evaluated in horses for quantifying insulin dysregulation to support a diagnosis of equine metabolic syndrome. Comparing the performance of these tests in the same horses will help clarify their accuracy in the diagnosis of equine insulin dysregulation. The aim of this randomized, prospective study was to evaluate the agreement between basal serum insulin concentrations (BIC), the oral sugar test (OST), the combined glucose-insulin test (CGIT), and the frequently sampled insulin-modified intravenous glucose tolerance test (FSIGTT) in 12 healthy, light-breed horses. Each of the above tests was performed on all 12 horses. Minimal model analysis of the FSIGTT was considered the reference standard and classified 7 horses as insulin resistant (IR) and 5 as insulin sensitive (IS). In contrast, BIC and OST assessment using conventional cut-off values classified all horses as IS. Kappa coefficients measuring agreement among BIC, OST, CGIT, and FSIGTT were poor to fair. Sensitivity of the CGIT (positive phase duration of the glucose curve >45 minutes) was 85.7% and specificity was 40%, whereas CGIT ([insulin]45 >100 μIU/mL) sensitivity and specificity were 28.5% and 100%, respectively. Area under the glucose curve (AUCg0-120) was significantly correlated among the OST, CGIT, and FSIGTT, but the Bland-Altman method and Lin's concordance coefficient showed a lack of agreement. Current criteria for diagnosis of insulin resistance using BIC and the OST are highly specific but lack sensitivity. The CGIT displayed better sensitivity and specificity, but modifications may be necessary to improve agreement with minimal model analysis. Copyright © 2016 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
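The kappa coefficients used above to measure between-test agreement come from a 2x2 agreement table; a minimal sketch of Cohen's kappa with invented counts:

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for two binary classifiers from the 2x2 agreement
    table: a = both positive, b and c = discordant, d = both negative."""
    n = a + b + c + d
    p_obs = (a + d) / n
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

print(cohens_kappa(5, 0, 0, 7))   # perfect agreement -> 1.0
print(cohens_kappa(1, 1, 1, 1))   # chance-level agreement -> 0.0
```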
Mendoza, Maria C.B.; Burns, Trudy L.; Jones, Michael P.
2009-01-01
Objectives: Case-deletion diagnostic methods are tools that allow identification of influential observations that may affect parameter estimates and model fitting conclusions. The goal of this paper was to develop two case-deletion diagnostics, the exact case deletion (ECD) and the empirical influence function (EIF), for detecting outliers that can affect results of sib-pair maximum likelihood quantitative trait locus (QTL) linkage analysis. Methods: Subroutines to compute the ECD and EIF were incorporated into the maximum likelihood QTL variance estimation components of the linkage analysis program MAPMAKER/SIBS. Performance of the diagnostics was compared in simulation studies that evaluated the proportion of outliers correctly identified (sensitivity), and the proportion of non-outliers correctly identified (specificity). Results: Simulations involving nuclear family data sets with one outlier showed EIF sensitivities approximated ECD sensitivities well for outlier-affected parameters. Sensitivities were high, indicating the outlier was identified a high proportion of the time. Simulations also showed the enormous computational time advantage of the EIF. Diagnostics applied to body mass index in nuclear families detected observations influential on the lod score and model parameter estimates. Conclusions: The EIF is a practical diagnostic tool that has the advantages of high sensitivity and quick computation. PMID:19172086
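The ECD/EIF contrast can be sketched on the simplest estimator, the sample mean, where the one-step influence approximation happens to be exact; for nonlinear estimators such as ML variance components the two diverge, which is what the simulations above compare:

```python
def exact_case_deletion(xs):
    """ECD for the sample mean: change in the estimate when each
    observation is deleted and the mean is recomputed."""
    n, m = len(xs), sum(xs) / len(xs)
    return [m - (sum(xs) - x) / (n - 1) for x in xs]

def empirical_influence(xs):
    """EIF-style approximation (x_i - mean)/(n - 1): one pass over the
    data, no refitting, which is the source of its speed advantage."""
    n, m = len(xs), sum(xs) / len(xs)
    return [(x - m) / (n - 1) for x in xs]

xs = [1.0, 2.0, 3.0, 40.0]           # the last value is an outlier
print(max(empirical_influence(xs)))  # the outlier dominates the influence
```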
Zhang, Yang; Shen, Jing; Li, Yu
2018-01-01
Assessing and quantifying atmospheric vulnerability is a key issue in urban environmental protection and management. This paper integrated the Analytic hierarchy process (AHP), fuzzy synthesis evaluation, and Geographic Information System (GIS) spatial analysis into an Exposure-Sensitivity-Adaptive capacity (ESA) framework to quantitatively assess atmospheric environment vulnerability in the Beijing-Tianjin-Hebei (BTH) region with spatial and temporal comparisons. The elaboration of the relationships between atmospheric environment vulnerability and the indices of exposure, sensitivity, and adaptive capacity enables analysis of the atmospheric environment vulnerability. Our findings indicate that the atmospheric environment vulnerability of 13 cities in the BTH region exhibits obvious spatial heterogeneity, which is caused by regional diversity in exposure, sensitivity, and adaptive capacity indices. The results of the atmospheric environment vulnerability assessment and the cause analysis can provide guidance for identifying key control regions and recognizing vulnerable indicators for study sites. The framework developed in this paper can also be replicated at different spatial and temporal scales using context-specific datasets to support environmental management. PMID:29342852
Zhang, Yang; Shen, Jing; Li, Yu
2018-01-13
Assessing and quantifying atmospheric vulnerability is a key issue in urban environmental protection and management. This paper integrated the Analytic hierarchy process (AHP), fuzzy synthesis evaluation, and Geographic Information System (GIS) spatial analysis into an Exposure-Sensitivity-Adaptive capacity (ESA) framework to quantitatively assess atmospheric environment vulnerability in the Beijing-Tianjin-Hebei (BTH) region with spatial and temporal comparisons. The elaboration of the relationships between atmospheric environment vulnerability and the indices of exposure, sensitivity, and adaptive capacity enables analysis of the atmospheric environment vulnerability. Our findings indicate that the atmospheric environment vulnerability of 13 cities in the BTH region exhibits obvious spatial heterogeneity, which is caused by regional diversity in exposure, sensitivity, and adaptive capacity indices. The results of the atmospheric environment vulnerability assessment and the cause analysis can provide guidance for identifying key control regions and recognizing vulnerable indicators for study sites. The framework developed in this paper can also be replicated at different spatial and temporal scales using context-specific datasets to support environmental management.
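The AHP step in such frameworks derives criterion weights from a pairwise comparison matrix via its principal eigenvector; a minimal power-iteration sketch (the 3x3 judgments below are invented, not the paper's):

```python
def ahp_weights(M, iters=100):
    """Principal-eigenvector weights of an AHP pairwise comparison
    matrix via power iteration (pure Python, no external libraries)."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Invented judgments comparing exposure vs sensitivity vs adaptive capacity
M = [[1, 3, 5],
     [1/3, 1, 3],
     [1/5, 1/3, 1]]
print([round(x, 3) for x in ahp_weights(M)])
```

A full AHP application would also compute the consistency ratio of M before accepting the weights.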
Comparison of two low sensitivity whiteners.
Callan, Richard S; Browning, William D; Downey, Mary C; Brackett, Martha G
2008-02-01
To evaluate two commercially available, doctor-supplied, patient-applied bleaching systems for their ability to whiten the maxillary anterior teeth without causing sensitivity. 46 participants were randomly assigned to one of two groups: one group received Rembrandt Xtra-Comfort and the other Nite White Excel 2Z (NW2Z). Bleaching stents were fabricated and the bleaching systems were used following the manufacturers' instructions. Participants recorded tray use and any sensitivity on a daily basis. Participants bleached for 2 weeks followed by 2 weeks of no bleaching. Color was evaluated at the first, second, and fourth week following the initial delivery of the bleaching trays. Color change was measured using the Vita Classic Shade Guide arranged by value. As a group, participants in the NW2Z group bleached for 302 days with a total of 48 days (16%) of sensitivity recorded. The Rembrandt Xtra-Comfort group bleached for 313 total days with 97 days (31%) of sensitivity recorded. The difference in sensitivity between the two products proved to be statistically significant (chi-square analysis, P ≤ 0.0001). The median shade change for both products following 2 weeks of active treatment was six tabs. At the 4-week evaluation, the median shade change was 5.5 and 6.0 tabs for Rembrandt and Nite White, respectively. There was no statistical difference between the products with respect to shade change.
The Nature and Variability of Ensemble Sensitivity Fields that Diagnose Severe Convection
NASA Astrophysics Data System (ADS)
Ancell, B. C.
2017-12-01
Ensemble sensitivity analysis (ESA) is a statistical technique that uses information from an ensemble of forecasts to reveal relationships between chosen forecast metrics and the larger atmospheric state at various forecast times. A number of studies have employed ESA from the perspectives of dynamical interpretation, observation targeting, and ensemble subsetting toward improved probabilistic prediction of high-impact events, mostly at synoptic scales. We tested ESA using convective forecast metrics at the 2016 HWT Spring Forecast Experiment to understand the utility of convective ensemble sensitivity fields in improving forecasts of severe convection and its individual hazards. The main purpose of this evaluation was to understand the temporal coherence and general characteristics of convective sensitivity fields toward future use in improving ensemble predictability within an operational framework. The magnitude and coverage of simulated reflectivity, updraft helicity, and surface wind speed were used as response functions, and the sensitivity of these functions to winds, temperatures, geopotential heights, and dew points at different atmospheric levels and at different forecast times was evaluated on a daily basis throughout the HWT Spring Forecast Experiment. These sensitivities were calculated within the Texas Tech real-time ensemble system, which possesses 42 members that run twice daily to 48-hr forecast time. Here we summarize both the findings regarding the nature of the sensitivity fields and the participants' evaluations of the utility of operational ESA. The future direction of ESA for operational use is also discussed.
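At its core, ESA estimates each sensitivity as the linear-regression slope of the response function on a state variable across ensemble members; a minimal sketch for one grid point (ensemble values invented):

```python
def ensemble_sensitivity(response, state):
    """dJ/dx estimated as the linear-regression slope of the response
    function J on a state variable x across ensemble members."""
    n = len(response)
    mj = sum(response) / n
    mx = sum(state) / n
    cov = sum((j - mj) * (x - mx) for j, x in zip(response, state)) / (n - 1)
    var = sum((x - mx) ** 2 for x in state) / (n - 1)
    return cov / var

# Invented 5-member ensemble: reflectivity coverage J vs 850-hPa dew point x
x = [12.0, 14.5, 13.2, 15.8, 11.1]
j = [0.20, 0.35, 0.28, 0.41, 0.15]
print(ensemble_sensitivity(j, x))
```

In practice this regression is evaluated at every grid point and forecast time, producing the sensitivity fields discussed above.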
Phosphorus component in AnnAGNPS
Yuan, Y.; Bingner, R.L.; Theurer, F.D.; Rebich, R.A.; Moore, P.A.
2005-01-01
The USDA Annualized Agricultural Non-Point Source Pollution model (AnnAGNPS) has been developed to aid in evaluation of watershed response to agricultural management practices. Previous studies have demonstrated the capability of the model to simulate runoff and sediment, but not phosphorus (P). The main purpose of this article is to evaluate the performance of AnnAGNPS on P simulation using comparisons with measurements from the Deep Hollow watershed of the Mississippi Delta Management Systems Evaluation Area (MDMSEA) project. A sensitivity analysis was performed to identify input parameters whose impact is the greatest on P yields. Sensitivity analysis results indicate that the most sensitive variables of those selected are initial soil P contents, P application rate, and plant P uptake. AnnAGNPS simulations of dissolved P yield do not agree well with observed dissolved P yield (Nash-Sutcliffe coefficient of efficiency of 0.34, R2 of 0.51, and slope of 0.24); however, AnnAGNPS simulations of total P yield agree well with observed total P yield (Nash-Sutcliffe coefficient of efficiency of 0.85, R2 of 0.88, and slope of 0.83). The difference in dissolved P yield may be attributed to limitations in model simulation of P processes. Uncertainties in input parameter selections also affect the model's performance.
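The Nash-Sutcliffe coefficient of efficiency reported above compares model error against the variance of the observations; a minimal sketch (the yield values are invented):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency: 1 is a perfect fit,
    0 means the model is no better than the mean of the observations."""
    m = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    sst = sum((o - m) ** 2 for o in obs)
    return 1 - sse / sst

obs = [0.10, 0.40, 0.25, 0.70]   # observed P yields (invented units)
sim = [0.12, 0.35, 0.30, 0.65]   # simulated P yields
print(round(nash_sutcliffe(obs, sim), 2))  # → 0.96
```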
Higashi, Akifumi; Dohi, Yoshihiro; Yamabe, Sayuri; Kinoshita, Hiroki; Sada, Yoshiharu; Kitagawa, Toshiro; Hidaka, Takayuki; Kurisu, Satoshi; Yamamoto, Hideya; Yasunobu, Yuji; Kihara, Yasuki
2017-11-01
Cardiopulmonary exercise testing (CPET) is useful for the evaluation of patients with suspected or confirmed pulmonary hypertension (PH). End-tidal carbon dioxide pressure (PETCO2) during exercise is reduced with elevated pulmonary artery pressure. However, the utility of ventilatory parameters from CPET for detecting PH remains unclear. We conducted a review of 155 patients who underwent right heart catheterization and CPET. Fifty-nine patients had PH [mean pulmonary arterial pressure (mPAP) ≥25 mmHg]. There was an inverse correlation between PETCO2 at the anaerobic threshold (AT) and mPAP (r = -0.66; P < 0.01). Multiple regression analysis showed that PETCO2 at the AT was independently associated with an elevated mPAP (P = 0.04). The sensitivity and specificity of CPET for PH were 80 and 86%, respectively, when the cut-off value identified by receiver operating characteristic curve analysis for PETCO2 at the AT was ≤34.7 mmHg. A combination of echocardiography and CPET improved the sensitivity in detecting PH without markedly reducing specificity (sensitivity 87%, specificity 85%). Evaluation of PETCO2 at the AT is useful for estimating pulmonary pressure. A combination of CPET and previous screening algorithms for PH may enhance the diagnostic ability for PH.
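The cut-off logic, calling a test positive when PETCO2 at the AT falls at or below a threshold, can be sketched as follows (the patient values are invented; only the 34.7 mmHg cut-off comes from the abstract):

```python
def sens_spec_at_cutoff(ph_values, no_ph_values, cutoff):
    """Sensitivity and specificity when value <= cutoff is called
    positive (low PETCO2 suggesting pulmonary hypertension)."""
    sens = sum(v <= cutoff for v in ph_values) / len(ph_values)
    spec = sum(v > cutoff for v in no_ph_values) / len(no_ph_values)
    return sens, spec

# Invented PETCO2 values (mmHg) at the anaerobic threshold
sens, spec = sens_spec_at_cutoff([28, 31, 33, 36], [35, 38, 40, 42],
                                 cutoff=34.7)
# → sensitivity 0.75, specificity 1.0

```

ROC analysis simply sweeps the cutoff over all observed values and picks the point that best balances the two.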
Cook, Karon F; Kallen, Michael A; Bombardier, Charles; Bamer, Alyssa M; Choi, Seung W; Kim, Jiseon; Salem, Rana; Amtmann, Dagmar
2017-01-01
To evaluate whether items of three measures of depressive symptoms function differently in persons with spinal cord injury (SCI) than in persons from a primary care sample. This study was a retrospective analysis of responses to the Patient Health Questionnaire depression scale, the Center for Epidemiologic Studies Depression scale, and the National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS®) version 1.0 eight-item depression short form 8b (PROMIS-D). The presence of differential item functioning (DIF) was evaluated using ordinal logistic regression. No items of any of the three target measures were flagged for DIF based on standard criteria. In a follow-up sensitivity analysis, the criterion was changed to make the analysis more sensitive to potential DIF. Scores were corrected for DIF flagged under this criterion. Minimal differences were found between the original scores and those corrected for DIF under the sensitivity criterion. The three depression screening measures evaluated in this study did not perform differently in samples of individuals with SCI compared to general and community samples. Transdiagnostic symptoms did not appear to spuriously inflate depression severity estimates when administered to people with SCI.
A GC-MS method for the detection and quantitation of ten major drugs of abuse in human hair samples.
Orfanidis, A; Mastrogianni, O; Koukou, A; Psarros, G; Gika, H; Theodoridis, G; Raikos, N
2017-03-15
A sensitive analytical method has been developed to identify and quantify major drugs of abuse (DOA), namely morphine, codeine, 6-monoacetylmorphine, cocaine, ecgonine methyl ester, benzoylecgonine, amphetamine, methamphetamine, methylenedioxymethamphetamine, and methylenedioxyamphetamine, in human hair. Hair samples were extracted with methanol under ultrasonication at 50°C after a three-step rinsing process to remove external contamination and dirt from the hair. Derivatization with BSTFA was selected in order to increase the detection sensitivity of the GC/MS analysis. Optimization of the derivatization parameters was based on experiments for the selection of derivatization time, temperature, and volume of derivatizing agent. Validation of the method included evaluation of linearity, which ranged from 2 to 350 ng/mg of hair for all DOA, as well as evaluation of sensitivity, accuracy, precision, and repeatability. Limits of detection ranged from 0.05 to 0.46 ng/mg of hair. The developed method was applied to the analysis of hair samples obtained from three human subjects, which were found positive for cocaine and opiates. Published by Elsevier B.V.
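The linearity and detection-limit checks in such a validation can be sketched with an ordinary least-squares calibration line and the common 3.3·sigma/slope convention for the LOD (the calibration points and blank noise below are invented):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def limit_of_detection(sd_blank, slope):
    """LOD by the common 3.3 * sigma / slope convention."""
    return 3.3 * sd_blank / slope

# Invented calibration: detector response vs concentration (ng/mg of hair)
conc = [2, 50, 100, 200, 350]
resp = [0.02, 0.50, 1.01, 1.99, 3.52]
slope, intercept = fit_line(conc, resp)
print(limit_of_detection(0.001, slope))  # LOD in ng/mg, illustrative only
```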
Holloway, Andrew J; Oshlack, Alicia; Diyagama, Dileepa S; Bowtell, David DL; Smyth, Gordon K
2006-01-01
Background: Concerns are often raised about the accuracy of microarray technologies and the degree of cross-platform agreement, but there are as yet no methods which can unambiguously evaluate precision and sensitivity for these technologies on a whole-array basis. Results: A methodology is described for evaluating the precision and sensitivity of whole-genome gene expression technologies such as microarrays. The method consists of an easy-to-construct titration series of RNA samples and an associated statistical analysis using non-linear regression. The method evaluates the precision and responsiveness of each microarray platform on a whole-array basis, i.e., using all the probes, without the need to match probes across platforms. An experiment is conducted to assess and compare four widely used microarray platforms. All four platforms are shown to have satisfactory precision but the commercial platforms are superior for resolving differential expression for genes at lower expression levels. The effective precision of the two-color platforms is improved by allowing for probe-specific dye-effects in the statistical model. The methodology is used to compare three data extraction algorithms for the Affymetrix platforms, demonstrating poor performance for the commonly used proprietary algorithm relative to the other algorithms. For probes which can be matched across platforms, the cross-platform variability is decomposed into within-platform and between-platform components, showing that platform disagreement is almost entirely systematic rather than due to measurement variability. Conclusion: The results demonstrate good precision and sensitivity for all the platforms, but highlight the need for improved probe annotation. They quantify the extent to which cross-platform measures can be expected to be less accurate than within-platform comparisons for predicting disease progression or outcome. PMID:17118209
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless we still find high concentrations in measurements, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification, and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
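For a purely additive model, the first-order variance-based (Sobol-type) sensitivity indices used in such an SA have a closed form, which makes a compact illustration; the linear surrogate below is an assumption for the sketch, not SHERPA's actual response:

```python
def first_order_indices(coeffs, variances):
    """First-order variance-based (Sobol) sensitivity indices for an
    additive model Y = sum(c_i * X_i) with independent inputs:
    S_i = c_i**2 * Var(X_i) / Var(Y)."""
    var_y = sum(c * c * v for c, v in zip(coeffs, variances))
    return [c * c * v / var_y for c, v in zip(coeffs, variances)]

# Invented linear surrogate: output driven by two emission inputs
print(first_order_indices([2.0, 1.0], [1.0, 1.0]))  # → [0.8, 0.2]
```

For non-additive models the indices are instead estimated by Monte Carlo sampling schemes, and interactions show up as a gap between first-order and total-order indices.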
Yu, Yuncui; Jia, Lulu; Meng, Yao; Hu, Lihua; Liu, Yiwei; Nie, Xiaolu; Zhang, Meng; Zhang, Xuan; Han, Sheng; Peng, Xiaoxia; Wang, Xiaoling
2018-04-01
Establishing a comprehensive clinical evaluation system is critical in enacting national drug policy and promoting rational drug use. In China, the 'Clinical Comprehensive Evaluation System for Pediatric Drugs' (CCES-P) project, which aims to compare drugs based on clinical efficacy and cost effectiveness to help decision makers, was recently proposed; therefore, a systematic and objective method is required to guide the process. An evidence-based multi-criteria decision analysis model that involved an analytic hierarchy process (AHP) was developed, consisting of nine steps: (1) select the drugs to be reviewed; (2) establish the evaluation criterion system; (3) determine the criterion weight based on the AHP; (4) construct the evidence body for each drug under evaluation; (5) select comparative measures and calculate the original utility score; (6) place a common utility scale and calculate the standardized utility score; (7) calculate the comprehensive utility score; (8) rank the drugs; and (9) perform a sensitivity analysis. The model was applied to the evaluation of three different inhaled corticosteroids (ICSs) used for asthma management in children (a total of 16 drugs with different dosage forms and strengths or different manufacturers). By applying the drug analysis model, the 16 ICSs under review were successfully scored and evaluated. Budesonide suspension for inhalation (drug ID number: 7) ranked the highest, with comprehensive utility score of 80.23, followed by fluticasone propionate inhaled aerosol (drug ID number: 16), with a score of 79.59, and budesonide inhalation powder (drug ID number: 6), with a score of 78.98. In the sensitivity analysis, the ranking of the top five and lowest five drugs remains unchanged, suggesting this model is generally robust. An evidence-based drug evaluation model based on AHP was successfully developed. 
The model incorporates sufficient utility and flexibility for aiding the decision-making process, and can be a useful tool for the CCES-P.
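The AHP weighting in step (3) can be sketched in a few lines. This is a minimal illustration, not the CCES-P implementation: the 3x3 pairwise comparison matrix and the three criteria it compares (efficacy, safety, cost) are invented for the example.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three criteria
# (efficacy, safety, cost); entries are Saaty-scale judgments,
# illustrative only, not values from the CCES-P study.
A = np.array([
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
])

# Geometric-mean method: weight_i is proportional to the geometric
# mean of row i of the comparison matrix.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check: lambda_max from A @ w ~= lambda_max * w,
# CI = (lambda_max - n) / (n - 1), CR = CI / RI (RI = 0.58 for n = 3);
# CR < 0.1 is conventionally taken as acceptable consistency.
n = A.shape[0]
lambda_max = (A @ weights / weights).mean()
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58
print(f"weights = {weights.round(3)}, CR = {cr:.3f}")
```

The criterion weights from step (3) would then multiply the standardized utility scores of steps (6)-(7) to give each drug's comprehensive utility score.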
Trame, MN; Lesko, LJ
2015-01-01
A systems pharmacology model typically integrates pharmacokinetic, biochemical network, and systems biology concepts into a unifying approach. It typically consists of a large number of parameters and reaction species that are interlinked based upon the underlying (patho)physiology and the mechanism of drug action. The more complex these models are, the greater the challenge of reliably identifying and estimating respective model parameters. Global sensitivity analysis provides an innovative tool that can meet this challenge. CPT Pharmacometrics Syst. Pharmacol. (2015) 4, 69–79; doi:10.1002/psp4.6; published online 25 February 2015 PMID:27548289
Haley, Nicholas J.; Siepker, Chris; Hoon-Hanks, Laura L.; Mitchell, Gordon; Walter, W. David; Manca, Matteo; Monello, Ryan J.; Powers, Jenny G.; Wild, Margaret A.; Hoover, Edward A.; Caughey, Byron; Richt, Jürgen A.; Fenwick, B.W.
2016-01-01
Chronic wasting disease (CWD), a transmissible spongiform encephalopathy of cervids, was first documented nearly 50 years ago in Colorado and Wyoming and has since been detected across North America and the Republic of Korea. The expansion of this disease makes the development of sensitive diagnostic assays and antemortem sampling techniques crucial for the mitigation of its spread; this is especially true in cases of relocation/reintroduction or prevalence studies of large or protected herds, where depopulation may be contraindicated. This study evaluated the sensitivity of the real-time quaking-induced conversion (RT-QuIC) assay of recto-anal mucosa-associated lymphoid tissue (RAMALT) biopsy specimens and nasal brushings collected antemortem. These findings were compared to results of immunohistochemistry (IHC) analysis of ante- and postmortem samples. RAMALT samples were collected from populations of farmed and free-ranging Rocky Mountain elk (Cervus elaphus nelsoni; n = 323), and nasal brush samples were collected from a subpopulation of these animals (n = 205). We hypothesized that the sensitivity of RT-QuIC would be comparable to that of IHC analysis of RAMALT and would correspond to that of IHC analysis of postmortem tissues. We found RAMALT sensitivity (77.3%) to be strongly concordant between RT-QuIC and IHC analysis. Sensitivity was lower when testing nasal brushings (34%), though both RAMALT and nasal brush test sensitivities were dependent on both the PRNP genotype and disease progression as determined by the obex score. These data suggest that RT-QuIC, like IHC analysis, is a relatively sensitive assay for detection of CWD prions in RAMALT biopsy specimens and, with further investigation, has potential for large-scale and rapid automated testing of antemortem samples for CWD.
Wahlström, Helene; Comin, Arianna; Isaksson, Mats; Deplazes, Peter
2016-01-01
Introduction: A semi-automated magnetic capture probe-based DNA extraction and real-time PCR method (MC-PCR), allowing for a more efficient large-scale surveillance of Echinococcus multilocularis occurrence, has been developed. The test sensitivity has previously been evaluated using the sedimentation and counting technique (SCT) as a gold standard. However, as the sensitivity of the SCT is not 1, the test characteristics of the MC-PCR were also evaluated using latent class analysis, a methodology not requiring a gold standard. Materials and methods: Test results, MC-PCR and SCT, from a previous evaluation of the MC-PCR using 177 foxes shot in the spring (n=108) and autumn 2012 (n=69) in high prevalence areas in Switzerland were used. Latent class analysis was used to estimate the test characteristics of the MC-PCR. Although it was not the primary aim of this study, estimates of the test characteristics of the SCT were also obtained. Results and discussion: This study showed that the sensitivity of the MC-PCR was 0.88 [95% posterior credible interval (PCI) 0.80–0.93], which was not significantly different from that of the SCT, 0.83 (95% PCI 0.76–0.88), which is currently considered the gold standard. The specificity of both tests was high, 0.98 (95% PCI 0.94–0.99) for the MC-PCR and 0.99 (95% PCI 0.99–1) for the SCT. In a previous study, using fox scats from a low prevalence area, the specificity of the MC-PCR was higher, 0.999 (95% PCI 0.997–1). One reason for the lower estimate of the specificity in this study could be that the MC-PCR detects DNA from infected but non-infectious rodents eaten by foxes. When using MC-PCR in low prevalence areas or areas free from the parasite, a positive result in the MC-PCR should be regarded as a true positive. Conclusion: The sensitivity of the MC-PCR (0.88) was comparable to the sensitivity of the SCT (0.83). PMID:26968153
Lee, Yeonok; Wu, Hulin
2012-01-01
Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of these unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta-model, based on the concept of the variance of conditional expectation (VCE). We propose evaluating the VCE analytically using the MARS model structure of univariate tensor-product functions, which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.
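The VCE idea behind this kind of global sensitivity analysis can be illustrated on a toy model. The sketch below estimates the first-order index Var(E[y|x1]) / Var(y) by crude binning rather than by fitting a MARS meta-model, and the function f is an invented stand-in for a model output at one time point, not an ODE model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model output y = f(x1, x2); illustrative only. The first-order
# sensitivity index of x1 is S1 = Var(E[y | x1]) / Var(y).
def f(x1, x2):
    return np.sin(x1) + 0.5 * x2**2

n = 100_000
x1 = rng.uniform(-np.pi, np.pi, n)
x2 = rng.uniform(-1.0, 1.0, n)
y = f(x1, x2)

# Crude VCE estimate: bin x1, average y within each bin, then take the
# count-weighted variance of the bin means around the global mean.
# (A MARS meta-model would smooth E[y | x1] instead of binning.)
bins = np.linspace(-np.pi, np.pi, 51)
idx = np.digitize(x1, bins) - 1
bin_means = np.array([y[idx == k].mean() for k in range(50)])
bin_counts = np.array([(idx == k).sum() for k in range(50)])
vce = np.average((bin_means - y.mean())**2, weights=bin_counts)
s1 = vce / y.var()
print(f"first-order index of x1 ~ {s1:.2f}")
```

For this f, almost all output variance comes through sin(x1), so the estimated index is close to 1; the analytic value is 0.5 / (0.5 + 1/45), about 0.96.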
Rastogi, S C; Lepoittevin, J P; Johansen, J D; Frosch, P J; Menné, T; Bruze, M; Dreier, B; Andersen, K E; White, I R
1998-12-01
Deodorants are one of the most frequently used types of cosmetics and are a source of allergic contact dermatitis. Therefore, a gas chromatography-mass spectrometry (GC-MS) analysis of 71 deodorants was performed to identify the fragrance and non-fragrance materials present in marketed deodorants. Furthermore, the sensitizing potential of these molecules was evaluated using structure-activity relationship (SAR) analysis. This was based on the presence of 1 or more chemically reactive site(s), in the chemical structure, associated with sensitizing potential. Among the many different substances used to formulate cosmetic products (over 3500), 226 chemicals were identified in a sample of 71 deodorants. 84 molecules were found to contain at least 1 structural alert, and 70 to belong to, or be susceptible to being metabolized into, the chemical group of aldehydes, ketones and alpha,beta-unsaturated aldehydes, ketones or esters. The combination of GC-MS and SAR analysis could be helpful in the selection of substances for supplementary investigations of sensitizing properties. Thus, it may be a valuable tool in the management of contact allergy to deodorants and for producing new deodorants with a decreased propensity to cause contact allergy.
Barbero, Umberto; Iannaccone, Mario; d'Ascenzo, Fabrizio; Barbero, Cristina; Mohamed, Abdirashid; Annone, Umberto; Benedetto, Sara; Celentani, Dario; Gagliardi, Marco; Moretti, Claudio; Gaita, Fiorenzo
2016-08-01
A non-invasive approach to define graft patency and stenosis in the follow-up of coronary artery bypass graft (CABG) patients may be an interesting alternative to coronary angiography. 64-slice coronary computed tomography is now a widespread non-invasive method that permits an accurate evaluation of coronary stenosis, owing to its high temporal and spatial resolution. However, its sensitivity and specificity in CABG evaluation have yet to be clearly defined, since published studies used different protocols and scanners. We collected all studies investigating patients with stable symptoms and previous CABG that reported a comparison between the diagnostic performances of invasive coronary angiography and 64-slice coronary computed tomography. As a result, sensitivity and specificity of 64-slice coronary computed tomography for CABG occlusion were 0.99 (95% CI 0.97-1.00) and 0.99 (95% CI: 0.99-1.00), with an area under the curve (AUC) of 0.99. 64-slice coronary computed tomography sensitivity and specificity for the presence of any CABG stenosis >50% were 0.98 (95% CI: 0.97-0.99) and 0.98 (95% CI: 0.96-0.98), while the AUC was 0.99. At meta-regression, neither age nor time from graft implantation had an effect on the sensitivity and specificity of 64-slice coronary computed tomography for the detection of significant CABG stenosis or occlusion. In conclusion, 64-slice coronary computed tomography confirmed its high sensitivity and specificity in the evaluation of CABG stenosis or occlusion.
To explore the potential of exhaled breath analysis by Gas Chromatography-Mass Spectrometry (GC-MS) as a non-invasive and sensitive approach to evaluate mesenteric ischemia in pigs.
Domestic pigs (n=3) were anesthetized with Guaifenesin/ Fentanyl/ Ketamine/ Xylazine...
Sensitivity analysis of hybrid thermoelastic techniques
W.A. Samad; J.M. Considine
2017-01-01
Stress functions have been used as a complementary tool to support experimental techniques, such as thermoelastic stress analysis (TSA) and digital image correlation (DIC), in an effort to evaluate the complete and separate full-field stresses of loaded structures. The need for such coupling between experimental data and stress functions is due to the fact that...
Gender Differences in Performance of Script Analysis by Older Adults
ERIC Educational Resources Information Center
Helmes, E.; Bush, J. D.; Pike, D. L.; Drake, D. G.
2006-01-01
Script analysis as a test of executive functions is presumed sensitive to cognitive changes seen with increasing age. Two studies evaluated if gender differences exist in performance on scripts for familiar and unfamiliar tasks in groups of cognitively intact older adults. In Study 1, 26 older adults completed male and female stereotypical…
Evaluation of Contamination Inspection and Analysis Methods through Modeling System Performance
NASA Technical Reports Server (NTRS)
Seasly, Elaine; Dever, Jason; Stuban, Steven M. F.
2016-01-01
Contamination is usually identified as a risk on the risk register for sensitive space systems hardware. Despite detailed, time-consuming, and costly contamination control efforts during assembly, integration, and test of space systems, contaminants are still found during visual inspections of hardware. Improved methods are needed to gather information during systems integration to catch potential contamination issues earlier and manage contamination risks better. This research explores evaluation of contamination inspection and analysis methods to determine optical system sensitivity to minimum detectable molecular contamination levels based on IEST-STD-CC1246E non-volatile residue (NVR) cleanliness levels. Potential future degradation of the system is modeled given chosen modules representative of optical elements in an optical system, minimum detectable molecular contamination levels for a chosen inspection and analysis method, and determining the effect of contamination on the system. By modeling system performance based on when molecular contamination is detected during systems integration and at what cleanliness level, the decision maker can perform trades amongst different inspection and analysis methods and determine if a planned method is adequate to meet system requirements and manage contamination risk.
Haley, Nicholas J.; Siepker, Chris; Walter, W. David; Thomsen, Bruce V.; Greenlee, Justin J.; Lehmkuhl, Aaron D.; Richt, Jürgen A.
2016-01-01
Chronic wasting disease (CWD), a transmissible spongiform encephalopathy of cervids, was first documented nearly 50 years ago in Colorado and Wyoming and has since spread to cervids in 23 states, two Canadian provinces, and the Republic of Korea. The expansion of this disease makes the development of sensitive diagnostic assays and antemortem sampling techniques crucial for the mitigation of its spread; this is especially true in cases of relocation/reintroduction of farmed or free-ranging deer and elk or surveillance studies of private or protected herds, where depopulation is contraindicated. This study sought to evaluate the sensitivity of the real-time quaking-induced conversion (RT-QuIC) assay by using recto-anal mucosa-associated lymphoid tissue (RAMALT) biopsy specimens and nasal brush samples collected antemortem from farmed white-tailed deer (n = 409). Antemortem findings were then compared to results from ante- and postmortem samples (RAMALT, brainstem, and medial retropharyngeal lymph nodes) evaluated by using the current gold standard in vitro assay, immunohistochemistry (IHC) analysis. We hypothesized that the sensitivity of RT-QuIC would be comparable to IHC analysis in antemortem tissues and would correlate with both the genotype and the stage of clinical disease. Our results showed that RAMALT testing by RT-QuIC assay had the highest sensitivity (69.8%) compared to that of postmortem testing, with a specificity of >93.9%. These data suggest that RT-QuIC, like IHC analysis, is an effective assay for detection of PrPCWD in rectal biopsy specimens and other antemortem samples and, with further research to identify more sensitive tissues, bodily fluids, or experimental conditions, has potential for large-scale and rapid automated testing for CWD diagnosis.
Kocer, Derya; Sarıguzel, Fatma M; Karakukcu, Cıgdem
2014-08-01
The microscopic analysis of urine is essential for the diagnosis of patients with urinary tract infections. Quantitative urine culture is the 'gold standard' method for definitive diagnosis of urinary tract infections, but it is labor-intensive and time-consuming, and does not provide same-day results. The aim of this study was to evaluate the analytical and diagnostic performance of the FUS200 (Changchun Dirui Industry, China), a new urine sedimentation analyzer, in comparison to urine culture as the reference method. We evaluated 1000 urine samples submitted for culture and urine analysis with a preliminary diagnosis of urinary tract infection. Cut-off values for the FUS200 were determined by comparing the results with urine cultures. The cut-off values determined by the receiver operating characteristic (ROC) curve technique, sensitivity, and specificity were calculated for bacteria and white blood cells (WBCs). Among the 1000 urine specimens submitted for culture, 637 cultures (63.7%) were negative and 363 (36.3%) were positive. The best cut-off values obtained from ROC analysis were 16/μL for bacteriuria (sensitivity: 82.3%, specificity: 58%) and 34/μL for WBCs (sensitivity: 72.3%, specificity: 65.2%). The areas under the curve (AUC) for the bacteria and WBC counts were 0.71 (95% CI: 0.67-0.74) and 0.72 (95% CI: 0.69-0.76), respectively. The most important requirement of a rapid diagnostic screening test is sensitivity, and from this perspective the bacteria recognition and quantification performed by the FUS200 analyzer showed unsatisfactory sensitivity. After further technical improvements in particle recognition and laboratory personnel training, the FUS200 might show better results.
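The ROC cut-off selection described above can be sketched as follows. The simulated count distributions are illustrative assumptions, not FUS200 data, and the cut-off here is chosen by maximizing Youden's J (one common criterion; the abstract does not state which criterion the study used).

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated bacteria counts (per uL) for culture-negative and
# culture-positive samples; distributions are invented for illustration,
# with group sizes matching the abstract (637 negative, 363 positive).
neg = rng.lognormal(mean=2.0, sigma=1.0, size=637)
pos = rng.lognormal(mean=3.5, sigma=1.0, size=363)
scores = np.concatenate([neg, pos])
truth = np.concatenate([np.zeros(637, bool), np.ones(363, bool)])

def sens_spec(cutoff):
    """Sensitivity and specificity when calling 'positive' at >= cutoff."""
    pred = scores >= cutoff
    sens = (pred & truth).sum() / truth.sum()
    spec = (~pred & ~truth).sum() / (~truth).sum()
    return sens, spec

# Sweep every observed value as a candidate cut-off and keep the one
# maximizing Youden's J = sensitivity + specificity - 1.
cutoffs = np.unique(scores)
j = np.array([sum(sens_spec(c)) - 1 for c in cutoffs])
best = cutoffs[j.argmax()]
sens, spec = sens_spec(best)
print(f"cutoff ~ {best:.1f}/uL, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```

Sweeping cut-offs this way traces the ROC curve point by point; the AUC values quoted in the abstract summarize the same curve across all cut-offs.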
Provençal, Simon; Bergeron, Onil; Leduc, Richard; Barrette, Nathalie
2016-04-01
The newly developed Universal Thermal Climate Index (UTCI), along with the physiological equivalent temperature (PET), the humidex (HX) and the wind chill index (WC), was calculated in Quebec City, Canada, a city with a strong seasonal climatic variability, over a 1-year period. The objective of this study is twofold: evaluate the operational benefits of implementing the UTCI for a climate monitoring program of public comfort and health awareness as opposed to relying on traditional and simple indices, and determine whether thermal comfort monitoring specific to dense urban neighborhoods is necessary to adequately fulfill the goals of the program. In order to do so, an analysis is performed to evaluate each of these indices' sensitivity to the meteorological variables that regulate them in different environments. Overall, the UTCI was found to be slightly more sensitive to mean radiant temperature, moderately more sensitive to humidity and much more sensitive to wind speed than the PET. This dynamic changed slightly depending on the environment and the season. In hot weather, the PET was found to be more sensitive to mean radiant temperature and therefore reached high values that could potentially be hazardous more frequently than the UTCI and the HX. In turn, the UTCI's stronger sensitivity to wind speed makes it a superior index to identify potentially hazardous weather in winter compared to the PET and the WC. Adopting the UTCI broadly would be an improvement over the traditionally popular HX and WC indices. The urban environment produced favorable conditions to sustain heat stress conditions, where the indices reached high values more frequently there than in suburban locations, which advocates for weather monitoring specific to denser urban areas.
Population and High-Risk Group Screening for Glaucoma: The Los Angeles Latino Eye Study
Francis, Brian A.; Vigen, Cheryl; Lai, Mei-Ying; Winarko, Jonathan; Nguyen, Betsy; Azen, Stanley
2011-01-01
Purpose. To evaluate the ability of various screening tests, both individually and in combination, to detect glaucoma in the general Latino population and high-risk subgroups. Methods. The Los Angeles Latino Eye Study is a population-based study of eye disease in Latinos 40 years of age and older. Participants (n = 6082) underwent Humphrey visual field testing (HVF), frequency doubling technology (FDT) perimetry, measurement of intraocular pressure (IOP) and central corneal thickness (CCT), and independent assessment of optic nerve vertical cup disc (C/D) ratio. Screening parameters were evaluated for three definitions of glaucoma based on optic disc, visual field, and a combination of both. Analyses were also conducted for high-risk subgroups (family history of glaucoma, diabetes mellitus, and age ≥65 years). Sensitivity, specificity, and receiver operating characteristic curves were calculated for those continuous parameters independently associated with glaucoma. Classification and regression tree (CART) analysis was used to develop a multivariate algorithm for glaucoma screening. Results. Preset cutoffs for screening parameters yielded a generally poor balance of sensitivity and specificity (sensitivity/specificity for IOP ≥21 mm Hg and C/D ≥0.8 was 0.24/0.97 and 0.60/0.98, respectively). Assessment of high-risk subgroups did not improve the sensitivity/specificity of individual screening parameters. A CART analysis using multiple screening parameters—C/D, HVF, and IOP—substantially improved the balance of sensitivity and specificity (sensitivity/specificity 0.92/0.92). Conclusions. No single screening parameter is useful for glaucoma screening. However, a combination of vertical C/D ratio, HVF, and IOP provides the best balance of sensitivity/specificity and is likely to provide the highest yield in glaucoma screening programs. PMID:21245400
Caetano, Ana C; Santa-Cruz, André; Rolanda, Carla
2016-01-01
Background: Rome III criteria add physiological criteria to the symptom-based criteria of chronic constipation (CC) for the diagnosis of defecatory disorders (DD). However, a gold-standard test is still lacking, and physiological examination is expensive and time-consuming. Aim: Evaluate the usefulness of two low-cost tests, digital rectal examination (DRE) and balloon expulsion test (BET), as screening or excluding tests for DD. Methods: We performed a systematic search in PUBMED and MEDLINE. We selected studies where constipated patients were evaluated by DRE or BET. Heterogeneity was assessed and random effect models were used to calculate the sensitivity, specificity, and negative predictive value (NPV) of the DRE and the BET. Results: Thirteen studies evaluating the BET and four studies evaluating the DRE (2329 patients) were selected. High heterogeneity (I² > 80%) among studies was demonstrated. The studies evaluating the BET showed a sensitivity and specificity of 67% and 80%, respectively. Regarding the DRE, a sensitivity of 80% and specificity of 84% were calculated. An NPV of 72% for the BET and an NPV of 64% for the DRE were estimated. The sensitivity and specificity were similar when we restricted the analysis to studies using Rome criteria to define CC. The BET seems to perform better when a cut-off time of 2 minutes is used and when it is compared with a combination of physiological tests. Considering the DRE, strict criteria seem to improve the sensitivity but not the specificity of the test. Conclusion: Neither of the low-cost tests seems suitable for screening or excluding DD.
Standard Information Models for Representing Adverse Sensitivity Information in Clinical Documents.
Topaz, M; Seger, D L; Goss, F; Lai, K; Slight, S P; Lau, J J; Nandigam, H; Zhou, L
2016-01-01
Adverse sensitivity (e.g., allergy and intolerance) information is a critical component of any electronic health record system. While several standards exist for structured entry of adverse sensitivity information, many clinicians record these data as free text. This study aimed to 1) identify and compare the existing common adverse sensitivity information models, and 2) evaluate the coverage of the adverse sensitivity information models for representing allergy information on a subset of inpatient and outpatient adverse sensitivity clinical notes. We compared four common adverse sensitivity information models: the Health Level 7 Allergy and Intolerance Domain Analysis Model, HL7-DAM; the Fast Healthcare Interoperability Resources, FHIR; the Consolidated Continuity of Care Document, C-CDA; and OpenEHR, and evaluated their coverage on a corpus of inpatient and outpatient notes (n = 120). We found that allergy specialists' notes had the highest frequency of adverse sensitivity attributes per note, whereas emergency department notes had the fewest attributes. Overall, the models had many similarities in the central attributes, which covered between 75% and 95% of the adverse sensitivity information contained within the notes. However, representations of some attributes (especially the value-sets) were not well aligned between the models, which is likely to present an obstacle for achieving data interoperability. Also, adverse sensitivity exceptions were not well represented among the information models. Although we found that common adverse sensitivity models cover a significant portion of relevant information in the clinical notes, our results highlight areas that need to be reconciled between the standards for data interoperability.
ERIC Educational Resources Information Center
Johnson, Erin Phinney; Pennington, Bruce F.; Lowenstein, Joanna H.; Nittrouer, Susan
2011-01-01
Research Design; Intervention; Biology; Biotechnology; Teaching Methods; Hands on Science; Professional Development; Comparative Analysis; Genetics; Evaluation; Pretests Posttests; Control Groups; Science Education; Science Instruction; Pedagogical Content Knowledge
Assessment of energy and economic performance of office building models: a case study
NASA Astrophysics Data System (ADS)
Song, X. Y.; Ye, C. T.; Li, H. S.; Wang, X. L.; Ma, W. B.
2016-08-01
Building energy consumption accounts for more than 37.3% of total energy consumption in China, while energy-saving buildings make up just 5% of the building stock. In this paper, in order to identify energy-saving potential, an office building in Southern China was selected as a test case for studying energy consumption characteristics. The base building model was developed in TRNSYS software and validated against data recorded during six days of field work in August-September 2013. Sensitivity analysis was conducted on the energy performance of building envelope retrofitting; five envelope parameters were analyzed to assess the thermal responses. Results indicated that the key sensitivity factors were the heat-transfer coefficient of the exterior walls (U-wall), the infiltration rate, and the shading coefficient (SC), whose sensitivity factors summed to about 89.32%. In addition, the results were evaluated in terms of energy and economic analysis, and the sensitivity findings were consistent with important results of previous studies. The cost-effectiveness method also improved the efficiency of investment management in building energy.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the wide range of sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
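For context, the Sobol indices that the SVR meta-model approximates can also be estimated directly by Monte Carlo. The sketch below uses the Saltelli pick-freeze scheme on the Ishigami function, a standard GSA benchmark; this brute-force estimator, not the paper's SVR method, is what meta-models are designed to avoid at high cost-per-run.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ishigami function: a standard analytical benchmark for GSA
# (inputs uniform on [-pi, pi]); not a model from the paper.
def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

n, d = 200_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var = np.concatenate([yA, yB]).var()

# Pick-freeze estimator: S_i ~= mean(yB * (y(A_Bi) - yA)) / Var(y),
# where A_Bi is matrix A with column i replaced by B's column i.
S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append((yB * (ishigami(ABi) - yA)).mean() / var)
print("first-order Sobol indices ~", np.round(S, 2))
```

The analytic values for these parameters are S1 = 0.314, S2 = 0.442, S3 = 0; note x3 has a zero first-order index despite interacting with x1, which is exactly the distinction between first-order and total indices.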
In this study we evaluated specificity, distribution, and sensitivity of Bacteroidales – (PF163 and PigBac1) and methanogen-based (P23-2) assays proposed to detect swine fecal pollution in environmental waters. The assays were tested against 220 fecal DNA extracts derived from t...
Yan, Liping; Xiao, Heping; Zhang, Qing
2016-01-01
Technological advances in nucleic acid amplification have led to breakthroughs in the early detection of PTB compared to traditional sputum smear tests. The sensitivity and specificity of loop-mediated isothermal amplification (LAMP), simultaneous amplification testing (SAT), and Xpert MTB/RIF for the diagnosis of pulmonary tuberculosis were evaluated. A critical review of previous studies of LAMP, SAT, and Xpert MTB/RIF for the diagnosis of pulmonary tuberculosis that used laboratory culturing as the reference method was carried out together with a meta-analysis. In 25 previous studies, the pooled sensitivity and specificity for the diagnosis of tuberculosis were 93% and 94% for LAMP, 96% and 88% for SAT, and 89% and 98% for Xpert MTB/RIF. The I² values for the pooled data were >80%, indicating significant heterogeneity. In the smear-positive subgroup analysis of LAMP, the sensitivity increased from 93% to 98% (I² = 2.6%), and specificity was 68% (I² = 38.4%). In the HIV-infected subgroup analysis of Xpert MTB/RIF, the pooled sensitivity and specificity were 79% (I² = 72.9%) and 99% (I² = 64.4%). In the HIV-negative subgroup analysis of Xpert MTB/RIF, the pooled sensitivity and specificity were 72% (I² = 49.6%) and 99% (I² = 64.5%). LAMP, SAT, and Xpert MTB/RIF had comparably high levels of sensitivity and specificity for the diagnosis of tuberculosis, with LAMP being highly sensitive for the diagnosis of smear-positive PTB. The cost-effectiveness of LAMP and SAT makes them particularly suitable tests for diagnosing PTB in developing countries.
Evaluation of 5 different labeled polymer immunohistochemical detection systems.
Skaland, Ivar; Nordhus, Marit; Gudlaugsson, Einar; Klos, Jan; Kjellevold, Kjell H; Janssen, Emiel A M; Baak, Jan P A
2010-01-01
Immunohistochemical staining is important for diagnosis and therapeutic decision making, but the results may vary when different detection systems are used. To analyze this, 5 different labeled polymer immunohistochemical detection systems, REAL EnVision, EnVision Flex, EnVision Flex+ (Dako, Glostrup, Denmark), NovoLink (Novocastra Laboratories Ltd, Newcastle Upon Tyne, UK) and UltraVision ONE (Thermo Fisher Scientific, Fremont, CA), were tested using 12 different, widely used mouse and rabbit primary antibodies detecting nuclear, cytoplasmic, and membrane antigens. Serial sections of multitissue blocks containing 4% formaldehyde-fixed, paraffin-embedded material were selected for their weak, moderate, and strong staining for each antibody. Specificity and sensitivity were evaluated by subjective scoring and digital image analysis. At optimal primary antibody dilution, digital image analysis showed that EnVision Flex+ was the most sensitive system (P < 0.005), with means of 8.3, 13.4, 20.2, and 41.8 gray scale values stronger staining than REAL EnVision, EnVision Flex, NovoLink, and UltraVision ONE, respectively. NovoLink was the second most sensitive system for mouse antibodies, but showed low sensitivity for rabbit antibodies. Due to low sensitivity, 2 cases with UltraVision ONE and 1 case with NovoLink stained falsely negative. None of the detection systems showed any distinct false positivity, but UltraVision ONE and NovoLink consistently showed weak background staining both in negative controls and at optimal primary antibody dilution. We conclude that there are significant differences in sensitivity, specificity, costs, and total assay time in the immunohistochemical detection systems currently in use.
Alrajab, Saadah; Youssef, Asser M; Akkus, Nuri I; Caldito, Gloria
2013-09-23
Ultrasonography is being increasingly utilized in acute care settings with expanding applications. Pneumothorax evaluation by ultrasonography is a fast, safe, easy and inexpensive alternative to chest radiographs. In this review, we provide a comprehensive analysis of the current literature comparing ultrasonography and chest radiography for the diagnosis of pneumothorax. We searched English-language articles in MEDLINE, EMBASE and Cochrane Library dealing with both ultrasonography and chest radiography for diagnosis of pneumothorax. In eligible studies that met strict inclusion criteria, we conducted a meta-analysis to evaluate the diagnostic accuracy of pleural ultrasonography in comparison with chest radiography for the diagnosis of pneumothorax. We reviewed 601 articles and selected 25 original research articles for detailed review. Only 13 articles met all of our inclusion criteria and were included in the final analysis. One study used lung sliding sign alone, 12 studies used lung sliding and comet tail signs, and 6 studies searched for lung point in addition to the other two signs. Ultrasonography had a pooled sensitivity of 78.6% (95% CI, 68.1 to 98.1) and a specificity of 98.4% (95% CI, 97.3 to 99.5). Chest radiography had a pooled sensitivity of 39.8% (95% CI, 29.4 to 50.3) and a specificity of 99.3% (95% CI, 98.4 to 100). Our meta-regression and subgroup analyses indicate that consecutive sampling of patients compared to convenience sampling provided higher sensitivity results for both ultrasonography and chest radiography. Consecutive versus nonconsecutive sampling and trauma versus nontrauma settings were significant sources of heterogeneity. In addition, subgroup analysis showed significant variations related to operator and type of probe used. Our study indicates that ultrasonography is more accurate than chest radiography for detection of pneumothorax. 
The results support the previous investigations in this field, add new valuable information obtained from subgroup analysis, and provide accurate estimates for the performance parameters of both bedside ultrasonography and chest radiography for pneumothorax evaluation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
This report describes a risk study of the Browns Ferry, Unit 1, nuclear plant. The study is one of four such studies sponsored by the NRC Office of Research, Division of Risk Assessment, as part of its Interim Reliability Evaluation Program (IREP), Phase II. This report is contained in four volumes: a main report and three appendixes. Appendix C generally describes the methods used to estimate accident sequence frequency values. Information is presented concerning the approach, example collection, failure data, candidate dominant sequences, uncertainty analysis, and sensitivity analysis.
Lin, Yuning; Chen, Ziqian; Yang, Xizhang; Zhong, Qun; Zhang, Hongwen; Yang, Li; Xu, Shangwen; Li, Hui
2013-12-01
The aim of this study is to evaluate the diagnostic performance of multidetector CT angiography (CTA) in depicting bronchial and non-bronchial systemic arteries in patients with haemoptysis and to assess whether this modality helps determine the feasibility of angiographic embolisation. Fifty-two patients with haemoptysis between January 2010 and July 2011 underwent both preoperative multidetector CTA and digital subtraction angiography (DSA) imaging. Diagnostic performance of CTA in depicting arteries causing haemoptysis was assessed on a per-patient and a per-artery basis. The feasibility of the endovascular treatment evaluated by CTA was analysed. Sensitivity, specificity, and positive and negative predictive values for those analyses were determined. Fifty patients were included in the artery-presence-number analysis. In the per-patient analysis, neither CTA (P = 0.25) nor DSA (P = 1.00) showed statistical difference in the detection of arteries causing haemoptysis. The sensitivity, specificity, and positive and negative predictive values were 94%, 100%, 100%, and 40%, respectively, for the presence of pathologic arteries evaluated by CTA, and 98%, 100%, 100%, and 67%, respectively, for DSA. On the per-artery basis, CTA correctly identified 97% (107/110). Fifty-two patients were included in the feasibility analysis. The performance of CTA in predicting the feasibility of angiographic embolisation was not statistically different from the treatment performed (P = 1.00). The sensitivity, specificity, and positive and negative predictive values were 96%, 80%, 98% and 67%, respectively, for CTA. Multidetector CTA is an accurate imaging method in depicting the presence and number of arteries causing haemoptysis. This modality is also useful for determining the feasibility of angiographic embolisation for haemoptysis. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
NASA Astrophysics Data System (ADS)
Louka, Panagiota; Petropoulos, George; Papanikolaou, Ioannis
2015-04-01
The ability to map the spatiotemporal distribution of extreme climatic conditions, such as frost, is a significant tool in successful agricultural management and decision making. Nowadays, with the development of Earth Observation (EO) technology, it is possible to obtain accurate, timely, and cost-effective information on the spatiotemporal distribution of frost conditions, particularly over large and otherwise inaccessible areas. The present study aimed at developing and evaluating a frost risk prediction model, exploiting primarily EO data from the MODIS and ASTER sensors and ancillary ground observation data. For the evaluation of our model, a region in north-western Greece was selected as the test site and a detailed sensitivity analysis was implemented. The agreement between the model predictions and the observed (remotely sensed) frost frequency obtained by the MODIS sensor was evaluated thoroughly. Also, detailed comparisons of the model predictions were performed against reference frost ground observations acquired from the Greek Agricultural Insurance Organization (ELGA) over a period of 10 years (2000-2010). Overall, the results demonstrated the ability of the model to reproduce frost conditions reasonably well, following largely explainable patterns with respect to the characteristics of the study site and local weather conditions. Implementation of our proposed frost risk model is based primarily on satellite imagery analysis, nowadays provided globally at no cost. It is also straightforward and computationally inexpensive, requiring much less effort than, for example, field surveying. Finally, the method can potentially be integrated with other high-resolution data available from both commercial and non-commercial vendors. Keywords: Sensitivity analysis, frost risk mapping, GIS, remote sensing, MODIS, Greece
Systems Analysis Of Advanced Coal-Based Power Plants
NASA Technical Reports Server (NTRS)
Ferrall, Joseph F.; Jennings, Charles N.; Pappano, Alfred W.
1988-01-01
Report presents an appraisal of integrated coal-gasification/fuel-cell power plants. Based on a study comparing fuel-cell technologies with each other and with coal-based alternatives, it recommends the most promising ones for research and development. Evaluates capital cost, cost of electricity, fuel consumption, and conformance with environmental standards. Analyzes sensitivity of cost of electricity to changes in fuel cost, to economic assumptions, and to level of technology. Recommends further evaluation of integrated coal-gasification/fuel-cell, integrated coal-gasification/combined-cycle, and pulverized-coal-fired plants. Concludes with appendixes detailing plant-performance models, subsystem-performance parameters, performance goals, cost bases, plant-cost data sheets, and plant sensitivity to fuel-cell performance.
Improved profiling of estrogen metabolites by orbitrap LC/MS
Li, Xingnan; Franke, Adrian A.
2015-01-01
Estrogen metabolites are important biomarkers to evaluate cancer risks and metabolic diseases. Due to their low physiological levels, a sensitive and accurate method is required, especially for the quantitation of unconjugated forms of endogenous steroids and their metabolites in humans. Here, we evaluated various derivatives of estrogens for improved analysis by orbitrap LC/MS in human serum samples. A new chemical derivatization reagent was applied to modify phenolic steroids to form 1-methylimidazole-2-sulfonyl adducts. The method significantly improves sensitivity 2- to 100-fold by full-scan MS and targeted selected ion monitoring MS over other derivatization methods, including dansyl, picolinoyl, and pyridine-3-sulfonyl products. PMID:25543003
NASA Astrophysics Data System (ADS)
Ireland, Gareth; Petropoulos, George P.; Carlson, Toby N.; Purdy, Sarah
2015-04-01
Sensitivity analysis (SA) is an integral and important validation step for a computer simulation model before it is used to perform any kind of analysis. In the present work, we report the results from a SA performed on the SimSphere Soil Vegetation Atmosphere Transfer (SVAT) model utilising a cutting-edge, robust Global Sensitivity Analysis (GSA) approach based on the use of the Gaussian Emulation Machine for Sensitivity Analysis (GEM-SA) tool. The sensitivity of the following model outputs was evaluated: the ambient CO2 concentration and the rate of CO2 uptake by the plant, the ambient O3 concentration, the flux of O3 from the air to the plant/soil boundary, and the flux of O3 taken up by the plant alone. The most sensitive model inputs for the majority of model outputs were related to the structural properties of vegetation, namely the Leaf Area Index, Fractional Vegetation Cover, Cuticle Resistance, and Vegetation Height. The external CO2 concentration in the leaf and the O3 concentration in the air also exhibited significant influence on model outputs. This work presents a very important step towards an all-inclusive evaluation of SimSphere. Indeed, results from this study contribute decisively towards establishing its capability as a useful teaching and research tool in modelling Earth's land surface interactions. This is of considerable importance in light of the rapidly expanding use of this model worldwide, which also includes research conducted by various Space Agencies examining its synergistic use with Earth Observation data towards the development of operational products at a global scale. This research was supported by the European Commission Marie Curie Re-Integration Grant "TRANSFORM-EO". SimSphere is currently maintained and freely distributed by the Department of Geography and Earth Sciences at Aberystwyth University (http://www.aber.ac.uk/simsphere).
Keywords: CO2 flux, ambient CO2, O3 flux, SimSphere, Gaussian process emulators, BACCO GEM-SA, TRANSFORM-EO.
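The variance-based GSA described above can be illustrated with a minimal Monte Carlo estimator of first-order Sobol indices. This is a sketch only: GEM-SA performs the analysis via Gaussian-process emulation rather than brute-force sampling, and `toy_model` below is a hypothetical stand-in, not the SimSphere SVAT model.

```python
import numpy as np

def first_order_sobol(model, n_inputs, n_samples=20000, seed=0):
    """Saltelli-style pick-freeze estimator of the first-order Sobol
    indices S_i = Var(E[Y|X_i]) / Var(Y), for inputs uniform on [0, 1]."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_inputs))  # two independent input matrices
    B = rng.uniform(size=(n_samples, n_inputs))
    yA, yB = model(A), model(B)
    var_y = np.concatenate([yA, yB]).var()
    indices = []
    for i in range(n_inputs):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                      # "freeze" every input except X_i
        yABi = model(ABi)
        v_i = np.mean(yB * (yABi - yA))          # estimator of Var(E[Y|X_i])
        indices.append(v_i / var_y)
    return np.array(indices)

def toy_model(x):
    """Additive toy response dominated by the first input."""
    return 5.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2]

S = first_order_sobol(toy_model, n_inputs=3)     # roughly [0.96, 0.04, 0.0]
```

For this additive toy model the indices sum to about 1; interactions between inputs would show up as a shortfall from 1, which is what the emulator-based approach quantifies far more cheaply for an expensive model.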
Structural reliability methods: Code development status
NASA Astrophysics Data System (ADS)
Millwater, Harry R.; Thacker, Ben H.; Wu, Y.-T.; Cruse, T. A.
1991-05-01
The Probabilistic Structures Analysis Method (PSAM) program integrates state of the art probabilistic algorithms with structural analysis methods in order to quantify the behavior of Space Shuttle Main Engine structures subject to uncertain loadings, boundary conditions, material parameters, and geometric conditions. An advanced, efficient probabilistic structural analysis software program, NESSUS (Numerical Evaluation of Stochastic Structures Under Stress) was developed as a deliverable. NESSUS contains a number of integrated software components to perform probabilistic analysis of complex structures. A nonlinear finite element module NESSUS/FEM is used to model the structure and obtain structural sensitivities. Some of the capabilities of NESSUS/FEM are shown. A Fast Probability Integration module NESSUS/FPI estimates the probability given the structural sensitivities. A driver module, PFEM, couples the FEM and FPI. NESSUS, version 5.0, addresses component reliability, resistance, and risk.
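The failure probabilities that NESSUS/FPI obtains by fast probability integration can be illustrated, far less efficiently, by brute-force Monte Carlo on a toy stress-strength limit state g = R − S. The normal distributions below are invented for illustration and are not SSME data.

```python
import numpy as np

def failure_probability(n=1_000_000, seed=1):
    """Crude Monte Carlo estimate of P(stress > strength) for a
    stress-strength limit state with hypothetical normal variables."""
    rng = np.random.default_rng(seed)
    strength = rng.normal(500.0, 50.0, n)  # resistance R (hypothetical units)
    stress = rng.normal(300.0, 60.0, n)    # load effect S
    return np.mean(stress > strength)      # fraction of sampled failures

pf = failure_probability()
```

For these assumed normals the margin R − S is itself normal, so the exact answer is Φ(−200/√(50² + 60²)) ≈ 0.005; fast probability integration exploits exactly this kind of structure to avoid the million samples used here.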
Simental-Mendía, Luis E; Sahebkar, Amirhossein; Rodríguez-Morán, Martha; Guerrero-Romero, Fernando
2016-09-01
A systematic review and meta-analysis was conducted to evaluate the effect of oral magnesium supplementation on insulin sensitivity and glucose control in both diabetic and non-diabetic individuals. PubMed-Medline, SCOPUS, Web of Science and Google Scholar databases were searched (from inception to November 25, 2015) to identify RCTs evaluating the effect of magnesium on insulin sensitivity and glucose control. A random-effects model and the generic inverse-variance method were used to compensate for the heterogeneity of studies. Publication bias, sensitivity analysis, and meta-regression assessments were conducted using standard methods. The impact of magnesium supplementation on plasma concentrations of glucose, glycated hemoglobin (HbA1c), insulin, and the HOMA-IR index was assessed in 22, 14, 12 and 10 treatment arms, respectively. A significant effect of magnesium supplementation was observed on the HOMA-IR index (WMD: -0.67, 95% CI: -1.20, -0.14, p=0.013) but not on plasma glucose (WMD: -0.20 mmol/L, 95% CI: -0.45, 0.05, p=0.119), HbA1c (WMD: 0.018 mmol/L, 95% CI: -0.10, 0.13, p=0.756), or insulin (WMD: -2.22 mmol/L, 95% CI: -9.62, 5.17, p=0.556). A subgroup analysis comparing magnesium supplementation durations of <4 months versus ≥4 months exhibited a significant difference for fasting glucose concentrations (p<0.001) and HOMA-IR (p=0.001) in favor of the latter subgroup. Magnesium supplementation for ≥4 months significantly improves the HOMA-IR index and fasting glucose, in both diabetic and non-diabetic subjects. The present findings suggest that magnesium may be a beneficial supplement in glucose metabolic disorders. Copyright © 2016 Elsevier Ltd. All rights reserved.
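The random-effects, generic inverse-variance pooling described above can be sketched with the DerSimonian-Laird estimator. The four effect sizes and variances below are hypothetical, not the trial data from the review.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool weighted mean differences under a DerSimonian-Laird
    random-effects model (generic inverse-variance method)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical HOMA-IR mean differences and variances from four trials
pooled, ci = dersimonian_laird([-0.9, -0.4, -1.1, -0.2], [0.04, 0.09, 0.16, 0.05])
```

When the heterogeneity statistic Q is below its degrees of freedom, tau² is truncated at zero and the model collapses to the fixed-effect inverse-variance pooling.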
Prabitha, Vasumathi Gopala; Suchetha, Sambasivan; Jayanthi, Jayaraj Lalitha; Baiju, Kamalasanan Vijayakumary; Rema, Prabhakaran; Anuraj, Koyippurath; Mathews, Anita; Sebastian, Paul; Subhash, Narayanan
2016-01-01
Diffuse reflectance (DR) spectroscopy is a non-invasive, real-time, and cost-effective tool for early detection of malignant changes in squamous epithelial tissues. The present study aims to evaluate the diagnostic power of diffuse reflectance spectroscopy for non-invasive discrimination of cervical lesions in vivo. A clinical trial was carried out on 48 sites in 34 patients by recording DR spectra using a point-monitoring device with white light illumination. The acquired data were analyzed and classified using multivariate statistical analysis based on principal component analysis (PCA) and linear discriminant analysis (LDA). Diagnostic accuracies were validated using random number generators. The receiver operating characteristic (ROC) curves were plotted for evaluating the discriminating power of the proposed statistical technique. An algorithm was developed and used to classify non-diseased (normal) from diseased sites (abnormal) with a sensitivity of 72 % and specificity of 87 %. While low-grade squamous intraepithelial lesion (LSIL) could be discriminated from normal with a sensitivity of 56 % and specificity of 80 %, and high-grade squamous intraepithelial lesion (HSIL) from normal with a sensitivity of 89 % and specificity of 97 %, LSIL could be discriminated from HSIL with 100 % sensitivity and specificity. The areas under the ROC curves were 0.993 (95 % confidence interval (CI) 0.0 to 1) and 1 (95 % CI 1) for the discrimination of HSIL from normal and HSIL from LSIL, respectively. The results of the study show that DR spectroscopy could be used along with multivariate analytical techniques as a non-invasive technique to monitor cervical disease status in real time.
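A minimal numpy-only sketch of the PCA + LDA pipeline the study describes, run on synthetic "spectra". The data, dimensionality, and midpoint threshold rule are invented for illustration; the clinical spectra and validation scheme are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "spectra": 40 normal and 40 diseased sites, 50 wavelength bands
normal = rng.normal(0.0, 1.0, (40, 50))
diseased = rng.normal(0.6, 1.0, (40, 50))
X = np.vstack([normal, diseased])
y = np.array([0] * 40 + [1] * 40)

# PCA: project mean-centred spectra onto the top principal components
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ vt[:3].T                       # keep 3 principal components

# Two-class LDA (Fisher discriminant) in the PCA score space
m0, m1 = scores[y == 0].mean(axis=0), scores[y == 1].mean(axis=0)
Sw = np.cov(scores[y == 0].T) + np.cov(scores[y == 1].T)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)             # discriminant direction
proj = scores @ w
pred = (proj > proj.mean()).astype(int)      # toy midpoint decision rule

sens = np.mean(pred[y == 1] == 1)            # sensitivity on diseased sites
spec = np.mean(pred[y == 0] == 0)            # specificity on normal sites
```

In practice the discriminant scores would feed an ROC analysis (as in the study) rather than a fixed midpoint threshold, and accuracy would be assessed on held-out sites rather than the training set used in this sketch.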
Custodio, Nilton; Alva-Diaz, Carlos; Becerra-Becerra, Yahaira; Montesinos, Rosa; Lira, David; Herrera-Pérez, Eder; Castro-Suárez, Sheila; Cuenca-Alfaro, José; Valeriano-Lorenzo, Elizabeth
2016-01-01
To evaluate the performance of the clock drawing test, Manos version (PDR-M), and the Mini Mental State Examination, Peruvian version (MMSE), in detecting dementia in an urban community-based sample from Lima, Peru. This study is a secondary analysis of a database: observational, analytical, and cross-sectional, with the combined clinical and neuropsychological evaluations as the gold standard. Test performance was evaluated individually and in combination. Data were obtained from a prevalence study conducted in 2008 in Cercado de Lima. For the evaluation of patients with dementia of any kind, the MMSE showed sensitivity of 64.1%, specificity of 84.1%, PPV of 24.4%, NPV of 96.7%, PLR of 4.03, and NLR of 0.43. The PDR-M showed sensitivity of 89.3%, specificity of 98.1%, PPV of 79.3%, NPV of 99.1%, PLR of 47.79, and NLR of 0.11. When both tests were applied and at least one of them was positive, they showed sensitivity of 98.1%, specificity of 84.1%, PPV of 33.1%, NPV of 99.8%, PLR of 6.17, and NLR of 0.02. In separate analyses of Alzheimer-type dementia and non-Alzheimer dementia, the parameter values did not differ substantially from those obtained for dementia of any kind. The combination of MMSE and PDR-M shows good discriminative ability to detect moderate and severe dementia in an urban community population in Lima.
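The sensitivity, specificity, predictive values, and likelihood ratios reported above all derive from a single 2x2 table of test result against disease status; a small helper makes the relationships explicit (the counts in the example are hypothetical).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Screening-test metrics from a 2x2 table of test result vs. disease."""
    sens = tp / (tp + fn)        # sensitivity
    spec = tn / (tn + fp)        # specificity
    ppv = tp / (tp + fp)         # positive predictive value
    npv = tn / (tn + fn)         # negative predictive value
    plr = sens / (1 - spec)      # positive likelihood ratio
    nlr = (1 - sens) / spec      # negative likelihood ratio
    return sens, spec, ppv, npv, plr, nlr

# Hypothetical counts: 90 true positives, 10 false positives,
# 10 false negatives, 890 true negatives
m = diagnostic_metrics(90, 10, 10, 890)
```

Note how PPV and NPV, unlike the likelihood ratios, depend on prevalence: the same sensitivity and specificity give a much lower PPV when, as in the MMSE figures above, the condition is rare in the sampled population.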
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, E.P.; Johnson, K.I.; Simonen, F.A.
The Vessel Integrity Simulation Analysis (VISA-II) code was developed to allow calculations of the failure probability of a reactor pressure vessel subject to defined pressure/temperature transients. A version of the code, revised by Pacific Northwest Laboratory for the US Nuclear Regulatory Commission, was used to evaluate the sensitivities of calculated through-wall flaw probability to material, flaw, and calculational assumptions. Probabilities were more sensitive to flaw assumptions than to material or calculational assumptions. Alternative flaw assumptions changed the probabilities by one to two orders of magnitude, whereas alternative material assumptions typically changed the probabilities by a factor of two or less. Flaw shape, flaw through-wall position, and flaw inspection were among the sensitivities examined. Material property sensitivities included the assumed distributions in copper content and fracture toughness. Methods of modeling flaw propagation that were evaluated included arrest/reinitiation toughness correlations, multiple toughness values along the length of a flaw, flaw jump distance for each computer simulation, and added error in estimating irradiated properties caused by the trend-curve correlation error.
Wu, Henry M; Cordeiro, Soraia M; Harcourt, Brian H; Carvalho, Mariadaglorias; Azevedo, Jailton; Oliveira, Tainara Q; Leite, Mariela C; Salgado, Katia; Reis, Mitermayer G; Plikaytis, Brian D; Clark, Thomas A; Mayer, Leonard W; Ko, Albert I; Martin, Stacey W; Reis, Joice N
2013-01-22
Although cerebrospinal fluid (CSF) culture is the diagnostic reference standard for bacterial meningitis, its sensitivity is limited, particularly when antibiotics were previously administered. CSF Gram staining and real-time PCR are theoretically less affected by antibiotics; however, it is difficult to evaluate these tests with an imperfect reference standard. CSF from patients with suspected meningitis from Salvador, Brazil were tested with culture, Gram stain, and real-time PCR using S. pneumoniae, N. meningitidis, and H. influenzae specific primers and probes. An antibiotic detection disk bioassay was used to test for the presence of antibiotic activity in CSF. The diagnostic accuracy of tests were evaluated using multiple methods, including direct evaluation of Gram stain and real-time PCR against CSF culture, evaluation of real-time PCR against a composite reference standard, and latent class analysis modeling to evaluate all three tests simultaneously. Among 451 CSF specimens, 80 (17.7%) had culture isolation of one of the three pathogens (40 S. pneumoniae, 36 N. meningitidis, and 4 H. influenzae), and 113 (25.1%) were real-time PCR positive (51 S. pneumoniae, 57 N. meningitidis, and 5 H. influenzae). Compared to culture, real-time PCR sensitivity and specificity were 95.0% and 90.0%, respectively. In a latent class analysis model, the sensitivity and specificity estimates were: culture, 81.3% and 99.7%; Gram stain, 98.2% and 98.7%; and real-time PCR, 95.7% and 94.3%, respectively. Gram stain and real-time PCR sensitivity did not change significantly when there was antibiotic activity in the CSF. Real-time PCR and Gram stain were highly accurate in diagnosing meningitis caused by S. pneumoniae, N. meningitidis, and H. influenzae, though there were few cases of H. influenzae. Furthermore, real-time PCR and Gram staining were less affected by antibiotic presence and might be useful when antibiotics were previously administered. 
Gram staining, which is inexpensive and commonly available, should be encouraged in all clinical settings.
Sensitivity Analysis of Nuclide Importance to One-Group Neutron Cross Sections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sekimoto, Hiroshi; Nemoto, Atsushi; Yoshimura, Yoshikane
The importance of nuclides is useful when investigating nuclide characteristics in a given neutron spectrum. However, it is derived using one-group microscopic cross sections, which may contain large errors or uncertainties. The sensitivity coefficient shows the effect of these errors or uncertainties on the importance. The equations for calculating sensitivity coefficients of importance to one-group nuclear constants are derived using the perturbation method. Numerical values are also evaluated for some important cases for fast and thermal reactor systems. Many characteristics of the sensitivity coefficients follow from the derived equations and numerical results. The matrix of sensitivity coefficients appears diagonally dominant; however, this is not always satisfied in its detailed structure. The detailed structure of the matrix and the characteristics of the coefficients are given. Using the obtained sensitivity coefficients, some demonstration calculations have been performed. The effects of error and uncertainty in nuclear data, and of changes in the one-group cross-section input caused by fuel design changes through the neutron spectrum, are investigated. These calculations show that the sensitivity coefficient is useful when evaluating the error or uncertainty of nuclide importance caused by cross-section data error or uncertainty, and when checking the effectiveness of fuel cell or core design changes for improving neutron economy.
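In the usual normalized (logarithmic-derivative) convention of perturbation theory, a sensitivity coefficient of the kind discussed above takes the following form; the notation here is assumed, since the report's own symbols are not shown:

```latex
S_{ij} \;=\; \frac{\sigma_j}{w_i}\,\frac{\partial w_i}{\partial \sigma_j}
        \;=\; \frac{\partial \ln w_i}{\partial \ln \sigma_j},
```

where \(w_i\) is the importance of nuclide \(i\) and \(\sigma_j\) a one-group cross section. The "matrix of sensitivity coefficients" is then \([S_{ij}]\), and a relative error \(\delta\sigma_j/\sigma_j\) in a cross section propagates to a relative error of approximately \(S_{ij}\,\delta\sigma_j/\sigma_j\) in the importance.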
Evaluation and construction of diagnostic criteria for inclusion body myositis
Mammen, Andrew L.; Amato, Anthony A.; Weiss, Michael D.; Needham, Merrilee
2014-01-01
Objective: To use patient data to evaluate and construct diagnostic criteria for inclusion body myositis (IBM), a progressive disease of skeletal muscle. Methods: The literature was reviewed to identify all previously proposed IBM diagnostic criteria. These criteria were applied through medical records review to 200 patients diagnosed as having IBM and 171 patients diagnosed as having a muscle disease other than IBM by neuromuscular specialists at 2 institutions, and to a validating set of 66 additional patients with IBM from 2 other institutions. Machine learning techniques were used for unbiased construction of diagnostic criteria. Results: Twenty-four previously proposed IBM diagnostic categories were identified. Twelve categories all performed with high (≥97%) specificity but varied substantially in their sensitivities (11%–84%). The best performing category was European Neuromuscular Centre 2013 probable (sensitivity of 84%). Specialized pathologic features and newly introduced strength criteria (comparative knee extension/hip flexion strength) performed poorly. Unbiased data-directed analysis of 20 features in 371 patients resulted in construction of higher-performing data-derived diagnostic criteria (90% sensitivity and 96% specificity). Conclusions: Published expert consensus–derived IBM diagnostic categories have uniformly high specificity but wide-ranging sensitivities. High-performing IBM diagnostic category criteria can be developed directly from principled unbiased analysis of patient data. Classification of evidence: This study provides Class II evidence that published expert consensus–derived IBM diagnostic categories accurately distinguish IBM from other muscle disease with high specificity but wide-ranging sensitivities. PMID:24975859
Yeo, Zhen Xuan; Wong, Joshua Chee Leong; Rozen, Steven G; Lee, Ann Siew Gek
2014-06-24
The Ion Torrent PGM is a popular benchtop sequencer that shows promise in replacing conventional Sanger sequencing as the gold standard for mutation detection. Despite the PGM's reported high accuracy in calling single nucleotide variations, it tends to generate many false positive calls in detecting insertions and deletions (indels), which may hinder its utility for clinical genetic testing. Recently, the proprietary analytical workflow for the Ion Torrent sequencer, Torrent Suite (TS), underwent a series of upgrades. We evaluated three major upgrades of TS by calling indels in the BRCA1 and BRCA2 genes. Our analysis revealed that false negative indels could be generated by TS under both default calling parameters and parameters adjusted for maximum sensitivity. However, indel calling with the same data using the open source variant callers GATK and SAMtools showed that false negatives could be minimised with the use of appropriate bioinformatics analysis. Furthermore, we identified two variant calling measures, Quality-by-Depth (QD) and VARiation of the Width of gaps and inserts (VARW), which substantially reduced false positive indels, including non-homopolymer-associated errors, without compromising sensitivity. In our best-case scenario, which involved the TMAP aligner and SAMtools, we achieved 100% sensitivity, 99.99% specificity and a 29% False Discovery Rate (FDR) in indel calling from all 23 samples, which is a good performance for mutation screening using the PGM. New versions of TS, BWA and GATK have shown improvements in indel calling sensitivity and specificity over their older counterparts. However, the variant caller of TS exhibits a lower sensitivity than GATK and SAMtools.
Our findings demonstrate that although indel calling from PGM sequences may appear to be noisy at first glance, proper computational indel calling analysis is able to maximize both the sensitivity and specificity at the single base level, paving the way for the usage of this technology for future clinical genetic testing.
The effective integration of analysis, modeling, and simulation tools.
DOT National Transportation Integrated Search
2013-08-01
The need for model integration arises from the recognition that both transportation decisionmaking and the tools supporting it continue to increase in complexity. Many strategies that agencies evaluate require using tools that are sensitive to supply...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Lijuan; Gonder, Jeff; Burton, Evan
This study evaluates the costs and benefits associated with the use of a stationary-wireless-power-transfer-enabled plug-in hybrid electric bus and determines the cost effectiveness relative to a conventional bus and a hybrid electric bus. A sensitivity sweep was performed over many different battery sizes, charging power levels, and numbers/locations of bus stop charging stations. The net present cost was calculated for each vehicle design and provided the basis for design evaluation. In all cases, given the assumed economic conditions, the conventional bus achieved the lowest net present cost, while the optimal plug-in hybrid electric bus scenario beat out the hybrid electric comparison scenario. The study also performed parameter sensitivity analysis under favorable and unfavorable market penetration assumptions. The analysis identifies fuel-saving opportunities with plug-in hybrid electric bus scenarios at cumulative net present costs not too dissimilar from those for conventional buses.
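Net present cost, the figure of merit used above, discounts each year's operating cost back to present value and adds the up-front capital. A minimal sketch follows, with invented capital, operating-cost, and discount-rate figures rather than the study's assumptions:

```python
def net_present_cost(capital, annual_costs, discount_rate=0.03):
    """Up-front capital plus the discounted stream of annual costs."""
    npc = capital
    for year, cost in enumerate(annual_costs, start=1):
        npc += cost / (1.0 + discount_rate) ** year
    return npc

# Hypothetical 12-year comparison (all figures invented):
conventional = net_present_cost(450_000, [60_000] * 12)  # cheap bus, high fuel cost
phev = net_present_cost(750_000, [35_000] * 12)          # costly bus, low fuel cost
```

Under these invented numbers the conventional bus has the lower net present cost, mirroring the study's qualitative finding; the sweep over battery sizes and charger placements amounts to recomputing this figure for each design.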
Multiple-foil microabrasion package (A0023)
NASA Technical Reports Server (NTRS)
Mcdonnell, J. A. M.; Ashworth, D. G.; Carey, W. C.; Flavill, R. P.; Jennison, R. C.
1984-01-01
The specific scientific objectives of this experiment are to measure the spatial distribution, size, velocity, radiance, and composition of microparticles in near-Earth space. The technological objectives are to measure erosion rates resulting from microparticle impacts and to evaluate thin-foil meteor 'bumpers'. The combinations of sensitivity and reliability in this experiment will provide up to 1000 impacts per month for laboratory analysis and will extend current sensitivity limits by 5 orders of magnitude in mass.
Widjaja, Elysa; Li, Bing; Schinkel, Corrine Davies; Puchalski Ritchie, Lisa; Weaver, James; Snead, O Carter; Rutka, James T; Coyte, Peter C
2011-03-01
Due to differences in epilepsy types and surgery, economic evaluations of epilepsy treatment in adults cannot be extrapolated to children. We evaluated the cost-effectiveness of epilepsy surgery compared to medical treatment in children with intractable epilepsy. Decision tree analysis was used to evaluate the cost-effectiveness of surgery relative to medical management. Fifteen patients had surgery and 15 had medical treatment. Cost data included inpatient and outpatient costs for the period April 2007 to September 2009, physician fees, and medication costs. The outcome measure was the percentage seizure reduction at one-year follow-up. The incremental cost-effectiveness ratio (ICER) was assessed. Sensitivity analysis was performed for different probabilities of surgical and medical treatment outcomes and costs, and surgical mortality or morbidity. More patients managed surgically experienced Engel class I and II outcomes compared to medical treatment at one-year follow-up. Base-case analysis yielded an ICER of $369 per patient for each percentage reduction in seizures for the surgery group relative to the medical group. Sensitivity analysis showed robustness for the different probabilities tested. Surgical treatment resulted in a greater reduction in seizure frequency compared to medical therapy and was a cost-effective treatment option in children with intractable epilepsy who were evaluated for epilepsy surgery and subsequently underwent surgery, compared to continuing medical therapy. However, a larger sample size and long-term follow-up are needed to validate these findings. Copyright © 2011 Elsevier B.V. All rights reserved.
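The incremental cost-effectiveness ratio reported above (dollars per percentage-point reduction in seizures) is simply the cost difference between strategies divided by the effect difference. The numbers in the example are hypothetical, not the study's:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical: surgery costs $22,000 more and achieves 60 percentage
# points more seizure reduction than continued medical therapy
ratio = icer(52_000, 75.0, 30_000, 15.0)   # 22000 / 60, about 367 per point
```

A deterministic sensitivity analysis like the study's amounts to recomputing this ratio while varying each cost and outcome probability over a plausible range.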
Dunphy, C H; Polski, J M; Evans, H L; Gardner, L J
2001-08-01
Immunophenotyping of bone marrow (BM) specimens with acute myelogenous leukemia (AML) may be performed by flow cytometric (FC) or immunohistochemical (IH) techniques. Some markers (CD34, CD15, and CD117) are available for both techniques. Myeloperoxidase (MPO) analysis may be performed by enzyme cytochemical (EC) or IH techniques. To determine the reliability of these markers and MPO by these techniques, we designed a study to compare the results of analyses of these markers and MPO by FC (CD34, CD15, and CD117), EC (MPO), and IH (CD34, CD15, CD117, and MPO) techniques. Twenty-nine AMLs formed the basis of the study. These AMLs all had been immunophenotyped previously by FC analysis; 27 also had had EC analysis performed. Of the AMLs, 29 had BM core biopsies and 26 had BM clots that could be evaluated. The paraffin blocks of the 29 BM core biopsies and 26 BM clots were stained for CD34, CD117, MPO, and CD15. These results were compared with results by FC analysis (CD34, CD15, and CD117) and EC analysis (MPO). Immunodetection of CD34 expression in AML had a similar sensitivity by FC and IH techniques. Immunodetection of CD15 and CD117 had a higher sensitivity by FC analysis than by IH analysis. Detection of MPO by IH analysis was more sensitive than by EC analysis. There was no correlation of French-American-British (FAB) subtype of AML with CD34 or CD117 expression. Expression of CD15 was associated with AMLs with a monocytic component. Myeloperoxidase reactivity by IH analysis was observed in AMLs originally FAB subtyped as M0. CD34 can be equally detected by FC and IH techniques. CD15 and CD117 are better detected by FC analysis and MPO is better detected by IH analysis.
On the sensitivity of TG-119 and IROC credentialing to TPS commissioning errors.
McVicker, Drew; Yin, Fang-Fang; Adamson, Justus D
2016-01-08
We investigate the sensitivity of IMRT commissioning using the TG-119 C-shape phantom and credentialing with the IROC head and neck phantom to treatment planning system commissioning errors. We introduced errors into the various aspects of the commissioning process for a 6X photon beam modeled using the analytical anisotropic algorithm within a commercial treatment planning system. Errors were implemented in the various components of the dose calculation algorithm, including primary photons, secondary photons, electron contamination, and MLC parameters. For each error we evaluated the probability that it could be committed unknowingly during the dose algorithm commissioning stage, and the probability of it being identified during the verification stage. The clinical impact of each commissioning error was evaluated using representative IMRT plans, including low- and intermediate-risk prostate, head and neck, mesothelioma, and scalp; the sensitivity of the TG-119 and IROC phantoms was evaluated by comparing dosimetric changes in the dose planes where film measurements occur and changes in point doses where dosimeter measurements occur. No commissioning errors were found to have both a low probability of detection and high clinical severity. When errors do occur, the IROC credentialing and TG-119 commissioning criteria are generally effective at detecting them; however, for the IROC phantom, OAR point-dose measurements are the most sensitive despite being currently excluded from IROC analysis. Point-dose measurements with an absolute dose constraint were the most effective at detecting errors, while film analysis using a gamma comparison and the IROC film distance-to-agreement criteria were less effective at detecting the specific commissioning errors implemented here.
Mathes, Tim; Jacobs, Esther; Morfeld, Jana-Carina; Pieper, Dawid
2013-09-30
The number of Health Technology Assessment (HTA) agencies is increasing. One component of HTAs is the assessment of economic aspects, commonly done through economic evaluations. A convergence of the methodological recommendations for health economic evaluations across international HTA agencies would facilitate the adaptation of results to different settings and avoid unnecessary expense. A first step in this direction is a detailed analysis of existing similarities and differences in recommendations to identify potential for harmonization. The objective is to provide an overview and comparison of the methodological recommendations of international HTA agencies for economic evaluations. The webpages of 127 international HTA agencies were searched for guidelines containing recommendations on methods for the preparation of economic evaluations. Additionally, the HTA agencies were asked to provide information on their methods for economic evaluations. Recommendations of the included guidelines were extracted into standardized tables covering 13 methodological aspects. All process steps were performed independently by two reviewers. Finally, 25 publications of 14 HTA agencies were included in the analysis. Methods for economic evaluations vary widely. The greatest accordance was found for the type of analysis and the comparator: cost-utility or cost-effectiveness analyses are recommended, and the comparator should consistently be usual care. The greatest differences were found in the recommendations on the measurement and sources of effects, on discounting, and on sensitivity analysis. The main difference regarding effects is the focus either on efficacy or on effectiveness. Recommended discounting rates range from 1.5%-5% for effects and 3%-5% for costs, whereby it is mostly recommended to use the same rate for costs and effects.
With respect to sensitivity analysis, the main difference is that often either the probabilistic or the deterministic approach is recommended exclusively. Methods for modeling are only described vaguely, mainly with the rationale that the "appropriate model" depends on the decision problem. For all other aspects a comparison is challenging, as recommendations vary in their level of detail and in the issues addressed. There is considerable unexplained variance in the recommendations. Further effort is needed to harmonize methods for preparing economic evaluations.
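The discounting recommendations can be illustrated with a short sketch; the 10-year cost stream is hypothetical, while the 3% and 5% rates come from the range quoted above:

```python
def present_value(stream, rate):
    """Discount a yearly stream (year 0 undiscounted) at a constant annual rate."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

# Hypothetical cost stream: 1000 per year for 10 years, discounted at the
# 3% and 5% rates from the range recommended above.
costs = [1000.0] * 10
print(round(present_value(costs, 0.03), 2))
print(round(present_value(costs, 0.05), 2))
```

Using a lower rate for effects than for costs, as some agencies recommend, simply means calling the function with two different rates on the two streams.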
NASA Astrophysics Data System (ADS)
Jeon, Hosang; Kim, Hyunduk; Cha, Bo Kyung; Kim, Jong Yul; Cho, Gyuseong; Chung, Yong Hyun; Yun, Jong-Il
2009-06-01
Presently, the gamma camera system is widely used in various medical diagnostic, industrial, and environmental fields. Hence, quantitative and effective evaluation of its imaging performance is essential for design and quality assurance. The National Electrical Manufacturers Association (NEMA) standards for gamma camera evaluation are insufficient for sensitive evaluation. In this study, the modulation transfer function (MTF) and the normalized noise power spectrum (NNPS) are suggested for evaluating the performance of a small gamma camera with changeable pinhole collimators using Monte Carlo simulation. We simulated the system with a cylinder source, a disk source, and seven lead pinhole collimators with pinhole diameters from 1 to 4 mm. The MTF and NNPS data were obtained from output images and were compared with full-width at half-maximum (FWHM), sensitivity, and differential uniformity. We found that MTF and NNPS are effective alternatives to the conventional NEMA standards for evaluating the imaging performance of gamma cameras.
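As a rough illustration of the MTF as a resolution metric (not the authors' simulation code), the sketch below computes a 1-D MTF as the normalized DFT magnitude of a point spread function; both sample PSFs are hypothetical:

```python
import cmath

def mtf_1d(psf):
    """MTF as the normalized magnitude of the DFT of a 1-D point spread
    function; index k is spatial frequency in cycles per aperture."""
    n = len(psf)
    spectrum = [abs(sum(p * cmath.exp(-2j * cmath.pi * k * m / n)
                        for m, p in enumerate(psf))) for k in range(n)]
    return [v / spectrum[0] for v in spectrum]  # normalize so MTF(0) = 1

# Hypothetical PSFs sampled on 8 pixels: a wider PSF (worse resolution)
# should give an MTF that falls off faster with spatial frequency.
narrow = [0, 0, 1, 6, 1, 0, 0, 0]
wide = [0, 1, 3, 6, 3, 1, 0, 0]
print(mtf_1d(narrow)[1] > mtf_1d(wide)[1])  # True
```

An NNPS would be obtained the same way from the Fourier transform of a flat-field noise image rather than a point source.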
At room temperature (20°±3°C), purge and trap samplers provide poor sensitivity for analysis of the fuel oxygenates that are alcohols, such as tertiary butyl alcohol (TBA). Because alcohols are miscible or highly soluble in water, they are not efficiently transferred to a gas chr...
Sensitivity Analysis of Multiple Informant Models When Data Are Not Missing at Random
ERIC Educational Resources Information Center
Blozis, Shelley A.; Ge, Xiaojia; Xu, Shu; Natsuaki, Misaki N.; Shaw, Daniel S.; Neiderhiser, Jenae M.; Scaramella, Laura V.; Leve, Leslie D.; Reiss, David
2013-01-01
Missing data are common in studies that rely on multiple informant data to evaluate relationships among variables for distinguishable individuals clustered within groups. Estimation of structural equation models using raw data allows for incomplete data, and so all groups can be retained for analysis even if only 1 member of a group contributes…
ERIC Educational Resources Information Center
Faccenda, Lionel; Pantaleon, Nathalie
2011-01-01
Injustice appears to be a major variable in the analysis of transgressive behaviour. Theories and studies of injustice differ according to how injustice is conceptualised: contextually or personally. In the first case, the judgement of injustice results from an evaluation of situational characteristics (inequity, inequality, arbitrariness etc.).…
[Screening for cancer - economic consideration and cost-effectiveness].
Kjellberg, Jakob
2014-06-09
Cost-effectiveness analysis has become an accepted method to evaluate medical technology and allocate scarce health-care resources. Published decision analyses show that screening for cancer in general is cost-effective. However, cost-effectiveness analyses are only as good as the clinical data and the results are sensitive to the chosen methods and perspective of the analysis.
Sensitivity and Specificity of Polysomnographic Criteria for Defining Insomnia
Edinger, Jack D.; Ulmer, Christi S.; Means, Melanie K.
2013-01-01
Study Objectives: In recent years, polysomnography-based eligibility criteria have been increasingly used to identify candidates for insomnia research, and this has been particularly true of studies evaluating pharmacologic therapy for primary insomnia. However, the sensitivity and specificity of PSG for identifying individuals with insomnia is unknown, and there is no consensus on the criteria sets which should be used for participant selection. In the current study, an archival data set was used to test the sensitivity and specificity of PSG measures for identifying individuals with primary insomnia in both home and lab settings. We then evaluated the sensitivity and specificity of the eligibility criteria employed in a number of recent insomnia trials for identifying primary insomnia sufferers in our sample. Design: Archival data analysis. Settings: Study participants' homes and a clinical sleep laboratory. Participants: Adults: 76 with primary insomnia and 78 non-complaining normal sleepers. Measurements and Results: ROC and cross-tabs analyses were used to evaluate the sensitivity and specificity of PSG-derived total sleep time, latency to persistent sleep, wake after sleep onset, and sleep efficiency for discriminating adults with primary insomnia from normal sleepers. None of the individual criteria accurately discriminated PI from normal sleepers, and none of the criteria sets used in recent trials demonstrated acceptable sensitivity and specificity for identifying primary insomnia. Conclusions: The use of quantitative PSG-based selection criteria in insomnia research may exclude many who meet current diagnostic criteria for an insomnia disorder. Citation: Edinger JD; Ulmer CS; Means MK. Sensitivity and specificity of polysomnographic criteria for defining insomnia. J Clin Sleep Med 2013;9(5):481-491. PMID:23674940
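Scoring a single PSG-derived criterion against diagnostic status is a straightforward 2x2 computation. A minimal sketch with hypothetical wake-after-sleep-onset values; the variable and threshold are illustrative, not the study's actual criteria:

```python
def sensitivity_specificity(values, labels, threshold):
    """Score one PSG criterion: classify as insomnia when value >= threshold.
    labels: 1 = primary insomnia, 0 = normal sleeper."""
    tp = sum(1 for v, y in zip(values, labels) if y == 1 and v >= threshold)
    fn = sum(1 for v, y in zip(values, labels) if y == 1 and v < threshold)
    tn = sum(1 for v, y in zip(values, labels) if y == 0 and v < threshold)
    fp = sum(1 for v, y in zip(values, labels) if y == 0 and v >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical wake-after-sleep-onset minutes and diagnoses:
waso = [10, 20, 35, 50, 60, 80, 90, 120]
dx = [0, 0, 0, 1, 0, 1, 1, 1]
sens, spec = sensitivity_specificity(waso, dx, threshold=45)
print(sens, spec)  # 1.0 0.75
```

An ROC analysis repeats this over all candidate thresholds and plots sensitivity against 1 − specificity.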
A study of the stress wave factor technique for nondestructive evaluation of composite materials
NASA Technical Reports Server (NTRS)
Sarrafzadeh-Khoee, A.; Kiernan, M. T.; Duke, J. C., Jr.; Henneke, E. G., II
1986-01-01
The acousto-ultrasonic method of nondestructive evaluation is an extremely sensitive means of assessing material response. Efforts continue to complete the understanding of this method. In order to achieve the full sensitivity of the technique, extreme care must be taken in its performance. This report provides an update of the efforts to advance the understanding of this method and to increase its application to the nondestructive evaluation of composite materials. Included are descriptions of a novel optical system that is capable of measuring in-plane and out-of-plane displacements, an IBM PC-based data acquisition system, an extensive data analysis software package, the azimuthal variation of acousto-ultrasonic behavior in graphite/epoxy laminates, and preliminary examination of processing variation in graphite-aluminum tubes.
Acoustics based assessment of respiratory diseases using GMM classification.
Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J
2010-01-01
The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. To accomplish this, traditional techniques from the speech processing domain were applied to lung sounds obtained with a digital stethoscope. Traditional methods for the evaluation of asthma involve auscultation and spirometry, but the utilization of more sensitive electronic stethoscopes, which are currently available, and the application of quantitative signal analysis methods offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian Mixture Models (GMM), which should assist in broader analysis, identification, and diagnosis of asthma based on the frequency-domain analysis of wheezing and crackles.
Bootstrap evaluation of a young Douglas-fir height growth model for the Pacific Northwest
Nicholas R. Vaughn; Eric C. Turnblom; Martin W. Ritchie
2010-01-01
We evaluated the stability of a complex regression model developed to predict the annual height growth of young Douglas-fir. This model is highly nonlinear and is fit in an iterative manner for annual growth coefficients from data with multiple periodic remeasurement intervals. The traditional methods for such a sensitivity analysis either involve laborious math or...
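The bootstrap alternative alluded to above resamples observations with replacement and refits the model each time. A minimal sketch on a simple linear fit; the data and model are hypothetical stand-ins for the paper's nonlinear height-growth model:

```python
import random

def ols_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

def bootstrap_slopes(xs, ys, reps=2000, seed=1):
    """Resample observations with replacement and refit the slope each time;
    the spread of refitted slopes estimates the coefficient's variability."""
    rng = random.Random(seed)
    pairs = list(zip(xs, ys))
    out = []
    for _ in range(reps):
        sample = [rng.choice(pairs) for _ in pairs]
        if len({p[0] for p in sample}) > 1:  # skip degenerate resamples
            out.append(ols_slope([p[0] for p in sample],
                                 [p[1] for p in sample]))
    return out

# Hypothetical tree ages (years) vs annual height growth (m):
age = [2, 3, 4, 5, 6, 7, 8, 9]
growth = [0.3, 0.45, 0.5, 0.65, 0.7, 0.8, 0.95, 1.0]
slopes = sorted(bootstrap_slopes(age, growth))
lo, hi = slopes[int(0.025 * len(slopes))], slopes[int(0.975 * len(slopes))]
print(round(lo, 3), round(hi, 3))  # approximate 95% bootstrap interval
```

For a model fit iteratively over multiple remeasurement intervals, the resampling unit would be the plot or measurement series rather than the individual pair.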
Statistical analysis and handling of missing data in cluster randomized trials: a systematic review.
Fiero, Mallorie H; Huang, Shuang; Oren, Eyal; Bell, Melanie L
2016-02-09
Cluster randomized trials (CRTs) randomize participants in groups, rather than as individuals, and are key tools used to assess interventions in health research where treatment contamination is likely or individual randomization is not feasible. Two potential major pitfalls exist regarding CRTs, namely handling missing data and not accounting for clustering in the primary analysis. The aim of this review was to evaluate approaches for handling missing data and for statistical analysis with respect to the primary outcome in CRTs. We systematically searched for CRTs published between August 2013 and July 2014 using PubMed, Web of Science, and PsycINFO. For each trial, two independent reviewers assessed the extent of the missing data and the method(s) used for handling missing data in the primary and sensitivity analyses. We evaluated the primary analysis and determined whether it was at the cluster or individual level. Of the 86 included CRTs, 80 (93%) reported some missing outcome data. Of those reporting missing data, the median percentage of individuals with a missing outcome was 19% (range 0.5 to 90%). The most common way to handle missing data in the primary analysis was complete case analysis (44, 55%), whereas 18 (22%) used mixed models, six (8%) used single imputation, four (5%) used unweighted generalized estimating equations, and two (2%) used multiple imputation. Fourteen (16%) trials reported a sensitivity analysis for missing data, but most assumed the same missing data mechanism as in the primary analysis. Overall, 67 (78%) trials accounted for clustering in the primary analysis. High rates of missing outcome data are present in the majority of CRTs, yet handling missing data in practice remains suboptimal. Researchers and applied statisticians should use missing data methods that are valid under plausible assumptions, in order to increase statistical power in trials and reduce the possibility of bias.
Sensitivity analysis should be performed, with weakened assumptions regarding the missing data mechanism to explore the robustness of results reported in the primary analysis.
dos Santos, Marcelo R.; Sayegh, Ana L.C.; Armani, Rafael; Costa-Hong, Valéria; de Souza, Francis R.; Toschi-Dias, Edgar; Bortolotto, Luiz A.; Yonamine, Mauricio; Negrão, Carlos E.; Alves, Maria-Janieire N.N.
2018-01-01
OBJECTIVES: Misuse of anabolic androgenic steroids in athletes is a strategy used to enhance strength and skeletal muscle hypertrophy. However, its abuse leads to an imbalance in muscle sympathetic nerve activity, increased vascular resistance, and increased blood pressure. However, the mechanisms underlying these alterations are still unknown. Therefore, we tested whether anabolic androgenic steroids could impair resting baroreflex sensitivity and cardiac sympathovagal control. In addition, we evaluate pulse wave velocity to ascertain the arterial stiffness of large vessels. METHODS: Fourteen male anabolic androgenic steroid users and 12 nonusers were studied. Heart rate, blood pressure, and respiratory rate were recorded. Baroreflex sensitivity was estimated by the sequence method, and cardiac autonomic control by analysis of the R-R interval. Pulse wave velocity was measured using a noninvasive automatic device. RESULTS: Mean spontaneous baroreflex sensitivity, baroreflex sensitivity to activation of the baroreceptors, and baroreflex sensitivity to deactivation of the baroreceptors were significantly lower in users than in nonusers. In the spectral analysis of heart rate variability, high frequency activity was lower, while low frequency activity was higher in users than in nonusers. Moreover, the sympathovagal balance was higher in users. Users showed higher pulse wave velocity than nonusers showing arterial stiffness of large vessels. Single linear regression analysis showed significant correlations between mean blood pressure and baroreflex sensitivity and pulse wave velocity. CONCLUSIONS: Our results provide evidence for lower baroreflex sensitivity and sympathovagal imbalance in anabolic androgenic steroid users. Moreover, anabolic androgenic steroid users showed arterial stiffness. Together, these alterations might be the mechanisms triggering the increased blood pressure in this population. PMID:29791601
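The sequence method mentioned above scans beat-by-beat recordings for runs where systolic pressure and the following R-R interval move in the same direction, then averages the regression slopes. A simplified sketch that ignores the beat lag and minimum-change thresholds a real implementation would apply; the data are hypothetical:

```python
def brs_sequences(sbp, rr, min_len=3):
    """Sequence-method baroreflex sensitivity (simplified sketch): find runs
    of >= min_len beats where systolic pressure and the R-R interval change
    in the same direction, then average the RR-on-SBP slopes (ms/mmHg)."""
    def slope(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
               sum((x - mx) ** 2 for x in xs)

    slopes, start = [], 0
    for i in range(1, len(sbp)):
        concordant = (sbp[i] - sbp[i - 1]) * (rr[i] - rr[i - 1]) > 0 and \
                     (i == start + 1 or
                      (sbp[i] - sbp[i - 1]) * (sbp[i - 1] - sbp[i - 2]) > 0)
        if not concordant:
            if i - start >= min_len:        # close a qualifying run
                slopes.append(slope(sbp[start:i], rr[start:i]))
            start = i - 1                   # a new run may begin here
    if len(sbp) - start >= min_len:
        slopes.append(slope(sbp[start:], rr[start:]))
    return sum(slopes) / len(slopes) if slopes else None

# Hypothetical beat-by-beat systolic pressure (mmHg) and R-R interval (ms):
sbp = [120, 122, 124, 126, 124, 122, 120]
rr = [800, 810, 820, 830, 820, 810, 800]
print(brs_sequences(sbp, rr))  # one rising and one falling run, slope 5 ms/mmHg each
```

Lower average slopes correspond to the blunted baroreflex sensitivity reported for the steroid users.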
Chon, Hye Sook; Marchion, Douglas C; Xiong, Yin; Chen, Ning; Bicaku, Elona; Stickles, Xiaomang Ba; Bou Zgheib, Nadim; Judson, Patricia L; Hakam, Ardeshir; Gonzalez-Bosquet, Jesus; Wenham, Robert M; Apte, Sachin M; Lancaster, Johnathan M
2012-01-01
To identify pathways that influence endometrial cancer (EC) cell sensitivity to cisplatin and to characterize the BCL2 antagonist of cell death (BAD) pathway as a therapeutic target to increase cisplatin sensitivity. Eight EC cell lines (Ishikawa, MFE296, RL 95-2, AN3CA, KLE, MFE280, MFE319, HEC-1-A) were subjected to Affymetrix Human U133A GeneChip expression analysis of approximately 22,000 probe sets. In parallel, endometrial cell line sensitivity to cisplatin was quantified by MTS assay, and IC50 values were calculated. Pearson's correlation test was used to identify genes associated with response to cisplatin. Genes associated with cisplatin responsiveness were subjected to pathway analysis. The BAD pathway was identified and subjected to targeted modulation, and the effect on cisplatin sensitivity was evaluated. Pearson's correlation analysis identified 1443 genes associated with cisplatin resistance (P<0.05), which included representation of the BAD-apoptosis pathway. Small interfering RNA (siRNA) knockdown of BAD pathway protein phosphatase PP2C expression was associated with increased phosphorylated BAD (serine-155) levels and a parallel increase in cisplatin resistance in Ishikawa (P=0.004) and HEC-1-A (P=0.02) cell lines. In contrast, siRNA knockdown of protein kinase A expression increased cisplatin sensitivity in the Ishikawa (P=0.02) cell line. The BAD pathway influences EC cell sensitivity to cisplatin, likely via modulation of the phosphorylation status of the BAD protein. The BAD pathway represents an appealing therapeutic target to increase EC cell sensitivity to cisplatin. Copyright © 2011 Elsevier Inc. All rights reserved.
New infrastructure for studies of transmutation and fast systems concepts
NASA Astrophysics Data System (ADS)
Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria
2017-09-01
In this work we report initial studies on a low power Accelerator-Driven System as a possible experimental facility for the measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.
A low power ADS for transmutation studies in fast systems
NASA Astrophysics Data System (ADS)
Panza, Fabio; Firpo, Gabriele; Lomonaco, Guglielmo; Osipenko, Mikhail; Ricco, Giovanni; Ripani, Marco; Saracco, Paolo; Viberti, Carlo Maria
2017-12-01
In this work, we report studies on a fast low power accelerator driven system model as a possible experimental facility, focusing on its capabilities in terms of measurement of relevant integral nuclear quantities. In particular, we performed Monte Carlo simulations of minor actinides and fission products irradiation and estimated the fission rate within fission chambers in the reactor core and the reflector, in order to evaluate the transmutation rates and the measurement sensitivity. We also performed a photo-peak analysis of available experimental data from a research reactor, in order to estimate the expected sensitivity of this analysis method on the irradiation of samples in the ADS considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bozinovich, L.V.; Poyer, D.A.; Anderson, J.L.
1993-12-01
A sensitivity study was made of the potential market penetration of residential energy efficiency as energy service ratio (ESR) improvements occurred in minority households, by age of house. The study followed a Minority Energy Assessment Model analysis of the National Energy Strategy projections of household energy consumption and prices, with majority, black, and Hispanic subgroup divisions. Electricity and total energy consumption and expenditure patterns were evaluated when the households' ESR improvement followed a logistic negative growth (i.e., market penetration) path. Earlier occurrence of ESR improvements meant greater discounted savings over the 22-year period.
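The logistic market-penetration path used in the study can be sketched as follows; the saturation level, midpoint, and growth rate are hypothetical placeholders, not the model's fitted parameters:

```python
import math

def logistic_penetration(t, saturation=1.0, midpoint=11.0, rate=0.4):
    """Logistic market-penetration path: cumulative share of households that
    have adopted the ESR improvement by year t (all parameters hypothetical)."""
    return saturation / (1.0 + math.exp(-rate * (t - midpoint)))

# Penetration over a 22-year analysis period: slow start, 50% of saturation
# at the midpoint year, then leveling off.
path = [round(logistic_penetration(t), 3) for t in range(23)]
print(path[0], path[11], path[22])
```

Shifting the midpoint earlier models the "earlier occurrence of ESR improvements," which increases the discounted savings accumulated over the horizon.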
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.
Modified GMDH-NN algorithm and its application for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Song, Shufang; Wang, Lu
2017-11-01
Global sensitivity analysis (GSA) is a very useful tool to evaluate the influence of input variables over their whole distribution range. Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the metamodel coefficients efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
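For context, first-order Sobol' indices can be estimated by plain Monte Carlo with the two-matrix (Saltelli) scheme; this generic sketch is not the paper's GMDH-HDMR method:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=0):
    """Monte Carlo first-order Sobol' indices via the two-matrix (Saltelli)
    scheme: draw sample matrices A and B; for index i, evaluate f on B with
    its i-th column taken from A, and use Cov(f(A), f(mix)) / Var(f(A))."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        yMix = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        # Only input i is shared between the two evaluations, so their
        # covariance estimates the partial variance V_i.
        cov = sum(ya * ym for ya, ym in zip(yA, yMix)) / n - mean * (sum(yMix) / n)
        indices.append(cov / var)
    return indices

# Additive test function Y = X1 + 2*X2 on uniform [0,1] inputs:
# the exact first-order indices are 0.2 and 0.8.
S = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2)
print([round(s, 2) for s in S])
```

Metamodel-based approaches such as HDMR aim to reach comparable accuracy with far fewer evaluations of an expensive model.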
Lansdorp-Vogelaar, Iris; van Ballegooijen, Marjolein; Boer, Rob; Zauber, Ann; Habbema, J Dik F
2009-06-01
Estimates of the fecal occult blood test (FOBT) (Hemoccult II) sensitivity differed widely between screening trials and led to divergent conclusions on the effects of FOBT screening. We used microsimulation modeling to estimate a preclinical colorectal cancer (CRC) duration and sensitivity for unrehydrated FOBT from the data of 3 randomized controlled trials of Minnesota, Nottingham, and Funen. In addition to 2 usual hypotheses on the sensitivity of FOBT, we tested a novel hypothesis where sensitivity is linked to the stage of clinical diagnosis in the situation without screening. We used the MISCAN-Colon microsimulation model to estimate sensitivity and duration, accounting for differences between the trials in demography, background incidence, and trial design. We tested 3 hypotheses for FOBT sensitivity: sensitivity is the same for all preclinical CRC stages, sensitivity increases with each stage, and sensitivity is higher for the stage in which the cancer would have been diagnosed in the absence of screening than for earlier stages. Goodness-of-fit was evaluated by comparing expected and observed rates of screen-detected and interval CRC. The hypothesis with a higher sensitivity in the stage of clinical diagnosis gave the best fit. Under this hypothesis, sensitivity of FOBT was 51% in the stage of clinical diagnosis and 19% in earlier stages. The average duration of preclinical CRC was estimated at 6.7 years. Our analysis corroborated a long duration of preclinical CRC, with FOBT most sensitive in the stage of clinical diagnosis. (c) 2009 American Cancer Society.
Lv, S; Zhao, J; Zhang, J; Kwon, S; Han, M; Bian, R; Fu, H; Zhang, Y; Pan, H
2014-08-01
In our previous study, tumor necrosis factor α (TNF-α) was identified as an effective target for sepsis patients (Int J Clin Pract, 68, 2014, 520). TNF-α in cerebrospinal fluid (CSF) has also been investigated for its utility in the differential diagnosis of bacterial and aseptic meningitis; however, there has so far been no definite or convincing evidence. Here the overall diagnostic accuracy of TNF-α in differentiating between bacterial and aseptic meningitis was evaluated through a meta-analysis of diagnostic tests. The sensitivity, specificity and other measures of accuracy were pooled using random effect models. Summary receiver operating characteristic curves were used to assess overall test performance. Publication bias was evaluated using funnel plots, and sensitivity analysis was also introduced. A total of 21 studies, comprising 678 patients with bacterial meningitis and 694 with aseptic meningitis (1372 patients in all), were included. The pooled sensitivity and specificity for the TNF-α test were 0.83 [95% confidence interval (CI) 0.80-0.86, I² = 65.1] and 0.92 (95% CI 0.89-0.94, I² = 61.8), respectively. The positive likelihood ratio was 12.05 (95% CI 7.41-19.60, I² = 36.5), the negative likelihood ratio was 0.17 (95% CI 0.13-0.24, I² = 59.4), and TNF-α was significantly associated with bacterial meningitis, with a diagnostic odds ratio of 49.84 (95% CI 28.53-87.06, I² = 47.9). The overall accuracy of the TNF-α test was very high, with an area under the curve of 0.9317. Publication bias was absent, and sensitivity analysis suggested that our results were highly stable. Our meta-analysis suggests that TNF-α can be recommended as a useful marker for the diagnosis of bacterial meningitis and the differential diagnosis between bacterial and aseptic meningitis, with high sensitivity and specificity. Thus, hospitals should be encouraged to conduct TNF-α tests in CSF after lumbar puncture. © 2014 The Author(s) European Journal of Neurology © 2014 EAN.
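The accuracy measures pooled in the meta-analysis all derive from study-level 2x2 tables. A minimal sketch; the single-study table is hypothetical, chosen only to be consistent with the pooled sensitivity and specificity reported above:

```python
def diagnostic_measures(tp, fn, fp, tn):
    """Sensitivity, specificity, likelihood ratios, and diagnostic odds ratio
    from one 2x2 table (a meta-analysis pools these across studies)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    dor = lr_pos / lr_neg  # algebraically equal to (tp * tn) / (fp * fn)
    return sens, spec, lr_pos, lr_neg, dor

# Hypothetical single-study table consistent with the pooled sensitivity (0.83)
# and specificity (0.92) reported above:
sens, spec, lr_pos, lr_neg, dor = diagnostic_measures(tp=83, fn=17, fp=8, tn=92)
print(round(lr_pos, 2), round(lr_neg, 2), round(dor, 1))
```

Random-effects pooling then combines such study-level estimates weighted by their within- and between-study variance.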
NASA Astrophysics Data System (ADS)
Chen, Li-si; Hu, Zhong-wen
2017-10-01
The image evaluation of an optical system is the core of optical design. Based on the analysis and comparison of the PSSN (Normalized Point Source Sensitivity) proposed in the image evaluation of the TMT (Thirty Meter Telescope) and the common image evaluation methods, the application of PSSN in the TMT WFOS (Wide Field Optical Spectrometer) is studied. It includes an approximate simulation of the atmospheric seeing, the effect of the displacement of M3 in the TMT on the PSSN of the system, the effect of the displacement of collimating mirror in the WFOS on the PSSN of the system, the relations between the PSSN and the zenith angle under different conditions of atmospheric turbulence, and the relation between the PSSN and the wavefront aberration. The results show that the PSSN is effective for the image evaluation of the TMT under a limited atmospheric seeing.
Sung, Ki Hyuk; Chung, Chin Youb; Lee, Kyoung Min; Lee, Seung Yeol; Choi, In Ho; Cho, Tae-Joon; Yoo, Won Joon; Park, Moon Seok
2014-01-01
This study aimed to determine the best treatment modality for coronal angular deformity of the knee joint in growing children using decision analysis. A decision tree was created to evaluate 3 treatment modalities for coronal angular deformity in growing children: temporary hemiepiphysiodesis using staples, percutaneous screws, or a tension band plate. A decision analysis model was constructed containing the final outcome score, probability of metal failure, and incomplete correction of deformity. The final outcome was defined as health-related quality of life and was used as a utility in the decision tree. The probabilities associated with each case were obtained by literature review, and health-related quality of life was evaluated by a questionnaire completed by 25 pediatric orthopedic experts. Our decision analysis model favored temporary hemiepiphysiodesis using a tension band plate over temporary hemiepiphysiodesis using percutaneous screws or stapling, with utilities of 0.969, 0.957, and 0.962, respectively. One-way sensitivity analysis showed that hemiepiphysiodesis using a tension band plate was better than temporary hemiepiphysiodesis using percutaneous screws, when the overall complication rate of hemiepiphysiodesis using a tension band plate was lower than 15.7%. Two-way sensitivity analysis showed that hemiepiphysiodesis using a tension band plate was more beneficial than temporary hemiepiphysiodesis using percutaneous screws. PMID:25276801
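A one-way sensitivity analysis of the kind described sweeps one probability while holding the rest of the decision tree fixed. A minimal sketch; the success and complication utilities for the plate arm are hypothetical, with only the screw option's utility of 0.957 taken from the abstract:

```python
def expected_utility(u_success, u_complication, p_complication):
    """Two-branch chance node: succeed, or incur a complication."""
    return (1 - p_complication) * u_success + p_complication * u_complication

def one_way_threshold(u_success_a, u_comp_a, utility_b, step=0.001):
    """Sweep option A's complication probability until its expected utility
    falls below option B's fixed utility (a one-way sensitivity analysis)."""
    p = 0.0
    while p <= 1.0:
        if expected_utility(u_success_a, u_comp_a, p) < utility_b:
            return p
        p += step
    return None

# Hypothetical utilities: the tension band plate scores 0.985 on success and
# 0.80 on complication; the screw option is fixed at its reported 0.957.
print(round(one_way_threshold(0.985, 0.80, 0.957), 3))
```

With these placeholder inputs the crossover lands near 15%, the same order as the 15.7% complication-rate threshold reported in the study.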
Pantic, Igor; Dacic, Sanja; Brkic, Predrag; Lavrnja, Irena; Pantic, Senka; Jovanovic, Tomislav; Pekovic, Sanja
2014-10-01
The aim of this study was to assess the discriminatory value of fractal and grey level co-occurrence matrix (GLCM) analysis methods in standard microscopy analysis of two histologically similar brain white mass regions that have different nerve fiber orientation. A total of 160 digital micrographs of thionine-stained rat brain white mass were acquired using a Pro-MicroScan DEM-200 instrument. Eighty micrographs from the anterior corpus callosum and eighty from the anterior cingulum areas of the brain were analyzed. The micrographs were evaluated using the National Institutes of Health ImageJ software and its plugins. For each micrograph, seven parameters were calculated: angular second moment, inverse difference moment, GLCM contrast, GLCM correlation, GLCM variance, fractal dimension, and lacunarity. Using receiver operating characteristic (ROC) analysis, the highest discriminatory value was determined for the inverse difference moment (IDM): the area under the ROC curve was 0.925, and at the criterion IDM ≤ 0.610 the sensitivity and specificity were 82.5% and 87.5%, respectively. Most of the other parameters also showed good sensitivity and specificity. The results indicate that GLCM and fractal analysis methods, when applied together in brain histology analysis, are highly capable of discriminating white mass structures that have different axonal orientation.
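The GLCM parameters named above can be computed directly from a normalized co-occurrence matrix. Below is a minimal pure-Python sketch, assuming a small 4-level quantized image and a single horizontal pixel offset; the ImageJ plugins used in the study work on full 8-bit micrographs and more offsets:

```python
def glcm(img, dx=1, dy=0, levels=4):
    """Normalized grey-level co-occurrence matrix for one pixel offset."""
    P = [[0.0] * levels for _ in range(levels)]
    rows, cols = len(img), len(img[0])
    total = 0
    for y in range(rows - dy):
        for x in range(cols - dx):
            P[img[y][x]][img[y + dy][x + dx]] += 1
            total += 1
    return [[v / total for v in row] for row in P]

def texture_features(P):
    """Angular second moment, inverse difference moment, and contrast."""
    n = len(P)
    asm = sum(v * v for row in P for v in row)
    idm = sum(P[i][j] / (1 + (i - j) ** 2) for i in range(n) for j in range(n))
    contrast = sum((i - j) ** 2 * P[i][j] for i in range(n) for j in range(n))
    return {"ASM": asm, "IDM": idm, "contrast": contrast}

# Toy 4-level image with two homogeneous patches.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
features = texture_features(glcm(img))
```

ASM and IDM rise for homogeneous textures, while contrast rises for rapidly varying ones, which is why these parameters can separate regions with different fiber orientation.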
Comparative Analysis of State Fish Consumption Advisories Targeting Sensitive Populations
Scherer, Alison C.; Tsuchiya, Ami; Younglove, Lisa R.; Burbacher, Thomas M.; Faustman, Elaine M.
2008-01-01
Objective Fish consumption advisories are issued to warn the public of possible toxicological threats from consuming certain fish species. Although developing fetuses and children are particularly susceptible to toxicants in fish, fish also contain valuable nutrients. Hence, formulating advice for sensitive populations poses challenges. We conducted a comparative analysis of advisory Web sites issued by states to assess health messages that sensitive populations might access. Data sources We evaluated state advisories accessed via the National Listing of Fish Advisories issued by the U.S. Environmental Protection Agency. Data extraction We created criteria to evaluate advisory attributes such as risk and benefit message clarity. Data synthesis All 48 state advisories issued at the time of this analysis targeted children, 90% (43) targeted pregnant women, and 58% (28) targeted women of childbearing age. Only six advisories addressed single contaminants, while the remainder based advice on 2–12 contaminants. Results revealed that advisories associated a dozen contaminants with specific adverse health effects. Beneficial health effects of any kind were specifically associated only with omega-3 fatty acids found in fish. Conclusions These findings highlight the complexity of assessing and communicating information about multiple contaminant exposure from fish consumption. Communication regarding potential health benefits conferred by specific fish nutrients was minimal and focused primarily on omega-3 fatty acids. This overview suggests some lessons learned and highlights a lack of both clarity and consistency in providing the breadth of information that sensitive populations such as pregnant women need to make public health decisions about fish consumption during pregnancy. PMID:19079708
Jensen, Cathrine Elgaard; Riis, Allan; Petersen, Karin Dam; Jensen, Martin Bach; Pedersen, Kjeld Møller
2017-05-01
In connection with the publication of a clinical practice guideline on the management of low back pain (LBP) in general practice in Denmark, a cluster randomised controlled trial was conducted. In this trial, a multifaceted guideline implementation strategy to improve general practitioners' treatment of patients with LBP was compared with a usual implementation strategy. The aim was to determine whether the multifaceted strategy was cost effective, as compared with the usual implementation strategy. The economic evaluation was conducted as a cost-utility analysis in which costs, collected from a societal perspective, and quality-adjusted life years were used as outcome measures. The analysis was conducted as a within-trial analysis with a 12-month time horizon consistent with the follow-up period of the clinical trial. To adjust for a priori selected covariates, generalised linear models with a gamma family were used to estimate incremental costs and quality-adjusted life years. Furthermore, both deterministic and probabilistic sensitivity analyses were conducted. Results showed that costs associated with primary health care were higher, whereas secondary health care costs were lower, for the intervention group when compared with the control group. When adjusting for covariates, the intervention was less costly, and there was no significant difference in effect between the 2 groups. Sensitivity analyses showed that results were sensitive to uncertainty. In conclusion, the multifaceted implementation strategy was cost saving when compared with the usual strategy for implementing LBP clinical practice guidelines in general practice. Furthermore, there was no significant difference in effect, and the estimate was sensitive to uncertainty.
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
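The error-propagation idea described above can be illustrated with a plain Monte Carlo version; the paper uses adaptive sparse grids precisely because plain sampling becomes expensive in high dimensions. The Arrhenius-type rate expression and the uniform ±0.3 error bounds below are hypothetical stand-ins for the DFT-derived parameters and the maximum-entropy error model:

```python
import math
import random

def tof_model(e1, e2):
    """Toy Arrhenius-type turnover frequency, exponential in the two
    (hypothetical) energy-barrier errors, hence strongly non-linear."""
    return 1.0e3 * math.exp(-(1.0 + e1)) * math.exp(-(0.5 + e2))

def propagate(n=5000, width=0.3, seed=0):
    """Monte Carlo propagation of bounded parameter errors to the output:
    returns the median and a central 95% interval of the TOF."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        e1 = rng.uniform(-width, width)   # a maximum-entropy distribution for
        e2 = rng.uniform(-width, width)   # a bounded error is the uniform one
        samples.append(tof_model(e1, e2))
    samples.sort()
    return samples[n // 2], samples[int(0.025 * n)], samples[int(0.975 * n)]

median, low, high = propagate()
```

Even this two-parameter toy shows how exponential dependence on uncertain barriers spreads the output over a wide multiplicative range, the effect that produces orders-of-magnitude TOF uncertainty in the full model.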
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
Allepuz Palau, Alejandro; Piñeiro Méndez, Pilar; Molina Hinojosa, José Carlos; Jou Ferre, Victoria; Gabarró Julià, Lourdes
2015-03-01
The complex chronic patient program (CCP) of the Alt Penedès aims to improve the coordination of care. The objective was to evaluate the relationship between the costs associated with the program and its results in the form of avoided admissions. A cost-effectiveness analysis from the perspective of the health system was based on a before-after study in the Alt Penedès. Health services utilisation was measured for hospital care (admissions, emergency visits, day-care hospital) and primary care visits. CCP program results were compared with those prior to its implementation. The cost assigned to each resource corresponded to the hospital's CatSalut contract and ICS fees for primary care. A sensitivity analysis using bootstrapping was performed. The intervention was considered cost-effective if the incremental cost-effectiveness ratio (ICER) did not exceed the cost of an admission (€1,742.01). A total of 149 patients were included. Admissions dropped from 212 to 145. The ICER was €1,416.3 (€94,892.9/67). Sensitivity analysis showed that in 95% of cases the cost might vary between €70,847.3 and €121,882.5 and avoided admissions between 30 and 102. In 72.4% of the simulations the program was cost-effective. Sensitivity analysis showed that in most situations the CCP program would be cost-effective, although in a percentage of cases the program could raise the overall cost of care, despite always reducing the number of admissions. Copyright © 2013 Elsevier España, S.L.U. All rights reserved.
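A percentile-bootstrap sensitivity analysis of an ICER, of the kind used in this study, can be sketched as follows. The per-patient incremental costs and avoided admissions here are synthetic, invented only for illustration, not the study's data:

```python
import random

def bootstrap_icer(delta_costs, delta_admissions, n_boot=2000, seed=42):
    """Percentile-bootstrap interval for an incremental cost-effectiveness
    ratio: total incremental cost / total admissions avoided."""
    rng = random.Random(seed)
    n = len(delta_costs)
    ratios = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]   # resample patients
        cost = sum(delta_costs[i] for i in idx)
        effect = sum(delta_admissions[i] for i in idx)
        if effect > 0:                  # keep resamples with a positive effect
            ratios.append(cost / effect)
    ratios.sort()
    return ratios[int(0.025 * len(ratios))], ratios[int(0.975 * len(ratios))]

# Hypothetical per-patient figures: extra programme cost and admissions avoided.
rng = random.Random(0)
costs = [rng.uniform(300, 1000) for _ in range(149)]
avoided = [rng.choice([0, 0, 1, 1, 1]) for _ in range(149)]
low, high = bootstrap_icer(costs, avoided)
```

Resampling patients (rather than assuming a parametric distribution) is what lets the analysis report statements such as "in 72.4% of the simulations the program was cost-effective".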
NASA Astrophysics Data System (ADS)
Partono, Windu; Pardoyo, Bambang; Atmanto, Indrastono Dwi; Azizah, Lisa; Chintami, Rouli Dian
2017-11-01
Fault rupture is one of the dangerous earthquake sources that can cause building failure. Many buildings collapsed in the Yogyakarta (2006) and Pidie (2016) fault-source earthquakes, which had maximum magnitudes of 6.4 Mw. Following the research conducted by the Team for Revision of Seismic Hazard Maps of Indonesia in 2010 and 2016, the Lasem, Demak and Semarang faults are the three earthquake sources closest to Semarang. The ground motion from these three sources should be taken into account for structural design and evaluation. Most tall buildings in Semarang, with a minimum height of 40 meters, were designed and constructed following the 2002 and 2012 Indonesian Seismic Codes. This paper presents the results of a sensitivity analysis, with emphasis on predicting the deformation and inter-story drift of existing tall buildings within the city under fault earthquakes. The analysis was performed by conducting dynamic structural analysis of 8 (eight) tall buildings using modified acceleration time histories. The modified acceleration time histories were calculated for three fault earthquakes with magnitudes from 6 Mw to 7 Mw; they were used because recorded time history data for these three fault sources are inadequate. The sensitivity of a building to earthquakes can be assessed by comparing the surface response spectra calculated using the seismic code with the surface response spectra calculated from the acceleration time histories of a specific earthquake event. If the code-based surface response spectra exceed those calculated from the acceleration time histories, the structure should be stable enough to resist the earthquake force.
Abstraction and model evaluation in category learning.
Vanpaemel, Wolf; Storms, Gert
2010-05-01
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction emerges under the broader view of abstraction the VAM provides. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation relies not only on the maximum likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
Development of a noise annoyance sensitivity scale
NASA Technical Reports Server (NTRS)
Bregman, H. L.; Pearson, R. G.
1972-01-01
Examining the problem of noise pollution from the psychological rather than the engineering view, a test of human sensitivity to noise was developed against the criterion of noise annoyance. Test development evolved from a previous study in which biographical, attitudinal, and personality data were collected on a sample of 166 subjects drawn from the adult community of Raleigh. Analysis revealed that only a small subset of the data collected was predictive of noise annoyance. Item analysis yielded 74 predictive items that composed the preliminary noise sensitivity test. This was administered to a sample of 80 adults who later rated the annoyance value of six sounds (equated in terms of peak sound pressure level) presented in a simulated home living-room environment. A predictive model involving 20 test items was developed using multiple regression techniques, and an item weighting scheme was evaluated.
Goal-oriented sensitivity analysis for lattice kinetic Monte Carlo simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arampatzis, Georgios, E-mail: garab@math.uoc.gr; Department of Mathematics and Statistics, University of Massachusetts, Amherst, Massachusetts 01003; Katsoulakis, Markos A., E-mail: markos@math.umass.edu
2014-03-28
In this paper we propose a new class of coupling methods for the sensitivity analysis of high dimensional stochastic systems and in particular for lattice Kinetic Monte Carlo (KMC). Sensitivity analysis for stochastic systems is typically based on approximating continuous derivatives with respect to model parameters by the mean value of samples from a finite difference scheme. Instead of using independent samples, the proposed algorithm reduces the variance of the estimator by developing a strongly correlated ("coupled") stochastic process for both the perturbed and unperturbed stochastic processes, defined in a common state space. The novelty of our construction is that the new coupled process depends on the targeted observables, e.g., coverage, Hamiltonian, spatial correlations, surface roughness, etc.; hence we refer to the proposed method as goal-oriented sensitivity analysis. In particular, the rates of the coupled Continuous Time Markov Chain are obtained as solutions to a goal-oriented optimization problem, depending on the observable of interest, by considering the minimization functional of the corresponding variance. We show that this functional can be used as a diagnostic tool for the design and evaluation of different classes of couplings. Furthermore, the resulting KMC sensitivity algorithm has an easy implementation that is based on the Bortz-Kalos-Lebowitz algorithm's philosophy, where events are divided in classes depending on level sets of the observable of interest. Finally, we demonstrate in several examples, including adsorption, desorption, and diffusion Kinetic Monte Carlo, that for the same confidence interval and observable, the proposed goal-oriented algorithm can be two orders of magnitude faster than existing coupling algorithms for spatial KMC such as the Common Random Number approach. We also provide a complete implementation of the proposed sensitivity analysis algorithms, including various spatial KMC examples, as supplementary MATLAB source code.
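The coupling idea that the paper builds on, and that the Common Random Number baseline embodies, can be illustrated with a scalar toy model: the two perturbed processes share the same underlying randomness, so their difference has far lower variance than with independent samples. This is a hedged sketch using a simple exponential random variable, not a lattice KMC simulation:

```python
import math
import random

def coupled_sensitivity(theta, h=0.01, n=4000, seed=0):
    """Finite-difference estimate of d/dtheta E[X] for X ~ Exp(rate=theta),
    coupling the two perturbed estimators through common random numbers."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        u = rng.random()                           # one shared uniform per pair
        x_plus = -math.log(1.0 - u) / (theta + h)  # inverse-CDF sampling
        x_minus = -math.log(1.0 - u) / (theta - h)
        acc += (x_plus - x_minus) / (2.0 * h)
    return acc / n

# E[X] = 1/theta, so the exact derivative at theta = 1 is -1.
estimate = coupled_sensitivity(1.0)
```

With independent draws for the two rates, the finite difference divides an O(1) noise term by 2h and the estimator variance explodes as h shrinks; with the shared uniform, the noise largely cancels in the subtraction. The goal-oriented couplings of the paper go further by tailoring the joint process to the observable of interest.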
Corte Rodríguez, M; Álvarez-Fernández García, R; Blanco, E; Bettmer, J; Montes-Bayón, M
2017-11-07
One of the main limitations of Pt-based therapy in cancer is the development of drug resistance, which can be accompanied by a significant reduction of the intracellular platinum concentration. Thus, intracellular Pt concentration could be considered a biomarker of cisplatin resistance. In this work, an alternative method to address intracellular Pt concentration in individual cells is explored to permit the evaluation of different cell models and alternative therapies in a relatively fast way. For this aim, total Pt analysis in single cells has been implemented using a total consumption nebulizer coupled to inductively coupled plasma mass spectrometric detection (ICP-MS). The efficiency of the proposed device was evaluated in combination with flow cytometry and turned out to be around 25% (cells entering the ICP-MS from the cells in suspension). Quantitative uptake studies of a nontoxic Tb-containing compound by individual cells were conducted and the results compared to those obtained by bulk analysis of the same cells. Both sets of data were statistically comparable. Thus, final application of the developed methodology to the comparative uptake of Pt species in cisplatin-resistant and -sensitive cell lines (A2780cis and A2780) was conducted. The results revealed the potential of this analytical strategy to differentiate between cell lines of differing sensitivity to the drug, which might be of high medical interest.
Barros Silva, Gyl Eanes; Costa, Roberto Silva; Ravinal, Roberto Cuan; Saraiva e Silva, Jucélia; Dantas, Marcio; Coimbra, Terezila Machado
2010-03-01
To demonstrate that the evaluation of erythrocyte dysmorphism by light microscopy with lowering of the condenser lens (LMLC) is useful to identify patients with haematuria of glomerular or non-glomerular origin, a comparative double-blind study between phase contrast microscopy (PCM) and LMLC is reported to evaluate the efficacy of these techniques. Urine samples of 39 patients followed up for 9 months were analyzed and classified as glomerular or non-glomerular haematuria. The microscopic techniques were compared using receiver-operator curve (ROC) analysis and the area under the curve (AUC). Reproducibility was assessed by the coefficient of variation (CV). Specific cut-offs were set for each method according to their best rates of specificity and sensitivity: a 30% cut-off for PCM yielded 95% sensitivity and 100% specificity, and a 40% cut-off for standard LMLC yielded 90% sensitivity and 100% specificity. In the ROC analysis, the AUC for PCM was 0.99 and the AUC for LMLC was 0.96. The CV was very similar in the glomerular haematuria group for PCM (35%) and LMLC (35.3%). LMLC proved to be effective in contributing to the direction of the investigation of haematuria, toward the nephrological or urological side. This method can substitute for PCM when that equipment is not available.
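The AUC reported from an ROC analysis equals the Mann-Whitney probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch; the dysmorphic-erythrocyte percentages below are hypothetical illustration values, not the study's data:

```python
def auc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a random positive outscores a random negative
    (ties counted as half)."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical dysmorphic-cell percentages for glomerular (positive)
# and non-glomerular (negative) haematuria samples.
glomerular = [45, 60, 38, 72, 55]
non_glomerular = [10, 25, 18, 38]
area = auc(glomerular, non_glomerular)
```

An AUC near 1 (as the 0.99 and 0.96 reported here) means almost every glomerular sample shows a higher dysmorphism percentage than almost every non-glomerular one, whatever single cut-off is chosen.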
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uchida, M.; Ohta, Y.; Nakamura, N.
1995-08-01
Positron annihilation (PA) lineshape analysis is sensitive to microstructural defects such as vacancies and dislocations. The authors are developing a portable system and applying this technique to nuclear power plant material evaluations: fatigue damage in type 316 stainless steel and SA508 low alloy steel, and thermal embrittlement in duplex stainless steel. The PA technique was found to be sensitive in the early fatigue life (up to 10%), but showed little sensitivity for later stages of the fatigue life in both type 316 stainless steel and SA508 ferritic steel. Type 316 steel showed a higher PA sensitivity than SA508 since the initial SA508 microstructure already contained a high dislocation density in the as-received state. The PA parameter increased as a function of aging time in CF8M samples aged at 350 C and 400 C, but did not change much in CF8 samples.
Intraoperative analysis of sentinel lymph nodes by imprint cytology for cancer of the breast.
Shiver, Stephen A; Creager, Andrew J; Geisinger, Kim; Perrier, Nancy D; Shen, Perry; Levine, Edward A
2002-11-01
The utilization of lymphatic mapping techniques for breast carcinoma has made intraoperative evaluation of sentinel lymph nodes (SLN) attractive, because axillary lymph node dissection can be performed during the initial surgery if the SLN is positive. The optimal technique for rapid SLN assessment has not been determined. Both frozen sectioning and imprint cytology are used for rapid intraoperative SLN evaluation. A retrospective review of the intraoperative imprint cytology results of 133 SLN mapping procedures from 132 breast carcinoma patients was performed. SLN were evaluated intraoperatively by bisecting the lymph node and making imprints of each cut surface. Imprints were stained with hematoxylin and eosin (H&E) and Diff-Quik. Permanent sections were evaluated with up to four H&E-stained levels and cytokeratin immunohistochemistry. Imprint cytology results were compared with final histologic results. Sensitivity and specificity of imprint cytology were 56% and 100%, respectively, producing a 100% positive predictive value and 88% negative predictive value. Imprint cytology was significantly more sensitive for macrometastases than for micrometastases (87% vs. 22%; P = 0.00007). Of 13 total false negatives, 11 were found to be due to sampling error and 2 to errors in intraoperative interpretation. Both intraoperative interpretation errors involved a diagnosis of lobular breast carcinoma. The sensitivity and specificity of imprint cytology are similar to those of frozen section evaluation. Imprint cytology is therefore a viable alternative to frozen sectioning when intraoperative evaluation is required. If SLN micrometastasis is used to determine the need for further lymphadenectomy, more sensitive intraoperative methods will be needed to avoid a second operation.
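All of the accuracy figures quoted above derive from a single 2x2 confusion table. A minimal sketch; the cell counts below are hypothetical, chosen only to roughly reproduce the reported values (13 false negatives, about 56% sensitivity, 100% specificity, 88% NPV), not taken from the study's raw table:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard accuracy measures from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp) if (tp + fp) else None,
        "npv": tn / (tn + fn),
    }

# Illustrative counts for 133 procedures: 30 node-positive, 103 node-negative.
m = diagnostic_metrics(tp=17, fp=0, fn=13, tn=103)
```

Note the asymmetry the abstract highlights: with zero false positives the PPV is perfect regardless of sensitivity, while the 13 sampling- and interpretation-related false negatives pull the sensitivity down to the reported range.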
Methods of recording and analysing cough sounds.
Subburaj, S; Parvez, L; Rajagopalan, T G
1996-01-01
A computerized system has been developed for the acquisition and multi-dimensional analysis of cough sounds. The system consists of a PC-AT486 computer with an ADC board having 12-bit resolution. The audio cough sound is acquired in the computer using a sensitive miniature microphone at a sampling rate of 8 kHz and simultaneously recorded in real time using a digital audio tape recorder, which also serves as a backup. Analysis of the cough sound is done in the time and frequency domains using the digitized data, which provide numerical values for key parameters such as cough counts, bouts, their intensity and latency. In addition, the duration of each event and the cough patterns provide a unique tool that allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what is perhaps the first integrated approach for the objective evaluation of cough.
DeltaSA tool for source apportionment benchmarking, description and sensitivity analysis
NASA Astrophysics Data System (ADS)
Pernigotti, D.; Belis, C. A.
2018-05-01
DeltaSA is an R package and a Java on-line tool developed at the EC Joint Research Centre to assist and benchmark source apportionment applications. Its key functionalities support two critical tasks in this kind of study: the assignment of a factor to a source in factor-analytical models (source identification) and the evaluation of model performance. The source identification is based on the similarity between a given factor and source chemical profiles from public databases. The model performance evaluation is based on statistical indicators used to compare model output with reference values generated in intercomparison exercises. The reference values are calculated as the ensemble average of the results reported by participants that have passed a set of testing criteria based on chemical profile and time series similarity. In this study, a sensitivity analysis of the model performance criteria is accomplished using the results of a synthetic dataset where "a priori" references are available. The consensus-modulated standard deviation (p_unc) gives the best choice for the model performance evaluation when a conservative approach is adopted.
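Source identification by profile similarity can be sketched as follows. Cosine similarity is used here as one simple metric, not necessarily the indicator DeltaSA implements, and the species profiles (organic carbon, elemental carbon, potassium, levoglucosan fractions) are invented for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two chemical profiles."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_source(factor, library):
    """Name of the library source profile most similar to the model factor."""
    return max(library, key=lambda name: cosine(factor, library[name]))

# Hypothetical species profiles (fractions of OC, EC, K, levoglucosan).
library = {
    "traffic":      [0.50, 0.40, 0.05, 0.05],
    "wood burning": [0.45, 0.10, 0.10, 0.35],
    "dust":         [0.10, 0.05, 0.80, 0.05],
}
factor = [0.48, 0.12, 0.08, 0.32]
best = identify_source(factor, library)
```

Matching a factor against a public profile database in this way is what turns an anonymous factor-analytical output into a named source; the benchmarking side of the tool then scores that output against ensemble reference values.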
Maia, Zuinara; Lírio, Monique; Mistro, Sóstenes; Mendes, Carlos Maurício Cardeal; Mehta, Sanjay R.; Badaro, Roberto
2012-01-01
Background The rK39 recombinant protein is derived from a specific antigen produced by the Leishmania donovani complex, and has been used in the last two decades for the serodiagnosis of visceral leishmaniasis. We present here a systematic review and meta-analysis of studies evaluating serologic assays to diagnose visceral leishmaniasis to determine the accuracy of rK39 antigen in comparison to the use of other antigen preparations. Methodology/Principal Findings A systematic review with meta-analysis of the literature was performed to compare the rK39 strip-test and ELISA formats against serological tests using promastigote antigens derived from whole or soluble parasites for the Direct Agglutination Test (DAT), Indirect Immunofluorescence test (IFAT) and ELISA with a promastigote antigen preparation (p-ELISA). Gold standard diagnosis was defined by the demonstration of amastigotes on hematological specimens. A database search was performed on Medline, Lilacs, Scopus, ISI Web of Science, and the Cochrane Library. Quality of data was assessed using the QUADAS questionnaire. A search of the electronic databases found 352 papers of which only 14 fulfilled the selection criteria. Three evaluated the rK39 ELISA, while 13 evaluated the rK39 immunochromatographic strip test. The summarized sensitivity for the rK39-ELISA was 92%, followed by IFAT 88% and p-ELISA 87%. The summarized specificity for the three diagnostic tests was 81%, 90%, and 77%. Studies comparing the rK39 strip test with DAT found a similar sensitivity of 94%, although the DAT had a slightly higher specificity. The rK39 strip test was more sensitive and specific than the IFAT and p-ELISA. We did not detect any difference in the sensitivity and specificity between strips produced by different manufacturers. Conclusions The rK39 protein used either in a strip test or in an ELISA, and the DAT, are the best choices for implementation of a rapid, easy and efficient test for serodiagnosis of VL. PMID:22303488
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harding, R., E-mail: ruth.harding2@wales.nhs.uk; Trnková, P.; Lomax, A. J.
Purpose: Base of skull meningioma can be treated with both intensity modulated radiation therapy (IMRT) and spot scanned proton therapy (PT). One of the main benefits of PT is better sparing of organs at risk, but due to the physical and dosimetric characteristics of protons, spot scanned PT can be more sensitive to the uncertainties encountered in the treatment process compared with photon treatment. Therefore, robustness analysis should be part of a comprehensive comparison between these two treatment methods in order to quantify and understand the sensitivity of the treatment techniques to uncertainties. The aim of this work was to benchmark a spot scanning treatment planning system for planning of base of skull meningioma and to compare the created plans and analyze their robustness to setup errors against the IMRT technique. Methods: Plans were produced for three base of skull meningioma cases: IMRT planned with a commercial TPS [Monaco (Elekta AB, Sweden)]; single field uniform dose (SFUD) spot scanning PT produced with an in-house TPS (PSI-plan); and SFUD spot scanning PT plan created with a commercial TPS [XiO (Elekta AB, Sweden)]. A tool for evaluating robustness to random setup errors was created and, for each plan, both a dosimetric evaluation and a robustness analysis to setup errors were performed. Results: It was possible to create clinically acceptable treatment plans for spot scanning proton therapy of meningioma with a commercially available TPS. However, since each treatment planning system uses different methods, this comparison showed different dosimetric results as well as different sensitivities to setup uncertainties. The results confirmed the necessity of an analysis tool for assessing plan robustness to provide a fair comparison of photon and proton plans. Conclusions: Robustness analysis is a critical part of plan evaluation when comparing IMRT plans with spot scanned proton therapy plans.
Quantitative image analysis of immunohistochemical stains using a CMYK color model
Pham, Nhu-An; Morrison, Andrew; Schwock, Joerg; Aviel-Ronen, Sarit; Iakovlev, Vladimir; Tsao, Ming-Sound; Ho, James; Hedley, David W
2007-01-01
Background Computer image analysis techniques have decreased the effects of observer bias, and increased the sensitivity and throughput of immunohistochemistry (IHC) as a tissue-based procedure for the evaluation of diseases. Methods We adapted a Cyan/Magenta/Yellow/Key (CMYK) model for automated computer image analysis to quantify IHC stains in hematoxylin counterstained histological sections. Results The spectral characteristics of the chromogens AEC, DAB and NovaRed as well as the counterstain hematoxylin were first determined using CMYK, Red/Green/Blue (RGB), normalized RGB and Hue/Saturation/Lightness (HSL) color models. The contrast of chromogen intensities, both on a 0–255 scale (24-bit image file) and relative to the hematoxylin counterstain, was greatest using the Yellow channel of a CMYK color model, suggesting an improved sensitivity for IHC evaluation compared to other color models. An increase in activated STAT3 levels due to growth factor stimulation, quantified using the Yellow channel image analysis, was associated with an increase detected by Western blotting. Two clinical image data sets were used to compare the Yellow channel automated method with observer-dependent methods. First, a quantification of DAB-labeled carbonic anhydrase IX hypoxia marker in 414 sections obtained from 138 biopsies of cervical carcinoma showed strong association between Yellow channel and positive color selection results. Second, a linear relationship was also demonstrated between Yellow intensity and visual scoring for NovaRed-labeled epidermal growth factor receptor in 256 non-small cell lung cancer biopsies. Conclusion The Yellow channel image analysis method based on a CMYK color model is independent of observer biases for threshold and positive color selection, applicable to different chromogens, tolerant of hematoxylin, sensitive to small changes in IHC intensity and is applicable to simple automation procedures.
These characteristics are advantageous for both basic and clinical research in an unbiased, reproducible and high-throughput evaluation of IHC intensity. PMID:17326824
Parametric sensitivity analysis of leachate transport simulations at landfills.
Bou-Zeid, E; El-Fadel, M
2004-01-01
This paper presents a case study in simulating leachate generation and transport at a 2000 ton/day landfill facility and assesses leachate migration away from the landfill in order to control associated environmental impacts, particularly on groundwater wells down gradient of the site. The site offers unique characteristics in that it is a former quarry converted to a landfill and is planned to have refuse depths that could reach 100 m, making it one of the deepest in the world. Leachate quantity and potential percolation into the subsurface are estimated using the Hydrologic Evaluation of Landfill Performance (HELP) model. A three-dimensional subsurface model (PORFLOW) was adopted to simulate ground water flow and contaminant transport away from the site. A comprehensive sensitivity analysis to leachate transport control parameters was also conducted. Sensitivity analysis suggests that changes in partition coefficient, source strength, aquifer hydraulic conductivity, and dispersivity have the most significant impact on model output indicating that these parameters should be carefully selected when similar modeling studies are performed. Copyright 2004 Elsevier Ltd.
Groff, Shannon C.; Loftin, Cynthia S.; Drummond, Frank; Bushmann, Sara; McGill, Brian J.
2016-01-01
Non-native honeybees historically have been managed for crop pollination; however, recent population declines draw attention to pollination services provided by native bees. We applied the InVEST Crop Pollination model, developed to predict native bee abundance from habitat resources, in Maine's wild blueberry crop landscape. We evaluated model performance with parameters informed by four approaches: 1) expert opinion; 2) sensitivity analysis; 3) sensitivity-analysis-informed model optimization; and 4) simulated annealing (uninformed) model optimization. Uninformed optimization improved model performance by 29% compared to the expert-opinion-informed model, while sensitivity-analysis-informed optimization improved model performance by 54%. This suggests that expert opinion may not yield the best parameter values for the InVEST model. The proportion of deciduous/mixed forest within 2000 m of a blueberry field also reliably predicted native bee abundance in blueberry fields; however, the InVEST model provides an efficient tool to estimate bee abundance beyond the field perimeter.
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
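As a hedged illustration of the threshold effect described in this abstract (the study counts below are hypothetical, not data from the article), the sketch logit-transforms per-study sensitivity and specificity and shows the inverse relationship that motivates bivariate random-effects models:

```python
import math

# Hypothetical 2x2 counts (tp, fn, tn, fp) for five studies whose
# positivity thresholds become progressively stricter.
studies = [(45, 5, 60, 40), (40, 10, 70, 30), (35, 15, 80, 20),
           (30, 20, 88, 12), (25, 25, 94, 6)]

def logit(p):
    return math.log(p / (1 - p))

pairs = []
for tp, fn, tn, fp in studies:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    pairs.append((logit(sens), logit(spec)))

# Pearson correlation between logit(sens) and logit(spec): a strongly
# negative value reflects the threshold effect that a bivariate
# random-effects model is designed to capture.
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
cov = sum((x - mx) * (y - my) for x, y in pairs) / n
sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
corr = cov / (sx * sy)
print(f"corr(logit sens, logit spec) = {corr:.2f}")
```

Fitting the actual bivariate model requires a mixed-effects solver; the point here is only why the pair must be analyzed jointly rather than pooled as two independent proportions.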
Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty
2015-11-01
In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
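The diagnostic performance figures quoted for FLOCK follow directly from the reported case counts; a minimal sketch of that arithmetic, using the 56/75 SM and 17/124 non-SM detection counts from the abstract:

```python
# Diagnostic performance of FLOCK-identified mast cell populations,
# recomputed from the case counts reported in the abstract.
tp = 56          # SM cases with a discrete mast cell population detected
fn = 75 - tp     # SM cases missed
fp = 17          # non-SM cases with a population detected
tn = 124 - fp    # non-SM cases correctly negative

sensitivity = tp / (tp + fn)   # 56/75   = 0.747 (reported as 75%)
specificity = tn / (tn + fp)   # 107/124 = 0.863 (reported as 86%)
ppv = tp / (tp + fp)           # 56/73   = 0.767 (reported as 76%)
npv = tn / (tn + fn)           # 107/126 = 0.849 (reported as 85%)

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"ppv={ppv:.2f} npv={npv:.2f}")
```

Note that PPV and NPV depend on the 75/124 case mix of this cohort; in a population with a different SM prevalence the same sensitivity and specificity would yield different predictive values.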
Tsao, Chia-Wen; Yang, Zhi-Jie
2015-10-14
Desorption/ionization on silicon (DIOS) is a high-performance matrix-free mass spectrometry (MS) analysis method that involves using silicon nanostructures as a matrix for MS desorption/ionization. In this study, gold nanoparticles grafted onto a nanostructured silicon (AuNPs-nSi) surface were demonstrated as a DIOS-MS analysis approach with high sensitivity and high detection specificity for glucose detection. A glucose sample deposited on the AuNPs-nSi surface was directly catalyzed to negatively charged gluconic acid molecules on a single AuNPs-nSi chip for MS analysis. The AuNPs-nSi surface was fabricated using two electroless deposition steps and one electroless etching step. The effects of the electroless fabrication parameters on the glucose detection efficiency were evaluated. Practical application of AuNPs-nSi MS glucose analysis in urine samples was also demonstrated in this study.
Dynamic analysis of process reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadle, L.J.; Lawson, L.O.; Noel, S.D.
1995-06-01
The approach and methodology of conducting a dynamic analysis is presented in this poster session in order to describe how this type of analysis can be used to evaluate the operation and control of process reactors. Dynamic analysis of the PyGas{trademark} gasification process is used to illustrate the utility of this approach. PyGas{trademark} is the gasifier being developed for the Gasification Product Improvement Facility (GPIF) by Jacobs-Siffine Engineering and Riley Stoker. In the first step of the analysis, process models are used to calculate the steady-state conditions and associated sensitivities for the process. For the PyGas{trademark} gasifier, the process models are non-linear mechanistic models of the jetting fluidized-bed pyrolyzer and the fixed-bed gasifier. These process sensitivities are key input, in the form of gain parameters or transfer functions, to the dynamic engineering models.
A techno-economic assessment of grid connected photovoltaic system for hospital building in Malaysia
NASA Astrophysics Data System (ADS)
Mat Isa, Normazlina; Tan, Chee Wei; Yatim, AHM
2017-07-01
Conventionally, electricity in hospital buildings is supplied by the utility grid, which uses a fuel mix including coal and gas. Owing to advances in renewable technology, many buildings are moving toward installing their own PV panels alongside the grid to exploit the advantages of renewable energy. This paper presents an analysis of a grid-connected photovoltaic (GCPV) system for a hospital building in Malaysia. The discussion emphasizes an economic analysis based on the Levelized Cost of Energy (LCOE) and Total Net Present Cost (TNPC) with respect to the annual interest rate. The analysis is performed using the Hybrid Optimization Model for Electric Renewables (HOMER) software, which provides optimization and sensitivity analysis results. The optimization results, followed by the sensitivity analysis, are also discussed in this article so that the impact of the grid-connected PV system can be evaluated. In addition, the benefit of the Net Metering (NeM) mechanism is discussed.
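HOMER's economic ranking rests on discounted cash flows; the sketch below reproduces the standard LCOE and TNPC definitions with illustrative numbers (not figures from this study) to show how both metrics respond to the annual interest rate that the sensitivity analysis varies:

```python
def npv_factor(rate, years):
    # Sum of discount factors for a uniform annual amount.
    return sum(1 / (1 + rate) ** t for t in range(1, years + 1))

def tnpc(capital, annual_cost, rate, years):
    # Total Net Present Cost: capital outlay plus discounted recurring costs.
    return capital + annual_cost * npv_factor(rate, years)

def lcoe(capital, annual_cost, annual_energy_kwh, rate, years):
    # Levelized Cost of Energy: discounted lifetime cost divided by
    # discounted lifetime energy production.
    return tnpc(capital, annual_cost, rate, years) / (
        annual_energy_kwh * npv_factor(rate, years))

# Illustrative PV system: $150k capital, $2k/yr O&M, 180 MWh/yr, 25-year life.
for rate in (0.03, 0.06, 0.09):   # sensitivity to the annual interest rate
    print(f"rate={rate:.0%}  "
          f"TNPC=${tnpc(150_000, 2_000, rate, 25):,.0f}  "
          f"LCOE=${lcoe(150_000, 2_000, 180_000, rate, 25):.3f}/kWh")
```

Because the capital cost is paid up front while the energy arrives over 25 years, a higher interest rate discounts the energy more heavily and pushes the LCOE up, even as the TNPC falls.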
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. 
For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in these complex, high dimensional eco-system models, represented by the RWMS model, the dynamics of the systems can act in a non-linear manner. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, the non-linearities, and the non-monotonicities of the model increase. Methods from data mining such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools that can be used in global sensitivity analysis in these high dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures estimated by these global sensitivity analysis tools will be demonstrated using the RWMS model.
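A minimal sketch of the non-linearity problem described above (the toy model and all numbers are illustrative, not the RWMS model): a correlation-based importance measure is blind to a non-monotonic input, while a variance-based measure in the spirit of FAST recovers it:

```python
import random

random.seed(0)

# Toy model: y depends on x1 linearly and on x2 non-monotonically.
def model(x1, x2):
    return x1 + 8 * (x2 - 0.5) ** 2

n = 20_000
x1s = [random.random() for _ in range(n)]
x2s = [random.random() for _ in range(n)]
ys = [model(a, b) for a, b in zip(x1s, x2s)]

def mean(v):
    return sum(v) / len(v)

def pearson(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def corr_ratio(xs, ys, bins=20):
    # Var(E[Y|X]) / Var(Y): share of output variance explained by X alone,
    # estimated by binning X. Unlike Pearson, it does not assume
    # linearity or monotonicity.
    groups = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        groups[min(int(x * bins), bins - 1)].append(y)
    my = mean(ys)
    between = sum(len(g) * (mean(g) - my) ** 2 for g in groups if g)
    total = sum((y - my) ** 2 for y in ys)
    return between / total

# Pearson sees x1 but misses x2; the variance-based measure sees both.
print(f"x1: pearson^2={pearson(x1s, ys)**2:.2f}  ratio={corr_ratio(x1s, ys):.2f}")
print(f"x2: pearson^2={pearson(x2s, ys)**2:.2f}  ratio={corr_ratio(x2s, ys):.2f}")
```

This is the same failure mode that motivates MARS and FAST in the abstract: as models become non-linear and non-monotonic, linear regression-based sensitivity measures systematically understate the importance of inputs like x2.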
Theoretical considerations of some nonlinear aspects of hypersonic panel flutter
NASA Technical Reports Server (NTRS)
Mcintosh, S. C., Jr.
1974-01-01
A research project to analyze the effects of hypersonic nonlinear aerodynamic loading on panel flutter is reported. The test equipment and procedures for conducting the tests are explained. The effects of aerodynamic nonlinearities on stability were evaluated by determining constant-initial-energy amplitude-sensitive stability boundaries and comparing them with the corresponding linear stability boundaries. An attempt to develop an alternative method of analysis for systems in which amplitude-sensitive instability is possible is presented.
Loomba, Rohit S; Shah, Parinda H; Nijhawan, Karan; Aggarwal, Saurabh; Arora, Rohit
2015-03-01
Increased cardiothoracic ratio noted on chest radiographs often prompts concern and further evaluation with additional imaging. This study pools available data assessing the utility of cardiothoracic ratio in predicting left ventricular dilation. A systematic review of the literature was conducted to identify studies comparing cardiothoracic ratio by chest x-ray to left ventricular dilation by echocardiography. Electronic databases were used to identify studies which were then assessed for quality and bias, with those with adequate quality and minimal bias ultimately being included in the pooled analysis. The pooled data were used to determine the sensitivity, specificity, positive predictive value and negative predictive value of cardiomegaly in predicting left ventricular dilation. A total of six studies consisting of 466 patients were included in this analysis. Cardiothoracic ratio had 83.3% sensitivity, 45.4% specificity, 43.5% positive predictive value and 82.7% negative predictive value. When a secondary analysis was conducted with a pediatric study excluded, a total of five studies consisting of 371 patients were included. Cardiothoracic ratio had 86.2% sensitivity, 25.2% specificity, 42.5% positive predictive value and 74.0% negative predictive value. Cardiothoracic ratio as determined by chest radiograph is sensitive but not specific for identifying left ventricular dilation. Cardiothoracic ratio also has a strong negative predictive value for identifying left ventricular dilation.
Nicoś, M; Krawczyk, P; Wojas-Krawczyk, K; Bożyk, A; Jarosz, B; Sawicki, M; Trojanowski, T; Milanowski, J
2017-12-01
The RT-PCR technique has shown promise as a pre-screening method for the detection of mRNA containing abnormal ALK sequences, but its sensitivity and specificity remain debatable. Previously, we determined the incidence of ALK rearrangement in CNS metastases of NSCLC using IHC and FISH methods. We evaluated ALK gene rearrangement using a two-step RT-PCR method with the EML4-ALK Fusion Gene Detection Kit (Entrogen, USA). The studied group included 145 patients (45 females, 100 males) with CNS metastases of NSCLC and was heterogeneous in terms of histology and smoking status. 21% of CNS metastases of NSCLC (30/145) showed the presence of mRNA containing abnormal ALK sequences. FISH and IHC tests confirmed the presence of ALK gene rearrangement and expression of abnormal ALK protein in seven patients with a positive RT-PCR result (4.8% of all patients, 20% of RT-PCR-positive patients). Compared with FISH analysis, the RT-PCR method achieved 100% sensitivity but only 82.7% specificity. The IHC method compared with FISH showed 100% sensitivity and 97.8% specificity. In comparison to IHC, RT-PCR showed identical sensitivity with a high number of false-positive results. The utility of the RT-PCR technique in screening for ALK abnormalities and in qualifying patients for molecularly targeted therapies needs further validation.
Gervais, Debra A.; Hartman, Rebecca I.; Harisinghani, Mukesh G.; Feldman, Adam S.; Mueller, Peter R.; Gazelle, G. Scott
2010-01-01
Purpose: To evaluate the effectiveness, cost, and cost-effectiveness of using renal mass biopsy to guide treatment decisions for small incidentally detected renal tumors. Materials and Methods: A decision-analytic Markov model was developed to estimate life expectancy and lifetime costs for patients with small (≤4-cm) renal tumors. Two strategies were compared: renal mass biopsy to triage patients to surgery or imaging surveillance and empiric nephron-sparing surgery. The model incorporated biopsy performance, the probability of track seeding with malignant cells, the prevalence and growth of benign and malignant tumors, treatment effectiveness and costs, and patient outcomes. An incremental cost-effectiveness analysis was performed to identify strategy preference under a willingness-to-pay threshold of $75 000 per quality-adjusted life-year (QALY). Effects of changes in key parameters on strategy preference were evaluated in sensitivity analysis. Results: Under base-case assumptions, the biopsy strategy yielded a minimally greater quality-adjusted life expectancy (4 days) than did empiric surgery at a lower lifetime cost ($3466), dominating surgery from a cost-effectiveness perspective. Over the majority of parameter ranges tested in one-way sensitivity analysis, the biopsy strategy dominated surgery or was cost-effective relative to surgery based on a $75 000-per-QALY willingness-to-pay threshold. In two-way sensitivity analysis, surgery yielded greater life expectancy when the prevalence of malignancy and propensity for biopsy-negative cancers to metastasize were both higher than expected or when the sensitivity and specificity of biopsy were both lower than expected. Conclusion: The use of biopsy to guide treatment decisions for small incidentally detected renal tumors is cost-effective and can prevent unnecessary surgery in many cases. © RSNA, 2010 Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.10092013/-/DC1 PMID:20720070
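The decision rule applied in this abstract is the standard incremental cost-effectiveness comparison against a willingness-to-pay threshold; a hedged sketch with illustrative absolute costs (the study reports only the differences: a 4-day quality-adjusted gain, about 0.011 QALY, and a $3,466 saving for biopsy):

```python
WTP = 75_000  # willingness-to-pay threshold, $/QALY, as in the study

def compare(cost_new, qaly_new, cost_ref, qaly_ref, wtp=WTP):
    # Incremental cost-effectiveness of a new strategy vs a reference.
    d_cost, d_qaly = cost_new - cost_ref, qaly_new - qaly_ref
    if d_cost <= 0 and d_qaly >= 0:
        return "new strategy dominates"       # cheaper and at least as effective
    if d_cost >= 0 and d_qaly <= 0:
        return "new strategy dominated"       # costlier and no more effective
    icer = d_cost / d_qaly
    verdict = "cost-effective" if icer <= wtp else "not cost-effective"
    return f"ICER = ${icer:,.0f}/QALY ({verdict})"

# Illustrative values mirroring the base case: biopsy gains ~0.011 QALY
# and saves $3,466 relative to empiric surgery (absolute costs invented).
print(compare(cost_new=96_534, qaly_new=10.011,
              cost_ref=100_000, qaly_ref=10.0))
```

A strategy that is both cheaper and more effective needs no ICER at all, which is exactly the "dominance" claim made for biopsy in the base case; the ICER and threshold only come into play in the sensitivity analyses where the trade-off reverses.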
Longitudinal study of factors affecting taste sense decline in old-old individuals.
Ogawa, T; Uota, M; Ikebe, K; Arai, Y; Kamide, K; Gondo, Y; Masui, Y; Ishizaki, T; Inomata, C; Takeshita, H; Mihara, Y; Hatta, K; Maeda, Y
2017-01-01
The sense of taste plays a pivotal role in the personal assessment of the nutritional value, safety and quality of foods. Although it is commonly recognised that taste sensitivity decreases with age, alterations in that sensitivity over time in an old-old population have not been previously reported. Furthermore, no known studies have utilised comprehensive variables regarding taste changes and related factors for assessments. Here, we report novel findings from a 3-year longitudinal study model aimed at elucidating taste sensitivity decline and its related factors in old-old individuals. We utilised 621 subjects aged 79-81 years who participated in the Septuagenarians, Octogenarians, Nonagenarians Investigation with Centenarians Study for baseline assessments performed in 2011 and 2012, and then conducted follow-up assessments 3 years later in 328 of those. Assessment of general health, an oral examination and determination of taste sensitivity were performed for each. We also evaluated cognitive function using Montreal Cognitive Assessment findings, then excluded from analysis those with a score lower than 20 in order to secure the validity and reliability of the subjects' answers. Contributing variables were selected using univariate analysis, then analysed with multivariate logistic regression analysis. We found that males showed significantly greater declines in taste sensitivity for sweet and sour tastes than females. Additionally, in multivariate analysis, subjects with lower cognitive scores showed a significantly greater decrease in sensitivity to the salty taste. In conclusion, our longitudinal study revealed that gender and cognitive status are major factors affecting taste sensitivity in geriatric individuals. © 2016 John Wiley & Sons Ltd.
Testing alternative ground water models using cross-validation and other methods
Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.
2007-01-01
Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. ?? 2007 National Ground Water Association.
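The information criteria named above reduce to simple formulas once a model's least-squares misfit is known; a hedged sketch with illustrative fit statistics (not the Maggia Valley results) showing how a more complex model must reduce misfit enough to pay for its extra parameters:

```python
import math

# Model discrimination via information criteria, for a least-squares fit
# with n observations, k estimated parameters, and sse = sum of squared
# (weighted) residuals. Lower values indicate the preferred model.
def aicc(n, k, sse):
    aic = n * math.log(sse / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction

def bic(n, k, sse):
    return n * math.log(sse / n) + math.log(n) * k

# Illustrative alternatives differing in how hydraulic conductivity (K)
# is parameterized: (n, k, sse) values are invented for demonstration.
models = {"uniform K":      (50, 3, 12.0),
          "zoned K":        (50, 6, 9.5),
          "interpolated K": (50, 10, 9.0)}
for name, (n, k, sse) in models.items():
    print(f"{name:15s} AICc={aicc(n, k, sse):8.2f}  BIC={bic(n, k, sse):8.2f}")
```

In this toy comparison the zoned model wins under AICc: its misfit reduction outweighs the three extra parameters, while the ten-parameter interpolated model is penalized despite fitting slightly better. BIC penalizes complexity more heavily, which is why the two criteria can disagree near the margin.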
Lu, Xing; Jin, Xin; Yang, Suwei; Xia, Yanfei
2018-03-01
To comprehensively evaluate the associations between the depth of anesthesia and postoperative delirium (POD) or postoperative cognitive dysfunction (POCD). Using the Cochrane evaluation system, the included studies underwent quality assessment. We searched the Cochrane Library, Embase and PubMed databases without language restriction, covering studies published up to August 2017. Following the PRISMA guideline, results associated with POCD and POD were compared separately between low and high bispectral index (BIS) groups under a fixed-effects or random-effects model. The risk ratio (RR) and 95% confidence intervals (95% CIs) were used as the effect sizes for pooling the results. Furthermore, sensitivity analysis was performed to evaluate the stability of the results, and publication bias was assessed for the included studies using Egger's test. In total, 4 high-quality studies were selected for this meta-analysis. The pooled results for POCD showed no significant difference between the low and high BIS groups (RR (95% CI)=0.84 (0.21, 3.45), P>0.05). Sensitivity analysis showed that the pooled results for POCD were not stable (RR (95% CI)=0.41 (0.17, 0.99)-1.88 (1.09, 3.22), P=0.046). Additionally, no significant publication bias for POCD was found (P=0.385). There was no significant correlation between the depth of anesthesia and POCD. Copyright © 2017 Elsevier Inc. All rights reserved.
Genetics and clinical response to warfarin and edoxaban in patients with venous thromboembolism
Vandell, Alexander G; Walker, Joseph; Brown, Karen S; Zhang, George; Lin, Min; Grosso, Michael A; Mercuri, Michele F
2017-01-01
Objective The aim of this study was to investigate whether genetic variants can identify patients with venous thromboembolism (VTE) at an increased risk of bleeding with warfarin. Methods Hokusai-venous thromboembolism (Hokusai VTE), a randomised, multinational, double-blind, non-inferiority trial, evaluated the safety and efficacy of edoxaban versus warfarin in patients with VTE initially treated with heparin. In this subanalysis of Hokusai VTE, patients genotyped for variants in CYP2C9 and VKORC1 genes were divided into three warfarin sensitivity types (normal, sensitive and highly sensitive) based on their genotypes. An exploratory analysis was also conducted comparing normal responders to pooled sensitive responders (ie, sensitive and highly sensitive responders). Results The analysis included 47.7% (3956/8292) of the patients in Hokusai VTE. Among 1978 patients randomised to warfarin, 63.0% (1247) were normal responders, 34.1% (675) were sensitive responders and 2.8% (56) were highly sensitive responders. Compared with normal responders, sensitive and highly sensitive responders had heparin therapy discontinued earlier (p<0.001), had a decreased final weekly warfarin dose (p<0.001), spent more time overanticoagulated (p<0.001) and had an increased bleeding risk with warfarin (sensitive responders HR 1.38 [95% CI 1.11 to 1.71], p=0.0035; highly sensitive responders 1.79 [1.09 to 2.99]; p=0.0252). Conclusion In this study, CYP2C9 and VKORC1 genotypes identified patients with VTE at increased bleeding risk with warfarin. Trial registration number NCT00986154. PMID:28689179
Digital imaging biomarkers feed machine learning for melanoma screening.
Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G
2017-07-01
We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Feyen, Luc; Gorelick, Steven M.
2005-03-01
We propose a framework that combines simulation optimization with Bayesian decision analysis to evaluate the worth of hydraulic conductivity data for optimal groundwater resources management in ecologically sensitive areas. A stochastic simulation optimization management model is employed to plan regionally distributed groundwater pumping while preserving the hydroecological balance in wetland areas. Because predictions made by an aquifer model are uncertain, groundwater supply systems operate below maximum yield. Collecting data from the groundwater system can potentially reduce predictive uncertainty and increase safe water production. The price paid for improvement in water management is the cost of collecting the additional data. Efficient data collection using Bayesian decision analysis proceeds in three stages: (1) The prior analysis determines the optimal pumping scheme and profit from water sales on the basis of known information. (2) The preposterior analysis estimates the optimal measurement locations and evaluates whether each sequential measurement will be cost-effective before it is taken. (3) The posterior analysis then revises the prior optimal pumping scheme and consequent profit, given the new information. Stochastic simulation optimization employing a multiple-realization approach is used to determine the optimal pumping scheme in each of the three stages. The cost of new data must not exceed the expected increase in benefit obtained in optimal groundwater exploitation. An example based on groundwater management practices in Florida aimed at wetland protection showed that the cost of data collection more than paid for itself by enabling a safe and reliable increase in production.
NASA Astrophysics Data System (ADS)
Chen, Dan; Luo, Zhaohui; Webber, Michael; Chen, Jing; Wang, Weiguang
2014-09-01
Emergy theory and method are used to evaluate the contribution of irrigation water, and the process of its utilization, in three agricultural systems. The agricultural systems evaluated in this study were rice, wheat, and oilseed rape productions in an irrigation pumping district of China. A corresponding framework for emergy evaluation and sensitivity analysis methods was proposed. Two new indices, the fraction of irrigation water (FIW) and the irrigation intensity of agriculture (IIA), were developed to depict the contribution of irrigation water. The calculated FIW indicated that irrigation water used for the rice production system (34.7%) contributed more than irrigation water used for the wheat (5.3%) and oilseed rape (11.2%) production systems in a typical dry year. The wheat production, with an IIA of 19.0, had the highest net benefit from irrigation compared to the rice (2.9) and oilseed rape (8.9) productions. The transformities of the systems' products represented different energy efficiencies for the rice (2.50E+05 sej·J-1), wheat (1.66E+05 sej·J-1) and oilseed rape (2.14E+05 sej·J-1) production systems. According to several emergy indices, of the three systems evaluated, the rice system had the greatest level of sustainability. However, all of them were less sustainable than ecological agricultural systems. A sensitivity analysis showed that the emergy inputs of irrigation water and nitrogenous fertilizer were the highest-sensitivity factors influencing the emergy ratios. Best Management Practices, and other agroecological strategies, could be implemented to make further improvements in the sustainability of the three systems.
Bellanger, Martine; Demeneix, Barbara; Grandjean, Philippe; Zoeller, R Thomas; Trasande, Leonardo
2015-04-01
Epidemiological studies and animal models demonstrate that endocrine-disrupting chemicals (EDCs) contribute to cognitive deficits and neurodevelopmental disabilities. The objective was to estimate neurodevelopmental disability and associated costs that can be reasonably attributed to EDC exposure in the European Union. An expert panel applied a weight-of-evidence characterization adapted from the Intergovernmental Panel on Climate Change. Exposure-response relationships and reference levels were evaluated for relevant EDCs, and biomarker data were organized from peer-reviewed studies to represent European exposure and approximate burden of disease. Cost estimation as of 2010 utilized lifetime economic productivity estimates, lifetime cost estimates for autism spectrum disorder, and annual costs for attention-deficit hyperactivity disorder. Setting, Patients and Participants, and Intervention: Cost estimation was carried out from a societal perspective, ie, including direct costs (eg, treatment costs) and indirect costs such as productivity loss. The panel identified a 70-100% probability that polybrominated diphenyl ether and organophosphate exposures contribute to IQ loss in the European population. Polybrominated diphenyl ether exposures were associated with 873,000 (sensitivity analysis, 148,000 to 2.02 million) lost IQ points and 3290 (sensitivity analysis, 3290 to 8080) cases of intellectual disability, at costs of €9.59 billion (sensitivity analysis, €1.58 billion to €22.4 billion). Organophosphate exposures were associated with 13.0 million (sensitivity analysis, 4.24 million to 17.1 million) lost IQ points and 59 300 (sensitivity analysis, 16,500 to 84,400) cases of intellectual disability, at costs of €146 billion (sensitivity analysis, €46.8 billion to €194 billion). 
Autism spectrum disorder causation by multiple EDCs was assigned a 20-39% probability, with 316 (sensitivity analysis, 126-631) attributable cases at a cost of €199 million (sensitivity analysis, €79.7 million to €399 million). Attention-deficit hyperactivity disorder causation by multiple EDCs was assigned a 20-69% probability, with 19,300 to 31,200 attributable cases at a cost of €1.21 billion to €2.86 billion. EDC exposures in Europe contribute substantially to neurobehavioral deficits and disease, with a high probability of >€150 billion costs/year. These results emphasize the advantages of controlling EDC exposure.
Abdalla, G; Fawzi Matuk, R; Venugopal, V; Verde, F; Magnuson, T H; Schweitzer, M A; Steele, K E
2015-08-01
To search the literature for further evidence for the use of magnetic resonance venography (MRV) in the detection of suspected deep vein thrombosis (DVT) and to re-evaluate the accuracy of MRV for this purpose. PubMed, EMBASE, Scopus, Cochrane, and Web of Science were searched. Study quality and the risk of bias were evaluated using the QUADAS-2 tool. A random-effects meta-analysis, including subgroup and sensitivity analyses, was performed. The search yielded 23 observational studies, all from academic centres. Sixteen articles were included in the meta-analysis. The summary estimates for MRV as a non-invasive diagnostic tool revealed a sensitivity of 93% (95% confidence interval [CI]: 89% to 95%) and a specificity of 96% (95% CI: 94% to 97%). The heterogeneity of the studies was high: inconsistency (I2) for sensitivity and specificity was 80.7% and 77.9%, respectively. Further studies investigating the use of MRV in the detection of suspected DVT did not offer further evidence to support the replacement of ultrasound with MRV as the first-line investigation. However, MRV may offer an alternative tool for the detection and diagnosis of DVT in patients for whom ultrasound is inadequate or not feasible (such as obese patients). Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
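The I2 inconsistency figures quoted above measure the share of between-study variance not attributable to chance, derived from Cochran's Q. A minimal sketch with inverse-variance weighting; the study-level estimates and variances below are hypothetical, not the review's data:

```python
# Cochran's Q and the I^2 inconsistency statistic under inverse-variance
# weighting. The study-level estimates and variances are hypothetical.

def i_squared(estimates, variances):
    """Return (Q, I^2 as a percentage) for a set of study estimates."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Example: five hypothetical logit-sensitivities with their variances.
q, i2 = i_squared([2.6, 2.1, 3.0, 1.8, 2.4], [0.04, 0.09, 0.06, 0.05, 0.08])
```

Values of I2 above roughly 75% are conventionally read as high heterogeneity, which is why the review flags its pooled estimates.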
Clinical Evaluation of a Loop-Mediated Amplification Kit for Diagnosis of Imported Malaria
Polley, Spencer D.; González, Iveth J.; Mohamed, Deqa; Daly, Rosemarie; Bowers, Kathy; Watson, Julie; Mewse, Emma; Armstrong, Margaret; Gray, Christen; Perkins, Mark D.; Bell, David; Kanda, Hidetoshi; Tomita, Norihiro; Kubota, Yutaka; Mori, Yasuyoshi; Chiodini, Peter L.; Sutherland, Colin J.
2013-01-01
Background. Diagnosis of malaria relies on parasite detection by microscopy or antigen detection; both fail to detect low-density infections. New tests providing rapid, sensitive diagnosis with minimal need for training would enhance both malaria diagnosis and malaria control activities. We determined the diagnostic accuracy of a new loop-mediated amplification (LAMP) kit in febrile returned travelers. Methods. The kit was evaluated in sequential blood samples from returned travelers sent for pathogen testing to a specialist parasitology laboratory. Microscopy was performed, and then malaria LAMP was performed using Plasmodium genus and Plasmodium falciparum–specific tests in parallel. Nested polymerase chain reaction (PCR) was performed on all samples as the reference standard. Primary outcome measures for diagnostic accuracy were sensitivity and specificity of LAMP results, compared with those of nested PCR. Results. A total of 705 samples were tested in the primary analysis. Sensitivity and specificity were 98.4% and 98.1%, respectively, for the LAMP P. falciparum primers and 97.0% and 99.2%, respectively, for the Plasmodium genus primers. Post hoc repeat PCR analysis of all 15 tests with discrepant results resolved 4 results in favor of LAMP, suggesting that the primary analysis had underestimated diagnostic accuracy. Conclusions. Malaria LAMP had a diagnostic accuracy similar to that of nested PCR, with a greatly reduced time to result, and was superior to expert microscopy. PMID:23633403
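Sensitivity and specificity of the kind reported above come directly from the 2x2 table of index-test results against the reference standard. A sketch with hypothetical counts (not the study's raw data):

```python
# Sensitivity and specificity from a 2x2 table of test results versus a
# reference standard. The counts below are hypothetical illustrations,
# not the study's data.

def diagnostic_accuracy(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # fraction of reference-positives detected
    specificity = tn / (tn + fp)  # fraction of reference-negatives cleared
    return sensitivity, specificity

sens, spec = diagnostic_accuracy(tp=122, fp=11, fn=2, tn=570)
```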
Farberg, Aaron S; Winkelmann, Richard R; Tucker, Natalie; White, Richard; Rigel, Darrell S
2017-09-01
BACKGROUND: Early diagnosis of melanoma is critical to survival. New technologies, such as a multi-spectral digital skin lesion analysis (MSDSLA) device [MelaFind, STRATA Skin Sciences, Horsham, Pennsylvania], may be useful to enhance clinician evaluation of concerning pigmented skin lesions. Previous studies evaluated the effect of only the binary output. OBJECTIVE: The objective of this study was to determine how the decisions dermatologists make regarding pigmented lesion biopsies are impacted by providing both the underlying classifier score (CS) and the associated probability risk provided by multi-spectral digital skin lesion analysis. This outcome was also compared against the improvement reported with the provision of only the binary output. METHODS: Dermatologists attending an educational conference evaluated 50 pigmented lesions (25 melanomas and 25 benign lesions). Participants were asked if they would biopsy the lesion based on clinical images, and were asked this question again after being shown multi-spectral digital skin lesion analysis data that included the probability graphs and classifier score. RESULTS: Data were analyzed from a total of 160 United States board-certified dermatologists. Biopsy sensitivity for melanoma improved from 76 percent following clinical evaluation to 92 percent after quantitative multi-spectral digital skin lesion analysis information was provided (p < 0.0001). Specificity improved from 52 percent to 79 percent (p < 0.0001). The positive predictive value increased from 61 percent to 81 percent (p < 0.01) when the quantitative data were provided. Negative predictive value also increased (68% vs. 91%, p < 0.01), and overall biopsy accuracy was greater with multi-spectral digital skin lesion analysis (64% vs. 86%, p < 0.001). Interrater reliability improved (intraclass correlation 0.466 before, 0.559 after).
CONCLUSION: Incorporating the classifier score and probability data into physician evaluation of pigmented lesions led to both increased sensitivity and specificity, thereby resulting in more accurate biopsy decisions.
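The predictive values above are consistent with Bayes' rule at the 50% melanoma prevalence implied by the 25/25 lesion set; a quick check using the post-device sensitivity (92%) and specificity (79%):

```python
# Positive and negative predictive value from sensitivity, specificity,
# and prevalence via Bayes' rule. Inputs are the post-device figures
# reported above, with the 50% prevalence implied by the 25 melanoma /
# 25 benign lesion set.

def predictive_values(sens, spec, prev):
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

ppv, npv = predictive_values(0.92, 0.79, 0.50)  # ~0.81 and ~0.91
```

Both values round to the 81% PPV and 91% NPV reported in the abstract; note that at a lower real-world melanoma prevalence the PPV would drop accordingly.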
Evaluating desiccation sensitivity of northern red oak acorns using x-ray image analysis
Rosa C. Goodman; Douglass F. Jacobs
2005-01-01
Desiccation of northern red oak (Quercus rubra L.) acorns can have a major influence on seed viability. Recalcitrant behavior of northern red oak acorns was studied to examine the effects of moisture content (MC) on germination and early growth. Because it is rapid and non-destructive, X-ray image analysis was chosen to assess cotyledon damage in...
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V; Petway, Joy R
2017-07-12
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and the concentrations of ON-N, NH₃-N and NO₃-N. Results indicate that the integrated FME-GLUE-based model, with good Nash-Sutcliffe coefficients (0.53-0.69) and correlation coefficients (0.76-0.83), successfully simulates the concentrations of ON-N, NH₃-N and NO₃-N. Moreover, the Arrhenius constant was the only parameter to which model performance was sensitive in the ON-N and NH₃-N simulations. However, the Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive for the ON-N and NO₃-N simulations, as determined by global sensitivity analysis.
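The Nash-Sutcliffe coefficient used above scores model error against the variance of the observations (1 is a perfect fit, 0 no better than the mean). A minimal sketch with hypothetical observed/simulated series, not the WSP data:

```python
# Nash-Sutcliffe efficiency (NSE): 1 - SSE(model) / SS(deviation from
# the observed mean). The observation/simulation pairs are hypothetical.

def nash_sutcliffe(observed, simulated):
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    variability = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / variability

nse = nash_sutcliffe([3.1, 4.0, 5.2, 4.4], [3.0, 4.2, 5.0, 4.5])
```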
Use of piezoelectric foil for flow diagnostics
NASA Technical Reports Server (NTRS)
Carraway, Debra L.; Bertelrud, Arild
1989-01-01
A laboratory investigation was conducted to characterize two piezoelectric-film sensor configurations, a rigidly mounted sensor and a sensor mounted over an air cavity. The sensors are evaluated for sensitivity and frequency response, and methods to optimize data are presented. The cavity-mounted sensor exhibited a superior frequency response and was more sensitive to normal pressure fluctuations and less sensitive to vibrations through the structure. Both configurations were sensitive to large-scale structural vibrations. Flight-test data are shown for cavity-mounted sensors, illustrating practical aspects to consider when designing sensors for application in such harsh environments. The relation of the data to skin friction and maximum shear stress, transition detection, and turbulent viscous layers is derived through analysis of the flight data.
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) A survey of Neural Network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially for equivalent continuum models (Appendix A). (2) Application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B). (3) Development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars, and ribs, with calculation of numerous test cases and comparison with measurements or FEA results (Appendix C). (4) Basic work on using second-order sensitivities to simulate wing modal response, a discussion of sensitivity evaluation approaches, and some results (Appendix D). (5) Establishment of a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points; comparison is made through examples using these two methods (Appendix E). (6) Establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs: the NN-aided Equivalent Plate Analysis, with training of the neural networks in several cases of design spaces, applicable to the actual design of complex wings (Appendix F).
Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks
Kaltenbacher, Barbara; Hasenauer, Jan
2017-01-01
Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351
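The scalability argument above can be seen in miniature: forward (perturbation) sensitivities need one or two extra simulations per parameter, so their cost grows with parameter count, which is exactly what adjoint analysis avoids. A toy central-difference example on a one-state decay ODE (a hypothetical model, not the ErbB network):

```python
# Forward finite-difference parameter sensitivity for dx/dt = -k*x,
# integrated with explicit Euler. Each parameter needs its own pair of
# perturbed runs, the per-parameter cost that adjoint methods remove.

def simulate(k, x0=1.0, dt=0.001, steps=1000):
    x = x0
    for _ in range(steps):
        x += dt * (-k * x)  # explicit Euler step over t in [0, 1]
    return x

def sensitivity(k, eps=1e-6):
    # Central difference of the final state with respect to k.
    return (simulate(k + eps) - simulate(k - eps)) / (2 * eps)

# Analytic check: x(1) = exp(-k), so dx/dk = -exp(-k) ~ -0.607 at k = 0.5.
s = sensitivity(0.5)
```

For p parameters this scheme costs 2p simulations per gradient; an adjoint solve costs roughly one extra (backward) simulation regardless of p, which is the "effectively independent of the number of parameters" property the abstract reports.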
Hogan, Thomas J
2012-05-01
The objective was to review recent economic evaluations of influenza vaccination by injection in the US, assess their evidence, and draw conclusions from their collective findings. The literature was searched for economic evaluations of influenza vaccination by injection in healthy working adults in the US published since 1995. Ten evaluations described in nine papers were identified. These were synopsized and their results evaluated, the basic structure of all evaluations was ascertained, and the sensitivity of outcomes to changes in parameter values was explored using a decision model. Areas in which to improve economic evaluations were noted. Eight of nine evaluations with credible economic outcomes were favourable to vaccination, representing a statistically significant result compared with the proportion of 50% that would be expected if vaccination and no vaccination were economically equivalent. Evaluations shared a basic structure, but differed considerably with respect to cost components, assumptions, methods, and parameter estimates. Sensitivity analysis indicated that changes in parameter values within the feasible range, individually or simultaneously, could reverse economic outcomes. Given these misgivings, the methods of estimating the influenza reduction ascribed to vaccination must be researched to confirm that they produce accurate and reliable estimates. Research is also needed to improve estimates of the costs per case of influenza illness and the costs of vaccination. Based on their assumptions, the reviewed papers collectively appear to support the economic benefits of influenza vaccination of healthy adults. Yet the underlying assumptions, methods, and parameter estimates themselves warrant further research to confirm that they are accurate, reliable, and appropriate for the purposes of economic evaluation.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. 
Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. 
The programs con
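The core UCODE_2005 update described above, minimizing a weighted least-squares objective by modified Gauss-Newton with perturbation-based sensitivities, can be sketched for a single-parameter model. Everything below is a hypothetical illustration (model y = exp(-p*t)), not UCODE_2005's actual interface:

```python
# One-parameter Gauss-Newton iteration for a weighted least-squares
# objective, with the Jacobian obtained by central-difference
# perturbation, mirroring the strategy described above.
# Model and data are hypothetical (y = exp(-p * t), true p ~ 1).

import math

t_obs = [0.5, 1.0, 2.0, 4.0]
y_obs = [0.61, 0.37, 0.14, 0.02]
weights = [1.0, 1.0, 1.0, 1.0]   # observation weights

def model(p):
    return [math.exp(-p * t) for t in t_obs]

def gauss_newton(p, iters=10, eps=1e-6):
    for _ in range(iters):
        r = [yo - ym for yo, ym in zip(y_obs, model(p))]          # residuals
        jac = [(hi - lo) / (2 * eps)                              # dy/dp
               for hi, lo in zip(model(p + eps), model(p - eps))]
        # Normal-equation step for a single parameter:
        step = sum(w * j * ri for w, j, ri in zip(weights, jac, r)) \
             / sum(w * j * j for w, j in zip(weights, jac))
        p += step
    return p

p_hat = gauss_newton(0.5)
```

As the report notes, the accuracy of the perturbation-based Jacobian limits the accuracy of the step, which is why sensitivities computed by the process model itself (e.g., MODFLOW-2000) are preferred when available.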
Paliwal, Shivani R; Paliwal, Rishi; Pal, Harish C; Saxena, Ajeet K; Sharma, Pradyumana R; Gupta, Prem N; Agrawal, Govind P; Vyas, Suresh P
2012-01-01
The present investigation reports the development of nanoengineered estrogen receptor (ER) targeted pH-sensitive liposome for the site-specific intracellular delivery of doxorubicin (DOX) for breast cancer therapy. Estrone, a bioligand, was anchored on the surface of pH-sensitive liposome for drug targeting to ERs. The estrone-anchored pH-sensitive liposomes (ES-pH-sensitive-SL) showed fusogenic potential at acidic pH (5.5). In vitro cytotoxicity studies carried out on ER-positive MCF-7 breast carcinoma cells revealed that ES-pH-sensitive-SL formulation was more cytotoxic than non-pH-sensitive targeted liposomes (ES-SL). The flow cytometry analysis confirmed significant enhanced uptake (p < 0.05) of ES-pH-sensitive-SL by MCF-7 cells. Intracellular delivery and nuclear localization of the DOX was confirmed by fluorescence microscopy. The mechanism for higher cytotoxicity shown by estrone-anchored pH-sensitive liposomal-DOX was elucidated using reactive oxygen species (ROS) determination. The in vivo biodistribution studies and antitumor activities of formulations were evaluated on tumor bearing female Balb/c mice followed by intravenous administration. The ES-pH-sensitive-SL efficiently suppressed the breast tumor growth in comparison to both ES-SL and free DOX. Serum enzyme activities such as LDH and CPK levels were assayed for the evaluation of DOX induced cardiotoxicity. The ES-pH-sensitive-SL accelerated the intracellular trafficking of encapsulated DOX, thus increasing the therapeutic efficacy. The findings support that estrone-anchored pH-sensitive liposomes could be one of the promising nanocarriers for the targeted intracellular delivery of anticancer agents to breast cancer with reduced systemic side effects.
Lima, Alexandre; López, Alejandra; van Genderen, Michel E; Hurtado, Francisco Javier; Angulo, Martin; Grignola, Juan C; Shono, Atsuko; van Bommel, Jasper
2015-09-01
This was a cross-sectional multicenter study to investigate the ability of physicians and nurses from three different countries to subjectively evaluate sublingual microcirculation images and thereby discriminate normal from abnormal sublingual microcirculation based on flow and density abnormalities. Forty-five physicians and 61 nurses (mean age, 36 ± 10 years; 44 males) from three different centers in The Netherlands (n = 61), Uruguay (n = 12), and Japan (n = 33) were asked to subjectively evaluate a sample of 15 microcirculation videos randomly selected from an experimental model of endotoxic shock in pigs. All videos were first analyzed offline using the A.V.A. software by an independent, experienced investigator and were categorized as showing good, bad, or very bad microcirculation based on the microvascular flow index, perfused capillary density, and proportion of perfused capillaries. Then, the videos were randomly assigned to the examiners, who were instructed to subjectively categorize each image as good, bad, or very bad. An interrater analysis was performed, and sensitivity and specificity were calculated to evaluate the proportion of A.V.A.-score abnormalities that the examiners correctly identified. The κ statistics indicated moderate agreement in the evaluation of microcirculation abnormalities using three categories, i.e., good, bad, or very bad (κ = 0.48), and substantial agreement using two categories, i.e., normal (good) and abnormal (bad or very bad) (κ = 0.66). There was no significant difference between the three-category and two-category κ statistics. We found that the examiners' subjective evaluations had good diagnostic performance and were highly sensitive (84%; 95% confidence interval, 81%-86%) and specific (87%; 95% confidence interval, 84%-90%) for sublingual microcirculatory abnormalities as assessed using the A.V.A. software.
The subjective evaluations of sublingual microcirculation by physicians and nurses agreed well with a conventional offline analysis and were highly sensitive and specific for sublingual microcirculatory abnormalities.
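Cohen's κ, the agreement statistic used above, corrects raw agreement for the agreement expected by chance from each rater's marginal frequencies. A minimal two-rater sketch (hypothetical ratings, not the study's data):

```python
# Cohen's kappa for two raters over categorical ratings:
# (observed agreement - chance agreement) / (1 - chance agreement).
# The ratings below are hypothetical.

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

a = ["good", "bad", "bad", "very_bad", "good", "bad"]
b = ["good", "bad", "very_bad", "very_bad", "good", "good"]
kappa = cohens_kappa(a, b)  # ~0.52, "moderate" on the usual scale
```

On the conventional Landis-Koch scale, 0.41-0.60 is moderate and 0.61-0.80 substantial, matching the labels used in the abstract.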
Ferko, Nicole; Ferrante, Giuseppe; Hasegawa, James T; Schikorr, Tanya; Soleas, Ireena M; Hernandez, John B; Sabaté, Manel; Kaiser, Christoph; Brugaletta, Salvatore; de la Torre Hernandez, Jose Maria; Galatius, Soeren; Cequier, Angel; Eberli, Franz; de Belder, Adam; Serruys, Patrick W; Valgimigli, Marco
2017-05-01
Second-generation drug-eluting stents (DES) may reduce costs and improve clinical outcomes compared to first-generation DES, with improved cost-effectiveness when compared to bare metal stents (BMS). We aimed to conduct a cost-effectiveness analysis (CEA) of a cobalt-chromium everolimus-eluting stent (Co-Cr EES) versus BMS in percutaneous coronary intervention (PCI). A Markov state-transition model with a 2-year time horizon was applied from a US Medicare setting with patients undergoing PCI with Co-Cr EES or BMS. Baseline characteristics, treatment effects, and safety measures were taken from a patient-level meta-analysis of 5 RCTs (n = 4,896). The base-case analysis evaluated stent-related outcomes; a secondary analysis considered the broader set of outcomes reported in the meta-analysis. The base-case and secondary analyses reported an additional 0.018 and 0.013 quality-adjusted life years (QALYs) and cost savings of $236 and $288, respectively, with Co-Cr EES versus BMS. Results were robust to sensitivity analyses and were most sensitive to the price of clopidogrel. In the probabilistic sensitivity analysis, Co-Cr EES was associated with a greater than 99% chance of being cost saving or cost effective (at a cost per QALY threshold of $50,000) versus BMS. Using data from a recent patient-level meta-analysis and contemporary cost data, this analysis found that PCI with Co-Cr EES is more effective and less costly than PCI with BMS. © 2016 The Authors. Catheterization and Cardiovascular Interventions Published by Wiley Periodicals, Inc.
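A Markov state-transition model of the kind described can be sketched as a small cohort simulation; every number below (transition probability, costs, utilities) is a hypothetical placeholder, not a study input:

```python
# Two-strategy Markov cohort model over a 2-year horizon with monthly
# cycles, accumulating costs and QALYs. All inputs are hypothetical
# placeholders, not the study's parameters.

def run_cohort(p_event, cost_stent, cost_event, u_well, u_event, cycles=24):
    well, event = 1.0, 0.0          # cohort proportions by health state
    cost, qalys = cost_stent, 0.0
    for _ in range(cycles):
        moved = well * p_event       # fraction having a repeat event this cycle
        well, event = well - moved, event + moved
        cost += moved * cost_event
        qalys += (well * u_well + event * u_event) / 12.0  # monthly utility slice
    return cost, qalys

cost_des, qaly_des = run_cohort(0.004, 2500, 12000, 0.88, 0.80)
cost_bms, qaly_bms = run_cohort(0.010, 1200, 12000, 0.88, 0.80)
```

A strategy that accrues more QALYs at lower total cost, as in this toy setup, is said to dominate; otherwise the incremental cost per QALY (ICER) is compared against a willingness-to-pay threshold such as the $50,000/QALY used above.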
Histopathological Identification of Colon Cancer with Microsatellite Instability
Alexander, Julian; Watanabe, Toshiaki; Wu, Tsung-Teh; Rashid, Asif; Li, Shuan; Hamilton, Stanley R.
2001-01-01
Cancer with high levels of microsatellite instability (MSI-H) is the hallmark of hereditary nonpolyposis colorectal cancer syndrome, and MSI-H occurs in ∼15% of sporadic colorectal carcinomas that have improved prognosis. We examined the utility of histopathology for the identification of MSI-H cancers by evaluating the features of 323 sporadic carcinomas using specified criteria and comparing the results to MSI-H status. Coded hematoxylin and eosin sections were evaluated for tumor features (signet ring cells; mucinous histology; cribriforming, poor differentiation, and medullary-type pattern; sponge-like mucinous growth; pushing invasive margin) and features of host immune response (Crohn’s-like lymphoid reaction, intratumoral lymphocytic infiltrate, and intraepithelial T cells by immunohistochemistry for CD3 with morphometry). Interobserver variation among five pathologists was determined. Subjective interpretation of histopathology as an indication for MSI testing was recorded. We found that medullary carcinoma, intraepithelial lymphocytosis, and poor differentiation were the best discriminators between MSI-H and microsatellite-stable cancers (odds ratio: 37.8, 9.8, and 4.0, respectively; P = 0.000003 to <0.000001) with high specificity (99 to 87%). The sensitivities, however, were very low (14 to 38%), and interobserver agreement was good only for evaluation of poor differentiation (kappa, 0.69). Mucinous histopathological type and presence of signet ring cells had low odds ratios of 3.3 and 2.7 (P = 0.005 and P = 0.02) with specificities of 95% but sensitivities of only 15 and 13%. Subjective interpretation of the overall histopathology as suggesting MSI-H performed better than any individual feature; the odds ratio was 7.5 (P < 0.000001) with sensitivity of 49%, specificity of 89%, and moderate interobserver agreement (kappa, 0.52). 
Forty intraepithelial CD3-positive lymphocytes/0.94 mm2, as established by receiver operating characteristic curve analysis, resulted in an odds ratio of 6.0 (P < 0.000001) with sensitivity of 75% and specificity of 67%. Our findings indicate that histopathological evaluation can be used to prioritize sporadic colon cancers for MSI studies, but morphological prediction of MSI-H has low sensitivity, requiring molecular analysis for therapeutic decisions. PMID:11159189
Rui, Jing; Runge, M Brett; Spinner, Robert J; Yaszemski, Michael J; Windebank, Anthony J; Wang, Huan
2014-10-01
Video-assisted gait kinetics analysis has been a sensitive method to assess rat sciatic nerve function after injury and repair. However, in conduit repair of sciatic nerve defects, previously reported kinematic measurements failed to be a sensitive indicator because of the inferior recovery and inevitable joint contracture. This study aimed to explore the role of physiotherapy in mitigating joint contracture and to seek motion analysis indices that can sensitively reflect motor function. Data were collected from 26 rats that underwent sciatic nerve transection and conduit repair. Regular postoperative physiotherapy was applied. Parameters regarding step length, phase duration, and ankle angle were acquired and analyzed from video recording of gait kinetics preoperatively and at regular postoperative intervals. Stride length ratio (step length of uninjured foot/step length of injured foot), percent swing of the normal paw (percentage of the total stride duration when the uninjured paw is in the air), propulsion angle (toe-off angle subtracted by midstance angle), and clearance angle (ankle angle change from toe off to midswing) decreased postoperatively comparing with baseline values. The gradual recovery of these measurements had a strong correlation with the post-nerve repair time course. Ankle joint contracture persisted despite rigorous physiotherapy. Parameters acquired from a 2-dimensional motion analysis system, that is, stride length ratio, percent swing of the normal paw, propulsion angle, and clearance angle, could sensitively reflect nerve function impairment and recovery in the rat sciatic nerve conduit repair model despite the existence of joint contractures.
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
El-Osta, Hazem; Jani, Pushan; Mansour, Ali; Rascoe, Philip; Jafri, Syed
2018-04-23
An accurate assessment of the mediastinal lymph nodes status is essential in the staging and treatment planning of potentially resectable non-small cell lung cancer (NSCLC). We performed this meta-analysis to evaluate the role of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA) in detecting occult mediastinal disease in NSCLC with no radiologic mediastinal involvement. The PubMed, Embase, and Cochrane libraries were searched for studies describing the role of EBUS-TBNA in lung cancer patients with radiologically negative mediastinum. The individual and pooled sensitivity, prevalence, negative predictive value (NPV), and diagnostic odds ratio (DOR) were calculated using the random effects model. Metaregression analysis, heterogeneity, and publication bias were also assessed. A total of 13 studies that met the inclusion criteria were included in the meta-analysis. The pooled effect size of the different diagnostic parameters were estimated as follows: prevalence, 12.8% (95% CI, 10.4%-15.7%); sensitivity, 49.5% (95% confidence interval [CI], 36.4%-62.6%); NPV, 93.0% (95% CI, 90.3%-95.0%); and log DOR, 5.069 (95% CI, 4.212-5.925). Significant heterogeneity was noticeable for the sensitivity, disease prevalence, and NPV, but not observed for log DOR. Publication bias was detected for sensitivity, NPV and log DOR but not for prevalence. Bivariate meta-regression analysis showed no significant association between the pooled calculated parameters and the type of anesthesia, imaging utilized to define negative mediastinum, rapid on-site test usage, and presence of bias by QUADAS-2 tool. Interestingly, we observed a greater sensitivity, NPV and log DOR for studies published prior to 2010, and for prospective multicenter studies. Among NSCLC patients with a radiologically normal mediastinum, the prevalence of mediastinal disease is 12.8% and the sensitivity of EBUS-TBNA is 49.5%. 
Despite the low sensitivity, the resulting NPV of 93.0% for EBUS-TBNA suggests that mediastinal metastasis is uncommon in such patients.
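The pooled quantities reported above (sensitivity, NPV, prevalence, log DOR) are each simple functions of a study's 2×2 table; a minimal sketch with hypothetical counts, not data from any study in this meta-analysis:

```python
import math

# Hypothetical 2x2 counts for one diagnostic study (illustrative only)
tp, fp, fn, tn = 45, 10, 46, 620

sensitivity = tp / (tp + fn)                  # P(test+ | disease+)
npv = tn / (tn + fn)                          # P(disease- | test-)
prevalence = (tp + fn) / (tp + fp + fn + tn)  # disease frequency in the sample
log_dor = math.log((tp * tn) / (fp * fn))     # log diagnostic odds ratio

print(f"sensitivity={sensitivity:.3f}, NPV={npv:.3f}, "
      f"prevalence={prevalence:.3f}, log DOR={log_dor:.3f}")
```

The random-effects pooling step additionally weights each study's estimate by its within- and between-study variance, which is beyond this sketch.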
de Ruiter, C. M.; van der Veer, C.; Leeflang, M. M. G.; Deborggraeve, S.; Lucas, C.
2014-01-01
Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. PMID:24829226
Liu, Ting; He, Xiang-ge
2006-05-01
To evaluate the overall diagnostic capabilities of frequency-doubling technology (FDT) in patients with primary glaucoma, with standard automated perimetry (SAP) and/or optic disc appearance as the gold standard. A comprehensive electronic search in MEDLINE, EMBASE, Cochrane Library, BIOSIS Previews, HMIC, IPA, OVID, CNKI, CBMdisc, VIP information, CMCC, CCPD, SSreader and 21dmedia and a manual search in related textbooks, journals, congress articles and their references were performed to identify relevant English- and Chinese-language articles. Criteria for eligibility were established according to validity criteria for diagnostic research published by the Cochrane Methods Group on Screening and Diagnostic Tests. The quality of the included articles was assessed and relevant materials were extracted for analysis. Statistical analysis was performed with Meta Test version 0.6 software. Heterogeneity of the included articles was tested and used to select the appropriate effects model for calculating pooled weighted sensitivity and specificity. A summary receiver operating characteristic (SROC) curve was established and the area under the curve (AUC) was calculated. Finally, sensitivity analysis was performed. Fifteen English articles (21 studies) of 206 retrieved articles were included in the present study, with a total of 3172 patients. The reported sensitivity of FDT ranged from 0.51 to 1.00, and specificity from 0.58 to 1.00. The pooled weighted sensitivity and specificity for FDT with 95% confidence intervals (95% CI) after correction for standard error were 0.86 (0.80-0.90) and 0.87 (0.81-0.91), respectively. The AUC of the SROC was 93.01%. Sensitivity analysis demonstrated no disproportionate influence of any individual study. The included articles are of good quality and FDT can be a highly efficient diagnostic test for primary glaucoma based on this meta-analysis. However, a high-quality prospective study is still required for further analysis.
Mu, Xuetao; Wang, Hong; Ma, Qiaozhi; Wu, Chunnan; Ma, Lin
2014-06-01
The objective of this study was to determine the diagnostic accuracy of contrast-enhanced magnetic resonance angiography (MRA) when used in the preoperative evaluation of hepatic vascular anatomy in living liver donors. A computer-assisted literature search of the EMBASE, PubMed (MEDLINE), and Cochrane Library databases was conducted to identify potentially relevant articles that primarily examined the utility of contrast-enhanced MRA in the preoperative evaluation of hepatic vascular anatomy in living liver donors. We used the Q statistic of the chi-squared test and the inconsistency index (I-squared, I(2)) to estimate the heterogeneity of the data extracted from all selected studies. Meta-Disc software (version 1.4) (ftp://ftp.hrc.es/pub/programas/metadisc/Metadisc_update.htm) was used to perform our analysis. Eight studies were included in the present meta-analysis. A total of 289 living liver donor candidates and 198 patients who underwent liver harvesting were included in the present study. The pooled sensitivities of the hepatic artery (HA), portal vein (PV), and hepatic vein (HV) in this meta-analysis were 0.84, 0.97, and 0.94, respectively. The pooled specificities of HA, PV, and HV were 1.00, 1.00, and 1.00, respectively. The pooled diagnostic odds ratios of HA, PV, and HV were 127.28, 302.80, and 256.59, respectively. The areas under the summary receiver-operating characteristic curves of HA, PV, and HV were 0.9917, 0.9960, and 0.9813, respectively. The high sensitivity and specificity demonstrated in this meta-analysis suggest that contrast-enhanced MRA is a promising test for the preoperative evaluation of hepatic vascular anatomy in living liver donors. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
Alexandrino, Larissa Dias; Alencar, Cristiane de Melo; Silveira, Ana Daniela Silva da; Alves, Eliane Bemerguy; Silva, Cecy Martins
2017-01-01
This randomized, controlled, double-blind clinical study evaluated the effect of calcium sodium phosphosilicate (NovaMin) and casein phosphopeptide-amorphous calcium phosphate with fluoride (CPP-ACPF) on the prevention of post-operative sensitivity and on the effects of clinical bleaching treatment. Sixty volunteers were selected according to inclusion and exclusion criteria and were randomly assigned into three groups (n=20): CG (control group) patients, who were treated with 35% hydrogen peroxide; NOVAG (NovaMin group) patients, who were treated with 35% hydrogen peroxide followed by the application of NovaMin; and CPPG (CPP group) patients, who were treated with 35% hydrogen peroxide followed by the application of CPP-ACPF. Both bioactive agents were applied for five minutes. An evaporative stimulus associated with a modified visual scale was used to analyze sensitivity 24 hours after each bleaching session. The color evaluation was performed on the maxillary central incisors using a spectrophotometer. Associations between the intervention group, bleaching session, and reported sensitivity were tested using Chi-square partitioning. Color change values (ΔE) were analyzed using analysis of variance (ANOVA). The significance level used for both tests was 5%. In the intragroup assessment, the Friedman test showed that only the CPP-ACPF group showed no statistically significant difference (p<0.05) between baseline and first bleaching session. In the intergroup assessment, the Kruskal-Wallis test showed that the CPPG had less postoperative sensitivity after the first session, when compared to the other groups (p<0.05). Color change analysis (ΔE) showed a significant difference between the means obtained in the different bleaching sessions in all groups (p<0.05). This study showed that the combination of CPP-ACPF with 35% hydrogen peroxide significantly reduced post-operative sensitivity in the first session, compared with the other evaluated treatments. 
The association of CPP-ACPF and NovaMin did not affect the color change induced by tooth bleaching.
SU-E-T-429: Uncertainties of Cell Surviving Fractions Derived From Tumor-Volume Variation Curves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A
2014-06-01
Purpose: To evaluate uncertainties of cell surviving fractions reconstructed from tumor-volume variation curves during radiation therapy using sensitivity analysis based on linear perturbation theory. Methods: The time-dependent tumor-volume functions V(t) were calculated using a two-level cell population model, which is based on the separation of the entire tumor cell population into two subpopulations: oxygenated viable cells and lethally damaged cells. The sensitivity function is defined as S(t) = [δV(t)/V(t)]/[δx/x], where δV(t)/V(t) is the time-dependent relative variation of the volume V(t) and δx/x is the relative variation of the radiobiological parameter x. The sensitivity analysis was performed using the direct perturbation method, where the radiobiological parameter x was changed by a certain error and the tumor volume was recalculated to evaluate the corresponding tumor-volume variation. Tumor-volume variation curves and sensitivity functions were computed for different values of the cell surviving fraction from the practically important interval S₂ = 0.1-0.7 using the two-level cell population model. Results: The sensitivity of tumor volume to the cell surviving fraction reached a relatively large value of 2.7 for S₂ = 0.7 and approached zero as S₂ approached zero. Assuming a systematic error of 3-4%, we obtain a relative error in S₂ of less than 20% in the range S₂ = 0.4-0.7. This result is important because large values of S₂, which are associated with poor treatment outcome, should be measured with relatively small uncertainties. For very small values of S₂ < 0.3, the relative error can be larger than 20%; however, the absolute error does not increase significantly. Conclusion: Tumor-volume curves measured during radiotherapy can be used to evaluate the cell surviving fractions usually observed in radiation therapy with conventional fractionation.
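The direct perturbation estimate of S(t) = [δV(t)/V(t)]/[δx/x] can be sketched on any parametric volume model; below, a deliberately simple single-exponential stand-in is used in place of the two-level cell population model described in the abstract:

```python
def tumor_volume(t, s2, v0=1.0):
    # Toy stand-in model: volume decays with the per-fraction surviving
    # fraction s2 (NOT the two-level model from the abstract).
    return v0 * s2 ** t

def sensitivity_function(t, x, rel_step=1e-4):
    # S(t) = [dV(t)/V(t)] / [dx/x], estimated by direct perturbation:
    # perturb the parameter x, recompute V, and take the ratio of
    # relative changes.
    dx = x * rel_step
    v = tumor_volume(t, x)
    v_pert = tumor_volume(t, x + dx)
    return ((v_pert - v) / v) / (dx / x)

# For V = v0 * s2**t the exact sensitivity is t (since dV/V = t * ds2/s2),
# so the estimate at t = 5 should be close to 5.
print(sensitivity_function(5.0, 0.5))
```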
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods differ strongly in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results (a growth decline over time), but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus the probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. 
We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
Validation of a Low-Cost Paper-Based Screening Test for Sickle Cell Anemia
Piety, Nathaniel Z.; Yang, Xiaoxi; Kanter, Julie; Vignes, Seth M.; George, Alex; Shevkoplyas, Sergey S.
2016-01-01
Background The high childhood mortality and life-long complications associated with sickle cell anemia (SCA) in developing countries could be significantly reduced with effective prophylaxis and education if SCA is diagnosed early in life. However, conventional laboratory methods used for diagnosing SCA remain prohibitively expensive and impractical in this setting. This study describes the clinical validation of a low-cost paper-based test for SCA that can accurately identify sickle trait carriers (HbAS) and individuals with SCA (HbSS) among adults and children over 1 year of age. Methods and Findings In a population of healthy volunteers and SCA patients in the United States (n = 55) the test identified individuals whose blood contained any HbS (HbAS and HbSS) with 100% sensitivity and 100% specificity for both visual evaluation and automated analysis, and detected SCA (HbSS) with 93% sensitivity and 94% specificity for visual evaluation and 100% sensitivity and 97% specificity for automated analysis. In a population of post-partum women (with a previously unknown SCA status) at a primary obstetric hospital in Cabinda, Angola (n = 226) the test identified sickle cell trait carriers with 94% sensitivity and 97% specificity using visual evaluation (none of the women had SCA). Notably, our test permits instrument- and electricity-free visual diagnostics, requires minimal training to be performed, can be completed within 30 minutes, and costs about $0.07 in test-specific consumable materials. Conclusions Our results validate the paper-based SCA test as a useful low-cost tool for screening adults and children for sickle trait and disease and demonstrate its practicality in resource-limited clinical settings. PMID:26735691
NASA Astrophysics Data System (ADS)
Fares, A.; Cheng, C. L.; Dogan, A.
2006-12-01
Impaired water quality caused by agriculture, urbanization, and the spread of invasive species has been identified as a major factor in the degradation of coastal ecosystems in the tropics. Watershed-scale nonpoint source pollution models help in evaluating effective management practices to alleviate the negative impacts of different land-use changes. The Non-Point Source Pollution and Erosion Comparison Tool (N-SPECT) is a newly released watershed model that had not previously been tested under tropical conditions. The two objectives of this study were to: i) calibrate and validate N-SPECT for the Hanalei Watershed of the Hawai`ian island of Kaua`i; ii) evaluate the performance of N-SPECT under tropical conditions using a sensitivity analysis approach. The Hanalei watershed contains one of the wettest points on earth, Mt. Waialeale, with an average annual rainfall of 11,000 mm. This rainfall decreases to 2,000 mm at the outlet of the watershed near the coast. The number of rain days is one of the major input parameters that influence N-SPECT's simulation results. This parameter was used to account for plant canopy interception losses. The watershed was divided into sub-basins to accurately distribute the number of rain days throughout the watershed. Total runoff volume predicted by the model compared well with measured data. The model underestimated measured runoff by 1% for the calibration period and by 5% for the validation period, due to higher-intensity precipitation in the validation period. Sensitivity analysis revealed that the model was most sensitive to the number of rain days, followed by canopy interception, and least sensitive to the number of sub-basins. The sediment and water quality portion of the model is currently being evaluated.
Chemin, K; Rezende, M; Loguercio, A D; Reis, A; Kossatz, S
To evaluate the risk for and intensity of tooth sensitivity and color change with at-home dental bleaching with 4% and 10% hydrogen peroxide (HP). For this study, 78 patients were selected according to the inclusion and exclusion criteria and randomized into two groups: HP 4 (White Class 4%, FGM) and HP 10 (White Class 10%, FGM). In both groups, the at-home bleaching was performed for a period of 30 minutes twice a day for two weeks. Color was assessed with Vita Classical, Vita Bleachedguide 3D-MASTER and the Vita Easyshade spectrophotometer (Vita Zahnfabrik) at baseline, during bleaching (first and second weeks) and after bleaching (one month). Patients recorded their tooth sensitivity using a numeric rating scale (0-4) and a visual analog scale (0-10). Color change data (ΔE) were submitted to two-way analysis of variance. The color change data in ΔSGU from the two shade guide units were compared with the Mann-Whitney test. The risk of tooth sensitivity was evaluated by the χ² test and the intensity of tooth sensitivity from both scales was evaluated by the Mann-Whitney test (α=0.05). The absolute risk and intensity of tooth sensitivity were higher in the group that used HP 10 than in the one that used HP 4. Data on the change in the number of shade guide units and color variation after one month of bleaching for both groups showed significant whitening, with no difference between groups. At-home bleaching is effective with 4% and 10% HP concentrations, but 10% HP increased the absolute risk and intensity of tooth sensitivity during at-home bleaching.
An orientation analysis method for protein immobilized on quantum dot particles
NASA Astrophysics Data System (ADS)
Aoyagi, Satoka; Inoue, Masae
2009-11-01
The evaluation of the orientation of biomolecules immobilized on nanodevices is crucial for the development of high-performance devices. Such analysis requires ultra-high sensitivity so as to be able to detect less than one molecular layer on a device. Time-of-flight secondary ion mass spectrometry (TOF-SIMS) has sufficient sensitivity to evaluate the uppermost surface structure of a single molecular layer. The objective of this study is to develop an orientation analysis method for proteins immobilized on nanomaterials such as quantum dot particles, and to evaluate the orientation of streptavidin immobilized on quantum dot particles by means of TOF-SIMS. In order to detect fragment ions specific to the protein surface, a monoatomic primary ion source (Ga+) and a cluster ion source (Au3+) were employed. Streptavidin-immobilized quantum dot particles were immobilized on aminosilanized ITO glass plates at amino groups by covalent bonding. Reference samples of streptavidin directly immobilized on ITO plates were also prepared. All samples were dried with a freeze dryer before TOF-SIMS measurement. The positive secondary ion spectra of each sample were obtained using TOF-SIMS with Ga+ and Au3+, respectively, and then compared so as to characterize each sample and detect the surface structure of the streptavidin immobilized on the biotin-immobilized quantum dots. The chemical structures of the upper surface of the streptavidin molecules immobilized on the quantum dot particles were evaluated by TOF-SIMS spectral analysis. The analysis indicated that the exposed surface of the streptavidin molecules immobilized on the quantum dots includes the biotin binding site.
Yoo, Doo Han; Lee, Jae Shin
2016-07-01
[Purpose] This study examined the clinical usefulness of the clock drawing test applying Rasch analysis for predicting the level of cognitive impairment. [Subjects and Methods] A total of 187 stroke patients with cognitive impairment were enrolled in this study. The 187 patients were evaluated by the clock drawing test developed through Rasch analysis along with the mini-mental state examination as a cognitive evaluation tool. An analysis of variance was performed to examine the significance of the mini-mental state examination and the clock drawing test according to the general characteristics of the subjects. Receiver operating characteristic analysis was performed to determine the cutoff point for cognitive impairment and to calculate the sensitivity and specificity values. [Results] Comparison of the clock drawing test with the mini-mental state examination showed significant differences according to gender, age, education, and affected side. A total CDT score of 10.5, which was selected as the cutoff point to identify cognitive impairment, showed sensitivity, specificity, Youden index, positive predictive, and negative predictive values of 86.4%, 91.5%, 0.8, 95%, and 88.2%, respectively. [Conclusion] The clock drawing test is believed to be useful in assessments and interventions based on its excellent ability to identify cognitive disorders.
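The cutoff statistics reported in studies like this one follow directly from a 2×2 split of patients at the chosen score; a hedged sketch with made-up counts (not the study's data):

```python
# Hypothetical confusion-matrix counts at a candidate cutoff
# (illustrative only; not the study's data)
tp, fn = 80, 20   # impaired patients correctly / incorrectly classified
tn, fp = 90, 10   # unimpaired patients correctly / incorrectly classified

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
youden = sensitivity + specificity - 1   # Youden index J
ppv = tp / (tp + fp)                     # positive predictive value
npv = tn / (tn + fn)                     # negative predictive value

print(f"sens={sensitivity:.3f} spec={specificity:.3f} "
      f"J={youden:.2f} PPV={ppv:.3f} NPV={npv:.3f}")
```

In an ROC analysis these quantities are recomputed at every candidate cutoff, and the cutoff maximizing J is often chosen.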
Accelerometric gait analysis for use in hospital outpatients.
Auvinet, B; Chaleil, D; Barrey, E
1999-01-01
To provide clinicians with a quantitative human gait analysis tool suitable for routine use. We evaluated the reproducibility, sensitivity, and specificity of gait analysis based on measurements of acceleration at a point near the center of gravity of the body. Two accelerometers held over the middle of the low back by a semi-elastic belt were used to record craniocaudal and side-to-side accelerations at a frequency of 50 Hz. Subjects were asked to walk at their normal speed to the end of a straight 40-meter-long hospital corridor and back. A 20-second period of stabilized walking was used to calculate cycle frequency, stride symmetry, and stride regularity. Symmetry and regularity were each derived from an autocorrelation coefficient; to convert their distribution from nonnormal to normal, Fisher's Z transformation was applied to the autocorrelation coefficients for these two variables. Intraobserver reproducibility was evaluated by asking the same observer to test 16 controls on three separate occasions at two-day intervals, and interobserver reproducibility by asking four different observers to each test four controls (Latin square). Specificity and sensitivity were determined by testing 139 controls and 63 patients. The 139 controls (70 women and 69 men) were divided into five age groups (third through seventh decades of life). The 63 patients had a noninflammatory musculoskeletal condition predominating on one side. ROC curves were used to determine the best cutoffs for separating normal from abnormal values. Neither intra- nor interobserver variability was significant (P > 0.05). Cycle frequency was significantly higher in female than in male controls (1.05 +/- 0.06 versus 0.98 +/- 0.05 cycles/s; P < 0.001). Neither symmetry nor regularity was influenced by gender in the controls; both variables were also unaffected by age, although nonsignificant decreases were found in the 61 to 70-year age group, which included only nine subjects. 
In the ROC curve analysis, the area under the curve was high for all three variables (frequency, 0.81 +/- 0.04; symmetry, 0.85 +/- 0.03; and regularity, 0.88 +/- 0.03), establishing that there was a good compromise between sensitivity and specificity. Our gait analysis method offers satisfactory reproducibility and is sufficiently sensitive and specific to be used by clinicians in the quantitative evaluation of gait abnormalities.
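The Fisher's Z transformation used above to normalize the autocorrelation-based symmetry and regularity indices is z = atanh(r) = ½ ln[(1 + r)/(1 − r)]; a minimal sketch:

```python
import math

def fisher_z(r):
    # Fisher's Z transformation: maps r in (-1, 1) onto the real line,
    # making the sampling distribution approximately normal
    return 0.5 * math.log((1 + r) / (1 - r))  # equivalently math.atanh(r)

# Illustrative autocorrelation coefficients (not the study's data)
for r in (0.3, 0.7, 0.9):
    print(f"r={r:.1f} -> z={fisher_z(r):.3f}")
```

Statistics (means, confidence intervals) are computed on the z values and, if needed, back-transformed with tanh.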
Are quantitative sensitivity analysis methods always reliable?
NASA Astrophysics Data System (ADS)
Huang, X.
2016-12-01
Physical parameterizations developed to represent subgrid-scale physical processes include various uncertain parameters, leading to large uncertainties in today's Earth System Models (ESMs). Sensitivity analysis (SA) is an efficient approach to quantitatively determine how the uncertainty of the evaluation metric can be apportioned to each parameter. SA can also identify the most influential parameters and thereby reduce the dimensionality of the parameter space. In previous studies, SA-based approaches such as Sobol' and Fourier amplitude sensitivity testing (FAST) divide the parameters into sensitive and insensitive groups; the former are retained while the latter are eliminated from further study. However, these approaches ignore the loss of interaction effects between the retained parameters and the eliminated ones, which are also part of the total sensitivity indices. As a result, the wrong parameters might be identified as sensitive by these traditional SA approaches and tools. In this study, we propose a dynamic global sensitivity analysis method (DGSAM), which iteratively removes the least important parameter until only two parameters are left. We use CLM-CASA, a global terrestrial model, as an example to verify our findings with sample sizes ranging from 7000 to 280000. The results show that DGSAM identifies influential parameters more reliably, which is confirmed by parameter calibration experiments using four popular optimization methods. For example, optimization using the top three parameters selected by DGSAM achieved a 10% improvement over Sobol'. Furthermore, the computational cost of calibration was reduced to 1/6 of the original. In the future, it will be necessary to explore alternative SA methods that emphasize parameter interactions.
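The variance-based first-order index that Sobol'-type methods estimate, S_i = Var(E[Y|X_i]) / Var(Y), can be sketched with a brute-force double loop on a toy model (an illustration of the index itself, not of the DGSAM algorithm):

```python
import random
import statistics

def model(x1, x2, x3):
    # Toy model: x1 dominates, x2 enters only weakly, x3 is near-noise
    return 4.0 * x1 + x1 * x2 + 0.1 * x3

def first_order_index(i, n_outer=200, n_inner=200, seed=0):
    # S_i = Var(E[Y | X_i]) / Var(Y): fix X_i in the outer loop,
    # average Y over the other inputs in the inner loop.
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(3)]
            x[i] = xi
            ys.append(model(*x))
        cond_means.append(statistics.fmean(ys))
        all_y.extend(ys)
    return statistics.pvariance(cond_means) / statistics.pvariance(all_y)

print([round(first_order_index(i), 2) for i in range(3)])
```

For this model x1 carries nearly all of the variance, so S_1 is close to 1 while S_2 and S_3 are near 0; practical Sobol' estimators (e.g. Saltelli sampling) reach the same indices with far fewer model evaluations than this double loop.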
Barboni, Mirella Telles Salgueiro; Szepessy, Zsuzsanna; Ventura, Dora Fix; Németh, János
2018-04-01
To establish fluctuation limits, it was considered that not only overall macular sensitivity but also fluctuations of individual test points in the macula might have clinical value. Three repeated measurements of microperimetry were performed using the Standard Expert test of Macular Integrity Assessment (MAIA) in healthy subjects (N = 12, age = 23.8 ± 1.5 years old) and in patients with age-related macular degeneration (AMD) (N = 11, age = 68.5 ± 7.4 years old). A total of 37 macular points arranged in four concentric rings and in four quadrants were analyzed individually and in groups. The data show low fluctuation of macular sensitivity of individual test points in healthy subjects (average = 1.38 ± 0.28 dB) and AMD patients (average = 2.12 ± 0.60 dB). Lower sensitivity points are more related to higher fluctuation than to the distance from the central point. Fixation stability showed no effect on the sensitivity fluctuation. The 95th percentile of the standard deviations of healthy subjects was, on average, 2.7 dB, ranging from 1.2 to 4 dB, depending on the point tested. Point analysis and regional analysis might be considered prior to evaluating macular sensitivity fluctuation in order to distinguish between normal variation and a clinical change. Statistical methods were used to compare repeated microperimetry measurements and to establish fluctuation limits of the macular sensitivity. This analysis could add information regarding the integrity of different macular areas and provide new insights into fixation points prior to the biofeedback fixation training.
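The fluctuation limit described above, the 95th percentile of per-point standard deviations across repeated tests, can be sketched as follows (randomly simulated values, not MAIA measurements):

```python
import random
import statistics

random.seed(1)

# Simulated sensitivities (dB) for 37 macular points, 3 repeated tests each
# (illustrative values only; not data from the study)
n_points, n_repeats = 37, 3
measurements = [[random.gauss(27.0, 1.4) for _ in range(n_repeats)]
                for _ in range(n_points)]

# Per-point fluctuation = standard deviation across the repeated tests
point_sds = [statistics.stdev(reps) for reps in measurements]

# Fluctuation limit: 95th percentile (nearest-rank) of the per-point SDs
sorted_sds = sorted(point_sds)
idx = int(round(0.95 * (len(sorted_sds) - 1)))
limit = sorted_sds[idx]
print(f"mean per-point SD = {statistics.fmean(point_sds):.2f} dB, "
      f"95th percentile = {limit:.2f} dB")
```

A measured change at a test point exceeding this limit would then be flagged as a clinical change rather than normal test-retest variation.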
A Model for Analyzing Disability Policy
ERIC Educational Resources Information Center
Turnbull, Rud; Stowe, Matthew J.
2017-01-01
This article describes a 12-step model that can be used for policy analysis. The model encompasses policy development, implementation, and evaluation; takes into account structural foundations of policy; addresses both legal formalism and legal realism; demonstrates contextual sensitivity; and addresses application issues and different…
Sensitivity analysis for simulating pesticide impacts on honey bee colonies
Background/Question/Methods Regulatory agencies assess risks to honey bees from pesticides through a tiered process that includes predictive modeling with empirical toxicity and chemical data of pesticides as a line of evidence. We evaluate the VarroaPop colony model, proposed by...
Fault-Sensitivity and Wear-Out Analysis of VLSI Systems.
1995-06-01
Sensory-motor responses to mechanical stimulation of the esophagus after sensitization with acid.
Drewes, Asbjørn-Mohr; Reddy, Hariprasad; Staahl, Camilla; Pedersen, Jan; Funch-Jensen, Peter; Arendt-Nielsen, Lars; Gregersen, Hans
2005-07-28
Sensitization most likely plays an important role in chronic pain disorders, and such sensitization can be mimicked by experimental acid perfusion of the esophagus. The current study systematically investigated the sensory and motor responses of the esophagus to controlled mechanical stimuli before and after sensitization. Thirty healthy subjects were included. Distension of the distal esophagus with a balloon was performed before and after perfusion with 0.1 mol/L hydrochloric acid for 30 min. An impedance planimetry system was used to measure cross-sectional area, volume, pressure, and tension during the distensions. A new model allowed evaluation of the phasic contractions by the tension during contractions as a function of the initial muscle length before the contraction (comparable to the Frank-Starling law for the heart). Length-tension diagrams were used to evaluate the muscle tone before and after relaxation of the smooth muscle with butylscopolamine. The sensitization resulted in allodynia and hyperalgesia to the distension volumes, and the degree of sensitization was related to the infused volume of acid. Furthermore, a nearly 50% increase in the evoked referred pain was seen after sensitization. The mechanical analysis demonstrated hyper-reactivity of the esophagus following acid perfusion, with an increased number and force of the phasic contractions, but the muscle tone did not change. Acid perfusion of the esophagus sensitizes the sensory pathways and facilitates secondary contractions. The new model can be used to study abnormal sensory-motor mechanisms in visceral organs.
Evaluation of Microbial Load in Oropharyngeal Mucosa from Tannery Workers
Castellanos-Arévalo, Diana C.; Castellanos-Arévalo, Andrea P.; Camarena-Pozos, David A.; Colli-Mull, Juan G.; Maldonado-Vega, María
2014-01-01
Background Animal skin provides an ideal medium for the propagation of microorganisms and is used as a raw material in the tannery and footwear industry. The aim of this study was to evaluate and identify the microbial load in the oropharyngeal mucosa of tannery employees. Methods The health risk was estimated based on the identification of microorganisms found in the oropharyngeal mucosa samples. The study was conducted in a group of tanners and a control group. Samples were taken from the oropharyngeal mucosa and inoculated on plates with selective medium. In the samples, bacteria were identified by 16S ribosomal DNA analysis and the yeasts through a presumptive method. In addition, the sensitivity of these microorganisms to antibiotics/antifungals was evaluated. Results The identified bacteria belonged to the families Enterobacteriaceae, Pseudomonadaceae, Neisseriaceae, Alcaligenaceae, Moraxellaceae, and Xanthomonadaceae, of which some species are considered pathogenic or opportunistic microorganisms; these bacteria were not present in the control group. Forty-two percent of the bacteria identified in the tanners group are correlated with respiratory diseases. Yeasts were also identified, including the following species: Candida glabrata, Candida tropicalis, Candida albicans, and Candida krusei. Regarding the sensitivity test of bacteria identified in the tanners group, 90% showed sensitivity to piperacillin/tazobactam, 87% to ticarcillin/clavulanic acid, 74% to ampicillin/sulbactam, and 58% to amoxicillin/clavulanic acid. Conclusion Several of the bacteria and yeasts identified in the oropharyngeal mucosa of tanners have been correlated with infections in humans and have already been reported as airborne microorganisms in this working environment, representing a health risk for workers. PMID:25830072
Macera, Annalisa; Lario, Chiara; Petracchini, Massimo; Gallo, Teresa; Regge, Daniele; Floriani, Irene; Ribero, Dario; Capussotti, Lorenzo; Cirillo, Stefano
2013-03-01
To compare the diagnostic accuracy and sensitivity of Gd-EOB-DTPA MRI and diffusion-weighted imaging (DWI), alone and in combination, for detecting colorectal liver metastases in patients who had undergone preoperative chemotherapy. Thirty-two consecutive patients with a total of 166 liver lesions were retrospectively enrolled. Of the lesions, 144 (86.8 %) were metastatic at pathology. Three image sets (1, Gd-EOB-DTPA; 2, DWI; 3, combined Gd-EOB-DTPA and DWI) were independently reviewed by two observers. Statistical analysis was performed on a per-lesion basis. Evaluation of image set 1 correctly identified 127/166 lesions (accuracy 76.5 %; 95 % CI 69.3-82.7) and 106/144 metastases (sensitivity 73.6 %, 95 % CI 65.6-80.6). Evaluation of image set 2 correctly identified 108/166 lesions (accuracy 65.1 %, 95 % CI 57.3-72.3) and 87/144 metastases (sensitivity 60.4 %, 95 % CI 51.9-68.5). Evaluation of image set 3 correctly identified 148/166 lesions (accuracy 89.2 %, 95 % CI 83.4-93.4) and 131/144 metastases (sensitivity 91 %, 95 % CI 85.1-95.1). Differences were statistically significant (P < 0.001). Notably, similar results were obtained when analysing only small lesions (<1 cm). The combination of DWI with Gd-EOB-DTPA-enhanced MR imaging significantly increases diagnostic accuracy and sensitivity in patients with colorectal liver metastases treated with preoperative chemotherapy, and it is particularly effective in the detection of small lesions.
NASA Technical Reports Server (NTRS)
Lockwood, H. E.
1975-01-01
A color film with a sensitivity and color balance equal to SO-368, Kodak MS Ektachrome (Estar thin base) was required for use on the Apollo-Soyuz test project (ASTP). A Wratten 2A filter was required for use with the film to reduce short wavelength effects which frequently produce a blue color balance in aerial photographs. The background regarding a special emulsion which was produced with a 2A filter equivalent as an integral part of an SO-368 film manufactured by Eastman Kodak, the cost for production of the special film, and the results of a series of tests made within PTD to certify the film for ASTP use are documented. The tests conducted and documented were physical inspection, process compatibility, effective sensitivity, color balance, cross section analysis, resolution, spectral sensitivity, consistency of results, and picture sample analysis.
Bajer, P.G.; Wildhaber, M.L.
2007-01-01
Demographic models for the shovelnose (Scaphirhynchus platorynchus) and pallid (S. albus) sturgeons in the Lower Missouri River were developed to conduct sensitivity analyses for both populations. Potential effects of increased fishing mortality on the shovelnose sturgeon were also evaluated. Populations of shovelnose and pallid sturgeon were most sensitive to age-0 mortality rates as well as mortality rates of juveniles and young adults. Overall, fecundity was a less sensitive parameter. However, increased fecundity effectively balanced higher mortality among sensitive age classes in both populations. Management that increases population-level fecundity and improves survival of age-0, juveniles, and young adults should most effectively benefit both populations. Evaluation of reproductive values indicated that populations of pallid sturgeon dominated by ages ≥35 could rapidly lose their potential for growth, particularly if recruitment remains low. Under the initial parameter values portraying current conditions the population of shovelnose sturgeon was predicted to decline by 1.65% annually, causing the commercial yield to also decline. Modeling indicated that the commercial yield could increase substantially if exploitation of females in ages ≤12 was highly restricted.
French, Michael T; Salomé, Helena J; Sindelar, Jody L; McLellan, A Thomas
2002-04-01
To provide detailed methodological guidelines for using the Drug Abuse Treatment Cost Analysis Program (DATCAP) and Addiction Severity Index (ASI) in a benefit-cost analysis of addiction treatment. A representative benefit-cost analysis of three outpatient programs was conducted to demonstrate the feasibility and value of the methodological guidelines. Procedures are outlined for using resource use and cost data collected with the DATCAP. Techniques are described for converting outcome measures from the ASI to economic (dollar) benefits of treatment. Finally, principles are advanced for conducting a benefit-cost analysis and a sensitivity analysis of the estimates. The DATCAP was administered at three outpatient drug-free programs in Philadelphia, PA, for 2 consecutive fiscal years (1996 and 1997). The ASI was administered to a sample of 178 treatment clients at treatment entry and at 7-months postadmission. The DATCAP and ASI appear to have significant potential for contributing to an economic evaluation of addiction treatment. The benefit-cost analysis and subsequent sensitivity analysis all showed that total economic benefit was greater than total economic cost at the three outpatient programs, but this representative application is meant to stimulate future economic research rather than justifying treatment per se. This study used previously validated, research-proven instruments and methods to perform a practical benefit-cost analysis of real-world treatment programs. The study demonstrates one way to combine economic and clinical data and offers a methodological foundation for future economic evaluations of addiction treatment.
Wäscher, Sebastian; Salloch, Sabine; Ritter, Peter; Vollmann, Jochen; Schildmann, Jan
2017-05-01
This article describes a process of developing, implementing and evaluating a clinical ethics support service intervention with the goal of building up a context-sensitive structure of minimal clinical ethics support in an oncology department without a prior clinical ethics structure. Scholars from different disciplines have called for an improvement in the evaluation of clinical ethics support services (CESS) for different reasons over several decades. However, while a lot has been said about the concepts and methodological challenges of evaluating CESS up to the present time, relatively few empirical studies have been carried out. The aim of this article is twofold. On the one hand, it describes a process of developing, modifying and evaluating a CESS intervention as part of the ETHICO research project, using the approach of qualitative-formative evaluation. On the other hand, it provides a methodological analysis which specifies the contribution of qualitative empirical methods to the (formative) evaluation of CESS. We conclude with a consideration of the strengths and limitations of qualitative evaluation research with regard to the evaluation and development of context-sensitive CESS. We further discuss our own approach in contrast to rather traditional consult or committee models. © 2017 John Wiley & Sons Ltd.
Nuclear Data Needs for Generation IV Nuclear Energy Systems
NASA Astrophysics Data System (ADS)
Rullhusen, Peter
2006-04-01
Nuclear data needs for generation IV systems. Future of nuclear energy and the role of nuclear data / P. Finck. Nuclear data needs for generation IV nuclear energy systems-summary of U.S. workshop / T. A. Taiwo, H. S. Khalil. Nuclear data needs for the assessment of gen. IV systems / G. Rimpault. Nuclear data needs for generation IV-lessons from benchmarks / S. C. van der Marck, A. Hogenbirk, M. C. Duijvestijn. Core design issues of the supercritical water fast reactor / M. Mori ... [et al.]. GFR core neutronics studies at CEA / J. C. Bosq ... [et al.]. Comparative study on different phonon frequency spectra of graphite in GCR / Young-Sik Cho ... [et al.]. Innovative fuel types for minor actinides transmutation / D. Haas, A. Fernandez, J. Somers. The importance of nuclear data in modeling and designing generation IV fast reactors / K. D. Weaver. The GIF and Mexico-"everything is possible" / C. Arrenondo Sánchez -- Benchmarks, sensitivity calculations, uncertainties. Sensitivity of advanced reactor and fuel cycle performance parameters to nuclear data uncertainties / G. Aliberti ... [et al.]. Sensitivity and uncertainty study for thermal molten salt reactors / A. Biduad ... [et al.]. Integral reactor physics benchmarks - the International Criticality Safety Benchmark Evaluation Project (ICSBEP) and the International Reactor Physics Experiment Evaluation Project (IRPHEP) / J. B. Briggs, D. W. Nigg, E. Sartori. Computer model of an error propagation through micro-campaign of fast neutron gas cooled nuclear reactor / E. Ivanov. Combining differential and integral experiments on [symbol] for reducing uncertainties in nuclear data applications / T. Kawano ... [et al.]. Sensitivity of activation cross sections of the Hafnium, Tantalum and Tungsten stable isotopes to nuclear reaction mechanisms / V. Avrigeanu ... [et al.]. Generating covariance data with nuclear models / A. J. Koning. Sensitivity of CANDU-SCWR reactors physics calculations to nuclear data files / K. S. 
Kozier, G. R. Dyck. The lead cooled fast reactor benchmark BREST-300: analysis with sensitivity method / V. Smirnov ... [et al.]. Sensitivity analysis of neutron cross-sections considered for design and safety studies of LFR and SFR generation IV systems / K. Tucek, J. Carlsson, H. Wider -- Experiments. INL capabilities for nuclear data measurements using the Argonne intense pulsed neutron source facility / J. D. Cole ... [et al.]. Cross-section measurements in the fast neutron energy range / A. Plompen. Recent measurements of neutron capture cross sections for minor actinides by a JNC and Kyoto University Group / H. Harada ... [et al.]. Determination of minor actinides fission cross sections by means of transfer reactions / M. Aiche ... [et al.] -- Evaluated data libraries. Nuclear data services from the NEA / H. Henriksson, Y. Rugama. Nuclear databases for energy applications: an IAEA perspective / R. Capote Noy, A. L. Nichols, A. Trkov. Nuclear data evaluation for generation IV / G. Noguère ... [et al.]. Improved evaluations of neutron-induced reactions on americium isotopes / P. Talou ... [et al.]. Using improved ENDF-based nuclear data for CANDU reactor calculations / J. Prodea. A comparative study on the graphite-moderated reactors using different evaluated nuclear data / Do Heon Kim ... [et al.].
Lateral directional requirements for a low L/D aeromaneuvering orbital transfer vehicle
NASA Technical Reports Server (NTRS)
Gamble, J. D.; Spratlin, K. M.; Skalecki, L. M.
1984-01-01
The lateral-directional aerodynamics and control requirements for a low L/D (0.3) aeromaneuvering orbital transfer vehicle are evaluated. A lateral directional RCS control concept that permits a linearized analysis is utilized to evaluate the effect of Dutch Roll frequency and damping on the atmospheric guidance and control performance. The bank rate and acceleration requirements for acceptable performance are defined and the sensitivity to a parameter similar to the lateral control departure parameter but involving the RCS jets is evaluated.
Cacho, J; Sevillano, J; de Castro, J; Herrera, E; Ramos, M P
2008-11-01
Insulin resistance plays a role in the pathogenesis of diabetes, including gestational diabetes. The glucose clamp is considered the gold standard for determining in vivo insulin sensitivity, both in human and in animal models. However, the clamp is laborious, time consuming and, in animals, requires anesthesia and collection of multiple blood samples. In human studies, a number of simple indexes, derived from fasting glucose and insulin levels, have been obtained and validated against the glucose clamp. However, these indexes have not been validated in rats and their accuracy in predicting altered insulin sensitivity remains to be established. In the present study, we have evaluated whether indirect estimates based on fasting glucose and insulin levels are valid predictors of insulin sensitivity in nonpregnant and 20-day-pregnant Wistar and Sprague-Dawley rats. We have analyzed the homeostasis model assessment of insulin resistance (HOMA-IR), the quantitative insulin sensitivity check index (QUICKI), and the fasting glucose-to-insulin ratio (FGIR) by comparing them with the insulin sensitivity (SI(Clamp)) values obtained during the hyperinsulinemic-isoglycemic clamp. We have performed a calibration analysis to evaluate the ability of these indexes to accurately predict insulin sensitivity as determined by the reference glucose clamp. Finally, to assess the reliability of these indexes for the identification of animals with impaired insulin sensitivity, performance of the indexes was analyzed by receiver operating characteristic (ROC) curves in Wistar and Sprague-Dawley rats. We found that HOMA-IR, QUICKI, and FGIR correlated significantly with SI(Clamp), exhibited good sensitivity and specificity, accurately predicted SI(Clamp), and yielded lower insulin sensitivity in pregnant than in nonpregnant rats. Together, our data demonstrate that these indexes provide an easy and accurate measure of insulin sensitivity during pregnancy in the rat.
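The three fasting-based indexes evaluated above have standard closed-form definitions in the human literature; the abstract does not restate them, so the sketch below assumes the conventional formulas and units (glucose in mmol/L for HOMA-IR, glucose in mg/dL for QUICKI and FGIR, insulin in µU/mL):

```python
import math

def homa_ir(glucose_mmol_l, insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance (higher = more resistant)."""
    return glucose_mmol_l * insulin_uu_ml / 22.5

def quicki(glucose_mg_dl, insulin_uu_ml):
    """Quantitative insulin sensitivity check index (higher = more sensitive)."""
    return 1.0 / (math.log10(insulin_uu_ml) + math.log10(glucose_mg_dl))

def fgir(glucose_mg_dl, insulin_uu_ml):
    """Fasting glucose-to-insulin ratio (higher = more sensitive)."""
    return glucose_mg_dl / insulin_uu_ml
```

Because all three are computed from a single fasting sample, they avoid the anesthesia and repeated blood sampling that the clamp requires in rats, which is the practical motivation of the study.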
NASA Astrophysics Data System (ADS)
Chen, Shichao; Zhu, Yizheng
2017-02-01
Sensitivity is a critical index for measuring the temporal fluctuation of the retrieved optical pathlength in a quantitative phase imaging system. However, an accurate and comprehensive analysis for sensitivity evaluation is still lacking in the current literature. In particular, previous theoretical studies of fundamental sensitivity based on Gaussian noise models are not applicable to modern cameras and detectors, which are dominated by shot noise. In this paper, we derive two shot noise-limited theoretical sensitivities, the Cramér-Rao bound and the algorithmic sensitivity, for wavelength shifting interferometry, which is a major category of on-axis interferometry techniques in quantitative phase imaging. Based on the derivations, we show that the shot noise-limited model permits accurate estimation of theoretical sensitivities directly from measured data. These results can provide important insights into fundamental constraints on system performance and can be used to guide system design and optimization. The same concepts can be generalized to other quantitative phase imaging techniques as well.
Sun, Changling; Han, Xue; Li, Xiaoying; Zhang, Yayun; Du, Xiaodong
2017-04-01
Objective To evaluate the performance of narrow band imaging (NBI) for the diagnosis of laryngeal cancer and to compare the diagnostic value of NBI with that of white light endoscopy. Data Sources PubMed, Embase, Cochrane Library, and CNKI databases. Review Methods Data analyses were performed with Meta-DiSc. The updated Quality Assessment of Diagnostic Accuracy Studies-2 tool was used to assess study quality and potential bias. Publication bias was assessed with the Deeks' asymmetry test. The protocol used in this article has been published on PROSPERO and is in accordance with the PRISMA checklist. The registry number for this study is CRD42015025866. Results Six studies including 716 lesions were included in this meta-analysis. The pooled sensitivity, specificity, and diagnostic odds ratio for the NBI diagnosis of laryngeal cancer were 0.94 (95% confidence interval [95% CI]: 0.91-0.96), 0.89 (95% CI: 0.85-0.92), and 142.12 (95% CI: 46.42-435.15), respectively, and the area under the receiver operating characteristic curve was 0.97. Among the 6 studies, 3 evaluated the diagnostic value of white light endoscopy, with a sensitivity of 0.81 (95% CI: 0.76-0.86), a specificity of 0.92 (95% CI: 0.88-0.95), and a diagnostic odds ratio of 33.82 (95% CI: 14.76-77.49). The evaluation of heterogeneity, calculated per the diagnostic odds ratio, gave an I² of 66%. No marked publication bias (P = .84) was detected in this meta-analysis. Conclusion The sensitivity of NBI is superior to that of white light endoscopy, and the potential value of NBI needs to be validated in future studies.
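For reference, the diagnostic odds ratio summarizes sensitivity and specificity in a single number. A meta-analytic pooled DOR (such as the 142.12 reported above) is pooled across individual studies, so it will not exactly equal the value implied by the pooled sensitivity and specificity; the minimal Python sketch below only illustrates the formula itself:

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = positive likelihood ratio / negative likelihood ratio."""
    positive_lr = sensitivity / (1.0 - specificity)
    negative_lr = (1.0 - sensitivity) / specificity
    return positive_lr / negative_lr

# With the pooled NBI estimates (0.94, 0.89) this gives about 126.8;
# the reported pooled DOR of 142.12 differs because pooling is per-study.
```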
Accuracy of endoscopic and videofluoroscopic evaluations of swallowing for oropharyngeal dysphagia.
Giraldo-Cadavid, Luis Fernando; Leal-Leaño, Lorena Renata; Leon-Basantes, Guillermo Alfredo; Bastidas, Alirio Rodrigo; Garcia, Rafael; Ovalle, Sergio; Abondano-Garavito, Jorge E
2017-09-01
A systematic review and meta-analysis of the literature was conducted to compare the accuracy with which flexible endoscopic evaluation of swallowing (FEES) and videofluoroscopic swallowing study (VFSS) assessed oropharyngeal dysphagia in adults. The PubMed, Embase, and Latin American and Caribbean Health Sciences Literature (LILACS) databases were searched. A review of published studies was conducted in parallel by two groups of researchers. We evaluated the methodological quality, homogeneity, threshold effect, and publication bias. The results are presented as originally published, then with each test compared against the other as a reference and both compared against a composite reference standard, and then pooled using a random effects model. Analyses were performed with Meta-DiSc and SPSS. The search yielded 5,697 articles. Fifty-two articles were reviewed in full text, and six articles were included in the meta-analysis. FEES showed greater sensitivity than VFSS for aspiration (0.88 vs. 0.77; P = .03), penetration (0.97 vs. 0.83; P = .0002), and laryngopharyngeal residues (0.97 vs. 0.80; P < .0001). Sensitivity to detect pharyngeal premature spillage was similar for both tests (VFSS: 0.80; FEES: 0.69; P = .28). The specificities of both tests were similar (range, 0.93-0.98). In the sensitivity analysis, there were statistically significant differences between the tests regarding residues but only marginally significant differences regarding aspiration and penetration. FEES had a slight advantage over VFSS in detecting aspiration, penetration, and residues. Prospective studies comparing both tests against an appropriate reference standard are needed to define which test has greater accuracy. Level of evidence: 2a. Laryngoscope, 127:2002-2010, 2017. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.
Santurtún, Ana; Riancho, José A; Arozamena, Jana; López-Duarte, Mónica; Zarrabeitia, María T
2017-01-01
Several methods have been developed to determine genetic profiles from mixed samples and to analyze chimerism in transplanted patients. The aim of this study was to explore the effectiveness of using droplet digital PCR (ddPCR) for mixed chimerism detection (a mixture of genetic profiles resulting after allogeneic hematopoietic stem cell transplantation (HSCT)). We analyzed 25 DNA samples from patients who had undergone HSCT and compared the performance of ddPCR with that of two established methods for chimerism detection, based upon Indel and STR analysis, respectively. Additionally, eight artificial DNA mixture samples were created to evaluate the sensitivity of ddPCR. Our results show that the chimerism percentages estimated by the analysis of a single Indel using ddPCR were very similar to those calculated by the amplification of 15 STRs (r² = 0.970) and to those obtained by the amplification of 38 Indels (r² = 0.975). Moreover, the amplification of a single Indel by ddPCR was sensitive enough to detect a minor DNA contributor making up as little as 0.5% of the sample. We conclude that ddPCR can be a powerful tool for the determination of genetic profiles in forensic mixtures and for clinical chimerism analysis when traditional techniques are not sensitive enough.
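The abstract does not spell out how droplet counts become chimerism percentages. The sketch below shows the generic ddPCR arithmetic under common assumptions: target concentration per droplet is Poisson-corrected from the fraction of positive droplets, and the chimerism fraction is the donor's share of the combined donor and recipient signals (function names are illustrative, not from the study):

```python
import math

def copies_per_droplet(n_positive, n_total):
    """Poisson-corrected mean target copies per droplet from an end-point
    positive/negative droplet count (lambda = -ln(fraction negative))."""
    fraction_negative = 1.0 - n_positive / n_total
    return -math.log(fraction_negative)

def chimerism_fraction(donor_signal, recipient_signal):
    """Donor fraction of the mixture, given donor- and recipient-specific
    marker concentrations measured in the same sample."""
    return donor_signal / (donor_signal + recipient_signal)
```

With these definitions, a donor-specific Indel contributing 0.5 of 100 concentration units corresponds to the 0.5% minor contributor the study reports as detectable.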
Mallorie, Amy; Goldring, James; Patel, Anant; Lim, Eric; Wagner, Thomas
2017-08-01
Lymph node involvement in non-small-cell lung cancer (NSCLC) is a major factor in determining management and prognosis. We aimed to evaluate the accuracy of fluorine-18-fluorodeoxyglucose-PET/computed tomography (CT) for the assessment of nodal involvement in patients with NSCLC. In this retrospective study, we included 61 patients with suspected or confirmed resectable NSCLC over a 2-year period from April 2013 to April 2015. A total of 221 nodes with pathological staging from surgery or endobronchial ultrasound-guided transbronchial needle aspiration were assessed using a nodal station-based analysis with original clinical reports and three different cut-offs: mediastinal blood pool (MBP), liver background and tumour maximal standardized uptake value (SUVmax)/2. Using nodal station-based analysis for activity more than tumour SUVmax/2, the sensitivity was 45%, the specificity was 89% and the negative predictive value (NPV) was 87%. For activity more than MBP, the sensitivity was 93%, the specificity was 72% and NPV was 98%. For activity more than liver background, the sensitivity was 83%, the specificity was 84% and NPV was 96%. Using a nodal staging-based analysis for accuracy at detecting N2/3 disease, for activity more than tumour SUVmax/2, the sensitivity was 59%, the specificity was 85% and NPV was 80%. For activity more than MBP, the sensitivity was 95%, the specificity was 61% and NPV was 96%. For activity more than liver background, the sensitivity was 86%, the specificity was 81% and NPV was 92%. Receiver-operating characteristic analysis showed the optimal nodal SUVmax to be more than 6.4 with a sensitivity of 45% and a specificity of 95%, with an area under the curve of 0.85. Activity more than MBP was the cut-off with the highest sensitivity and NPV. Activity more than primary tumour SUVmax/2 was the most specific cut-off. Nodal SUVmax more than 6.4 has a high specificity of 95%.
Soleimani, Robabeh; Salehi, Zivar; Soltanipour, Soheil; Hasandokht, Tolou; Jalali, Mir Mohammad
2018-04-01
Methylphenidate (MPH) is the most commonly used treatment for attention-deficit hyperactivity disorder (ADHD) in children. However, the response to MPH is not the same in all patients. This meta-analysis investigated the potential role of SLC6A3 polymorphisms in response to MPH in children with ADHD. Clinical trials or naturalistic studies were selected from electronic databases. A meta-analysis was conducted using a random-effects model. Cohen's d effect size and 95% confidence intervals (CIs) were determined. Sensitivity analysis and meta-regression were performed. Q-statistic and Egger's tests were conducted to evaluate heterogeneity and publication bias, respectively. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was used to assess the quality of evidence. Sixteen studies with follow-up periods of 1-28 weeks were eligible. The mean treatment acceptability of MPH was 97.2%. In contrast to clinical trials, the meta-analysis of naturalistic studies indicated that children who were not 10/10 repeat carriers had a better response to MPH (Cohen's d: -0.09 and 0.44, respectively). The 9/9 repeat polymorphism had no effect on the response rate (Cohen's d: -0.43). In the meta-regression, a significant association was observed between baseline severity of ADHD, MPH dosage, and combined type of ADHD in some genetic models. Sensitivity analysis indicated the robustness of our findings. No publication bias was observed in our meta-analysis. The GRADE evaluations revealed very low levels of confidence for each outcome of response to MPH. The results of clinical trials and naturalistic studies regarding the effect sizes of different SLC6A3 polymorphisms were contradictory. Therefore, further research is recommended. © 2017 Wiley Periodicals, Inc.
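A random-effects model is named above but no estimator is given; the DerSimonian-Laird method is the most common choice in this setting, so a sketch under that assumption could look like this (effect sizes such as Cohen's d and their within-study variances as inputs):

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird tau^2 estimator.
    Returns (pooled effect, tau^2, standard error of the pooled effect)."""
    k = len(effects)
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)        # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return pooled, tau2, se
```

When the studies agree (Q ≤ k − 1), tau² truncates to zero and the result reduces to the fixed-effect estimate; heterogeneous effects inflate tau² and widen the pooled confidence interval.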
Economic Efficiency and Investment Timing for Dual Water Systems
NASA Astrophysics Data System (ADS)
Leconte, Robert; Hughes, Trevor C.; Narayanan, Rangesan
1987-10-01
A general methodology to evaluate the economic feasibility of dual water systems is presented. In a first step, a static analysis (evaluation at a single point in time) is developed. The analysis requires the evaluation of consumers' and producer's surpluses from water use and the capital cost of the dual (outdoor) system. The analysis is then extended to a dynamic approach where the water demand increases with time (as a result of a population increase) and where the dual system is allowed to expand. The model determines whether construction of a dual system represents a net benefit, and if so, what is the best time to initiate the system (corresponding to maximization of social welfare). Conditions under which an analytic solution is possible are discussed and results of an application are summarized (including sensitivity to different parameters). The analysis allows identification of key parameters influencing attractiveness of dual water systems.
Huang, Wei; Altaf, Kiran; Jin, Tao; Xiong, Jun-Jie; Wen, Li; Javed, Muhammad A; Johnstone, Marianne; Xue, Ping; Halloran, Christopher M; Xia, Qing
2013-01-01
AIM: To undertake a meta-analysis on the value of urinary trypsinogen activation peptide (uTAP) in predicting severity of acute pancreatitis on admission. METHODS: Major databases including Medline, Embase, Science Citation Index Expanded and the Cochrane Central Register of Controlled Trials in the Cochrane Library were searched to identify all relevant studies from January 1990 to January 2013. Pooled sensitivity, specificity and the diagnostic odds ratios (DORs) with 95% CI were calculated for each study and were compared to other systems/biomarkers if mentioned within the same study. Summary receiver-operating characteristic curves were constructed and the area under the curve (AUC) was evaluated. RESULTS: In total, six studies of uTAP with a cut-off value of 35 nmol/L were included in this meta-analysis. Overall, the pooled sensitivity and specificity of uTAP for predicting severity of acute pancreatitis, at time of admission, were 71% and 75%, respectively (AUC = 0.83, DOR = 8.67, 95% CI: 3.70-20.33). When uTAP was compared with plasma C-reactive protein, the pooled sensitivity, specificity, AUC and DOR were 0.64 vs 0.67, 0.77 vs 0.75, 0.82 vs 0.79 and 6.27 vs 6.32, respectively. Similarly, the pooled sensitivity, specificity, AUC and DOR of uTAP vs Acute Physiology and Chronic Health Evaluation II within the first 48 h of admission were found to be 0.64 vs 0.69, 0.77 vs 0.61, 0.82 vs 0.73 and 6.27 vs 4.61, respectively. CONCLUSION: uTAP has the potential to act as a stratification marker on admission for differentiating disease severity of acute pancreatitis. PMID:23901239
Parente, Diane M; Cunha, Cheston B; Mylonakis, Eleftherios; Timbrook, Tristan T
2018-01-11
Recent literature has highlighted MRSA nasal screening as a possible antimicrobial stewardship program (ASP) tool for avoiding unnecessary empiric MRSA therapy for pneumonia, yet current guidelines recommend MRSA therapy based on risk factors. The objective of this meta-analysis was to evaluate the diagnostic value of MRSA nasal screening in MRSA pneumonia. PubMed and EMBASE were searched from inception to November 2016 for English studies evaluating MRSA nasal screening and development of MRSA pneumonia. Data analysis was performed using a bivariate random-effects model to estimate pooled sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values. Twenty-two studies, comprising 5,163 patients, met our inclusion criteria. Pooled sensitivity and specificity of the MRSA nares screen for all MRSA pneumonia types were 70.9% and 90.3%, respectively. With a 10% prevalence of potential MRSA pneumonia, the calculated PPV was 44.8% while the NPV was 96.5%. The pooled sensitivity and specificity for MRSA community-acquired pneumonia (CAP) and healthcare-associated pneumonia (HCAP) were 85% and 92.1%, respectively. For CAP and HCAP, the PPV and NPV increased to 56.8% and 98.1%, respectively. In comparison, for MRSA ventilator-associated pneumonia (VAP), the sensitivity, specificity, PPV, and NPV were 40.3%, 93.7%, 35.7%, and 94.8%, respectively. Nares screening for MRSA had a high specificity and NPV for ruling out MRSA pneumonia, particularly in cases of CAP/HCAP. Based on the NPV, utilization of MRSA nares screening is a valuable tool for ASPs to streamline empiric antibiotic therapy, especially among patients with pneumonia. © The Author(s) 2018. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail: journals.permissions@oup.com.
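The predictive values quoted above follow from pooled sensitivity, specificity, and an assumed prevalence via Bayes' rule; a short Python check reproduces the 44.8% PPV and 96.5% NPV at 10% prevalence:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from test characteristics and disease prevalence."""
    tp = sensitivity * prevalence              # true positive rate in population
    fp = (1.0 - specificity) * (1.0 - prevalence)
    fn = (1.0 - sensitivity) * prevalence
    tn = specificity * (1.0 - prevalence)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv, npv = predictive_values(0.709, 0.903, 0.10)
# ppv ≈ 0.448, npv ≈ 0.965, matching the reported 44.8% and 96.5%
```

This also makes the stewardship argument concrete: at low prevalence the NPV stays high even with modest sensitivity, which is why a negative nares screen is useful for de-escalating empiric MRSA therapy.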
Duvivier, Wilco F; van Beek, Teris A; Nielen, Michel W F
2016-11-15
Recently, several direct and/or ambient mass spectrometry (MS) approaches have been suggested for drugs of abuse imaging in hair. The use of mass spectrometers with insufficient selectivity could result in false-positive measurements due to isobaric interferences. Different mass analyzers have been evaluated regarding their selectivity and sensitivity for the detection of Δ9-tetrahydrocannabinol (THC) from intact hair samples using direct analysis in real time (DART) ionization. Four different mass analyzers, namely (1) an orbitrap, (2) a quadrupole orbitrap, (3) a triple quadrupole, and (4) a quadrupole time-of-flight (QTOF), were evaluated. Selectivity and sensitivity were assessed by analyzing secondary THC standard dilutions on stainless steel mesh screens and blank hair samples, and by the analysis of authentic cannabis user hair samples. Additionally, separation of isobaric ions by use of travelling wave ion mobility (TWIM) was investigated. The use of a triple quadrupole instrument resulted in the highest sensitivity; however, transitions used for multiple reaction monitoring were only found to be specific when using high mass resolution product ion measurements. A mass resolution of at least 30,000 FWHM at m/z 315 was necessary to avoid overlap of THC with isobaric ions originating from the hair matrix. Even though selectivity was enhanced by use of TWIM, the QTOF instrument in resolution mode could not indisputably differentiate THC from endogenous isobaric ions in drug user hair samples. Only the high resolution of the (quadrupole) orbitrap instruments and the QTOF instrument in high-resolution mode distinguished THC in hair samples from endogenous isobaric interferences. As expected, enhanced selectivity compromises sensitivity and THC was only detectable in hair from heavy users. Copyright © 2016 John Wiley & Sons, Ltd.
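The stated requirement of 30,000 FWHM at m/z 315 translates into an absolute peak width through the usual definition of resolving power, R = m/Δm; a one-function sketch (illustrative, not from the paper):

```python
def peak_fwhm(mz, resolving_power):
    """Peak width (FWHM, in Da) implied by resolving power R = m / delta_m."""
    return mz / resolving_power
```

At R = 30,000, the peak width at m/z 315 is 315 / 30,000 = 0.0105 Da, so isobaric matrix ions within roughly 10 mDa of the protonated THC ion would not be resolved at lower resolving power.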
Kepha, Stella; Kihara, Jimmy H.; Njenga, Sammy M.; Pullan, Rachel L.; Brooker, Simon J.
2014-01-01
Objectives This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Methods Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Results Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. Conclusions The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings. 
PMID:24810593
Jethwa, Pinakin R; Punia, Vineet; Patel, Tapan D; Duffis, E Jesus; Gandhi, Chirag D; Prestigiacomo, Charles J
2013-04-01
Recent studies have documented the high sensitivity of computed tomography angiography (CTA) in detecting a ruptured aneurysm in the presence of acute subarachnoid hemorrhage (SAH). The practice of digital subtraction angiography (DSA) when CTA does not reveal an aneurysm has thus been called into question. We examined this dilemma from a cost-effectiveness perspective by using current decision analysis techniques. A decision tree was created with the use of TreeAge Pro Suite 2012; in 1 arm, a CTA-negative SAH was followed up with DSA; in the other arm, patients were observed without further imaging. Based on literature review, costs and utilities were assigned to each potential outcome. Base-case and sensitivity analyses were performed to determine the cost-effectiveness of each strategy. A Monte Carlo simulation was then conducted by sampling each variable over a plausible distribution to evaluate the robustness of the model. With the use of a negative predictive value of 95.7% for CTA, observation was found to be the most cost-effective strategy ($6737/Quality Adjusted Life Year [QALY] vs $8460/QALY) in the base-case analysis. One-way sensitivity analysis demonstrated that DSA became the more cost-effective option if the negative predictive value of CTA fell below 93.72%. The Monte Carlo simulation produced an incremental cost-effectiveness ratio of $83,083/QALY. At the conventional willingness-to-pay threshold of $50,000/QALY, observation was the more cost-effective strategy in 83.6% of simulations. The decision to perform a DSA in CTA-negative SAH depends strongly on the sensitivity of CTA, and therefore must be evaluated at each center treating these types of patients. Given the high sensitivity of CTA reported in the current literature, performing DSA on all patients with CTA-negative SAH may not be cost-effective at every institution.
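The willingness-to-pay comparison used in this kind of analysis reduces to computing an incremental cost-effectiveness ratio (ICER) and testing it against the threshold; the sketch below uses invented costs and QALY values, not numbers from the study:

```python
def icer(cost_a, qaly_a, cost_b, qaly_b):
    """Incremental cost per QALY gained by strategy B over strategy A."""
    return (cost_b - cost_a) / (qaly_b - qaly_a)

def prefer_b(cost_a, qaly_a, cost_b, qaly_b, wtp=50_000):
    """Adopt the costlier, more effective strategy only if its ICER
    falls below the willingness-to-pay (WTP) threshold."""
    return icer(cost_a, qaly_a, cost_b, qaly_b) < wtp

# Hypothetical strategies: observation vs. follow-up DSA after negative CTA
observation = (7_000, 1.00)  # (total cost in $, QALYs) -- illustrative only
dsa = (9_000, 1.02)
print(icer(*observation, *dsa))      # roughly $100,000 per additional QALY
print(prefer_b(*observation, *dsa))  # not cost-effective at $50,000/QALY
```

A probabilistic sensitivity analysis, as in the abstract, would repeat this comparison over many Monte Carlo draws of the inputs and report the fraction of draws in which each strategy wins.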
Assefa, Liya M; Crellen, Thomas; Kepha, Stella; Kihara, Jimmy H; Njenga, Sammy M; Pullan, Rachel L; Brooker, Simon J
2014-05-01
This study evaluates the diagnostic accuracy and cost-effectiveness of the Kato-Katz and Mini-FLOTAC methods for detection of soil-transmitted helminths (STH) in a post-treatment setting in western Kenya. A cost analysis also explores the cost implications of collecting samples during school surveys when compared to household surveys. Stool samples were collected from children (n = 652) attending 18 schools in Bungoma County and diagnosed by the Kato-Katz and Mini-FLOTAC coprological methods. Sensitivity and additional diagnostic performance measures were analyzed using Bayesian latent class modeling. Financial and economic costs were calculated for all survey and diagnostic activities, and cost per child tested, cost per case detected and cost per STH infection correctly classified were estimated. A sensitivity analysis was conducted to assess the impact of various survey parameters on cost estimates. Both diagnostic methods exhibited comparable sensitivity for detection of any STH species over single and consecutive day sampling: 52.0% for single day Kato-Katz; 49.1% for single-day Mini-FLOTAC; 76.9% for consecutive day Kato-Katz; and 74.1% for consecutive day Mini-FLOTAC. Diagnostic performance did not differ significantly between methods for the different STH species. Use of Kato-Katz with school-based sampling was the lowest cost scenario for cost per child tested ($10.14) and cost per case correctly classified ($12.84). Cost per case detected was lowest for Kato-Katz used in community-based sampling ($128.24). Sensitivity analysis revealed the cost of case detection for any STH decreased non-linearly as prevalence rates increased and was influenced by the number of samples collected. The Kato-Katz method was comparable in diagnostic sensitivity to the Mini-FLOTAC method, but afforded greater cost-effectiveness. Future work is required to evaluate the cost-effectiveness of STH surveillance in different settings.
Local connected fractal dimension analysis in gill of fish experimentally exposed to toxicants.
Manera, Maurizio; Giari, Luisa; De Pasquale, Joseph A; Sayyaf Dezfuli, Bahram
2016-06-01
An operator-neutral method was implemented to objectively assess European seabass, Dicentrarchus labrax (Linnaeus, 1758), gill pathology after experimental exposure to cadmium (Cd) and terbuthylazine (TBA) for 24 and 48 h. An algorithm-derived local connected fractal dimension (LCFD) frequency measure was used in this comparative analysis. Canonical variates analysis (CVA) and linear discriminant analysis (LDA) were used to evaluate the discrimination power of the method among exposure classes (unexposed, Cd exposed, TBA exposed). Misclassification, sensitivity and specificity, both with original and cross-validated cases, were determined. LCFD frequencies enhanced the differences among classes; candidate frequencies were selected visually after their means, respective variances, and the differences between the Cd- and TBA-exposed means relative to the unexposed mean were examined in scatter plots. Selected frequencies were then scanned by means of LDA, stepwise analysis, and Mahalanobis distance to detect the most discriminative frequencies out of the ten originally selected. Discrimination resulted in 91.7% of cross-validated cases correctly classified (22 out of 24 total cases), with sensitivity and specificity, respectively, of 95.5% (1 false negative against 21 truly positive cases) and 75% (1 false positive against 3 truly negative cases). CVA with convex hull polygons ensured prompt, visually intuitive discrimination among exposure classes and graphically supported the false positive case. The combined use of semithin sections, which enhanced the visual evaluation of the overall lamellar structure; of LCFD analysis, which objectively detected local variation in complexity without the possible bias introduced by human operators; and of CVA/LDA could be an objective, sensitive and specific approach to study fish gill lamellar pathology.
Furthermore, this approach enabled discrimination with sufficient confidence between exposure classes or pathological states and avoided misdiagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Zhang, Peige; Zhang, Li; Zheng, Shaoping; Yu, Cheng; Xie, Mingxing; Lv, Qing
2016-01-01
To evaluate the overall performance of acoustic radiation force impulse imaging (ARFI) in differentiating between benign and malignant lymph nodes (LNs) by conducting a meta-analysis. PubMed, Embase, Web of Science, the Cochrane Library and the China National Knowledge Infrastructure were comprehensively searched for potential studies through August 13th, 2016. Studies that investigated the diagnostic power of ARFI for the differential diagnosis of benign and malignant LNs by using virtual touch tissue quantification (VTQ) or virtual touch tissue imaging quantification (VTIQ) were collected. The included articles were published in English or Chinese. Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) was used to evaluate the methodological quality. The pooled sensitivity, specificity, and the area under the summary receiver operating characteristic (SROC) curve (AUC) were calculated by means of a bivariate mixed-effects regression model. Meta-regression analysis was performed to identify the potential sources of between-study heterogeneity. Fagan plot analysis was used to explore the clinical utilities. Publication bias was assessed using the Deeks funnel plot. Nine studies involving 1084 LNs from 929 patients were included in the meta-analysis. The summary sensitivity and specificity of ARFI in detecting malignant LNs were 0.87 (95% confidence interval [CI], 0.83-0.91) and 0.88 (95% CI, 0.82-0.92), respectively. The AUC was 0.93 (95% CI, 0.90-0.95). The pooled diagnostic odds ratio (DOR) was 49.59 (95% CI, 26.11-94.15). The Deeks funnel plot revealed no significant publication bias. ARFI is a promising tool for the differentiation of benign and malignant LNs with high sensitivity and specificity.
Yu, Cheng; Xie, Mingxing; Lv, Qing
2016-01-01
Objective To evaluate the overall performance of acoustic radiation force impulse imaging (ARFI) in differentiating between benign and malignant lymph nodes (LNs) by conducting a meta-analysis. Methods PubMed, Embase, Web of Science, the Cochrane Library and the China National Knowledge Infrastructure were comprehensively searched for potential studies through August 13th, 2016. Studies that investigated the diagnostic power of ARFI for the differential diagnosis of benign and malignant LNs by using virtual touch tissue quantification (VTQ) or virtual touch tissue imaging quantification (VTIQ) were collected. The included articles were published in English or Chinese. Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) was used to evaluate the methodological quality. The pooled sensitivity, specificity, and the area under the summary receiver operating characteristic (SROC) curve (AUC) were calculated by means of a bivariate mixed-effects regression model. Meta-regression analysis was performed to identify the potential sources of between-study heterogeneity. Fagan plot analysis was used to explore the clinical utilities. Publication bias was assessed using the Deeks funnel plot. Results Nine studies involving 1084 LNs from 929 patients were included in the meta-analysis. The summary sensitivity and specificity of ARFI in detecting malignant LNs were 0.87 (95% confidence interval [CI], 0.83–0.91) and 0.88 (95% CI, 0.82–0.92), respectively. The AUC was 0.93 (95% CI, 0.90–0.95). The pooled diagnostic odds ratio (DOR) was 49.59 (95% CI, 26.11–94.15). The Deeks funnel plot revealed no significant publication bias. Conclusion ARFI is a promising tool for the differentiation of benign and malignant LNs with high sensitivity and specificity. PMID:27855188
Wan, Bing; Wang, Siqi; Tu, Mengqi; Wu, Bo; Han, Ping; Xu, Haibo
2017-03-01
The purpose of this meta-analysis was to evaluate the diagnostic accuracy of perfusion magnetic resonance imaging (MRI) as a method for differentiating glioma recurrence from pseudoprogression. The PubMed, Embase, Cochrane Library, and Chinese Biomedical databases were searched comprehensively for relevant studies up to August 3, 2016 according to specific inclusion and exclusion criteria. The quality of the included studies was assessed according to the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. After performing heterogeneity and threshold effect tests, pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were calculated. Publication bias was evaluated visually by a funnel plot and quantitatively using the Deeks funnel plot asymmetry test. The area under the summary receiver operating characteristic curve was calculated to demonstrate the diagnostic performance of perfusion MRI. Eleven studies covering 416 patients and 418 lesions were included in this meta-analysis. The pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.88 (95% confidence interval [CI] 0.84-0.92), 0.77 (95% CI 0.69-0.84), 3.93 (95% CI 2.83-5.46), 0.16 (95% CI 0.11-0.22), and 27.17 (95% CI 14.96-49.35), respectively. The area under the summary receiver operating characteristic curve was 0.8899. There was no notable publication bias. Sensitivity analysis showed that the meta-analysis results were stable and credible. While perfusion MRI is not the ideal diagnostic method for differentiating glioma recurrence from pseudoprogression, it could improve diagnostic accuracy. Therefore, further research on combining perfusion MRI with other imaging modalities is warranted.
Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei
2012-01-01
Background To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). Methods and Findings A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistant program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Conclusions Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC. PMID:22412884
Wu, Bin; Dong, Baijun; Xu, Yuejuan; Zhang, Qiang; Shen, Jinfang; Chen, Huafeng; Xue, Wei
2012-01-01
To estimate, from the perspective of the Chinese healthcare system, the economic outcomes of five different first-line strategies among patients with metastatic renal cell carcinoma (mRCC). A decision-analytic model was developed to simulate the lifetime disease course associated with renal cell carcinoma. The health and economic outcomes of five first-line strategies (interferon-alfa, interleukin-2, interleukin-2 plus interferon-alfa, sunitinib and bevacizumab plus interferon-alfa) were estimated and assessed by indirect comparison. The clinical and utility data were taken from published studies. The cost data were estimated from local charge data and current Chinese practices. Sensitivity analyses were used to explore the impact of uncertainty regarding the results. The impact of the sunitinib patient assistant program (SPAP) was evaluated via scenario analysis. The base-case analysis showed that the sunitinib strategy yielded the maximum health benefits: 2.71 life years and 1.40 quality-adjusted life-years (QALY). The marginal cost-effectiveness (cost per additional QALY) gained via the sunitinib strategy compared with the conventional strategy was $220,384 (without SPAP, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated) and $16,993 (with SPAP, interferon-alfa, interleukin-2 plus interferon-alfa and bevacizumab plus interferon-alfa were dominated). In general, the results were sensitive to the hazard ratio of progression-free survival. The probabilistic sensitivity analysis demonstrated that the sunitinib strategy with SPAP was the most cost-effective approach when the willingness-to-pay threshold was over $16,000. Our analysis suggests that traditional cytokine therapy is the cost-effective option in the Chinese healthcare setting. In some relatively developed regions, sunitinib with SPAP may be a favorable cost-effective alternative for mRCC.
Neutron Physics Division progress report for period ending February 28, 1977
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maienschein, F.C.
1977-05-01
Summaries are given of research progress in the following areas: (1) measurements of cross sections and related quantities, (2) cross section evaluations and theory, (3) cross section processing, testing, and sensitivity analysis, (4) integral experiments and their analyses, (5) development of methods for shield and reactor analyses, (6) analyses for specific systems or applications, and (7) information analysis and distribution. (SDF)
An Earthquake Source Sensitivity Analysis for Tsunami Propagation in the Eastern Mediterranean
NASA Astrophysics Data System (ADS)
Necmioglu, Ocal; Meral Ozel, Nurcan
2013-04-01
An earthquake source parameter sensitivity analysis for tsunami propagation in the Eastern Mediterranean has been performed, based on the 8 August 1303 Crete and Dodecanese Islands earthquake that resulted in destructive inundation in the Eastern Mediterranean. The analysis involves 23 cases describing different sets of strike, dip, rake and focal depth, while keeping the fault area and displacement, and thus the magnitude, the same. The main conclusions of the evaluation are drawn from the investigation of the wave height distributions at Tsunami Forecast Points (TFPs). The comparison of earthquake vs. initial tsunami source parameters indicated that the maximum initial wave height values correspond in general to changes in the rake angle. No clear depth dependency is observed within the depth range considered, and no strike angle dependency is observed in terms of amplitude change. Directivity sensitivity analysis indicated that for the same strike and dip, a 180° shift in rake may lead to a 20% change in the calculated tsunami wave height. Moreover, an approximately 10 min difference in the arrival time of the initial wave has been observed. These differences are, however, greatly reduced in the far field. The dip sensitivity analysis, performed separately for thrust and normal faulting, indicated in both cases that an increase in the dip angle results in a decrease of the tsunami wave amplitude in the near field of approximately 40%. While a positive phase shift is observed, the period and the shape of the initial wave stay nearly the same for all dip angles at the respective TFPs. These effects are, however, not observed in the far field. The resolution of the bathymetry, on the other hand, is a limiting factor for further evaluation. Four different cases were considered for the depth sensitivity, indicating that within the depth ranges considered (15-60 km), increasing the depth has only a smoothing effect on the synthetic tsunami wave height measurements at the selected TFPs.
The strike sensitivity analysis showed a clear phase shift with respect to the variation of the strike angles, without leading to severe variation of the initial and maximum waves at the locations considered. Travel time maps for two cases corresponding to different strike values (60° vs. 150°) presented a more complex wave propagation for the case with a 60° strike angle, due to the fact that the normal of the fault plane is orthogonal to the main bathymetric structure in the region, namely the Eastern section of the Hellenic Arc between Crete and Rhodes Islands. For a given set of strike, dip and focal depth parameters, the effect of the variation in the rake angle has been evaluated in the rake sensitivity analysis. A waveform envelope composed of symmetric synthetic recordings at a single TFP could be clearly observed as a result of rake angle variations in the 0-180° range. This leads to the conclusion that for a given magnitude (fault size and displacement), the expected maximum and minimum tsunami wave amplitudes could be evaluated as a waveform envelope rather than as a single point in time or amplitude. The evaluation of the initial wave arrival times follows an expected pattern controlled by the distance, whereas the maximum wave arrival time distribution presents no clear pattern; nevertheless, the distribution is rather concentrated in the time domain for some TFPs. Maximum positive and minimum negative wave amplitude distributions indicate a broader range for a subgroup of TFPs, whereas for the remaining TFPs the distributions are narrow. Any deviation from the expected trend of narrower amplitude distribution ranges could be interpreted as the result of bathymetry and focusing effects. As similar studies conducted in different parts of the globe have indicated, the main characteristics of tsunami propagation are unique for each basin.
It should be noted, however, that in the absence of high-resolution bathymetric data, the synthetic measurements obtained at the TFPs should be considered only as overall guidance. The results indicate the importance of the accuracy of earthquake source parameters for reliable tsunami predictions and the need for high-resolution bathymetric data to be able to perform calculations with higher accuracy. On the other hand, this study did not address other parameters, such as heterogeneous slip distribution and rupture duration, which affect the tsunami initiation and propagation process.
NASA Astrophysics Data System (ADS)
Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin
2014-07-01
Flow entropy is a measure of the uniformity of pipe flows in water distribution systems (WDSs). By maximizing flow entropy one can identify reliable layouts or connectivity in networks. To overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between the simple flow entropy and the new method is conducted. The results demonstrate that the diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than the simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem for WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
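The simple (diameter-insensitive) flow entropy referred to above is, in essence, a Shannon entropy over pipe flow fractions; the sketch below is a simplified illustration of that idea and omits the node-by-node decomposition used in full WDS formulations:

```python
import math

def flow_entropy(flows):
    """Shannon entropy of a flow distribution: S = -sum(p_i * ln p_i),
    where p_i is the fraction of total flow carried by pipe i. Uniform
    flows maximize S, which is the sense in which entropy proxies
    network reliability (no single pipe dominates supply)."""
    total = sum(flows)
    ps = [q / total for q in flows if q > 0]
    return -sum(p * math.log(p) for p in ps)

# Uniform flows reach the maximum ln(n); a skewed layout scores lower
print(flow_entropy([10, 10, 10, 10]))  # ln(4) ≈ 1.386
print(flow_entropy([37, 1, 1, 1]))     # much lower: one pipe dominates
```

The diameter-sensitive variant proposed in the paper would additionally weight this measure by pipe diameter; that weighting is not reproduced here.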
Genetics and clinical response to warfarin and edoxaban in patients with venous thromboembolism.
Vandell, Alexander G; Walker, Joseph; Brown, Karen S; Zhang, George; Lin, Min; Grosso, Michael A; Mercuri, Michele F
2017-11-01
The aim of this study was to investigate whether genetic variants can identify patients with venous thromboembolism (VTE) at an increased risk of bleeding with warfarin. Hokusai-venous thromboembolism (Hokusai VTE), a randomised, multinational, double-blind, non-inferiority trial, evaluated the safety and efficacy of edoxaban versus warfarin in patients with VTE initially treated with heparin. In this subanalysis of Hokusai VTE, patients genotyped for variants in CYP2C9 and VKORC1 genes were divided into three warfarin sensitivity types (normal, sensitive and highly sensitive) based on their genotypes. An exploratory analysis was also conducted comparing normal responders to pooled sensitive responders (ie, sensitive and highly sensitive responders). The analysis included 47.7% (3956/8292) of the patients in Hokusai VTE. Among 1978 patients randomised to warfarin, 63.0% (1247) were normal responders, 34.1% (675) were sensitive responders and 2.8% (56) were highly sensitive responders. Compared with normal responders, sensitive and highly sensitive responders had heparin therapy discontinued earlier (p<0.001), had a decreased final weekly warfarin dose (p<0.001), spent more time overanticoagulated (p<0.001) and had an increased bleeding risk with warfarin (sensitive responders HR 1.38 [95% CI 1.11 to 1.71], p=0.0035; highly sensitive responders 1.79 [1.09 to 2.99]; p=0.0252). In this study, CYP2C9 and VKORC1 genotypes identified patients with VTE at increased bleeding risk with warfarin. NCT00986154. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Space system operations and support cost analysis using Markov chains
NASA Technical Reports Server (NTRS)
Unal, Resit; Dean, Edwin B.; Moore, Arlene A.; Fairbairn, Robert E.
1990-01-01
This paper evaluates the use of Markov chain process in probabilistic life cycle cost analysis and suggests further uses of the process as a design aid tool. A methodology is developed for estimating operations and support cost and expected life for reusable space transportation systems. Application of the methodology is demonstrated for the case of a hypothetical space transportation vehicle. A sensitivity analysis is carried out to explore the effects of uncertainty in key model inputs.
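The Markov-chain machinery described here, expected operating life and support cost of a system, reduces to the fundamental matrix of an absorbing chain; the states, transition probabilities, and per-period costs below are hypothetical, not taken from the paper:

```python
def expected_life_and_cost(q, cost_per_period):
    """For an absorbing Markov chain with two transient states, compute
    the start-state row of the fundamental matrix N = (I - Q)^-1, whose
    entries are expected visits to each transient state, then derive
    expected lifetime (periods) and expected total support cost."""
    # Invert the 2x2 matrix (I - Q) explicitly
    a, b = 1 - q[0][0], -q[0][1]
    c, d = -q[1][0], 1 - q[1][1]
    det = a * d - b * c
    n0 = (d / det, -b / det)  # expected visits from start state 0
    periods = sum(n0)
    total_cost = n0[0] * cost_per_period[0] + n0[1] * cost_per_period[1]
    return periods, total_cost

# Hypothetical reusable vehicle: state 0 = operational, state 1 = in
# maintenance; the remaining probability mass absorbs into "retired".
Q = [[0.90, 0.05],
     [0.50, 0.40]]
periods, cost = expected_life_and_cost(Q, cost_per_period=[1.0, 3.0])  # $M/period
print(round(periods, 2), round(cost, 2))  # ≈ 18.57 periods, ≈ $21.43M
```

A sensitivity analysis in this framework simply perturbs the entries of Q (or the per-period costs) and observes the change in expected life and cost.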
Wang, Yong; Fujii, Takeshi
2011-01-01
It is important in molecular biological analyses to evaluate contamination of co-extracted humic acids in DNA/RNA extracted from soil. We compared the sensitivity of various methods for measurement of humic acids, and influences of DNA/RNA and proteins on the measurement. Considering the results, we give suggestions as to choice of methods for measurement of humic acids in molecular biological analyses.
The Future of the Space Age or how to Evaluate Innovative Ideas
NASA Astrophysics Data System (ADS)
Vollerthun, A.; Fricke, E.
2002-05-01
Based on an initiative of the German Aerospace Industry Association to foster more transparent and structured funding of German commercially oriented space projects, a three-phased approach is suggested in this paper to stepwise improve and evaluate proposed concepts for space-related innovations. The objective of this concept was to develop a transparent, structured, and reproducible process to select the right innovative project, in terms of political, economic, and technical objectives, for funding by e.g. a governmental agency. A stepwise process and related methods that cover technical as well as economic aspects (and related sensitivities) are proposed. Based on the special needs and requirements of the space industry, the proposals are compared to a set of predefined top-level objectives/requirements. Using a trades analysis with the criteria company, technology, product, and market, an initial business case is analyzed. In the third process step, the alternative innovative concepts are subjected to a very detailed analysis. The full economic and technical scale of the projects is evaluated and metrics such as the 'Return on Investment' or 'Break Even Point' are determined to compare the various innovations. Risks related to time, cost, and quality are considered when performing sensitivity analysis by varying the most important factors of the project. Before discussing critical aspects of the proposed process, space-related examples will be presented to show how the process could be applied and how different concepts should be evaluated.
Commercial test kits for detection of Lyme borreliosis: a meta-analysis of test accuracy
Cook, Michael J; Puri, Basant K
2016-01-01
The clinical diagnosis of Lyme borreliosis can be supported by various test methodologies; test kits are available from many manufacturers. Literature searches were carried out to identify studies that reported characteristics of the test kits. Of 50 searched studies, 18 were included where the tests were commercially available and samples were proven to be positive using serology testing, evidence of an erythema migrans rash, and/or culture. Additional requirements were a test specificity of ≥85% and publication in the last 20 years. The weighted mean sensitivity for all tests and for all samples was 59.5%. Individual study means varied from 30.6% to 86.2%. Sensitivity for each test technology varied from 62.4% for Western blot kits, and 62.3% for enzyme-linked immunosorbent assay tests, to 53.9% for synthetic C6 peptide ELISA tests and 53.7% when the two-tier methodology was used. Test sensitivity increased as dissemination of the pathogen affected different organs; however, the absence of data on the time from infection to serological testing and the lack of standard definitions for “early” and “late” disease prevented analysis of test sensitivity versus time of infection. The lack of standardization of the definitions of disease stage and the possibility of retrospective selection bias prevented clear evaluation of test sensitivity by “stage”. The sensitivity for samples classified as acute disease was 35.4%, with a corresponding sensitivity of 64.5% for samples from patients defined as convalescent. Regression analysis demonstrated an improvement of 4% in test sensitivity over the 20-year study period. The studies did not provide data to indicate the sensitivity of tests used in a clinical setting since the effect of recent use of antibiotics or steroids or other factors affecting antibody response was not factored in. The tests were developed for only specific Borrelia species; sensitivities for other species could not be calculated. PMID:27920571
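The pooled figure described above as a "weighted mean sensitivity" is, in the usual construction, a sample-size-weighted average of per-study sensitivities; the sketch below uses invented study data, not the 18 included studies:

```python
def weighted_mean_sensitivity(studies):
    """Pool per-study sensitivities weighted by sample size.
    Each study is a (n_samples, sensitivity) pair."""
    total_n = sum(n for n, _ in studies)
    return sum(n * s for n, s in studies) / total_n

# Hypothetical studies as (sample size, observed sensitivity) pairs --
# illustrative only, not the actual meta-analysis data
studies = [(120, 0.55), (80, 0.70), (200, 0.58)]
print(round(weighted_mean_sensitivity(studies), 3))  # ≈ 0.595
```

Weighting by sample size means large studies dominate the pooled estimate, which is why individual study means (30.6% to 86.2% above) can spread far around the pooled 59.5%.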
Koenigkam-Santos, Marcel; Optazaite, Elzbieta; Sommer, Gregor; Safi, Seyer; Heussel, Claus Peter; Kauczor, Hans-Ulrich; Puderbach, Michael
2015-01-01
To propose a technique for evaluation of pulmonary lesions using contrast-enhanced MRI; to assess morphological patterns of enhancement and correlate quantitative analysis with histopathology. Thirty-six patients were prospectively studied. Volumetric-interpolated T1W images were obtained during consecutive breath holds after bolus-triggered contrast injection. Volume coverage of the first three acquisitions was limited (higher temporal resolution), and the last acquisition was obtained at the 4th minute. Two radiologists individually evaluated the patterns of enhancement. Region-of-interest-based signal intensity (SI)-time curves were created to assess quantitative parameters. Readers agreed moderately to substantially on the lesions' enhancement pattern. SI-time curves could be created for all lesions. In comparison to benign lesions, malignant lesions showed higher values of maximum enhancement, early peak, slope, and 4th-minute enhancement. An early peak >15% showed 100% sensitivity for detecting malignancy; maximum enhancement >40% showed 100% specificity. The proposed technique is robust, simple to perform, and can be applied in a clinical setting. It allows visual evaluation of enhancement pattern/progression together with creation of SI-time curves and assessment of derived quantitative parameters. Perfusion analysis was highly sensitive for detecting malignancy, in accordance with what is recommended by the most recent guidelines on imaging evaluation of pulmonary lesions. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Hesler, J. P.; Hwang, Y. C.; Zampini, J. J.
1972-01-01
The fabrication and evaluation of 10 engineering prototype ground signal processing systems of three converter types are reported for use with satellite television. Target cost converters and cost sensitivity analysis are discussed along with the converter configurations.
Cost benefit analysis of anti-strip additives in hot mix asphalt with various aggregates.
DOT National Transportation Integrated Search
2015-05-01
This report documents research on moisture sensitivity testing of hot-mix asphalt (HMA) mixes in Pennsylvania and the : associated use of antistrip. The primary objective of the research was to evaluate and compare benefit/cost ratios of mandatory us...
Biffi, E; Menegon, A; Regalia, G; Maida, S; Ferrigno, G; Pedrocchi, A
2011-08-15
Modern drug discovery for Central Nervous System pathologies has recently focused its attention on in vitro neuronal networks as models for the study of neuronal activities. Micro Electrode Arrays (MEAs), a widely recognized tool for pharmacological investigations, enable the simultaneous study of the spiking activity of discrete regions of a neuronal culture, providing an insight into the dynamics of networks. Taking advantage of the features of MEAs and making the most of cross-correlation analysis to assess internal parameters of a neuronal system, we provide an efficient method for the evaluation of comprehensive neuronal network activity. We developed an intra-network burst correlation algorithm, evaluated its sensitivity, and explored its potential use in pharmacological studies. Our results demonstrate the high sensitivity of this algorithm and the efficacy of this methodology in pharmacological dose-response studies, with the advantage of analyzing the effect of drugs on the comprehensive correlative properties of integrated neuronal networks. Copyright © 2011 Elsevier B.V. All rights reserved.
Economic evaluation of algae biodiesel based on meta-analyses
NASA Astrophysics Data System (ADS)
Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.
2017-08-01
The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from the original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that improvements in the yield, quality, and cost of algae feedstock could be the key factors in making algae-derived biodiesel economically viable.
GIS least-cost analysis approach for siting gas pipeline ROWs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sydelko, P.J.; Wilkey, P.L.
1994-09-01
Geographic-information-system applications for the siting and monitoring of gas pipeline rights-of-way (ROWs) were developed for areas near Rio Vista, California. The data layers developed for this project represent geographic features such as landcover, elevation, aspect, slope, soils, hydrography, transportation corridors, endangered species habitats, wetlands, and public line surveys. A geographic information system was used to develop and store spatial data from several sources; to manipulate spatial data to evaluate environmental and engineering issues associated with the siting, permitting, construction, maintenance, and monitoring of gas-pipeline ROWs; and to graphically display analysis results. Examples of these applications include (1) determination of environmentally sensitive areas, such as endangered species habitat, wetlands, and areas of highly erosive soils; (2) evaluation of engineering constraints, including shallow depth to bedrock, major hydrographic features, and a shallow water table; (3) classification of satellite imagery for land use/land cover that will affect ROWs; and (4) identification of alternative ROW corridors that avoid environmentally sensitive areas or areas with severe engineering constraints.
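At the core of a GIS least-cost siting analysis is a shortest-path search over a cost raster in which sensitive or constrained cells carry high traversal costs. The minimal sketch below uses Dijkstra's algorithm on a small hypothetical 5x5 grid; the cost values are invented stand-ins for wetlands, habitat, or engineering constraints, not data from the Rio Vista study.

```python
import heapq

# Hypothetical cost raster: high values (9) stand in for environmentally
# sensitive or engineering-constrained cells; low values (1) are open land.
COST = [
    [1, 1, 9, 1, 1],
    [1, 9, 9, 1, 1],
    [1, 1, 1, 1, 9],
    [9, 9, 1, 9, 1],
    [1, 1, 1, 1, 1],
]

def least_cost(grid, start, goal):
    """Dijkstra over 4-connected cells; cost to enter a cell = its value."""
    rows, cols = len(grid), len(grid[0])
    best = {start: grid[start[0]][start[1]]}
    heap = [(best[start], start)]
    while heap:
        cost, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            return cost
        if cost > best[(r, c)]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                ncost = cost + grid[nr][nc]
                if ncost < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = ncost
                    heapq.heappush(heap, (ncost, (nr, nc)))
    return None

print(least_cost(COST, (0, 0), (4, 4)))
```

A production GIS would additionally trace the path itself (by recording predecessors) and work over raster layers derived from the environmental and engineering data described above.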
Sironi, Emanuele; Taroni, Franco; Baldinotti, Claudio; Nardi, Cosimo; Norelli, Gian-Aristide; Gallidabino, Matteo; Pinchi, Vilma
2017-11-14
The present study aimed to investigate the performance of a Bayesian method in the evaluation of dental age-related evidence collected by means of a geometrical approximation procedure of the pulp chamber volume. Measurement of this volume was based on three-dimensional cone beam computed tomography images. The Bayesian method was applied by means of a probabilistic graphical model, namely a Bayesian network. Performance of the method was investigated in terms of accuracy and bias of the decisional outcomes. The influence of an informed elicitation of the prior belief about chronological age was also studied by means of a sensitivity analysis. Outcomes in terms of accuracy were consistent with standard requirements for forensic adult age estimation. Findings also indicated that the Bayesian method does not show a particular tendency towards under- or overestimation of the age variable. Outcomes of the sensitivity analysis showed that estimation results are improved with a rational elicitation of the prior probabilities of age.
Hierarchical Nanogold Labels to Improve the Sensitivity of Lateral Flow Immunoassay
NASA Astrophysics Data System (ADS)
Serebrennikova, Kseniya; Samsonova, Jeanne; Osipov, Alexander
2018-06-01
Lateral flow immunoassay (LFIA) is a widely used express method and offers advantages such as a short analysis time and simplicity of testing and result evaluation. However, an LFIA based on gold nanospheres lacks the desired sensitivity, thereby limiting its wide application. In this study, spherical nanogold labels along with new types of nanogold labels, such as gold nanopopcorns and nanostars, were prepared, characterized, and applied for LFIA of the model protein antigen procalcitonin. It was found that the label with a structure close to spherical provided a more uniform distribution of specific antibodies on its surface, indicative of its suitability for this type of analysis. LFIA using gold nanopopcorns as a label allowed procalcitonin detection over a linear range of 0.5-10 ng mL-1 with a limit of detection of 0.1 ng mL-1, a fivefold improvement in sensitivity over the assay with gold nanospheres. Another approach to improving the sensitivity of the assay was the silver enhancement method, which was used for comparison as a means of amplifying the LFIA signal for procalcitonin detection. The sensitivity of procalcitonin determination by this method was 10 times better than that of the conventional LFIA with a gold nanosphere label. The proposed approach of LFIA based on gold nanopopcorns improved the detection sensitivity without additional steps and avoided increased consumption of specific reagents (antibodies).
Evaluation of IOTA Simple Ultrasound Rules to Distinguish Benign and Malignant Ovarian Tumours.
Garg, Sugandha; Kaur, Amarjit; Mohi, Jaswinder Kaur; Sibia, Preet Kanwal; Kaur, Navkiran
2017-08-01
IOTA stands for the International Ovarian Tumour Analysis group. Ovarian cancer is one of the common cancers in women and is diagnosed at a late stage in the majority of cases. The limiting factor for early diagnosis is the lack of standardized terms and procedures in gynaecological sonography. The introduction of the IOTA rules has provided some consistency in defining morphological features of ovarian masses through a standardized examination technique. To evaluate the efficacy of the IOTA simple ultrasound rules in distinguishing benign and malignant ovarian tumours and to establish their use as a tool in the early diagnosis of ovarian malignancy, a hospital-based prospective case-control study was conducted. Patients with suspected ovarian pathology were evaluated using the IOTA ultrasound rules and designated as benign or malignant. Findings were correlated with histopathological findings. Collected data were statistically analysed using the chi-square test and the kappa statistic. Of the initial 55 patients, the 50 who underwent surgery were included in the final analysis. The IOTA simple rules were applicable in 45 of these 50 patients (90%). In cases where the IOTA simple rules were applicable, the sensitivity for the detection of malignancy was 91.66% and the specificity was 84.84%; accuracy was 86.66%. Classifying inconclusive cases as malignant, the sensitivity and specificity were 93% and 80% respectively. Agreement between USG and histopathological diagnosis was assessed, with a kappa value of 0.323. The IOTA simple ultrasound rules were highly sensitive and specific in predicting ovarian malignancy preoperatively, while being reproducible and easy to learn and use.
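Sensitivity, specificity, and accuracy all come from a single 2x2 confusion matrix. The counts below are a hypothetical reconstruction chosen to be consistent with the percentages reported above for the 45 evaluable patients; they are not the study's published table.

```python
# Sensitivity, specificity, and accuracy from a 2x2 confusion matrix.
# TP/FN/TN/FP are a hypothetical reconstruction consistent with the
# reported figures (45 evaluable patients), not the study's actual data.
TP, FN, TN, FP = 11, 1, 28, 5  # malignant = positive class

def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy)."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

sens, spec, acc = diagnostic_metrics(TP, FN, TN, FP)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}, accuracy {acc:.2%}")
```

With these counts, 11/12 malignant tumours are correctly flagged (91.67%) and 28/33 benign ones correctly cleared (84.85%), matching the reported values to rounding.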
The dream of a one-stop-shop: Meta-analysis on myocardial perfusion CT.
Pelgrim, Gert Jan; Dorrius, Monique; Xie, Xueqian; den Dekker, Martijn A M; Schoepf, U Joseph; Henzler, Thomas; Oudkerk, Matthijs; Vliegenthart, Rozemarijn
2015-12-01
To determine the diagnostic performance of computed tomography (CT) perfusion techniques for the detection of functionally relevant coronary artery disease (CAD) in comparison to reference standards, including invasive coronary angiography (ICA), single photon emission computed tomography (SPECT), and magnetic resonance imaging (MRI). PubMed, Web of Knowledge and Embase were searched from January 1, 1998 until July 1, 2014. The search yielded 9475 articles. After duplicate removal, 6041 were screened on title and abstract. The resulting 276 articles were independently analyzed in full-text by two reviewers, and included if the inclusion criteria were met. The articles reporting diagnostic parameters including true positive, true negative, false positive and false negative were subsequently evaluated for the meta-analysis. Results were pooled according to CT perfusion technique, namely snapshot techniques: single-phase rest, single-phase stress, single-phase dual-energy stress and combined coronary CT angiography [rest] and single-phase stress, as well as the dynamic technique: dynamic stress CT perfusion. Twenty-two articles were included in the meta-analysis (1507 subjects). Pooled per-patient sensitivity and specificity of single-phase rest CT compared to rest SPECT were 89% (95% confidence interval [CI], 82-94%) and 88% (95% CI, 78-94%), respectively. Vessel-based sensitivity and specificity of single-phase stress CT compared to ICA-based >70% stenosis were 82% (95% CI, 64-92%) and 78% (95% CI, 61-89%). Segment-based sensitivity and specificity of single-phase dual-energy stress CT in comparison to stress MRI were 75% (95% CI, 60-85%) and 95% (95% CI, 80-99%). Segment-based sensitivity and specificity of dynamic stress CT perfusion compared to stress SPECT were 77% (95% CI, 67-85%) and 89% (95% CI, 78-95%). 
For combined coronary CT angiography and single-phase stress CT, vessel-based sensitivity and specificity in comparison to ICA-based >50% stenosis were 84% (95% CI, 67-93%) and 93% (95% CI, 89-96%). This meta-analysis shows considerable variation in techniques and reference standards for CT of myocardial blood supply. While CT seems sensitive and specific for evaluation of hemodynamically relevant CAD, studies so far are limited in size. Standardization of myocardial perfusion CT technique is essential. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Diagnostic accuracy of physical examination for anterior knee instability: a systematic review.
Leblanc, Marie-Claude; Kowalczuk, Marcin; Andruszkiewicz, Nicole; Simunovic, Nicole; Farrokhyar, Forough; Turnbull, Travis Lee; Debski, Richard E; Ayeni, Olufemi R
2015-10-01
Determining the diagnostic accuracy of the Lachman, pivot shift and anterior drawer tests versus gold standard diagnosis (magnetic resonance imaging or arthroscopy) for anterior cruciate ligament (ACL) insufficiency cases; secondarily, evaluating the effects of chronicity, partial rupture, and awake versus anaesthetized evaluation. Searching MEDLINE, EMBASE and PubMed identified studies on diagnostic accuracy for ACL insufficiency. Study identification and data extraction were performed in duplicate. Quality assessment used the QUADAS tool, and statistical analyses were completed for pooled sensitivity and specificity. Eight studies were included. Given insufficient data, pooled analysis was only possible for the sensitivity of the Lachman and pivot shift tests. During awake evaluation, sensitivity for the Lachman test was 89 % (95 % CI 0.76, 0.98) for all rupture types, 96 % (95 % CI 0.90, 1.00) for complete ruptures and 68 % (95 % CI 0.25, 0.98) for partial ruptures. For the pivot shift test in awake evaluation, results were 79 % (95 % CI 0.63, 0.91) for all rupture types, 86 % (95 % CI 0.68, 0.99) for complete ruptures and 67 % (95 % CI 0.47, 0.83) for partial ruptures. The decreased sensitivity of the Lachman and pivot shift tests for partial rupture cases and for awake patients raises suspicion regarding the accuracy of these tests for the diagnosis of ACL insufficiency. This may lead to further research aiming to improve the understanding of the true accuracy of these physical diagnostic tests and increase the reliability of clinical investigation for this pathology. Level of evidence: IV.
Solar energy system economic evaluation: IBM System 4, Clinton, Mississippi
NASA Technical Reports Server (NTRS)
1980-01-01
An economic analysis of the solar energy system was developed for five sites, typical of a wide range of environmental and economic conditions in the continental United States. The analysis was based on the technical and economic models in the F-chart design procedure, with inputs based on the characteristics of the installed system and local conditions. The results comprise the economic parameters over a 20-year time span: present worth of system cost, life-cycle savings, year of positive savings, and year of payback for the optimized solar energy system at each of the analysis sites. The sensitivity of the economic evaluation to uncertainties in constituent system and economic variables is also investigated.
Probabilistic Analysis of Solid Oxide Fuel Cell Based Hybrid Gas Turbine System
NASA Technical Reports Server (NTRS)
Gorla, Rama S. R.; Pai, Shantaram S.; Rusick, Jeffrey J.
2003-01-01
The emergence of fuel cell systems and hybrid fuel cell systems requires the evolution of analysis strategies for evaluating thermodynamic performance. A gas turbine thermodynamic cycle integrated with a fuel cell was computationally simulated and probabilistically evaluated in view of the several uncertainties in the thermodynamic performance parameters. Cumulative distribution functions and sensitivity factors were computed for the overall thermal efficiency and net specific power output due to the uncertainties in the thermodynamic random variables. These results can be used to quickly identify the most critical design variables in order to optimize the design and make it cost effective. The analysis leads to the selection of criteria for gas turbine performance.
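A probabilistic evaluation of this kind samples the uncertain inputs, propagates them through the performance model, and summarizes the output with an empirical cumulative distribution function and per-input sensitivity factors. The sketch below illustrates the workflow only: the surrogate efficiency function and the input distributions are invented for illustration, not the NASA cycle model described above.

```python
import random

# Toy Monte Carlo probabilistic evaluation: sample uncertain inputs,
# propagate through a surrogate efficiency model, then summarize with
# an empirical CDF and correlation-based sensitivity factors.
# Model and distributions are hypothetical, not the NASA cycle model.
random.seed(42)

def efficiency(tit, eta_c, eta_fc):
    """Hypothetical linear surrogate: overall efficiency from three inputs."""
    return 0.25 + 0.0001 * (tit - 1200) + 0.2 * (eta_c - 0.85) + 0.3 * (eta_fc - 0.60)

N = 5000
samples = []
for _ in range(N):
    x = (random.gauss(1200, 50),   # turbine inlet temperature, K
         random.gauss(0.85, 0.02), # compressor efficiency
         random.gauss(0.60, 0.03)) # fuel cell efficiency
    samples.append((x, efficiency(*x)))

outs = sorted(y for _, y in samples)

def cdf(y):
    """Empirical P(efficiency <= y)."""
    return sum(o <= y for o in outs) / N

def sensitivity_factor(i):
    """Pearson correlation between input i and the output."""
    xs = [x[i] for x, _ in samples]
    ys = [y for _, y in samples]
    mx, my = sum(xs) / N, sum(ys) / N
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

print(cdf(0.25), [round(sensitivity_factor(i), 2) for i in range(3)])
```

Ranking the inputs by sensitivity factor identifies the design variables that dominate output uncertainty, which is the basis for the cost-effective design selection mentioned above.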
Jia, Yongliang; Leung, Siu-wai
2015-11-01
There have been no systematic reviews, let alone meta-analyses, of randomized controlled trials (RCTs) comparing tongxinluo capsule (TXL) and beta-blockers in treating angina pectoris. This study aimed to evaluate the efficacy of TXL and beta-blockers in treating angina pectoris by a meta-analysis of eligible RCTs. RCTs comparing TXL with beta-blockers (including metoprolol) in treating angina pectoris were searched and retrieved from databases including PubMed, Chinese National Knowledge Infrastructure, and WanFang Data. Eligible RCTs were selected according to prespecified criteria. Meta-analysis was performed on the odds ratios (OR) of symptomatic and electrocardiographic (ECG) improvements after treatment. Subgroup analysis, sensitivity analysis, meta-regression, and publication bias analysis were conducted to evaluate the robustness of the results. Seventy-three RCTs published between 2000 and 2014 with 7424 participants were eligible. Overall ORs comparing TXL with beta-blockers were 3.40 (95% confidence interval [CI], 2.97-3.89; p<0.0001) for symptomatic improvement and 2.63 (95% CI, 2.29-3.02; p<0.0001) for ECG improvement. Subgroup analysis and sensitivity analysis found no statistically significant dependence of overall ORs on specific study characteristics except efficacy criteria. Meta-regression found no significant covariate except sample size for data on symptomatic improvement. Publication bias was statistically significant. TXL seems to be more effective than beta-blockers in treating angina pectoris, on the basis of the eligible RCTs. Further RCTs are warranted to reduce publication bias and verify efficacy.
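The per-study input to a meta-analysis like this is an odds ratio with a confidence interval computed from each trial's 2x2 table. The sketch below shows the standard log-OR calculation with a Woolf-type standard error; the counts are hypothetical, not taken from any of the 73 RCTs.

```python
import math

# Odds ratio and 95% CI from a single trial's 2x2 table, the per-study
# quantity pooled in a meta-analysis. Counts are hypothetical.
def odds_ratio_ci(a, b, c, d):
    """a/b = improved/not improved on treatment, c/d on comparator."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(45, 15, 30, 30)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

A fixed-effect pooled estimate would then combine the per-study log-ORs weighted by their inverse variances (1/se^2); a random-effects model adds a between-study variance component.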
The functional basis of face evaluation
Oosterhof, Nikolaas N.; Todorov, Alexander
2008-01-01
People automatically evaluate faces on multiple trait dimensions, and these evaluations predict important social outcomes, ranging from electoral success to sentencing decisions. Based on behavioral studies and computer modeling, we develop a 2D model of face evaluation. First, using a principal components analysis of trait judgments of emotionally neutral faces, we identify two orthogonal dimensions, valence and dominance, that are sufficient to describe face evaluation and show that these dimensions can be approximated by judgments of trustworthiness and dominance. Second, using a data-driven statistical model for face representation, we build and validate models for representing face trustworthiness and face dominance. Third, using these models, we show that, whereas valence evaluation is more sensitive to features resembling expressions signaling whether the person should be avoided or approached, dominance evaluation is more sensitive to features signaling physical strength/weakness. Fourth, we show that important social judgments, such as threat, can be reproduced as a function of the two orthogonal dimensions of valence and dominance. The findings suggest that face evaluation involves an overgeneralization of adaptive mechanisms for inferring harmful intentions and the ability to cause harm and can account for rapid, yet not necessarily accurate, judgments from faces. PMID:18685089
Grasso, Marina; Boon, Elles M.J.; Filipovic-Sadic, Stela; van Bunderen, Patrick A.; Gennaro, Elena; Cao, Ru; Latham, Gary J.; Hadd, Andrew G.; Coviello, Domenico A.
2015-01-01
Fragile X syndrome and associated disorders are characterized by the number of CGG repeats and the methylation status of the FMR1 gene, for which Southern blot (SB) analysis historically has been required. This study describes a simple PCR-only workflow (mPCR) to replace SB analysis that incorporates novel procedural controls, treatment of the DNA in separate control and methylation-sensitive restriction endonuclease reactions, amplification with labeled primers, and two-color amplicon sizing by capillary electrophoresis. mPCR was evaluated in two independent laboratories with 76 residual clinical samples that represented typical and challenging fragile X alleles in both males and females. mPCR enabled superior size resolution and analytical sensitivity for size and methylation mosaicism compared to SB. Full mutation mosaicism was detected down to 1% in a background of 99% normal allele with 50- to 100-fold less DNA than required for SB. A low level of full mutation mosaicism in one sample was detected using mPCR but not observed using SB. Overall, the sensitivity for detection of full mutation alleles was 100% (95% CI: 89%–100%) with an accuracy of 99% (95% CI: 93%–100%). mPCR analysis of DNA from individuals with Klinefelter and Turner syndromes, and of DNA from sperm and blood, was consistent with SB. As such, mPCR enables accurate, sensitive, and standardized FMR1 analysis that can harmonize results across different laboratories. PMID:24177047
A global sensitivity analysis of crop virtual water content
NASA Astrophysics Data System (ADS)
Tamea, S.; Tuninetti, M.; D'Odorico, P.; Laio, F.; Ridolfi, L.
2015-12-01
The concepts of virtual water and water footprint are becoming widely used in the scientific literature, and they are proving their usefulness in a number of multidisciplinary contexts. With such growing interest, a measure of data reliability (and uncertainty) is becoming pressing; but, as of today, assessments of data sensitivity to model parameters, performed at the global scale, are not known. This contribution aims at filling this gap. The starting point of this study is the evaluation of the green and blue virtual water content (VWC) of four staple crops (i.e. wheat, rice, maize, and soybean) at a global high-resolution scale. In each grid cell, the crop VWC is given by the ratio between the total crop evapotranspiration over the growing season and the crop actual yield, where evapotranspiration is determined with a detailed daily soil water balance and actual yield is estimated using country-based data, adjusted to account for spatial variability. The model provides estimates of the VWC at a 5x5 arc minute resolution, and it improves on previous works by using the newest available data and including multi-cropping practices in the evaluation. The model is then used as the basis for a sensitivity analysis, in order to evaluate the role of model parameters in affecting the VWC and to understand how uncertainties in input data propagate and impact the VWC accounting. In each cell, small changes are exerted on one parameter at a time, and a sensitivity index is determined as the ratio between the relative change of VWC and the relative change of the input parameter with respect to its reference value. At the global scale, VWC is found to be most sensitive to the planting date, with a positive (direct) or negative (inverse) sensitivity index depending on the typical season of the crop planting date. 
VWC is also markedly dependent on the length of the growing period, with an increase in length always producing an increase of VWC, but with higher spatial variability for rice than for other crops. The sensitivity to the reference evapotranspiration is highly variable with the considered crop and ranges from positive values (for soybean), to negative values (for rice and maize) and near-zero values for wheat. This variability reflects the different yield response factors of crops, which expresses their tolerance to water stress.
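The one-at-a-time sensitivity index described above is simply the relative change of the output divided by the relative change of the perturbed input. The sketch below applies it to a toy VWC model (seasonal evapotranspiration over yield); the parameter values are illustrative only, not from the study's dataset.

```python
# One-at-a-time sensitivity index: (dVWC/VWC) / (dp/p) for a small
# relative perturbation of a single input. Toy model and values only.
def vwc(et, yield_):
    """Virtual water content = seasonal evapotranspiration / yield."""
    return et / yield_

def sensitivity_index(f, params, name, rel_change=0.01):
    """Relative output change over relative input change for one parameter."""
    base = f(**params)
    perturbed = dict(params, **{name: params[name] * (1 + rel_change)})
    return ((f(**perturbed) - base) / base) / rel_change

params = {"et": 450.0, "yield_": 3.0}  # hypothetical: mm/season, t/ha
print(sensitivity_index(vwc, params, "et"))      # positive: direct
print(sensitivity_index(vwc, params, "yield_"))  # negative: inverse
```

For this ratio model the index is exactly +1 for evapotranspiration (a direct dependence) and close to -1 for yield (an inverse one), which mirrors the positive/negative sensitivity signs reported above.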
Verzotto, Davide; M Teo, Audrey S; Hillmer, Axel M; Nagarajan, Niranjan
2016-01-01
Resolution of complex repeat structures and rearrangements in the assembly and analysis of large eukaryotic genomes is often aided by a combination of high-throughput sequencing and genome-mapping technologies (for example, optical restriction mapping). In particular, mapping technologies can generate sparse maps of large DNA fragments (150 kilo base pairs (kbp) to 2 Mbp) and thus provide a unique source of information for disambiguating complex rearrangements in cancer genomes. Despite their utility, combining high-throughput sequencing and mapping technologies has been challenging because of the lack of efficient and sensitive map-alignment algorithms for robustly aligning error-prone maps to sequences. We introduce a novel seed-and-extend glocal (short for global-local) alignment method, OPTIMA (and a sliding-window extension for overlap alignment, OPTIMA-Overlap), which is the first to create indexes for continuous-valued mapping data while accounting for mapping errors. We also present a novel statistical model, agnostic with respect to technology-dependent error rates, for conservatively evaluating the significance of alignments without relying on expensive permutation-based tests. We show that OPTIMA and OPTIMA-Overlap outperform other state-of-the-art approaches (1.6-2 times more sensitive) and are more efficient (170-200 %) and precise in their alignments (nearly 99 % precision). These advantages are independent of the quality of the data, suggesting that our indexing approach and statistical evaluation are robust, provide improved sensitivity and guarantee high precision.
Voils, Corrine I.; Olsen, Maren K.; Williams, John W.; for the IMPACT Study Investigators
2008-01-01
Objective: To determine whether a subset of depressive symptoms could be identified to facilitate diagnosis of depression in older adults in primary care. Method: Secondary analysis was conducted on 898 participants aged 60 years or older with major depressive disorder and/or dysthymic disorder (according to DSM-IV criteria) who participated in the Improving Mood–Promoting Access to Collaborative Treatment (IMPACT) study, a multisite, randomized trial of collaborative care for depression (recruitment from July 1999 to August 2001). Linear regression was used to identify a core subset of depressive symptoms associated with decreased social, physical, and mental functioning. The sensitivity and specificity, adjusting for selection bias, were evaluated for these symptoms. The sensitivity and specificity of a second subset of 4 depressive symptoms previously validated in a midlife sample was also evaluated. Results: Psychomotor changes, fatigue, and suicidal ideation were associated with decreased functioning and served as the core set of symptoms. Adjusting for selection bias, the sensitivity of these 3 symptoms was 0.012 and specificity 0.994. The sensitivity of the 4 symptoms previously validated in a midlife sample was 0.019 and specificity was 0.997. Conclusion: We identified 3 depression symptoms that were highly specific for major depressive disorder in older adults. However, these symptoms and a previously identified subset were too insensitive for accurate diagnosis. Therefore, we recommend a full assessment of DSM-IV depression criteria for accurate diagnosis. PMID:18311416
Chen, Yun; Yang, Hui
2013-01-01
Heart rate variability (HRV) analysis has emerged as an important research topic for evaluating autonomic cardiac function. However, traditional time- and frequency-domain analyses characterize and quantify only linear and stationary phenomena. In the present investigation, we made a comparative analysis of three alternative approaches (i.e., wavelet multifractal analysis, Lyapunov exponents and multiscale entropy analysis) for quantifying nonlinear dynamics in heart rate time series. Note that these extracted nonlinear features provide information about nonlinear scaling behaviors and the complexity of cardiac systems. To evaluate the performance, we used 24-hour HRV recordings from 54 healthy subjects and 29 heart failure patients, available in PhysioNet. The three nonlinear methods are evaluated not only individually but also in combination using three classification algorithms, i.e., linear discriminant analysis, quadratic discriminant analysis and k-nearest neighbors. Experimental results show that the three nonlinear methods capture nonlinear dynamics from different perspectives and that the combined feature set achieves the best performance, i.e., sensitivity 97.7% and specificity 91.5%. Collectively, nonlinear HRV features show promise for identifying disorders in autonomic cardiovascular function.
Bawankar, Pritam; Shanbhag, Nita; K., S. Smitha; Dhawan, Bodhraj; Palsule, Aratee; Kumar, Devesh; Chandel, Shailja
2017-01-01
Diabetic retinopathy (DR) is a leading cause of blindness among working-age adults. Early diagnosis through effective screening programs is likely to improve vision outcomes. The ETDRS seven-standard-field 35-mm stereoscopic color retinal imaging (ETDRS) of the dilated eye is elaborate and requires mydriasis, and is unsuitable for screening. We evaluated an image analysis application for the automated diagnosis of DR from non-mydriatic single-field images. Patients suffering from diabetes for at least 5 years were included if they were 18 years or older. Patients already diagnosed with DR were excluded. Physiologic mydriasis was achieved by placing the subjects in a dark room. Images were captured using a Bosch Mobile Eye Care fundus camera. The images were analyzed by the Retinal Imaging Bosch DR Algorithm for the diagnosis of DR. All subjects also subsequently underwent pharmacological mydriasis and ETDRS imaging. Non-mydriatic and mydriatic images were read by ophthalmologists. The ETDRS readings were used as the gold standard for calculating the sensitivity and specificity for the software. 564 consecutive subjects (1128 eyes) were recruited from six centers in India. Each subject was evaluated at a single outpatient visit. Forty-four of 1128 images (3.9%) could not be read by the algorithm, and were categorized as inconclusive. In four subjects, neither eye provided an acceptable image: these four subjects were excluded from the analysis. This left 560 subjects for analysis (1084 eyes). The algorithm correctly diagnosed 531 of 560 cases. The sensitivity, specificity, and positive and negative predictive values were 91%, 97%, 94%, and 95% respectively. The Bosch DR Algorithm shows favorable sensitivity and specificity in diagnosing DR from non-mydriatic images, and can greatly simplify screening for DR. This also has major implications for telemedicine in the use of screening for retinopathy in patients with diabetes mellitus. PMID:29281690
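Screening studies such as this one report four linked metrics from one 2x2 table: sensitivity and specificity (column-wise) and the predictive values (row-wise). The counts below are hypothetical, chosen only to illustrate how the four figures reported above relate; they are not the study's data.

```python
# Sensitivity, specificity, PPV, and NPV for a screening test from
# 2x2 counts. Counts are hypothetical illustrations, not study data.
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # of the diseased, fraction flagged
        "specificity": tn / (tn + fp),  # of the healthy, fraction cleared
        "ppv": tp / (tp + fp),          # of positives, fraction truly diseased
        "npv": tn / (tn + fn),          # of negatives, fraction truly healthy
    }

m = screening_metrics(tp=182, fp=11, fn=18, tn=342)
print({k: f"{v:.0%}" for k, v in m.items()})
```

Unlike sensitivity and specificity, the predictive values depend on disease prevalence in the screened population, which is why all four are worth reporting for a screening tool.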
Simplifying global biogeochemistry models to evaluate methane emissions
NASA Astrophysics Data System (ADS)
Gerber, S.; Alonso-Contes, C.
2017-12-01
Process-based models are important tools for quantifying wetland methane emissions, particularly under climate change scenarios, but evaluating these models is often cumbersome because they are embedded in larger land-surface models in which the fluctuating water table and the carbon cycle (including fresh, readily decomposable plant material) are themselves predicted variables. Here, we build on these large-scale models but, instead of modeling water table and plant productivity, we provide their values as boundary conditions. Aerobic and anaerobic decomposition, as well as soil-column transport of oxygen and methane, are still predicted by the model. Because of these simplifications, the model has the potential to be more readily adaptable to the analysis of field-scale data. Here we determine the sensitivity of the model to specific setups, parameter choices, and boundary conditions in order to determine set-up needs and inform which critical auxiliary variables need to be measured to better predict field-scale methane emissions from wetland soils. To that end we performed a global sensitivity analysis that also considers non-linear interactions between processes. The global sensitivity analysis revealed, not surprisingly, that water table dynamics (both mean level and amplitude of fluctuations) and the rate of the carbon cycle (i.e., net primary productivity) are critical determinants of methane emissions. The depth scale over which most of the potential decomposition occurs also affects methane emissions. Different transport mechanisms compensate for each other to some degree: if plant conduits are constrained, diffusive flux and ebullition partly take over; however, annual emissions are higher when plants help methane bypass methanotrophs in temporarily unsaturated upper layers. Finally, while oxygen consumption by plant roots helps create anoxic conditions, it has little effect on overall methane emission.
Our initial sensitivity analysis helps guide further model development and improvement. An important goal for our model, however, is to use it in field settings as a tool to deconvolve the different processes that contribute to the net transfer of methane from soils to the atmosphere.
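A global screening of the kind described above can be sketched with Morris elementary effects. The model below is a made-up surrogate with hypothetical inputs and coefficients, not the wetland methane model itself; it only illustrates how such a screening separates influential from non-influential inputs:

```python
import random

def toy_emission(x):
    """Made-up surrogate: interaction between water table and productivity,
    plus a deliberately weak third input (root O2 consumption)."""
    water_table, npp, root_o2 = x
    return water_table * npp + 0.5 * water_table**2 + 0.01 * root_o2

def morris_mu_star(model, dim, trajectories=200, delta=0.1, seed=1):
    """Mean absolute elementary effect (mu*) per input on [0, 1]^dim."""
    rng = random.Random(seed)
    totals = [0.0] * dim
    for _ in range(trajectories):
        # random base point, leaving room for the +delta perturbation
        x = [rng.random() * (1 - delta) for _ in range(dim)]
        base = model(x)
        for i in range(dim):
            xp = list(x)
            xp[i] += delta
            totals[i] += abs(model(xp) - base) / delta
    return [t / trajectories for t in totals]

mu_star = morris_mu_star(toy_emission, dim=3)
# water table and NPP dominate; the third input barely matters
```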
Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P
2017-12-01
Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. 
Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.
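The network quantities discussed above (connectivity, centrality) can be sketched by treating symptom items as nodes and pairwise correlations as edge weights. The synthetic scores below stand in for PANSS item data; this is not the CATIE dataset, and item names are hypothetical:

```python
import math
import random

def pearson(a, b):
    """Plain Pearson correlation of two equal-length score vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def network_metrics(items):
    """items: one score vector per symptom. Returns overall density (mean
    absolute off-diagonal correlation) and per-node strength centrality."""
    k = len(items)
    w = [[abs(pearson(items[i], items[j])) if i != j else 0.0
          for j in range(k)] for i in range(k)]
    strength = [sum(row) for row in w]          # sum of edge weights per node
    density = sum(strength) / (k * (k - 1))
    return density, strength

rng = random.Random(0)
core = [rng.gauss(0, 1) for _ in range(60)]     # shared latent severity
# two tightly coupled items (low noise) and one loosely coupled item
items = [[c + rng.gauss(0, s) for c in core] for s in (0.5, 0.5, 3.0)]
density, strength = network_metrics(items)
```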
NASA Astrophysics Data System (ADS)
Rohmer, Jeremy
2016-04-01
Predicting the temporal evolution of landslides is typically supported by numerical modelling. Dynamic sensitivity analysis aims at assessing the influence of the landslide properties on the time-dependent predictions (e.g., time series of landslide displacements). Yet two major difficulties arise: 1. global sensitivity analysis requires running the landslide model a large number of times (> 1000), which may become impracticable when the landslide model has a high computational cost (> several hours); 2. landslide model outputs are not scalar but functions of time, i.e. they are n-dimensional vectors with n usually ranging from 100 to 1000. In this article, I explore the use of a basis set expansion, such as principal component analysis, to reduce the output dimensionality to a few components, each of them interpreted as a dominant mode of variation in the overall structure of the temporal evolution. The computationally intensive calculation of the Sobol' indices for each of these components is then achieved through meta-modelling, i.e. by replacing the landslide model with a "costless-to-evaluate" approximation (e.g., a projection pursuit regression model). The methodology combining basis set expansion, meta-model, and Sobol' indices is then applied to the La Frasse landslide to investigate the dynamic sensitivity of the surface horizontal displacements to the slip surface properties during pore pressure changes. I show how to extract information on the sensitivity of each main mode of temporal behaviour using a limited number (a few tens) of long-running simulations. In particular, I identify the parameters that trigger the occurrence of a turning point marking a shift between a regime of low landslide displacements and one of high values.
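The basis-set-expansion step described above can be sketched in a few lines: an ensemble of time-dependent outputs is centered and decomposed by SVD, and each run is summarized by its scores on the first few modes. The displacement curves below are synthetic (a hypothetical two-parameter family), not the La Frasse simulations, and the subsequent metamodel/Sobol' step is not shown:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)            # 200 output time steps
amp = rng.uniform(0.5, 2.0, size=50)      # 50 runs, two varying "parameters"
rate = rng.uniform(1.0, 5.0, size=50)
# synthetic time-dependent outputs, one row per model run
runs = amp[:, None] * (1.0 - np.exp(-rate[:, None] * t))  # shape (50, 200)

centered = runs - runs.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)   # PCA via SVD
explained = s**2 / np.sum(s**2)           # variance fraction per mode
scores = centered @ vt[:2].T              # each run reduced to 2 components
# Sobol' indices would then be estimated mode-by-mode on these scores,
# typically through a cheap metamodel fitted to (parameters -> scores).
```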
Vorob'ev, A A; Mironov, A Iu; Istratov, V G; Osman, K A
2005-01-01
To assess the applicability of chromatographic criteria to the diagnosis of syphilitic infection, the authors worked in three directions: evaluation of dysmetabolism of carbohydrate, lipid and amino acid components in patients' serum; search for signal compounds that serve as microbial "cooperative sensitivity" activators (lactones, quinolones, furan boron ethers), with evaluation of the risk of generalization of luetic infection; and indication of organ lesion markers (brain, liver, bone structures). A factor analysis allowed the authors to rank clinical, laboratory and chromatographic data according to their diagnostic value. The diagnostic value of the various chromatographic criteria was estimated as follows: diagnostic sensitivity, 79.6% to 97.2%; diagnostic specificity, 50.0% to 92.7%; positive predictive value, 61.8% to 94.7%; and negative predictive value, 60.9% to 95.1%.
Analysis of Composite Panels Subjected to Thermo-Mechanical Loads
NASA Technical Reports Server (NTRS)
Noor, Ahmed K.; Peters, Jeanne M.
1999-01-01
The results of a detailed study of the effect of a cutout on the nonlinear response of curved unstiffened panels are presented. The panels are subjected to a temperature gradient through the thickness combined with pressure loading and edge shortening or edge shear. The analysis is based on a first-order, shear-deformation, Sanders-Budiansky-type shell theory that includes the effects of large displacements, moderate rotations, transverse shear deformation, and laminated anisotropic material behavior. A mixed formulation is used, with the fundamental unknowns consisting of the generalized displacements and the stress resultants of the panel. The nonlinear displacements, strain energy, principal strains, transverse shear stresses, transverse shear strain energy density, and their hierarchical sensitivity coefficients are evaluated. The hierarchical sensitivity coefficients measure the sensitivity of the nonlinear response to variations in the panel parameters as well as in the material properties of the individual layers. Numerical results are presented for cylindrical panels and show the effects of variations in the loading and the size of the cutout on the global and local response quantities, as well as their sensitivity to changes in the various panel, layer, and micromechanical parameters.
Can Automated Imaging for Optic Disc and Retinal Nerve Fiber Layer Analysis Aid Glaucoma Detection?
Banister, Katie; Boachie, Charles; Bourne, Rupert; Cook, Jonathan; Burr, Jennifer M; Ramsay, Craig; Garway-Heath, David; Gray, Joanne; McMeekin, Peter; Hernández, Rodolfo; Azuara-Blanco, Augusto
2016-05-01
To compare the diagnostic performance of automated imaging for glaucoma. Prospective, direct comparison study. Adults with suspected glaucoma or ocular hypertension referred to hospital eye services in the United Kingdom. We evaluated 4 automated imaging test algorithms: the Heidelberg Retinal Tomography (HRT; Heidelberg Engineering, Heidelberg, Germany) glaucoma probability score (GPS), the HRT Moorfields regression analysis (MRA), scanning laser polarimetry (GDx enhanced corneal compensation; Glaucoma Diagnostics (GDx), Carl Zeiss Meditec, Dublin, CA) nerve fiber indicator (NFI), and Spectralis optical coherence tomography (OCT; Heidelberg Engineering) retinal nerve fiber layer (RNFL) classification. We defined abnormal tests as an automated classification of outside normal limits for HRT and OCT, or NFI ≥ 56 (GDx). We conducted a sensitivity analysis using borderline abnormal image classifications. The reference standard was clinical diagnosis by a masked glaucoma expert, including standardized clinical assessment and automated perimetry. We analyzed 1 eye per patient (the one with more advanced disease). We also evaluated the performance according to severity and using a combination of 2 technologies. Outcome measures were sensitivity and specificity, likelihood ratios, diagnostic odds ratio, and proportion of indeterminate tests. We recruited 955 participants, and 943 were included in the analysis. The average age was 60.5 years (standard deviation, 13.8 years); 51.1% were women. Glaucoma was diagnosed in at least 1 eye in 16.8%; 32% of participants had no glaucoma-related findings. The HRT MRA had the highest sensitivity (87.0%; 95% confidence interval [CI], 80.2%-92.1%) but the lowest specificity (63.9%; 95% CI, 60.2%-67.4%); GDx had the lowest sensitivity (35.1%; 95% CI, 27.0%-43.8%) but the highest specificity (97.2%; 95% CI, 95.6%-98.3%).
The HRT GPS sensitivity was 81.5% (95% CI, 73.9%-87.6%), and specificity was 67.7% (95% CI, 64.2%-71.2%); OCT sensitivity was 76.9% (95% CI, 69.2%-83.4%), and specificity was 78.5% (95% CI, 75.4%-81.4%). Including only eyes with severe glaucoma, sensitivity increased: HRT MRA, HRT GPS, and OCT would miss 5% of eyes, and GDx would miss 21% of eyes. A combination of 2 different tests did not improve the accuracy substantially. Automated imaging technologies can aid clinicians in diagnosing glaucoma, but may not replace current strategies because they can miss some cases of severe glaucoma. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Development and evaluation of the INSPIRE measure of staff support for personal recovery.
Williams, Julie; Leamy, Mary; Bird, Victoria; Le Boutillier, Clair; Norton, Sam; Pesola, Francesca; Slade, Mike
2015-05-01
No individualised standardised measure of staff support for mental health recovery exists. To develop and evaluate a measure of staff support for recovery. initial draft of measure based on systematic review of recovery processes; consultation (n = 61); and piloting (n = 20). Psychometric evaluation: three rounds of data collection from mental health service users (n = 92). INSPIRE has two sub-scales. The 20-item Support sub-scale has convergent validity (0.60) and adequate sensitivity to change. Exploratory factor analysis (variance 71.4-85.1 %, Kaiser-Meyer-Olkin 0.65-0.78) and internal consistency (range 0.82-0.85) indicate each recovery domain is adequately assessed. The 7-item Relationship sub-scale has convergent validity 0.69, test-retest reliability 0.75, internal consistency 0.89, a one-factor solution (variance 70.5 %, KMO 0.84) and adequate sensitivity to change. A 5-item Brief INSPIRE was also evaluated. INSPIRE and Brief INSPIRE demonstrate adequate psychometric properties, and can be recommended for research and clinical use.
Jia, Yongliang; Zhang, Shikai; Huang, Fangyi; Leung, Siu-wai
2012-06-01
Ginseng-based medicines and nitrates are commonly used in treating ischemic heart disease (IHD) angina pectoris in China. Hundreds of randomized controlled trials (RCTs) reported in Chinese language claimed that ginseng-based medicines can relieve the symptoms of IHD. This study provides the first PRISMA-compliant systematic review with sensitivity and subgroup analyses to evaluate the RCTs comparing the efficacies of ginseng-based medicines and nitrates in treating ischemic heart disease, particularly angina pectoris. Past RCTs published up to 2010 on ginseng versus nitrates in treating IHD for 14 or more days were retrieved from major English and Chinese databases, including PubMed, Science Direct, Cochrane Library, WangFang Data, and Chinese National Knowledge Infrastructure. The qualities of included RCTs were assessed with Jadad scale, a refined Jadad scale called M scale, CONSORT 2010 checklist, and Cochrane risk of bias tool. Meta-analysis was performed on the primary outcomes including the improvement of symptoms and electrocardiography (ECG). Subgroup analysis, sensitivity analysis, and meta-regression were performed to evaluate the effects of study characteristics of RCTs, including quality, follow-up periods, and efficacy definitions on the overall effect size of ginseng. Eighteen RCTs with 1549 participants were included. Overall odds ratios for comparing ginseng-based medicines with nitrates were 3.00 (95% CI: 2.27-3.96) in symptom improvement (n=18) and 1.61 (95% CI: 1.20-2.15) in ECG improvement (n=10). Subgroup analysis, sensitivity analysis, and meta-regression found no significant difference in overall effects among all study characteristics, indicating that the overall effects were stable. The meta-analysis of 18 eligible RCTs demonstrates moderate evidence that ginseng is more effective than nitrates for treating angina pectoris. 
However, further RCTs of higher quality, with longer follow-up periods, larger sample sizes, and multi-center/multi-country designs are still required to verify the efficacy. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.
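The pooled odds ratios quoted above come from inverse-variance weighting of per-trial log odds ratios. A minimal fixed-effect sketch; the 2x2 tables are invented for illustration and are not the 18 included RCTs:

```python
import math

def pooled_odds_ratio(tables):
    """Fixed-effect inverse-variance pooling on the log-OR scale.
    tables: list of (events_trt, n_trt, events_ctl, n_ctl).
    (No zero-cell continuity correction; fine for this toy data.)"""
    num = den = 0.0
    for a, n1, c, n2 in tables:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d      # Woolf variance of log(OR)
        w = 1 / var                              # inverse-variance weight
        num += w * log_or
        den += w
    pooled = math.exp(num / den)
    se = math.sqrt(1 / den)
    ci = (pooled * math.exp(-1.96 * se), pooled * math.exp(1.96 * se))
    return pooled, ci

or_hat, (lo, hi) = pooled_odds_ratio([(40, 50, 25, 50), (35, 50, 20, 50)])
# pooled OR > 1 favours the treatment arm on this toy data
```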
Normative Data for an Instrumental Assessment of the Upper-Limb Functionality.
Caimmi, Marco; Guanziroli, Eleonora; Malosio, Matteo; Pedrocchi, Nicola; Vicentini, Federico; Molinari Tosatti, Lorenzo; Molteni, Franco
2015-01-01
Upper-limb movement analysis is important for objectively monitoring rehabilitation interventions, contributing to improving overall treatment outcomes. Simple, fast, easy-to-use, and widely applicable methods are required to allow routine functional evaluation of patients with different pathologies and clinical conditions. This paper describes the Reaching and Hand-to-Mouth Evaluation Method, a fast procedure to assess upper-limb motor control and functional ability, providing a set of normative data from 42 healthy subjects of different ages, evaluated for both dominant and nondominant limb motor performance. Sixteen of them were reevaluated after two weeks for a test-retest reliability analysis. Data were clustered into three subgroups of different ages to test the method's sensitivity to motor control differences. Experimental data show notable test-retest reliability in all tasks. Data from older and younger subjects show significant differences in the measures related to coordination ability, thus showing the high sensitivity of the method to motor control differences. The presented method, provided with control data from healthy subjects, appears to be a suitable and reliable tool for upper-limb functional assessment in the clinical environment.
A comprehensive prediction and evaluation method of pilot workload
Feng, Chuanyan; Wanyan, Xiaoru; Yang, Kun; Zhuang, Damin; Wu, Xu
2018-01-01
BACKGROUND: The prediction and evaluation of pilot workload is a key problem in human-factor airworthiness of the cockpit. OBJECTIVE: A pilot traffic pattern task was designed in a flight simulation environment in order to carry out pilot workload prediction and improve the evaluation method. METHODS: Predictions of typical flight subtasks and dynamic workloads (cruise, approach, and landing) were built up based on multiple resource theory, and a favorable validity was achieved by correlation analysis between sensitive physiological data and the predicted values. RESULTS: Statistical analysis indicated that eye movement indices (fixation frequency, mean fixation time, saccade frequency, mean saccade time, and mean pupil diameter), electrocardiogram indices (mean normal-to-normal interval and the ratio between low frequency and the sum of low frequency and high frequency), and electrodermal activity indices (mean tonic and mean phasic) were all sensitive to the typical workloads of subjects. CONCLUSION: A multinomial logistic regression model based on a combination of physiological indices (fixation frequency, mean normal-to-normal interval, the ratio between low frequency and the sum of low frequency and high frequency, and mean tonic) was constructed, and the discrimination accuracy was comparatively good at 84.85%. PMID:29710742
True covariance simulation of the EUVE update filter
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, R. R.
1989-01-01
A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On-Board Computer (OBC) of the Extreme Ultraviolet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived; these constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF that uses the reduced-order model is not the correct covariance of the EKF estimation error, so a true covariance analysis has to be carried out to evaluate the correct accuracy of the OBC-generated estimates. The results of such an analysis are presented, indicating both the performance and the sensitivity of the OBC EKF.
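The central point above, that a filter built on a simplified design model reports a covariance that is not the true estimation-error covariance, can be illustrated with a scalar toy example (arbitrary numbers, not the EUVE models). The true error covariance is propagated under the filter's own suboptimal gains:

```python
def covariance_analysis(a, h, q_true, q_design, r, steps=200):
    """Scalar Kalman covariance analysis with a mismatched design model.
    The filter assumes process noise q_design; reality has q_true."""
    p_filter = p_true = 1.0
    for _ in range(steps):
        # time update under the two process-noise assumptions
        p_filter = a * a * p_filter + q_design
        p_true = a * a * p_true + q_true
        # gain computed from the (possibly wrong) design-model covariance
        k = p_filter * h / (h * h * p_filter + r)
        # measurement update: filter's belief vs. actual error covariance
        p_filter = (1 - k * h) * p_filter
        p_true = (1 - k * h) ** 2 * p_true + k * k * r   # Joseph form, fixed gain
    return p_filter, p_true

p_reported, p_actual = covariance_analysis(a=0.95, h=1.0,
                                           q_true=0.2, q_design=0.02, r=0.5)
# the filter is overconfident: it reports a smaller covariance than the truth
```

When the design model matches the truth (q_design == q_true), the two recursions coincide, which is a useful sanity check on the implementation.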
Thiruppathiraja, Chinnasamy; Kamatchiammal, Senthilkumar; Adaikkappan, Periyakaruppan; Santhosh, Devakirubakaran Jayakar; Alagar, Muthukaruppan
2011-10-01
The present study was aimed at the development and evaluation of a DNA electrochemical biosensor for Mycobacterium sp. genomic DNA detection in clinical specimens using dual-labeled AuNPs as a signal amplifier. The DNA electrochemical biosensors were fabricated using a sandwich detection strategy involving two kinds of DNA probes specific to Mycobacterium sp. genomic DNA. The enzyme (ALP) and the detector probe were both conjugated to the AuNPs and subsequently hybridized with target DNA immobilized on a SAM/ITO electrode, followed by characterization with CV, EIS, and DPV analysis using the electroactive species para-nitrophenol generated by ALP through hydrolysis of para-nitrophenyl phosphate. Enhanced sensitivity was obtained because the AuNPs carry numerous ALP molecules per hybridization event, and a detection limit of 1.25 ng/ml genomic DNA was determined under optimized conditions. The dual-labeled AuNP-facilitated electrochemical sensor was also evaluated on clinical sputum samples, showing high sensitivity and specificity, and the outcome was in agreement with PCR analysis. In conclusion, the developed electrochemical sensor demonstrated excellent sensitivity and specificity for both genomic DNA and sputum samples and can be employed as a routine diagnostic tool for Mycobacterium sp. monitoring in clinical samples. Copyright © 2011 Elsevier Inc. All rights reserved.
Role of multidetector computed tomography in evaluating incidentally detected breast lesions.
Moschetta, Marco; Scardapane, Arnaldo; Lorusso, Valentina; Rella, Leonarda; Telegrafo, Michele; Serio, Gabriella; Angelelli, Giuseppe; Ianora, Amato Antonio Stabile
2015-01-01
Computed tomography (CT) is not the primary method for the evaluation of breast lesions; however, it can detect breast abnormalities even when performed for other reasons related to thoracic structures. The aim of this study is to evaluate the potential benefits of 320-row multidetector CT (MDCT) in evaluating and differentiating incidentally detected breast lesions by using a vessel probe and 3D analysis software with net enhancement values. Sixty-two breast lesions in 46 patients who underwent 320-row chest CT examination were retrospectively evaluated. CT scans were assessed for the presence, location, number, morphological features, and density of breast nodules. Net enhancement was calculated by subtracting precontrast density from postcontrast density. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy of CT were calculated for morphological features and net enhancement. Thirty of the 62 lesions were found to be malignant at histological examination and 32 were found to be benign. When morphological features were considered, the sensitivity, specificity, accuracy, PPV, and NPV of CT were 87%, 100%, 88%, 100%, and 50%, respectively. Based on net enhancement, CT reached a sensitivity, specificity, accuracy, PPV, and NPV of 100%, 94%, 97%, 94%, and 100%, respectively. MDCT allows recognition and characterization of breast lesions based on morphological features. Net enhancement can be proposed as an additional accurate CT feature.
Zhang, He; Hou, Chang; Zhou, Zhi; Zhang, Hao; Zhou, Gen; Zhang, Gui
2014-01-01
The diagnostic performance of 64-detector computed tomographic angiography (CTA) for detection of small intracranial aneurysms (SIAs) was evaluated. In this prospective study, 112 consecutive patients underwent 64-detector CTA before volume-rendering rotation digital subtraction angiography (VR-RDSA) or surgery. VR-RDSA or intraoperative findings or both were used as the gold standards. The accuracy, sensitivity, specificity, and positive predictive values (PPV) and negative predictive values (NPV), as measures to detect or rule out SIAs, were determined by patient-based and aneurysm size-based evaluations. The reference standard methods revealed 84 small aneurysms in 71 patients. The results of patient-based 64-detector CTA evaluation for SIAs were: accuracy, 98.2%; sensitivity, 98.6%; specificity, 97.6%; PPV, 98.6%; and NPV, 97.6%. The aneurysm-based evaluation results were: accuracy, 96.8%; sensitivity, 97.6%; specificity, 95.1%; PPV, 97.6%; and NPV, 95.1%. Two false-positive and two false-negative findings for aneurysms <3 mm in size occurred in the 64-detector CTA analysis. The diagnostic performance of 64-detector CTA did not improve much compared with 16-detector CTA for detecting SIAs, especially for very small aneurysms. VR-RDSA is still necessary for patients with a history of subarachnoid hemorrhage if the CTA findings are negative. Copyright © 2012 by the American Society of Neuroimaging.
Flores-Alsina, Xavier; Rodriguez-Roda, Ignasi; Sin, Gürkan; Gernaey, Krist V
2009-01-01
The objective of this paper is to perform an uncertainty and sensitivity analysis of the predictions of the Benchmark Simulation Model (BSM) No. 1 when comparing four activated sludge control strategies. The Monte Carlo simulation technique is used to evaluate the uncertainty in the BSM1 predictions, considering the ASM1 bio-kinetic parameters and influent fractions as input uncertainties, while the Effluent Quality Index (EQI) and the Operating Cost Index (OCI) are focused on as model outputs. The resulting Monte Carlo simulations are presented using descriptive statistics indicating the degree of uncertainty in the predicted EQI and OCI. Next, the Standardized Regression Coefficients (SRC) method is used for sensitivity analysis to identify which input parameters influence the uncertainty in the EQI predictions the most. The results show that control strategies including an ammonium (S(NH)) controller reduce uncertainty in both overall pollution removal and effluent total Kjeldahl nitrogen. Also, control strategies with an external carbon source reduce the effluent nitrate (S(NO)) uncertainty, increasing both their economic cost and its variability as a trade-off. Finally, the maximum specific autotrophic growth rate (µ(A)) causes most of the variance in the effluent for all the evaluated control strategies. The influence of denitrification-related parameters, e.g. η(g) (anoxic growth rate correction factor) and η(h) (anoxic hydrolysis rate correction factor), becomes less important when an S(NO) controller manipulating external carbon source addition is implemented.
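The Monte Carlo plus SRC procedure described above can be sketched compactly: sample the uncertain inputs, run the model, regress the output on the inputs, and standardize the coefficients by the input/output standard deviations. The surrogate model and parameter names below are invented stand-ins, not BSM1/ASM1:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# stand-ins for uncertain bio-kinetic parameters (hypothetical distributions)
mu_a = rng.normal(1.0, 0.2, n)
eta_g = rng.normal(0.8, 0.05, n)
k_h = rng.normal(3.0, 0.3, n)
# made-up linear-ish "model" output with some unexplained noise
output = 5.0 * mu_a + 2.0 * eta_g + 0.2 * k_h + rng.normal(0.0, 0.1, n)

def src(inputs, y):
    """SRC_i = b_i * sd(x_i) / sd(y) from an ordinary least-squares fit;
    for near-linear models, sum(SRC_i^2) approximates the R^2."""
    x = np.column_stack(inputs)
    design = np.column_stack([np.ones(len(y)), x])   # intercept + inputs
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coef[1:] * x.std(axis=0) / y.std()

sensitivities = src([mu_a, eta_g, k_h], output)
# mu_a dominates the output variance, mirroring the role of µ(A) in the study
```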
Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin
2014-03-01
To evaluate and adjust for verification bias in screening or diagnostic tests. An inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the CompareTests package in R, with which the method can be implemented. Sensitivity and specificity calculated by the traditional method and by maximum likelihood estimation were compared to the results from the inverse-probability weighting method in the random-sampling example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with verification by the gold standard missing at random, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under maximum likelihood estimation. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially under complex sampling.
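The inverse-probability-weighting idea above is simple to sketch: each gold-standard-verified subject is weighted by 1 / P(verified), so subjects who were unlikely to be verified count for more. The data and verification probabilities below are invented for illustration, not the HPV study's values:

```python
def ipw_sensitivity_specificity(records):
    """records: (test_positive, disease, p_verified) for verified subjects.
    Returns IPW-adjusted (sensitivity, specificity)."""
    tp = fp = fn = tn = 0.0
    for test_pos, disease, p_ver in records:
        w = 1.0 / p_ver                    # inverse-probability weight
        if test_pos and disease:
            tp += w
        elif test_pos and not disease:
            fp += w
        elif not test_pos and disease:
            fn += w
        else:
            tn += w
    return tp / (tp + fn), tn / (tn + fp)

# toy design: test-positives verified with probability 0.9, negatives 0.3
records = ([(True, True, 0.9)] * 80 + [(True, False, 0.9)] * 10
           + [(False, True, 0.3)] * 6 + [(False, False, 0.3)] * 54)
sens, spec = ipw_sensitivity_specificity(records)
# the naive sensitivity 80/86 is biased upward; IPW pulls it back down
```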
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
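A toy observation-simulation experiment in the spirit described above can be written in a few lines (this is generic illustration code, not BEATBOX/BOXMOX): a "truth" box-model step generates a synthetic observation, and a stochastic ensemble Kalman update pulls a deliberately biased ensemble toward it:

```python
import random

def step(c, emission, loss=0.1):
    """One box-model step: first-order loss plus an emission source."""
    return c + emission - loss * c

rng = random.Random(7)
truth = 10.0
obs_err = 0.5
# synthetic observation from the "truth" run (emission assumed known there)
obs = step(truth, emission=2.0) + rng.gauss(0.0, obs_err)

# biased prior ensemble: the ensemble assumes too-high emissions
ensemble = [step(truth, emission=rng.gauss(4.0, 1.0)) for _ in range(100)]
mean = sum(ensemble) / len(ensemble)
var = sum((c - mean) ** 2 for c in ensemble) / (len(ensemble) - 1)
gain = var / (var + obs_err**2)            # scalar Kalman gain, H = 1

# perturbed-observation ensemble Kalman update
analysis = [c + gain * (obs + rng.gauss(0.0, obs_err) - c) for c in ensemble]
a_mean = sum(analysis) / len(analysis)
# the analysis mean sits closer to the observation than the biased prior mean
```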
Tucker, Robin M; Kaiser, Kathryn A; Parman, Mariel A; George, Brandon J; Allison, David B; Mattes, Richard D
2017-01-01
Given the increasing evidence that supports the ability of humans to taste non-esterified fatty acids (NEFA), recent studies have sought to determine whether relationships exist between oral sensitivity to NEFA (measured as thresholds), food intake, and obesity. Published findings suggest there is either no association or an inverse association. A systematic review and meta-analysis was conducted to determine whether differences in fatty acid taste sensitivity or intensity ratings exist between individuals who are lean and those who are obese. A total of 7 studies that reported measurement of taste sensations to non-esterified fatty acids by psychophysical methods (i.e., studies using model systems rather than foods), with detection thresholds measured by a 3-alternative forced-choice ascending methodology, were included in the meta-analysis. Two other studies that measured intensity ratings to graded suprathreshold NEFA concentrations were evaluated qualitatively. No significant differences in fatty acid taste thresholds or intensity were observed. Thus, differences in fatty acid taste sensitivity do not appear to precede or result from obesity.
Model-based POD study of manual ultrasound inspection and sensitivity analysis using metamodel
NASA Astrophysics Data System (ADS)
Ribay, Guillemette; Artusi, Xavier; Jenson, Frédéric; Reece, Christopher; Lhuillier, Pierre-Emile
2016-02-01
The reliability of NDE can be quantified by using the Probability of Detection (POD) approach. Previous studies have shown the potential of the model-assisted POD (MAPOD) approach to replace expensive experimental determination of POD curves. In this paper, we make use of CIVA software to determine POD curves for a manual ultrasonic inspection of a heavy component, for which a whole experimental POD campaign was not available. The influential parameters were determined by expert analysis. The semi-analytical models used in CIVA for wave propagation and beam-defect interaction have been validated in the range of variation of the influential parameters by comparison with finite element modelling (Athena). The POD curves are computed for "hit/miss" and "â versus a" analysis. The verification of Berens hypothesis is evaluated by statistical tools. A sensitivity study is performed to measure the relative influence of parameters on the defect response amplitude variance, using the Sobol sensitivity index. A metamodel is also built to reduce computing cost and enhance the precision of estimated index.
ANALYSIS OF NASAL TISSUE FOR BIOMARKERS OF CHLORINE EXPOSURE
Both 3-chloro-tyrosine (CT) and 3,5-dichloro-tyrosine (dCT) are sensitive and specific biomarkers for evaluating exposure to chlorine gas (Cl2) and hypochlorous acid (HOCl). Previous investigations have focused on the formation of CT and dCT resulting from biochemical responses ...
EVALUATING THE COSTS OF PACKED-TOWER AERATION AND GAC FOR CONTROLLING SELECTED ORGANICS
This article focuses on a preliminary cost analysis that compares liquid-phase granular activated carbon (GAC) treatment with packed-tower aeration (PTA) treatment, with and without air emissions control. The sensitivity of cost to design and operating variables is also discussed...
Kovalska, M P; Bürki, E; Schoetzau, A; Orguel, S F; Orguel, S; Grieshaber, M C
2011-04-01
The distinction of real progression from test variability in visual field (VF) series may be based on clinical judgment, on trend analysis based on follow-up of test parameters over time, or on identification of a significant change related to the mean of baseline exams (event analysis). The aim of this study was to compare a new population-based method (Octopus field analysis, OFA) with classic regression analyses and clinical judgment for detecting glaucomatous VF changes. 240 VF series of 240 patients with at least 9 consecutive examinations available were included into this study. They were independently classified by two experienced investigators. The results of such a classification served as a reference for comparison for the following statistical tests: (a) t-test global, (b) r-test global, (c) regression analysis of 10 VF clusters and (d) point-wise linear regression analysis. 32.5 % of the VF series were classified as progressive by the investigators. The sensitivity and specificity were 89.7 % and 92.0 % for r-test, and 73.1 % and 93.8 % for the t-test, respectively. In the point-wise linear regression analysis, the specificity was comparable (89.5 % versus 92 %), but the sensitivity was clearly lower than in the r-test (22.4 % versus 89.7 %) at a significance level of p = 0.01. A regression analysis for the 10 VF clusters showed a markedly higher sensitivity for the r-test (37.7 %) than the t-test (14.1 %) at a similar specificity (88.3 % versus 93.8 %) for a significant trend (p = 0.005). In regard to the cluster distribution, the paracentral clusters and the superior nasal hemifield progressed most frequently. The population-based regression analysis seems to be superior to the trend analysis in detecting VF progression in glaucoma, and may eliminate the drawbacks of the event analysis. 
Further, it may assist the clinician in the evaluation of VF series and may allow better visualization of the correlation between function and structure owing to VF clusters. © Georg Thieme Verlag KG Stuttgart · New York.
Shahinfar, Saleh; Guenther, Jerry N; Page, C David; Kalantari, Afshin S; Cabrera, Victor E; Fricke, Paul M; Weigel, Kent A
2015-06-01
The common practice on most commercial dairy farms is to inseminate all cows that are eligible for breeding, while ignoring (or absorbing) the costs associated with semen and labor directed toward low-fertility cows that are unlikely to conceive. Modern analytical methods, such as machine learning algorithms, can be applied to cow-specific explanatory variables for the purpose of computing probabilities of success or failure associated with upcoming insemination events. Lift chart analysis can identify subsets of high fertility cows that are likely to conceive and are therefore appropriate targets for insemination (e.g., with conventional artificial insemination semen or expensive sex-enhanced semen), as well as subsets of low-fertility cows that are unlikely to conceive and should therefore be passed over at that point in time. Although such a strategy might be economically viable, the management, environmental, and financial conditions on one farm might differ widely from conditions on the next, and hence the reproductive management recommendations derived from such a tool may be suboptimal for specific farms. When coupled with cost-sensitive evaluation of misclassified and correctly classified insemination events, the strategy can be a potentially powerful tool for optimizing the reproductive management of individual farms. In the present study, lift chart analysis and cost-sensitive evaluation were applied to a data set consisting of 54,806 insemination events of primiparous Holstein cows on 26 Wisconsin farms, as well as a data set with 17,197 insemination events of primiparous Holstein cows on 3 Wisconsin farms, where the latter had more detailed information regarding health events of individual cows. 
In the first data set, the gains in profit associated with limiting inseminations to subsets of 79 to 97% of the most fertile eligible cows ranged from $0.44 to $2.18 per eligible cow in a monthly breeding period, depending on days in milk at breeding and milk yield relative to contemporaries. In the second data set, the strategy of inseminating only a subset consisting of 59% of the most fertile cows conferred a gain in profit of $5.21 per eligible cow in a monthly breeding period. These results suggest that, when used with a powerful classification algorithm, lift chart analysis and cost-sensitive evaluation of correctly classified and misclassified insemination events can enhance the performance and profitability of reproductive management programs on commercial dairy farms. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
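The lift chart analysis described above ranks cows by predicted conception probability and asks what share of all conceptions is captured by inseminating only the top fraction. A minimal sketch of that idea follows; the data and function name are hypothetical illustrations, not the study's model or records.

```python
def lift_points(probs_outcomes):
    """Cumulative lift: sort animals by predicted conception probability
    (descending) and track the share of all conceptions captured within
    each top fraction inseminated."""
    ranked = sorted(probs_outcomes, key=lambda x: -x[0])
    total_pos = sum(o for _, o in ranked)
    captured, points = 0, []
    for i, (_, outcome) in enumerate(ranked, 1):
        captured += outcome
        points.append((i / len(ranked), captured / total_pos))
    return points

# (predicted probability, conceived?) for eight hypothetical cows
data = [(0.9, 1), (0.8, 1), (0.7, 0), (0.6, 1), (0.5, 0),
        (0.4, 0), (0.3, 1), (0.2, 0)]
for frac, share in lift_points(data):
    print(f"top {frac:4.0%} inseminated -> {share:4.0%} of conceptions")
```

Coupling each point on such a curve with the costs of misclassified inseminations is what turns the chart into the farm-specific decision tool the abstract describes.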
Wang, Lina; Li, Hao; Yang, Zhongyuan; Guo, Zhuming; Zhang, Quan
2015-07-01
This study was designed to assess the efficiency of the serum thyrotropin to thyroglobulin ratio for thyroid nodule evaluation in euthyroid patients. Cross-sectional study. Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China. Retrospective analysis was performed for 400 previously untreated cases presenting with thyroid nodules. Thyroid function was tested with commercially available radioimmunoassays. The receiver operating characteristic curves were constructed to determine cutoff values. The efficacy of the thyrotropin:thyroglobulin ratio and thyroid-stimulating hormone for thyroid nodule evaluation was evaluated in terms of sensitivity, specificity, positive predictive value, positive likelihood ratio, negative likelihood ratio, and odds ratio. In receiver operating characteristic curve analysis, the area under the curve was 0.746 for the thyrotropin:thyroglobulin ratio and 0.659 for thyroid-stimulating hormone. With a cutoff point value of 24.97 IU/g for the thyrotropin:thyroglobulin ratio, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 78.9%, 60.8%, 75.5%, 2.01, and 0.35, respectively. The odds ratio for the thyrotropin:thyroglobulin ratio indicating malignancy was 5.80. With a cutoff point value of 1.525 µIU/mL for thyroid-stimulating hormone, the sensitivity, specificity, positive predictive value, positive likelihood ratio, and negative likelihood ratio were 74.0%, 53.2%, 70.8%, 1.58, and 0.49, respectively. The odds ratio indicating malignancy for thyroid-stimulating hormone was 3.23. Increasing preoperative serum thyrotropin:thyroglobulin ratio is a risk factor for thyroid carcinoma, and the correlation of the thyrotropin:thyroglobulin ratio to malignancy is higher than that for serum thyroid-stimulating hormone. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
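Cut-point values such as the 24.97 IU/g reported above are typically chosen from a receiver operating characteristic curve. A common (though not necessarily the authors') criterion is Youden's J; the sketch below uses made-up marker values, not the study's data.

```python
def best_cutoff(scores_pos, scores_neg):
    """Pick the threshold maximizing Youden's J = sensitivity + specificity - 1
    over all observed values; scores_pos are diseased, scores_neg are benign."""
    candidates = sorted(set(scores_pos) | set(scores_neg))
    best_j, best_c = -1.0, None
    for c in candidates:
        sens = sum(s >= c for s in scores_pos) / len(scores_pos)
        spec = sum(s < c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Hypothetical malignant vs. benign marker values
cut, j = best_cutoff([30, 28, 40, 26, 50], [10, 22, 27, 12, 8])
print(cut, round(j, 2))   # -> 26 0.8
```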
2015-01-01
The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced the ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain the major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and lower false-positive rate for discovering altered proteins, over current popular pipelines. A spiked-in experiment was used to evaluate the performance of ICan to detect small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applicable to reliable and sensitive proteomic survey of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here such as normalization, protein ratio determination, and statistical analyses are also valuable for data analysis by isotope-labeling methods. PMID:25285707
Pires, RES; Pereira, AA; Abreu-e-Silva, GM; Labronici, PJ; Figueiredo, LB; Godoy-Santos, AL; Kfuri, M
2014-01-01
Background: Foot and ankle injuries are frequent in emergency departments. Although only a few patients with foot and ankle sprain present fractures and the fracture patterns are almost always simple, lack of fracture diagnosis can lead to poor functional outcomes. Aim: The present study aims to evaluate the reliability of the Ottawa ankle rules and the orthopedic surgeon subjective perception to assess foot and ankle fractures after sprains. Subjects and Methods: A cross-sectional study was conducted from July 2012 to December 2012. Ethical approval was granted. Two hundred seventy-four adult patients admitted to the emergency department with foot and/or ankle sprain were evaluated by an orthopedic surgeon who completed a questionnaire prior to radiographic assessment. The Ottawa ankle rules and subjective perception of foot and/or ankle fractures were evaluated on the questionnaire. Results: Thirteen percent (36/274) patients presented fracture. Orthopedic surgeon subjective analysis showed 55.6% sensitivity, 90.1% specificity, 46.5% positive predictive value and 92.9% negative predictive value. The general orthopedic surgeon opinion accuracy was 85.4%. The Ottawa ankle rules presented 97.2% sensitivity, 7.8% specificity, 13.9% positive predictive value, 95% negative predictive value and 19.9% accuracy respectively. Weight-bearing inability was the Ottawa ankle rule item that presented the highest reliability, 69.4% sensitivity, 61.6% specificity, 63.1% accuracy, 21.9% positive predictive value and 93% negative predictive value respectively. Conclusion: The Ottawa ankle rules showed high reliability for deciding when to take radiographs in foot and/or ankle sprains. Weight-bearing inability was the most important isolated item to predict fracture presence. Orthopedic surgeon subjective analysis to predict fracture possibility showed a high specificity rate, representing a confident method to exclude unnecessary radiographic exams. PMID:24971221
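The screening metrics quoted throughout these abstracts (sensitivity, specificity, PPV, NPV, accuracy) all derive from one 2×2 confusion table. A minimal sketch follows; the counts are illustrative stand-ins, not the study's data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    ppv = tp / (tp + fp)                        # positive predictive value
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, ppv, npv, accuracy

# Illustrative counts (hypothetical, not the study's data)
sens, spec, ppv, npv, acc = diagnostic_metrics(tp=35, fp=220, fn=1, tn=18)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} npv={npv:.3f}")
```

The pattern visible in the Ottawa results, very high sensitivity paired with very low specificity, is exactly what this arithmetic produces when a rule flags nearly every patient for radiography.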
Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho
2017-08-01
The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.
de Ruiter, C M; van der Veer, C; Leeflang, M M G; Deborggraeve, S; Lucas, C; Adams, E R
2014-09-01
Molecular methods have been proposed as highly sensitive tools for the detection of Leishmania parasites in visceral leishmaniasis (VL) patients. Here, we evaluate the diagnostic accuracy of these tools in a meta-analysis of the published literature. The selection criteria were original studies that evaluate the sensitivities and specificities of molecular tests for diagnosis of VL, adequate classification of study participants, and the absolute numbers of true positives and negatives derivable from the data presented. Forty studies met the selection criteria, including PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), and loop-mediated isothermal amplification (LAMP). The sensitivities of the individual studies ranged from 29 to 100%, and the specificities ranged from 25 to 100%. The pooled sensitivity of PCR in whole blood was 93.1% (95% confidence interval [CI], 90.0 to 95.2), and the specificity was 95.6% (95% CI, 87.0 to 98.6). The specificity was significantly lower in consecutive studies, at 63.3% (95% CI, 53.9 to 71.8), due either to true-positive patients not being identified by parasitological methods or to the number of asymptomatic carriers in areas of endemicity. PCR for patients with HIV-VL coinfection showed high diagnostic accuracy in buffy coat and bone marrow, ranging from 93.1 to 96.9%. Molecular tools are highly sensitive assays for Leishmania detection and may contribute as an additional test in the algorithm, together with a clear clinical case definition. We observed wide variety in reference standards and study designs and now recommend consecutively designed studies. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
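Pooled sensitivities and specificities like those above are usually obtained by weighting each study's estimate by its precision. The sketch below shows a basic fixed-effect inverse-variance pooling on the logit scale; the cited meta-analysis likely used a more elaborate bivariate/random-effects model, and the study counts are invented.

```python
import math

def pooled_proportion(events):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale; `events` is a list of (successes, total) pairs."""
    num = den = 0.0
    for k, n in events:
        k = min(max(k, 0.5), n - 0.5)   # continuity guard at 0% / 100%
        p = k / n
        logit = math.log(p / (1 - p))
        var = 1 / k + 1 / (n - k)       # approximate variance of the logit
        w = 1 / var
        num += w * logit
        den += w
    pooled_logit = num / den
    return 1 / (1 + math.exp(-pooled_logit))

# Three hypothetical studies: (true positives, diseased patients)
print(round(pooled_proportion([(45, 50), (88, 95), (30, 40)]), 3))
```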
Takenouchi, Osamu; Fukui, Shiho; Okamoto, Kenji; Kurotani, Satoru; Imai, Noriyasu; Fujishiro, Miyuki; Kyotani, Daiki; Kato, Yoshinao; Kasahara, Toshihiko; Fujita, Masaharu; Toyoda, Akemi; Sekiya, Daisuke; Watanabe, Shinichi; Seto, Hirokazu; Hirota, Morihiko; Ashikaga, Takao; Miyazawa, Masaaki
2015-11-01
To develop a testing strategy incorporating the human cell line activation test (h-CLAT), direct peptide reactivity assay (DPRA) and DEREK, we created an expanded data set of 139 chemicals (102 sensitizers and 37 non-sensitizers) by combining the existing data set of 101 chemicals through the collaborative projects of Japan Cosmetic Industry Association. Of the additional 38 chemicals, 15 chemicals with relatively low water solubility (log Kow > 3.5) were selected to clarify the limitation of testing strategies regarding the lipophilic chemicals. Predictivities of the h-CLAT, DPRA and DEREK, and the combinations thereof were evaluated by comparison to results of the local lymph node assay. When evaluating 139 chemicals using combinations of three methods based on integrated testing strategy (ITS) concept (ITS-based test battery) and a sequential testing strategy (STS) weighing the predictive performance of the h-CLAT and DPRA, overall similar predictivities were found as before on the 101 chemical data set. An analysis of false negative chemicals suggested a major limitation of our strategies was the testing of low water-soluble chemicals. When the negative results for chemicals with log Kow > 3.5 were excluded, the sensitivity and accuracy of ITS improved to 97% (91 of 94 chemicals) and 89% (114 of 128). Likewise, the sensitivity and accuracy of STS improved to 98% (92 of 94) and 85% (111 of 129). Moreover, the ITS and STS also showed good correlation with local lymph node assay on three potency classifications, yielding accuracies of 74% (ITS) and 73% (STS). Thus, the inclusion of log Kow in analysis could give both strategies a higher predictive performance. Copyright © 2015 John Wiley & Sons, Ltd.
Optimizing Complexity Measures for fMRI Data: Algorithm, Artifact, and Sensitivity
Rubin, Denis; Fekete, Tomer; Mujica-Parodi, Lilianne R.
2013-01-01
Introduction Complexity in the brain has been well-documented at both neuronal and hemodynamic scales, with increasing evidence supporting its use in sensitively differentiating between mental states and disorders. However, application of complexity measures to fMRI time-series, which are short, sparse, and have low signal/noise, requires careful modality-specific optimization. Methods Here we use both simulated and real data to address two fundamental issues: choice of algorithm and degree/type of signal processing. Methods were evaluated with regard to resilience to acquisition artifacts common to fMRI as well as detection sensitivity. Detection sensitivity was quantified in terms of grey-white matter contrast and overlap with activation. We additionally investigated the variation of complexity with activation and emotional content, optimal task length, and the degree to which results scaled with scanner using the same paradigm with two 3T magnets made by different manufacturers. Methods for evaluating complexity were: power spectrum, structure function, wavelet decomposition, second derivative, rescaled range, Higuchi’s estimate of fractal dimension, aggregated variance, and detrended fluctuation analysis. To permit direct comparison across methods, all results were normalized to Hurst exponents. Results Power-spectrum, Higuchi’s fractal dimension, and generalized Hurst exponent based estimates were most successful by all criteria; the poorest-performing measures were wavelet, detrended fluctuation analysis, aggregated variance, and rescaled range. Conclusions Functional MRI data have artifacts that interact with complexity calculations in nontrivially distinct ways compared to other physiological data (such as EKG, EEG) for which these measures are typically used. 
Our results clearly demonstrate that decisions regarding choice of algorithm, signal processing, time-series length, and scanner have a significant impact on the reliability and sensitivity of complexity estimates. PMID:23700424
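One of the estimators compared in the study, rescaled-range (R/S) analysis, yields the Hurst exponent as the slope of log(R/S) against log(window size). The bare-bones sketch below is an illustration of that estimator, not the authors' implementation; for uncorrelated noise the estimate should sit near 0.5 (with the well-known upward bias at short window lengths).

```python
import math, random

def hurst_rs(x, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            seg = x[start:start + n]
            mean = sum(seg) / n
            dev, cum, lo, hi = 0.0, 0.0, 0.0, 0.0
            for v in seg:                 # range of cumulative deviations
                cum += v - mean
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (v - mean) ** 2
            s = math.sqrt(dev / n)        # segment standard deviation
            if s > 0:
                rs_vals.append((hi - lo) / s)
        log_n.append(math.log(n))
        log_rs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # least-squares slope of log(R/S) vs log(n) = Hurst estimate
    mx, my = sum(log_n) / len(log_n), sum(log_rs) / len(log_rs)
    num = sum((a - mx) * (b - my) for a, b in zip(log_n, log_rs))
    den = sum((a - mx) ** 2 for a in log_n)
    return num / den

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(1024)]
print(round(hurst_rs(noise), 2))   # white noise: near 0.5
```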
Integrated Systems Biology Approach for Ovarian Cancer Biomarker Discovery — EDRN Public Portal
The overall objective is to validate serum protein markers for early diagnosis of ovarian cancer with the ultimate goal being to develop a multiparametric panel consisting of 2-4 novel markers with 10 known markers for phase 3 analysis. In phase 1, we will screen for markers able to pass a threshold of 98% specificity and 30% sensitivity in a cohort of 300 women. Markers that pass phase 1 validation will be investigated in a phase 2 PRoBE cohort with a 98% specificity and 70% sensitivity cut-off. Finally, markers that pass phase 2 validation will be evaluated in EDRN CVC laboratory specimens with a cut-off of > 98% specificity and 90% sensitivity.
First- and second-order sensitivity analysis of linear and nonlinear structures
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Mroz, Z.
1986-01-01
This paper employs the principle of virtual work to derive sensitivity derivatives of structural response with respect to stiffness parameters using both direct and adjoint approaches. The computations required are based on additional load conditions characterized by imposed initial strains, body forces, or surface tractions. As such, they are equally applicable to numerical or analytical solution techniques. The relative efficiency of various approaches for calculating first and second derivatives is assessed. It is shown that for the evaluation of second derivatives the most efficient approach is one that makes use of both the first-order sensitivities and adjoint vectors. Two example problems are used for demonstrating the various approaches.
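The direct/adjoint distinction above can be seen on a toy problem: for a linear structure K(p)u = f with response g = c·u, the direct method solves an extra system per parameter for du/dp, while the adjoint method solves one system for λ (Kᵀλ = c) and reuses it for every parameter. The sketch below is a hypothetical two-spring instance, not the paper's formulation.

```python
# Toy 2-DOF spring chain: spring 1 has stiffness 1, spring 2 has stiffness p.
# Response g = tip displacement u2; analytically g = 1 + 1/p under unit load.

def solve2(K, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = K[0][0]*K[1][1] - K[0][1]*K[1][0]
    return [(b[0]*K[1][1] - K[0][1]*b[1]) / det,
            (K[0][0]*b[1] - b[0]*K[1][0]) / det]

def K_of(p):
    return [[1 + p, -p], [-p, p]]

dK = [[1, -1], [-1, 1]]            # dK/dp (constant: K is linear in p)
f, c, p = [0.0, 1.0], [0.0, 1.0], 2.0

u = solve2(K_of(p), f)             # displacements
# Direct method: solve K (du/dp) = -(dK/dp) u, then dg/dp = c . du/dp
rhs = [-(dK[0][0]*u[0] + dK[0][1]*u[1]), -(dK[1][0]*u[0] + dK[1][1]*u[1])]
du = solve2(K_of(p), rhs)
dg_direct = c[0]*du[0] + c[1]*du[1]
# Adjoint method: solve K^T lam = c (K symmetric here), dg/dp = -lam.(dK/dp)u
lam = solve2(K_of(p), c)
dg_adjoint = lam[0]*rhs[0] + lam[1]*rhs[1]
print(dg_direct, dg_adjoint)       # both -0.25 (= -1/p**2, the analytic value)
```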
Mukhtar, Hussnain; Lin, Yu-Pin; Shipin, Oleg V.; Petway, Joy R.
2017-01-01
This study presents an approach for obtaining realization sets of parameters for nitrogen removal in a pilot-scale waste stabilization pond (WSP) system. The proposed approach was designed for optimal parameterization, local sensitivity analysis, and global uncertainty analysis of a dynamic simulation model for the WSP by using the R software package Flexible Modeling Environment (R-FME) with the Markov chain Monte Carlo (MCMC) method. Additionally, generalized likelihood uncertainty estimation (GLUE) was integrated into the FME to evaluate the major parameters that affect the simulation outputs in the study WSP. Comprehensive modeling analysis was used to simulate and assess nine parameters and concentrations of ON-N, NH3-N and NO3-N. Results indicate that the integrated FME-GLUE-based model, with good Nash–Sutcliffe coefficients (0.53–0.69) and correlation coefficients (0.76–0.83), successfully simulates the concentrations of ON-N, NH3-N and NO3-N. Moreover, the Arrhenius constant was the only parameter sensitive to model performances of ON-N and NH3-N simulations. However, Nitrosomonas growth rate, the denitrification constant, and the maximum growth rate at 20 °C were sensitive to ON-N and NO3-N simulation, which was measured using global sensitivity. PMID:28704958
Sedentary Behaviour Profiling of Office Workers: A Sensitivity Analysis of Sedentary Cut-Points
Boerema, Simone T.; Essink, Gerard B.; Tönis, Thijs M.; van Velsen, Lex; Hermens, Hermie J.
2015-01-01
Measuring sedentary behaviour and physical activity with wearable sensors provides detailed information on activity patterns and can serve health interventions. At the basis of activity analysis stands the ability to distinguish sedentary from active time. As there is no consensus regarding the optimal cut-point for classifying sedentary behaviour, we studied the consequences of using different cut-points for this type of analysis. We conducted a battery of sitting and walking activities with 14 office workers, wearing the Promove 3D activity sensor to determine the optimal cut-point (in counts per minute (m·s−2)) for classifying sedentary behaviour. Then, 27 office workers wore the sensor for five days. We evaluated the sensitivity of five sedentary pattern measures for various sedentary cut-points and found an optimal cut-point for sedentary behaviour of 1660 × 10−3 m·s−2. Total sedentary time was not sensitive to cut-point changes within ±10% of this optimal cut-point; other sedentary pattern measures were not sensitive to changes within the ±20% interval. The results from studies analyzing sedentary patterns, using different cut-points, can be compared within these boundaries. Furthermore, commercial, hip-worn activity trackers can implement feedback and interventions on sedentary behaviour patterns, using these cut-points. PMID:26712758
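The cut-point sensitivity check above amounts to re-classifying the same accelerometer trace under perturbed thresholds and watching how the summary measures move. A minimal sketch, with simulated activity counts rather than the study's Promove 3D data:

```python
import random

def sedentary_minutes(counts, cut_point):
    """Epochs classified sedentary: activity intensity below the cut-point."""
    return sum(1 for c in counts if c < cut_point)

# Hypothetical day of per-minute activity counts (8 h of epochs)
random.seed(0)
day = [random.expovariate(1 / 1200) for _ in range(480)]

base = 1660                        # the optimal cut-point reported above
for scale in (0.9, 1.0, 1.1):      # vary the cut-point by +/-10%
    t = sedentary_minutes(day, base * scale)
    print(f"cut={base * scale:6.0f}  sedentary minutes={t}")
```

If total sedentary time barely changes across the ±10% band, as the study found, results obtained with slightly different cut-points remain comparable.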
2014-12-26
additive value function, which assumes mutual preferential independence (Gregory S. Parnell, 2013). In other words, this method can be used if the... additive value function method to calculate the aggregate value of multiple objectives. Step 9 : Sensitivity Analysis Once the global values are...gravity metric, the additive method will be applied using equal weights for each axis value function. Pilot Satisfaction (Usability) As expressed
Jiang, Sha-Yi; Yang, Jing-Wei; Shao, Jing-Bo; Liao, Xue-Lian; Lu, Zheng-Hua; Jiang, Hui
2016-05-01
In this meta-analysis, we evaluated the diagnostic role of Epstein-Barr virus deoxyribonucleic acid detection and quantitation in the serum of pediatric and young adult patients with infectious mononucleosis. The primary outcome of this meta-analysis was the sensitivity and specificity of Epstein-Barr virus (EBV) deoxyribonucleic acid (DNA) detection and quantitation using polymerase chain reaction (PCR). A systematic review and meta-analysis was performed by searching for articles that were published through September 24, 2014 in the following databases: Medline, Cochrane, EMBASE, and Google Scholar. The following keywords were used for the search: "Epstein-Barr virus," "infectious mononucleosis," "children/young adults/infant/pediatric," and "polymerase chain reaction or PCR." Three studies were included in this analysis. We found that for detection by PCR, the pooled sensitivity for detecting EBV DNA was 77% (95%CI, 66-86%) and the pooled specificity was 98% (95%CI, 93-100%). Our findings indicate that this PCR-based assay has high specificity and good sensitivity for detecting EBV DNA, indicating it may be useful for identifying patients with infectious mononucleosis. This assay may also be helpful to identify young athletic patients or highly physically active pediatric patients who are at risk for a splenic rupture due to acute infectious mononucleosis. © 2015 Wiley Periodicals, Inc.
Strati, Paolo; Uhm, Joon H; Kaufmann, Timothy J; Nabhan, Chadi; Parikh, Sameer A; Hanson, Curtis A; Chaffee, Kari G; Call, Timothy G; Shanafelt, Tait D
2016-04-01
A broad array of conditions can lead to neurological symptoms in chronic lymphocytic leukemia patients and distinguishing between clinically significant involvement of the central nervous system by chronic lymphocytic leukemia and symptoms due to other etiologies can be challenging. Between January 1999 and November 2014, 172 (4%) of the 4174 patients with chronic lymphocytic leukemia followed at our center had a magnetic resonance imaging of the central nervous system and/or a lumbar puncture to evaluate neurological symptoms. After comprehensive evaluation, the etiology of neurological symptoms was: central nervous system chronic lymphocytic leukemia in 18 patients (10% evaluated by imaging and/or lumbar puncture, 0.4% overall cohort); central nervous system Richter Syndrome in 15 (9% evaluated, 0.3% overall); infection in 40 (23% evaluated, 1% overall); autoimmune/inflammatory conditions in 28 (16% evaluated, 0.7% overall); other cancer in 8 (5% evaluated, 0.2% overall); and another etiology in 63 (37% evaluated, 1.5% overall). Although the sensitivity of cerebrospinal fluid analysis to detect central nervous system disease was 89%, the specificity was only 42% due to the frequent presence of leukemic cells in the cerebrospinal fluid in other conditions. No parameter on cerebrospinal fluid analysis (e.g. total nucleated cells, total lymphocyte count, chronic lymphocytic leukemia cell percentage) was able to offer a reliable discrimination between patients whose neurological symptoms were due to clinically significant central nervous system involvement by chronic lymphocytic leukemia and another etiology. Median overall survival among patients with clinically significant central nervous system chronic lymphocytic leukemia and Richter syndrome was 12 and 11 months, respectively.
In conclusion, clinically significant central nervous system involvement by chronic lymphocytic leukemia is a rare condition, and neurological symptoms in patients with chronic lymphocytic leukemia are due to other etiologies in approximately 80% of cases. Analysis of the cerebrospinal fluid has high sensitivity but limited specificity to distinguish clinically significant chronic lymphocytic leukemia involvement from other etiologies. Copyright© Ferrata Storti Foundation.
Gundogan, Fatih Cakir; Dinç, Umut Asli; Erdem, Uzeyir; Ozge, Gokhan; Sobaci, Gungor
2010-01-01
To study multifocal electroretinogram (mfERG) and its relation to retinal sensitivity assessed by Humphrey visual field (HVF) analysis in central areolar choroidal dystrophy (CACD). Seven eyes of 4 patients with CACD and 15 normal control subjects were examined. mfERG and central 30-2 HVF were tested for each participant. Ring analysis in mfERG was evaluated. HVF results were evaluated in 5 concentric rings in order to compare the results to concentric ring analysis in mfERG. The differences between control subjects and patients were evaluated by Mann-Whitney U test and the correlations were assessed by Spearman test. Mean Snellen acuity was 0.49+/-0.10 in patients. HVF revealed central scotoma in 6 of 7 eyes (85.7%), whereas a paracentral scotoma extending to fixation point was detected in 1 eye. The retinal sensitivities in 5 concentric rings in HVF were significantly lower (p<0.001 for ring 1 to ring 4, and p=0.017 in ring 5) in CACD patients. Similarly, CACD patients had lower P1/N1 amplitudes (p<0.05) and delayed P1/N1 implicit times (p<0.05). In CACD, in the areas of scotoma detected by HVF, mfERG values were depressed. However, both mfERG and HVF abnormalities were also found in ophthalmoscopically normal-appearing retinal areas.
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
ASME B89.4.19 Performance Evaluation Tests and Geometric Misalignments in Laser Trackers
Muralikrishnan, B.; Sawyer, D.; Blackburn, C.; Phillips, S.; Borchardt, B.; Estler, W. T.
2009-01-01
Small and unintended offsets, tilts, and eccentricity of the mechanical and optical components in laser trackers introduce systematic errors in the measured spherical coordinates (angles and range readings) and possibly in the calculated lengths of reference artifacts. It is desirable that the tests described in the ASME B89.4.19 Standard [1] be sensitive to these geometric misalignments so that any resulting systematic errors are identified during performance evaluation. In this paper, we present some analysis, using error models and numerical simulation, of the sensitivity of the length measurement system tests and two-face system tests in the B89.4.19 Standard to misalignments in laser trackers. We highlight key attributes of the testing strategy adopted in the Standard and propose new length measurement system tests that demonstrate improved sensitivity to some misalignments. Experimental results with a tracker that is not properly error corrected for the effects of the misalignments validate claims regarding the proposed new length tests. PMID:27504211
NASA Technical Reports Server (NTRS)
1971-01-01
An investigation into the electrostatic phenomena associated with the manufacturing and handling of explosives is discussed. The testing includes measurement of the severity of the primary charge generation mechanism, triboelectric effects between dissimilar surfaces; refinement of equivalent circuits of the XM15/XM165 and E8 fuse trains; evaluation of the electrostatic spark discharge characteristics predicted by an equivalent circuit analysis; and determination of the spark ignition sensitivity of materials, components, junctions, and subassemblies which compose the XM15/XM165 and E8 units. Special studies were also performed. These special tests included ignition sensitivity of the complete XM15 fuse train when subjected to discharges through its entire length, measurement of electrostatic potentials which occur during the E8 foaming operation during fabrication, and investigation of the inadvertent functioning of an XM15 cluster during manufacturing. The test results are discussed and related to the effectiveness of suggested modification to reduce the electrostatic ignition sensitivity.
Evaluation of sensitivity and selectivity of piezoresistive cantilever-array sensors
NASA Astrophysics Data System (ADS)
Yoshikawa, Genki; Lang, Hans-Peter; Staufer, Urs; Vettiger, Peter; Sakurai, Toshio; Gerber, Christoph
2008-03-01
Microfabricated cantilever-array sensors have attracted much attention in recent years owing to their real-time detection of low concentrations of molecules. Because piezoresistive cantilever-array sensors do not require a bulky and expensive optical read-out system, they have many advantages over optical read-out cantilever-array sensors: they can be miniaturized and integrated into a matchbox-sized device. In this study, we present a piezoresistive cantilever-array sensor system and evaluate its sensitivity and selectivity using various molecular vapors, including alkanes with chain lengths from 5 (n-pentane) to 12 (n-dodecane) carbons. Piezoresistive cantilevers were coated with different polymers (PVP, PAAM, PEI, and PVA) using an inkjet spotter. Each cantilever has a reference cantilever, together constituting a Wheatstone bridge. Each vapor was mixed with a constant nitrogen gas flow and introduced into the measurement chamber. According to principal component analysis of the data obtained, each molecule can be clearly distinguished from the others. We also confirmed that this piezoresistive cantilever-array sensor system has sub-ppm sensitivity.
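The principal-component step can be illustrated with a minimal numpy sketch. The four-channel response matrix below is purely hypothetical (the channels are named after the four polymer coatings only to mirror the setup); it is not measured data.

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X (samples x sensor channels) onto principal components."""
    Xc = X - X.mean(axis=0)            # center each sensor channel
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # scores in PC space

# Hypothetical responses of 4 polymer-coated cantilevers (PVP, PAAM, PEI, PVA)
# to repeated exposures of two alkane vapors; values are illustrative only.
pentane  = np.array([[1.0, 0.4, 0.2, 0.7],
                     [1.1, 0.5, 0.2, 0.6],
                     [0.9, 0.4, 0.3, 0.7]])
dodecane = np.array([[0.3, 1.2, 0.9, 0.2],
                     [0.2, 1.3, 1.0, 0.3],
                     [0.3, 1.1, 0.9, 0.2]])

scores = pca(np.vstack([pentane, dodecane]))
# With distinct response patterns, the two vapors form well-separated
# clusters along the first principal component.
```

In the study, the same idea is applied to the full set of vapors, so that each molecule occupies its own region of the PC score plot.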
Loewenstein, Anat; Ferencz, Joseph R; Lang, Yaron; Yeshurun, Itamar; Pollack, Ayala; Siegal, Ruth; Lifshitz, Tova; Karp, Joseph; Roth, Daniel; Bronner, Guri; Brown, Justin; Mansour, Sam; Friedman, Scott; Michels, Mark; Johnston, Richards; Rapp, Moshe; Havilio, Moshe; Rafaeli, Omer; Manor, Yair
2010-01-01
The primary purpose of this study was to evaluate the ability of a home device preferential hyperacuity perimeter to discriminate between patients with choroidal neovascularization (CNV) and intermediate age-related macular degeneration (AMD), and the secondary purpose was to investigate the dependence of sensitivity on lesion characteristics. All participants were tested with the home device in an unsupervised mode. The first part of this work was retrospective using tests performed by patients with intermediate AMD and newly diagnosed CNV. In the second part, the classifier was prospectively challenged with tests performed by patients with intermediate AMD and newly diagnosed CNV. The dependence of sensitivity on lesion characteristics was estimated with tests performed by patients with CNV of both parts. In 66 eyes with CNV and 65 eyes with intermediate AMD, both sensitivity and specificity were 0.85. In the retrospective part (34 CNV and 43 intermediate AMD), sensitivity and specificity were 0.85 +/- 0.12 (95% confidence interval) and 0.84 +/- 0.11 (95% confidence interval), respectively. In the prospective part (32 CNV and 22 intermediate AMD), sensitivity and specificity were 0.84 +/- 0.13 (95% confidence interval) and 0.86 +/- 0.14 (95% confidence interval), respectively. Chi-square analysis showed no dependence of sensitivity on type (P = 0.44), location (P = 0.243), or size (P = 0.73) of the CNV lesions. A home device preferential hyperacuity perimeter has good sensitivity and specificity in discriminating between patients with newly diagnosed CNV and intermediate AMD. Sensitivity is not dependent on lesion characteristics.
PREVALENCE OF METABOLIC SYNDROME IN YOUNG MEXICANS: A SENSITIVITY ANALYSIS ON ITS COMPONENTS.
Murguía-Romero, Miguel; Jiménez-Flores, J Rafael; Sigrist-Flores, Santiago C; Tapia-Pancardo, Diana C; Jiménez-Ramos, Arnulfo; Méndez-Cruz, A René; Villalobos-Molina, Rafael
2015-07-28
Obesity is a worldwide epidemic, and the high prevalence of type II diabetes (DM2) and cardiovascular disease (CVD) is in great part a consequence of that epidemic. Metabolic syndrome (MetS) is a useful tool to estimate the risk of a young population evolving to DM2 and CVD. We aimed to estimate MetS prevalence in young Mexicans and to evaluate each parameter as an independent indicator through a sensitivity analysis. The prevalence of MetS was estimated in 6,063 young people of the Mexico City metropolitan area. A sensitivity analysis was conducted to estimate the performance of each component of MetS as an indicator of the presence of MetS itself. Five statistics were calculated for each MetS component and the other parameters included: sensitivity, specificity, positive predictive value (precision), negative predictive value, and accuracy. The prevalence of MetS in the young Mexican population was estimated to be 13.4%. Waist circumference presented the highest sensitivity (96.8% women; 90.0% men); blood pressure presented the highest specificity for women (97.7%) and glucose for men (91.0%). When all five statistics are considered, triglycerides is the component with the highest values, reaching 75% or more in four of them. Differences by sex were detected in the averages of all MetS components among young people without alterations. Young Mexicans are highly prone to acquiring MetS: 71% have at least one (and up to five) MetS parameters altered, and 13.4% of them have MetS. Of the five components of MetS, waist circumference presented the highest sensitivity as a predictor of MetS; triglycerides is the best parameter if a single factor is to be taken as the sole predictor of MetS in the young Mexican population, and it is also the parameter with the highest accuracy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
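The five screening statistics named in the abstract follow directly from a 2x2 confusion matrix (component altered vs. MetS present). A minimal sketch; the counts below are illustrative, not the study's data:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Five screening statistics for one MetS component used as a
    predictor of the full MetS diagnosis."""
    return {
        "sensitivity": tp / (tp + fn),          # fraction of MetS cases flagged
        "specificity": tn / (tn + fp),          # fraction of non-cases cleared
        "ppv":         tp / (tp + fp),          # positive predictive value (precision)
        "npv":         tn / (tn + fn),          # negative predictive value
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a single component (not the study's data):
stats = diagnostic_stats(tp=780, fp=1200, fn=26, tn=4057)
```

A component like waist circumference can thus score very high sensitivity (few MetS cases missed) while its precision stays modest, which is why all five statistics are needed to pick a single best predictor.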
Kondo, Takeshi; Fujioka, Mari; Tsuda, Masumi; Murai, Kazunori; Yamaguchi, Kohei; Miyagishima, Takuto; Shindo, Motohiro; Nagashima, Takahiro; Wakasa, Kentaro; Fujimoto, Nozomu; Yamamoto, Satoshi; Yonezumi, Masakatsu; Saito, Souichi; Sato, Shinji; Ogawa, Kazuei; Chou, Takaaki; Watanabe, Reiko; Kato, Yuichi; Takahashi, Shuichiro; Okano, Yoshiaki; Yamamoto, Joji; Ohta, Masatsugu; Iijima, Hiroaki; Oba, Koji; Kishino, Satoshi; Sakamoto, Junichi; Ishida, Yoji; Ohba, Yusuke; Teshima, Takanori
2018-05-02
Tyrosine kinase inhibitors (TKI) are used for primary therapy in patients with newly diagnosed CML. However, a reliable method for optimal selection of a TKI from the viewpoint of drug sensitivity of CML cells has not been established. We have developed a FRET-based drug sensitivity test in which a CrkL-derived fluorescent biosensor efficiently quantifies the kinase activity of BCR-ABL of living cells and sensitively evaluates the inhibitory activity of a TKI against BCR-ABL. Here, we validated the utility of the FRET-based drug sensitivity test carried out at diagnosis for predicting the molecular efficacy. Sixty-two patients with newly diagnosed chronic phase CML were enrolled in this study and treated with dasatinib. Bone marrow cells at diagnosis were subjected to FRET analysis. The ΔFRET value was calculated by subtraction of FRET efficiency in the presence of dasatinib from that in the absence of dasatinib. Treatment response was evaluated every 3 months by the BCR-ABL1 International Scale. Based on the ΔFRET value and molecular response, a threshold of the ΔFRET value in the top 10% of FRET efficiency was set to 0.31. Patients with ΔFRET value ≥0.31 had significantly superior molecular responses (MMR at 6 and 9 months and both MR4 and MR4.5 at 6, 9, and 12 months) compared with the responses in patients with ΔFRET value <0.31. These results suggest that the FRET-based drug sensitivity test at diagnosis can predict early and deep molecular responses. This study is registered with UMIN Clinical Trials Registry (UMIN000006358). © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.
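The ΔFRET decision rule described above can be sketched as follows. Only the 0.31 threshold comes from the abstract; the FRET efficiencies in the usage example are hypothetical:

```python
def delta_fret(e_no_tki, e_tki):
    """ΔFRET = FRET efficiency without dasatinib minus efficiency with dasatinib.
    A large drop means the TKI strongly inhibits BCR-ABL kinase activity."""
    return e_no_tki - e_tki

THRESHOLD = 0.31  # cut-off reported in the study (top 10% of FRET efficiency)

def predicts_deep_response(e_no_tki, e_tki):
    """Patients at or above the threshold had superior molecular responses."""
    return delta_fret(e_no_tki, e_tki) >= THRESHOLD

# Hypothetical patient measurements:
print(predicts_deep_response(0.55, 0.18))  # ΔFRET = 0.37 → predicted good responder
print(predicts_deep_response(0.40, 0.25))  # ΔFRET = 0.15 → below threshold
```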
Banal, F; Dougados, M; Combescure, C; Gossec, L
2009-07-01
To evaluate the ability of the widely used ACR set of criteria (both list and tree format) to diagnose RA compared with expert opinion according to disease duration. A systematic literature review was conducted in the PubMed and Embase databases. All articles reporting the prevalence of RA according to ACR criteria and expert opinion in cohorts of early (<1 year duration) or established (>1 year) arthritis were analysed to calculate the sensitivity and specificity of the ACR 1987 criteria against the "gold standard" (expert opinion). A meta-analysis using a summary receiver operating characteristic (SROC) curve was performed, and pooled sensitivity and specificity were calculated with confidence intervals. Of 138 publications initially identified, 19 were analysable (total 7438 patients, 3883 RA). In early arthritis, pooled sensitivity and specificity of the ACR set of criteria were 77% (68% to 84%) and 77% (68% to 84%) in the list format versus 80% (72% to 88%) and 33% (24% to 43%) in the tree format. In established arthritis, sensitivity and specificity were respectively 79% (71% to 85%) and 90% (84% to 94%) versus 80% (71% to 85%) and 93% (86% to 97%). The SROC meta-analysis confirmed these statistically significant differences, indicating that the diagnostic performance of the ACR list criteria is better in established arthritis. The specificity of the ACR 1987 criteria in early RA is low, and these criteria should not be used as diagnostic tools. Sensitivity and specificity in established RA are higher, which reflects their use as gold-standard classification criteria.
Westcott, Jillian D; Stryhn, Henrik; Burka, John F; Hammell, K Larry
2008-04-01
A bioassay for sea lice Lepeophtheirus salmonis sensitivity towards emamectin benzoate (EMB) was validated for field use. A probit regression model with natural responsiveness was used for the number of affected (moribund or dead) sea lice in bioassays involving different concentrations of EMB. Bioassay optimization included an evaluation of the inter-rater reliability of sea lice responsiveness to EMB and an evaluation of gender-related differences in susceptibility. Adoption of a set of bioassay response criteria improved the concordance (evaluated using the concordance correlation coefficient) between raters' assessments and the model estimation of EC50 values (the 'effective concentration' leading to a response of 50% of the lice not prone to natural response). An evaluation of gender-related differences in EMB susceptibility indicated that preadult stage female sea lice exhibited a significantly larger sensitivity towards EMB in 12 of 19 bioassays compared to preadult males. In order to evaluate sea lice sensitivity to EMB in eastern Canada, the intensive salmon farming area in the Bay of Fundy in southwestern New Brunswick was divided into 4 distinct regions based on industry health management practices and hydrographics. A total of 38 bioassays were completed from 2002 to 2005 using populations of preadult stage sea lice collected from Atlantic salmon Salmo salar farms within the 4 described regions. There was no significant overall effect of region or year on EC50 values; however, analysis of variance indicated a significant effect of time of year on EC50 values in 2002 and a potential effect in 2004 to 2005. Although the range of EC50 values obtained in this 3 yr study did not appear sufficient to affect current clinical success in the control of sea lice, the results suggest a seasonal- or temperature-associated variation in sensitivity to EMB. This will need to be considered if changes in EMB efficacy occur in the future.
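A dose-response model with natural responsiveness, as used in the bioassay above, can be sketched as follows. This is a crude least-squares grid search on invented counts, not the maximum-likelihood probit regression the study used; the concentrations, counts, and parameter grids are illustrative assumptions only.

```python
import math

def phi(x):
    """Standard normal CDF (the probit link)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def probit_response(conc, c, log_ec50, scale):
    """Expected fraction affected = c + (1 - c) * Phi((log10 conc - log10 EC50)/scale),
    where c is the natural responsiveness (lice that respond without drug)."""
    return c + (1 - c) * phi((math.log10(conc) - log_ec50) / scale)

# Hypothetical bioassay: (EMB concentration in ppb, affected lice, exposed lice).
data = [(10.0, 4, 50), (30.0, 9, 50), (100.0, 26, 50),
        (300.0, 42, 50), (1000.0, 49, 50)]

# Crude least-squares grid search over (c, log10 EC50, scale).
best = None
for c in [i / 100 for i in range(0, 21)]:             # c in [0, 0.20]
    for le in [i / 20 for i in range(20, 61)]:        # log10 EC50 in [1, 3]
        for s in [i / 10 for i in range(2, 16)]:      # scale in [0.2, 1.5]
            sse = sum((probit_response(x, c, le, s) - k / n) ** 2
                      for x, k, n in data)
            if best is None or sse < best[0]:
                best = (sse, c, le, s)

# EC50 = concentration affecting 50% of the lice not prone to natural response.
ec50 = 10 ** best[2]
print(round(ec50))
```

The synthetic counts were generated from the model itself with c = 0.05 and EC50 = 100 ppb, so the fit recovers an EC50 near 100.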
NASA Astrophysics Data System (ADS)
Ferrara, R.; Leonardi, G.; Jourdan, F.
2013-09-01
A numerical model to predict train-induced vibrations is presented. The dynamic computation considers mutual interactions in the vehicle/track coupled system by means of a finite- and discrete-element method. Rail defects and the case of out-of-round wheels are considered. The dynamic interaction between the wheel-sets and the rail is modelled using the non-linear Hertzian model with hysteresis damping. A sensitivity analysis is performed to identify the variables that most affect maintenance costs. The rail-sleeper contact is assumed to extend over a defined contact area, rather than a single point, which better matches real case studies. Experimental validation shows that the predictions fit the measured data well.
NASA Astrophysics Data System (ADS)
Tüdeş, Şule; Kumlu, Kadriye Burcu Yavuz
2017-10-01
Each stage of the planning process should be based on natural resource protection, in the sense of environmentally sensitive and sustainable urban planning. Resources that are vital for the continuity of life on Earth, such as soil, water, and forests, should be protected from the undesired effects of pollution and of high urbanization levels. In this context, a GIS-MCDM-based solid waste landfill site selection is applied to Izmir, Turkey, a significant tourist attraction. The Analytic Hierarchy Process (AHP) is used as the multi-criteria decision-making (MCDM) technique. In this study, geological, tectonic, and hydrological data, as well as agricultural land use, slope, and distance to settlement areas and highways, are used as inputs for the AHP analysis. In the analysis stage, these inputs are rated and weighted. The weighted criteria are evaluated in GIS using the weighted overlay tool. An upper-scale analysis is thus conducted, and a map showing alternative locations for solid waste landfill sites, evaluated against environmental and urban criteria with environmental protection in mind, is obtained.
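The AHP rate-and-weight step can be sketched as follows. The pairwise comparison matrix, and the choice of three criteria to compare, are hypothetical illustrations, not the study's actual judgments:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three of the study's criteria,
# here [slope, distance to settlements, hydrology]; a_ij = importance of i over j
# on Saaty's 1-9 scale, with a_ji = 1/a_ij.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

def ahp_weights(A):
    """Priority weights = principal eigenvector of A, normalised to sum to 1."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    return w / w.sum(), vals[k].real

w, lam = ahp_weights(A)
n = A.shape[0]
ci = (lam - n) / (n - 1)   # consistency index
cr = ci / 0.58             # consistency ratio; random index RI = 0.58 for n = 3
# A CR below 0.1 means the judgments are acceptably consistent; the weights w
# can then drive a GIS weighted-overlay of the reclassified criterion rasters.
```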
Barreiros, Willian; Teodoro, George; Kurc, Tahsin; Kong, Jun; Melo, Alba C. M. A.; Saltz, Joel
2017-01-01
We investigate efficient sensitivity analysis (SA) of algorithms that segment and classify image features in a large dataset of high-resolution images. Algorithm SA is the process of evaluating variations of methods and parameter values to quantify differences in the output. An SA can be very computationally demanding because it requires re-processing the input dataset several times with different parameters to assess variations in output. In this work, we introduce strategies to speed up SA via runtime optimizations targeting distributed hybrid systems and reuse of computations from runs with different parameters. We evaluate our approach using a cancer image analysis workflow on a hybrid cluster with 256 nodes, each with an Intel Phi and a dual-socket CPU. The SA attained a parallel efficiency of over 90% on 256 nodes. Cooperative execution using the CPUs and the Phi available in each node, with smart task assignment strategies, resulted in an additional speedup of about 2×. Finally, multi-level computation reuse led to an additional speedup of up to 2.46× on the parallel version. The level of performance attained with the proposed optimizations will allow the use of SA in large-scale studies. PMID:29081725
Microsatellite instability in prostate cancer by PCR or next-generation sequencing.
Hempelmann, Jennifer A; Lockwood, Christina M; Konnick, Eric Q; Schweizer, Michael T; Antonarakis, Emmanuel S; Lotan, Tamara L; Montgomery, Bruce; Nelson, Peter S; Klemfuss, Nola; Salipante, Stephen J; Pritchard, Colin C
2018-04-17
Microsatellite instability (MSI) is now being used as a sole biomarker to guide immunotherapy treatment for men with advanced prostate cancer. Yet current molecular diagnostic tests for MSI have not been evaluated for use in prostate cancer. We evaluated two next-generation sequencing (NGS) MSI-detection methods, MSIplus (18 markers) and MSI by Large Panel NGS (> 60 markers), and compared the performance of each NGS method to the most widely used 5-marker MSI-PCR detection system. All methods were evaluated by comparison to targeted whole gene sequencing of DNA mismatch-repair genes, and immunohistochemistry for mismatch repair genes, where available. In a set of 91 prostate tumors with known mismatch repair status (29-deficient and 62-intact mismatch-repair) MSIplus had a sensitivity of 96.6% (28/29) and a specificity of 100% (62/62), MSI by Large Panel NGS had a sensitivity of 93.1% (27/29) and a specificity of 98.4% (61/62), and MSI-PCR had a sensitivity of 72.4% (21/29) and a specificity of 100% (62/62). We found that the widely used 5-marker MSI-PCR panel has inferior sensitivity when applied to prostate cancer and that NGS testing with an expanded panel of markers performs well. In addition, NGS methods offer advantages over MSI-PCR, including no requirement for matched non-tumor tissue and an automated analysis pipeline with quantitative interpretation of MSI-status.
Highly sensitive and quantitative evaluation of the EGFR T790M mutation by nanofluidic digital PCR.
Iwama, Eiji; Takayama, Koichi; Harada, Taishi; Okamoto, Isamu; Ookubo, Fumihiko; Kishimoto, Junji; Baba, Eishi; Oda, Yoshinao; Nakanishi, Yoichi
2015-08-21
The mutation of T790M in EGFR is a major mechanism of resistance to treatment with EGFR-TKIs. Only qualitative detection (presence or absence) of T790M has been described to date, however. Digital PCR (dPCR) analysis has recently been applied to the quantitative detection of target molecules in cancer with high sensitivity. In the present study, 25 tumor samples (13 obtained before and 12 after EGFR-TKI treatment) from 18 NSCLC patients with activating EGFR mutations were evaluated for T790M with dPCR. The ratio of the number of T790M alleles to that of activating mutation alleles (T/A) was determined. dPCR detected T790M in all 25 samples. Although T790M was present in all pre-TKI samples from 13 patients, 10 of these patients had a low T/A ratio and manifested substantial tumor shrinkage during treatment with EGFR-TKIs. In six of seven patients for whom both pre- and post-TKI samples were available, the T/A ratio increased markedly during EGFR-TKI treatment. Highly sensitive dPCR thus detected T790M in all NSCLC patients harboring activating EGFR mutations whether or not they had received EGFR-TKI treatment. Not only highly sensitive but also quantitative detection of T790M is important for evaluation of the contribution of T790M to EGFR-TKI resistance.
Illek, Beate; Lei, Dachuan; Fischer, Horst; Gruenert, Dieter C
2010-01-01
While the Cl(-) efflux assays are relatively straightforward, their ability to assess the efficacy of phenotypic correction in cystic fibrosis (CF) tissue or cells may be limited. Accurate assessment of therapeutic efficacy, i.e., correlating wild type CF transmembrane conductance regulator (CFTR) levels with phenotypic correction in tissue or individual cells, requires a sensitive assay. Radioactive chloride ((36)Cl) efflux was compared to Ussing chamber analysis for measuring cAMP-dependent Cl(-) transport in mixtures of human normal (16HBE14o-) and cystic fibrosis (CF) (CFTE29o- or CFBE41o-, respectively) airway epithelial cells. Cell mixtures with decreasing amounts of 16HBE14o- cells were evaluated. Efflux and Ussing chamber studies on mixed populations of normal and CF airway epithelial cells showed that, as the number of CF cells within the population was progressively increased, the cAMP-dependent Cl(-) decreased. The (36)Cl efflux assay was effective for measuring Cl(-) transport when ≥ 25% of the cells were normal. If < 25% of the cells were phenotypically wild-type (wt), the (36)Cl efflux assay was no longer reliable. Polarized CFBE41o- cells, also homozygous for the ΔF508 mutation, were used in the Ussing chamber studies. Ussing analysis detected cAMP-dependent Cl(-) currents in mixtures with ≥1% wild-type cells indicating that Ussing analysis is more sensitive than (36)Cl efflux analysis for detection of functional CFTR. Assessment of CFTR function by Ussing analysis is more sensitive than (36)Cl efflux analysis. Ussing analysis indicates that cell mixtures containing 10% 16HBE14o- cells showed 40-50% of normal cAMP-dependent Cl(-) transport that drops off exponentially between 10-1% wild-type cells. Copyright © 2010 S. Karger AG, Basel.
2013-12-19
[Only table-of-contents fragments survive for this record: "An Approach for Evaluating System-of-Systems Operational Benefits"; sensitivity of flight delay under IMC; sensitivity of the delay of each of the four segments; "Generic SoS node behaviors".]
The United States Environmental Protection Agency's National Exposure Research Laboratory has initiated a project to improve the methodology for modeling human exposure to motor vehicle emissions. The overall project goal is to develop improved methods for modeling the source t...
A microscale emission factor model (MicroFacPM) for predicting real-time site-specific motor vehicle particulate matter emissions was presented in the companion paper entitled "Development of a Microscale Emission Factor Model for Particulate Matter (MicroFacPM) for Predicting Re...
Global sensitivity analysis of a dynamic agroecosystem model under different irrigation treatments
USDA-ARS?s Scientific Manuscript database
Savings in consumptive use through limited or deficit irrigation in agriculture has become an increasingly viable source of additional water for places with high population growth such as the Colorado Front Range, USA. Crop models provide a mechanism to evaluate various management methods without pe...
USDA-ARS?s Scientific Manuscript database
Computer Monte Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of the gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity and minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
Theodore, M Jordan; Anderson, Raydel D; Wang, Xin; Katz, Lee S; Vuong, Jeni T; Bell, Melissa E; Juni, Billie A; Lowther, Sara A; Lynfield, Ruth; MacNeil, Jessica R; Mayer, Leonard W
2012-04-01
PCR detecting the protein D (hpd) and fuculose kinase (fucK) genes showed high sensitivity and specificity for identifying Haemophilus influenzae and differentiating it from H. haemolyticus. Phylogenetic analysis using the 16S rRNA gene demonstrated two distinct groups for H. influenzae and H. haemolyticus.
NASA Astrophysics Data System (ADS)
Pan, Boan; Liu, Weichao; Fang, Xiang; Huang, Xiaobo; Li, Ting
2018-02-01
Brain death is defined as the permanent loss of brain function. Its evaluation has many implications, such as relieving the pressure on organ transplantation and the burden on families, but it is hard to judge precisely. The standard clinical tests are expensive, time consuming, and even dangerous, and some auxiliary methods have limitations. Functional near-infrared spectroscopy (fNIRS), which noninvasively monitors cerebral hemodynamic responses, has been used to evaluate brain death in some published papers, but there has been no discussion of which experimental mode monitors brain-death patients most sensitively. Here, we attempt to use our fNIRS system to evaluate brain death and to determine which experimental mode is effective. To address this question, we measured eleven brain-death patients and twenty normal patients in a natural state. They were provided different fractions of inspired O2 (FIO2) in different phases. We found that the ratio of Δ[HbO2] (the concentration change in oxyhemoglobin) to Δ[Hb] (the concentration change in deoxyhemoglobin) in brain-death patients is significantly higher than in normal patients in the FIO2 experiment. The data analysis suggests that an oxygen-restoration protocol with a low-high-low FIO2 paradigm is the more sensitive mode.
Ahn, C Y; DeBruhl, N D; Gorczyca, D P; Shaw, W W; Bassett, L W
1994-10-01
With the current controversy regarding the safety of silicone implants, the detection and evaluation of implant rupture are causing concern for both plastic surgeons and patients. Our study provides a comparative analysis of the value of mammography, sonography, and magnetic resonance imaging (MRI) in the detection of silicone implant rupture. Twenty-nine symptomatic patients (a total of 59 silicone implants) were entered into the study. Intraoperative findings revealed 21 ruptured implants (36 percent). During physical examination, a positive "squeeze test" was highly suggestive of implant rupture. Mammograms were obtained of 51 implants (sensitivity 11 percent, specificity 89 percent). Sonography was performed on 57 implants (sensitivity 70 percent, specificity 92 percent). MRI was performed on 55 implants (sensitivity 81 percent, specificity 92 percent). Sonographically, implant rupture is demonstrated by the "stepladder sign." Double-lumen implants may give false-positive results for rupture on sonography. On MRI, the "linguine sign" represents disrupted fragments of a ruptured implant. The most reliable imaging modality for implant rupture detection is MRI, followed by sonography; mammography is the least reliable. Our study supports the clinical indication and diagnostic value of sonography and MRI in the evaluation of symptomatic breast implant patients.
Sensitivity Analysis in Engineering
NASA Technical Reports Server (NTRS)
Adelman, Howard M. (Compiler); Haftka, Raphael T. (Compiler)
1987-01-01
The symposium proceedings presented focused primarily on sensitivity analysis of structural response. However, the first session, entitled, General and Multidisciplinary Sensitivity, focused on areas such as physics, chemistry, controls, and aerodynamics. The other four sessions were concerned with the sensitivity of structural systems modeled by finite elements. Session 2 dealt with Static Sensitivity Analysis and Applications; Session 3 with Eigenproblem Sensitivity Methods; Session 4 with Transient Sensitivity Analysis; and Session 5 with Shape Sensitivity Analysis.
Korošec, Peter; Šilar, Mira; Kopač, Peter; Eržen, Renato; Zidarn, Mihaela; Košnik, Mitja
2016-01-01
We sought to determine whether basophil-allergen sensitivity could be transferred to donor basophils by passive IgE sensitisation in allergic rhinitis and anaphylactic Hymenoptera venom hypersensitivity. We studied 15 wasp venom-, 19 grass pollen- and 2 house dust mite-allergic patients, 2 healthy donors, and 8 wasp venom-allergic donors. In all subjects, we first evaluated the initial basophil response to wasp venom, grass pollen, or house dust mite allergen. Donor basophils were then stripped, sensitised with the different patients' serum IgE, and challenged with the corresponding allergen. The CD63 response of donor basophils was then compared with initial basophil responses. In wasp venom-allergic subjects, the IgE transfer did not reflect the initial basophil-allergen sensitivity, because the venom IgE of subjects with high or low basophil sensitivity induced comparable responsiveness in healthy donor basophils. Conversely, when we sensitised the donor basophils of wasp venom-allergic individuals with different wasp venom or house dust mite IgE, we demonstrated that their response was predictable from their initial basophil allergen sensitivity. In the rhinitis allergy model, the IgE transfer correlated with the patients' initial basophil responsiveness, because the grass pollen IgE of subjects with high basophil allergen sensitivity induced significantly higher responsiveness of donor basophils than the IgE of subjects with initially low basophil allergen sensitivity. Our results suggest that basophil allergen sensitivity evaluated by flow-cytometric CD63 analysis depends on two distinct contributing factors. In anaphylactic Hymenoptera allergy, the major factor was intrinsic cellular sensitivity, whereas in pollen allergy, the major factor was allergen-specific IgE on the cell surface. © 2016 S. Karger AG, Basel.
Liu, Richard T; Burke, Taylor A; Abramson, Lyn Y; Alloy, Lauren B
2017-11-04
Behavioral Approach System (BAS) sensitivity has been implicated in the development of a variety of different psychiatric disorders. Prominent among these in the empirical literature are bipolar spectrum disorders (BSDs). Given that adolescence represents a critical developmental stage of risk for the onset of BSDs, it is important to clarify the latent structure of BAS sensitivity in this period of development. A statistical approach especially well-suited for delineating the latent structure of BAS sensitivity is taxometric analysis, which is designed to evaluate whether the latent structure of a construct is taxonic (i.e., categorical) or dimensional (i.e., continuous) in nature. The current study applied three mathematically non-redundant taxometric procedures (i.e., MAMBAC, MAXEIG, and L-Mode) to a large community sample of adolescents (n = 12,494) who completed two separate measures of BAS sensitivity: the BIS/BAS Scales (Carver and White, Journal of Personality and Social Psychology, 67, 319-333, 1994) and the Sensitivity to Reward and Sensitivity to Punishment Questionnaire (Torrubia et al., Personality and Individual Differences, 31, 837-862, 2001). Given the significant developmental changes in reward sensitivity that occur across adolescence, the current investigation aimed to provide a fine-grained evaluation of the data by performing taxometric analyses at an age-by-age level (14-19 years; n for each age ≥ 883). Results derived from taxometric procedures, across all ages tested, were highly consistent, providing strong evidence that BAS sensitivity is best conceptualized as dimensional in nature. Thus, the findings suggest that BAS-related vulnerability to BSDs exists along a continuum of severity, with no natural cut-point qualitatively differentiating high- and low-risk adolescents. Clinical and research implications for the assessment of BSD-related vulnerability are discussed.
Sensitivity assessment of freshwater macroinvertebrates to pesticides using biological traits.
Ippolito, A; Todeschini, R; Vighi, M
2012-03-01
Assessing the sensitivity of different species to chemicals is one of the key points in predicting the effects of toxic compounds in the environment. Trait-based prediction methods have proved to be extremely efficient for assessing the sensitivity of macroinvertebrates toward compounds with nonspecific toxicity (narcotics). Nevertheless, predicting the sensitivity of organisms toward compounds with specific toxicity is much more complex, since it depends on the mode of action of the chemical. The aim of this work was to predict the sensitivity of several freshwater macroinvertebrates toward three classes of plant protection products: organophosphates, carbamates and pyrethroids. Two databases were built: one with sensitivity data (retrieved, evaluated and selected from the U.S. Environmental Protection Agency ECOTOX database) and the other with biological traits. Aside from the "traditional" traits usually considered in ecological analysis (e.g. body size, respiration technique, feeding habits), multivariate analysis was used to relate the sensitivity of organisms to other characteristics that may be involved in the process of intoxication. Results confirmed that, besides traditional biological traits related to uptake capability (e.g. body size and body shape), some traits more closely related to particular metabolic characteristics or patterns have good predictive capacity for sensitivity to these kinds of toxic substances. For example, behavioral complexity, taken as an indicator of nervous system complexity, proved to be an important predictor of sensitivity towards these compounds. These results confirm the need for more complex traits to predict the effects of highly specific substances. One key point for achieving a complete mechanistic understanding of the process is the choice of traits, whose role in discriminating sensitivity should be clearly interpretable, and not only statistically significant.
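A trait-based sensitivity model of the kind described can be sketched as an ordinary least-squares regression of a species sensitivity score on trait values. All trait names, codings, and effect sizes below are hypothetical, not taken from the authors' databases.

```python
import numpy as np

# Regress a synthetic sensitivity score (e.g. -log LC50) on scaled
# trait values; the data are simulated so that behavioural complexity
# carries most of the signal, mimicking the pattern described above.
rng = np.random.default_rng(4)
n = 60
body_size = rng.uniform(0, 1, n)         # scaled body size
respiration = rng.integers(0, 2, n)      # 0 = tegument, 1 = gills
behav_complex = rng.uniform(0, 1, n)     # nervous-system complexity proxy
sensitivity = (0.5 * body_size + 0.2 * respiration
               + 1.5 * behav_complex + rng.normal(scale=0.2, size=n))

X = np.column_stack([body_size, respiration, behav_complex, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, sensitivity, rcond=None)
# coef[2] (behavioural complexity) dominates in this synthetic example,
# mirroring the kind of pattern the multivariate analysis looks for.
```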
Saito, Masahide; Sano, Naoki; Shibata, Yuki; Kuriyama, Kengo; Komiyama, Takafumi; Marino, Kan; Aoki, Shinichi; Ashizawa, Kazunari; Yoshizawa, Kazuya; Onishi, Hiroshi
2018-05-01
The purpose of this study was to compare the MLC error sensitivity of various measurement devices for VMAT pre-treatment quality assurance (QA). This study used four QA devices (Scandidos Delta4, PTW 2D-array, iRT systems IQM, and PTW Farmer chamber). Nine retrospective VMAT plans were used, and nine MLC error plans were generated for each of the nine original VMAT plans. The IQM and Farmer chamber were evaluated using the cumulative signal difference between the baseline and error-induced measurements. In addition, to investigate the sensitivity of the Delta4 device and the 2D-array, global gamma analysis (1%/1 mm, 2%/2 mm, and 3%/3 mm) and dose difference (DD; 1%, 2%, and 3%) were used between the baseline and error-induced measurements. Sensitivity to MLC errors varied across the evaluation metrics and MLC error ranges. For the two ionization devices, the sensitivity of the IQM was significantly better than that of the Farmer chamber (P < 0.01), while both devices showed a good linear correlation between the cumulative signal difference and the magnitude of MLC errors. The pass rates decreased as the magnitude of the MLC error increased for both the Delta4 and the 2D-array. However, small MLC errors for small aperture sizes, such as in lung SBRT, could not be detected using the loosest gamma criteria (3%/3 mm). Our results indicate that DD could be more useful than gamma analysis for daily MLC QA, and that a large-area ionization chamber has a greater advantage for detecting systematic MLC errors because of its large sensitive volume, while the other devices could not detect this error in some cases with a small range of MLC error. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
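A minimal 1D version of the global gamma analysis used above can be sketched as follows. The dose profile, criteria, and the 2% scaling error are illustrative, not from the study; they simply show how loose criteria can pass an error that tight criteria catch.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, spacing_mm,
                    dose_crit_pct, dist_crit_mm, threshold_pct=10.0):
    """Simplified 1D global gamma analysis.

    For each reference point above a low-dose threshold, gamma is the
    minimum over evaluated points of
    sqrt((dose diff / dose criterion)^2 + (distance / DTA criterion)^2);
    the pass rate is the percentage of points with gamma <= 1. "Global"
    means the dose criterion is a percentage of the reference maximum.
    """
    ref, ev = np.asarray(ref_dose, float), np.asarray(eval_dose, float)
    x = np.arange(len(ev)) * spacing_mm
    dose_crit = dose_crit_pct / 100.0 * ref.max()
    gammas = []
    for i in np.flatnonzero(ref >= threshold_pct / 100.0 * ref.max()):
        dd = (ev - ref[i]) / dose_crit
        dx = (x - i * spacing_mm) / dist_crit_mm
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return 100.0 * np.mean(np.array(gammas) <= 1.0)

# A Gaussian dose profile with a 2% global scaling error: the loose
# criteria pass everywhere, the tight criteria fail in the high-dose core.
x = np.linspace(-30, 30, 121)                 # 0.5 mm grid
ref = 100.0 * np.exp(-x**2 / (2 * 8.0**2))
ev = 1.02 * ref
loose = gamma_pass_rate(ref, ev, 0.5, 3.0, 3.0)   # 3%/3 mm
tight = gamma_pass_rate(ref, ev, 0.5, 1.0, 1.0)   # 1%/1 mm
```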
The Role of Attention in Somatosensory Processing: A Multi-trait, Multi-method Analysis
Puts, Nicolaas A. J.; Mahone, E. Mark; Edden, Richard A. E.; Tommerdahl, Mark; Mostofsky, Stewart H.
2016-01-01
Sensory processing abnormalities in autism have largely been described by parent report. This study used a multi-method (parent-report and measurement), multi-trait (tactile sensitivity and attention) design to evaluate somatosensory processing in ASD. Results showed multiple significant within-method (e.g., parent report of different traits)/cross-trait (e.g., attention and tactile sensitivity) correlations, suggesting that parent-reported tactile sensory dysfunction and performance-based tactile sensitivity describe different behavioral phenomena. Additionally, both parent-reported tactile functioning and performance-based tactile sensitivity measures were significantly associated with measures of attention. Findings suggest that sensory (tactile) processing abnormalities in ASD are multifaceted, and may partially reflect a more global deficit in behavioral regulation (including attention). Challenges of relying solely on parent-report to describe sensory difficulties faced by children/families with ASD are also highlighted. PMID:27448580
NASA Astrophysics Data System (ADS)
Sathiyaraj, P.; Samuel, E. James jebaseelan
2018-01-01
The aim of this study is to evaluate the methacrylic acid, gelatin and tetrakis (hydroxymethyl) phosphonium chloride gel (MAGAT) using cone beam computed tomography (CBCT) attached to a modern linear accelerator. To compare the results of standard diagnostic computed tomography (CT) with CBCT, different parameters such as linearity, sensitivity and temporal stability were checked. MAGAT gel showed good linearity for both diagnostic CT and CBCT measurements. Sensitivity and temporal stability were also comparable with diagnostic CT measurements. In both modalities, the sensitivity of the MAGAT gel increased up to 4 days post-irradiation and then decreased until the 10th day. Since all measurements (linearity, sensitivity and temporal stability) from diagnostic CT and CBCT were comparable, CBCT could be a potential tool for dose analysis studies with polymer gel dosimeters.
Behavior sensitivities for control augmented structures
NASA Technical Reports Server (NTRS)
Manning, R. A.; Lust, R. V.; Schmit, L. A.
1987-01-01
During the past few years it has been recognized that combining passive structural design methods with active control techniques offers the prospect of being able to find substantially improved designs. These developments have stimulated interest in augmenting structural synthesis by adding active control system design variables to those usually considered in structural optimization. An essential step in extending the approximation concepts approach to control augmented structural synthesis is the development of a behavior sensitivity analysis capability for determining rates of change of dynamic response quantities with respect to changes in structural and control system design variables. Behavior sensitivity information is also useful for man-machine interactive design as well as in the context of system identification studies. Behavior sensitivity formulations for both steady state and transient response are presented and the quality of the resulting derivative information is evaluated.
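Behavior sensitivities of the kind described, rates of change of response quantities with respect to structural or control design variables, are often checked against finite-difference approximations. A minimal sketch for a single-degree-of-freedom system follows; the stiffness and mass values are hypothetical.

```python
import math

def nat_freq(k, m):
    """Undamped natural frequency (rad/s) of a 1-DOF spring-mass system."""
    return math.sqrt(k / m)

def fd_sensitivity(f, x, rel_step=1e-6):
    """Central finite-difference behavior sensitivity df/dx."""
    h = rel_step * abs(x)
    return (f(x + h) - f(x - h)) / (2.0 * h)

k, m = 4.0e5, 100.0                        # N/m, kg (hypothetical values)
dw_dk = fd_sensitivity(lambda kk: nat_freq(kk, m), k)
analytic = 1.0 / (2.0 * math.sqrt(k * m))  # d(sqrt(k/m))/dk, for comparison
```

Comparing the finite-difference value against the analytic derivative is the usual way to evaluate "the quality of the resulting derivative information" mentioned above.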
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making and are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
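The multivariate idea, estimating confidence in the recommended policy under joint parameter uncertainty, can be sketched with a toy MDP. The disease model, rewards, and parameter distributions below are invented for illustration and are not the authors' case study.

```python
import numpy as np

def solve_mdp(p_progress, disutility, discount=0.97, iters=300):
    """Value iteration for a toy 3-state disease MDP (a sketch, not the
    authors' model). States: 0=mild, 1=severe, 2=dead (absorbing).
    Actions: 0=wait, 1=treat; treating halves the progression
    probability but costs `disutility` in quality of life per cycle.
    Returns the optimal action for the two living states."""
    V = np.zeros(3)
    rewards = np.array([[1.0, 1.0 - disutility],    # mild
                        [0.6, 0.6 - disutility],    # severe
                        [0.0, 0.0]])                # dead
    for _ in range(iters):
        Q = np.empty((3, 2))
        for a in (0, 1):
            p = p_progress * (0.5 if a else 1.0)
            T = np.array([[1 - p, p, 0.0],          # mild -> severe
                          [0.0, 1 - p, p],          # severe -> dead
                          [0.0, 0.0, 1.0]])         # dead absorbs
            Q[:, a] = rewards[:, a] + discount * T @ V
        V = Q.max(axis=1)
    return tuple(Q.argmax(axis=1)[:2])

base_policy = solve_mdp(0.20, 0.05)

# Probabilistic multivariate sensitivity analysis: sample joint
# parameter uncertainty and count how often the base-case optimal
# policy stays optimal -- one point on a "policy acceptability curve".
rng = np.random.default_rng(1)
n_samples = 200
matches = sum(solve_mdp(rng.beta(20, 80), rng.gamma(25, 0.002)) == base_policy
              for _ in range(n_samples))
confidence = matches / n_samples
```

Sweeping a willingness-to-accept threshold over such confidence values would trace out the acceptability curve described above.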
Hutsell, Blake A; Negus, S Stevens; Banks, Matthew L
2015-01-01
We have previously demonstrated reductions in cocaine choice produced either by continuous 14-day treatment with phendimetrazine or d-amphetamine or by removing cocaine availability under a cocaine vs. food choice procedure in rhesus monkeys. The aim of the present investigation was to apply the concatenated generalized matching law (GML) to cocaine vs. food choice dose-effect functions incorporating sensitivity to both the relative magnitude and price of each reinforcer. Our goal was to determine potential behavioral mechanisms underlying pharmacological treatment efficacy to decrease cocaine choice. A multi-model comparison approach was used to characterize dose- and time-course effects of both pharmacological and environmental manipulations on sensitivity to reinforcement. GML models provided an excellent fit to the cocaine choice dose-effect functions in individual monkeys. Reductions in cocaine choice by both pharmacological and environmental manipulations were principally produced by systematic decreases in sensitivity to reinforcer price and non-systematic changes in sensitivity to reinforcer magnitude. The modeling approach used provides a theoretical link between the experimental analysis of choice and pharmacological treatments being evaluated as candidate 'agonist-based' medications for cocaine addiction. The analysis suggests that monoamine releaser treatment efficacy to decrease cocaine choice was mediated by selectively increasing the relative price of cocaine. Overall, the net behavioral effect of these pharmacological treatments was to increase substitutability of food pellets, a nondrug reinforcer, for cocaine. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
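One common form of the concatenated GML expresses log choice ratios as a weighted sum of log reinforcer-magnitude and log price ratios plus a bias term. A sketch of fitting its sensitivity parameters by ordinary least squares to synthetic choice data (all parameter values hypothetical):

```python
import numpy as np

# Concatenated generalized matching law (GML), one common form:
#   log(B_drug/B_food) = aM*log(M_drug/M_food) - aP*log(P_drug/P_food) + log(b)
# with B = behavior allocation, M = reinforcer magnitude, P = price,
# aM/aP = sensitivity parameters, and b = bias.
rng = np.random.default_rng(2)
n = 40
log_mag = rng.uniform(-1, 1, n)       # log magnitude ratios (drug/food)
log_price = rng.uniform(-1, 1, n)     # log price ratios (drug/food)
true_aM, true_aP, true_logb = 1.2, 0.8, 0.1
log_choice = (true_aM * log_mag - true_aP * log_price + true_logb
              + rng.normal(scale=0.05, size=n))

# Recover the sensitivity parameters by ordinary least squares
X = np.column_stack([log_mag, -log_price, np.ones(n)])
(aM, aP, logb), *_ = np.linalg.lstsq(X, log_choice, rcond=None)
```

A treatment-induced drop in the fitted price-sensitivity parameter, as in this study, would mean price changes shift choice less strongly.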
Angkasekwinai, Nasikarn; Atkins, Erin H.; Romero, Sofia; Grieco, John; Chao, Chien Chung; Ching, Wei Mei
2014-01-01
Reliable laboratory testing is of great importance to detect Bartonella bacilliformis infection. We evaluated the sensitivity and specificity of the enzyme-linked immunosorbent assay (ELISA) using recombinant protein Pap31 (rPap31) for the detection of antibodies against B. bacilliformis as compared with the immunofluorescent assay (IFA). Of the 302 sera collected between 1997 and 2000 from an at-risk Peruvian population, 103 and 34 samples tested positive for IFA-immunoglobulin G (IgG) and IFA-IgM, respectively. Using Youden's index, the cutoff value of ELISA-IgG at 0.915 gave a sensitivity of 84.5% and specificity of 94%. The cutoff value of ELISA-IgM at 0.634 gave a sensitivity of 88.2% and specificity of 85.1%. Using latent class analysis, estimates of sensitivity and specificity of almost all the assays were slightly higher than those from the conventional method of calculation. The test proved beneficial for discriminating between infected and non-infected individuals, with the advantages of low cost and high-throughput capability. PMID:24515944
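Youden's index, J = sensitivity + specificity - 1, picks the assay cutoff that maximizes J. A sketch with simulated optical densities (not the study's data; the distributions and counts are invented):

```python
import numpy as np

def youden_cutoff(scores, labels):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1.

    `scores`: continuous assay readings (e.g. ELISA optical density);
    `labels`: 1 = reference positive, 0 = reference negative
    (here, the comparator would be the IFA result).
    """
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    n_pos, n_neg = (labels == 1).sum(), (labels == 0).sum()
    best = (-1.0, None, None, None)
    for cut in np.unique(scores):
        pred = scores >= cut
        sens = (pred & (labels == 1)).sum() / n_pos
        spec = (~pred & (labels == 0)).sum() / n_neg
        j = sens + spec - 1.0
        if j > best[0]:
            best = (j, cut, sens, spec)
    return best[1], best[2], best[3]

# Simulated optical densities: positives read higher than negatives
rng = np.random.default_rng(3)
scores = np.concatenate([rng.normal(0.4, 0.15, 200),   # reference negatives
                         rng.normal(1.0, 0.20, 100)])  # reference positives
labels = np.concatenate([np.zeros(200, int), np.ones(100, int)])
cut, sens, spec = youden_cutoff(scores, labels)
```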
Metrics for Evaluating the Accuracy of Solar Power Forecasting: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, J.; Hodge, B. M.; Florita, A.
2013-10-01
Forecasting solar energy generation is a challenging task due to the variety of solar power systems and weather regimes encountered. Forecast inaccuracies can result in substantial economic losses and power system reliability issues. This paper presents a suite of generally applicable and value-based metrics for solar forecasting for a comprehensive set of scenarios (i.e., different time horizons, geographic locations, applications, etc.). In addition, a comprehensive framework is developed to analyze the sensitivity of the proposed metrics to three types of solar forecasting improvements using a design of experiments methodology, in conjunction with response surface and sensitivity analysis methods. The results show that the developed metrics can efficiently evaluate the quality of solar forecasts, and assess the economic and reliability impact of improved solar forecasting.
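Typical building blocks for such a metrics suite are RMSE, MAE, bias (MBE), and capacity-normalized RMSE; a sketch with illustrative numbers (not the paper's data):

```python
import numpy as np

def forecast_metrics(forecast, actual, capacity):
    """A few standard solar-forecast accuracy metrics.

    RMSE penalizes large errors, MAE weighs all errors equally, MBE
    exposes systematic over- or under-forecasting, and normalizing
    RMSE by plant capacity makes differently sized sites comparable.
    """
    err = np.asarray(forecast, float) - np.asarray(actual, float)
    rmse = float(np.sqrt(np.mean(err**2)))
    return {"rmse": rmse,
            "mae": float(np.mean(np.abs(err))),
            "mbe": float(np.mean(err)),
            "nrmse_pct": 100.0 * rmse / capacity}

# One illustrative day of hourly plant output (MW); values are made up
actual = np.array([0.0, 12.0, 45.0, 80.0, 95.0, 60.0, 20.0, 0.0])
forecast = np.array([0.0, 15.0, 40.0, 85.0, 90.0, 65.0, 18.0, 0.0])
m = forecast_metrics(forecast, actual, capacity=100.0)
```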
NASA Astrophysics Data System (ADS)
Yao, Yanbo; Duan, Xiaoshuang; Luo, Jiangjiang; Liu, Tao
2017-11-01
The use of the van der Pauw (VDP) method for characterizing and evaluating the piezoresistive behavior of carbon nanomaterial enabled piezoresistive sensors has not been systematically studied. By using single-wall carbon nanotube (SWCNT) thin films as a model system, herein we report a coupled electrical-mechanical experimental study in conjunction with a multiphysics finite element simulation as well as an analytic analysis to compare the two-probe and VDP testing configurations in evaluating the piezoresistive behavior of carbon nanomaterial enabled piezoresistive sensors. The key features regarding the sample aspect ratio dependent piezoresistive sensitivity or gauge factor were identified for the VDP testing configuration. It was found that the VDP test configuration offers consistently higher piezoresistive sensitivity than the two-probe testing method.
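The VDP method recovers sheet resistance from two four-probe resistance measurements by solving exp(-πR_A/R_s) + exp(-πR_B/R_s) = 1. A sketch with a bisection solver; the resistance and strain values are hypothetical:

```python
import math

def vdp_sheet_resistance(r_a, r_b, tol=1e-12):
    """Solve the van der Pauw equation for sheet resistance R_s:

        exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1

    where R_A and R_B are four-probe resistances measured on the two
    edge-pair configurations. The left side increases monotonically
    with R_s, so simple bisection suffices.
    """
    f = lambda rs: (math.exp(-math.pi * r_a / rs)
                    + math.exp(-math.pi * r_b / rs) - 1.0)
    lo, hi = 1e-9, 1e9
    while hi - lo > tol * hi:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

# Symmetric sample: the equation reduces to R_s = pi*R/ln(2)
rs0 = vdp_sheet_resistance(10.0, 10.0)
# Hypothetical 3% resistance change under 1% strain -> gauge factor ~ 3
rs1 = vdp_sheet_resistance(10.3, 10.3)
gauge_factor = (rs1 - rs0) / rs0 / 0.01
```

A strain sweep of such gauge-factor estimates is the kind of VDP piezoresistive characterization the study compares against two-probe testing.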
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for total system performance assessment (TSPA) type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
Kong, Ling-Ying; Du, Wei; Wang, Li; Yang, Zhi; Zhang, Hong-Sheng
2015-01-01
DNA methylation has been proposed as a potential biomarker for cervical cancer detection. This study aimed to evaluate the diagnostic role of paired boxed gene 1 (PAX1) methylation for cervical cancer screening in Asians. Eligible studies were retrieved by searching the electronic databases, and the quality of the enrolled studies was assessed via the quality assessment for studies of diagnostic accuracy (QUADAS) tool. The bivariate meta-analysis model was employed to generate the summary receiver operator characteristic (SROC) curve using Stata 12.0 software. Cochran's Q test and I2 statistics were applied to assess heterogeneity among studies. Publication bias was evaluated by the Deeks' funnel plot asymmetry test. A total of 9 articles containing 15 individual studies were included. The SROC analysis showed that single PAX1 methylation allowed for the discrimination between cancer/high-grade squamous intraepithelial lesion (HSIL) patients and normal individuals with a sensitivity (95% confidence interval) of 0.80 (0.70 - 0.87) and specificity of 0.89 (0.86 - 0.92), corresponding to an area under the curve (AUC) of 0.92. Notably, our subgroup analysis suggested that combining parallel testing of PAX1 methylation and HPV DNA (AUC, sensitivity, and specificity of 0.90, 0.82, and 0.84, respectively) appeared to offer higher accuracy than single HPV DNA testing (AUC, sensitivity, and specificity of 0.81, 0.86, and 0.67, respectively). PAX1 methylation has potential diagnostic value for cervical cancer screening in Asians, and parallel testing of PAX1 methylation and HPV in cervical scrapings confers improved accuracy over single HPV DNA testing.
Reducing the overlay metrology sensitivity to perturbations of the measurement stack
NASA Astrophysics Data System (ADS)
Zhou, Yue; Park, DeNeil; Gutjahr, Karsten; Gottipati, Abhishek; Vuong, Tam; Bae, Sung Yong; Stokes, Nicholas; Jiang, Aiqin; Hsu, Po Ya; O'Mahony, Mark; Donini, Andrea; Visser, Bart; de Ruiter, Chris; Grzela, Grzegorz; van der Laan, Hans; Jak, Martin; Izikson, Pavel; Morgan, Stephen
2017-03-01
Overlay metrology setup today faces a continuously changing landscape of process steps. During Diffraction Based Overlay (DBO) metrology setup, many different metrology target designs are evaluated in order to cover the full process window. The standard method for overlay metrology setup consists of single-wafer optimization in which the performance of all available metrology targets is evaluated. Without the availability of external reference data or multiwafer measurements it is hard to predict the metrology accuracy and robustness against process variations which naturally occur from wafer-to-wafer and lot-to-lot. In this paper, the capabilities of the Holistic Metrology Qualification (HMQ) setup flow are outlined, in particular with respect to overlay metrology accuracy and process robustness. The significance of robustness and its impact on overlay measurements is discussed using multiple examples. Measurement differences caused by slight stack variations across the target area, called grating imbalance, are shown to cause significant errors in the overlay calculation if the recipe and target have not been selected properly. To this end, an overlay sensitivity check on perturbations of the measurement stack is presented for improvement of the overlay metrology setup flow. An extensive analysis on Key Performance Indicators (KPIs) from HMQ recipe optimization is performed on µDBO measurements of product wafers. The key parameters describing the sensitivity to perturbations of the measurement stack are based on an intra-target analysis. Using advanced image analysis, which is only possible for image plane detection of μDBO instead of pupil plane detection of DBO, the process robustness performance of a recipe can be determined. Intra-target analysis can be applied for a wide range of applications, independent of layers and devices.
Capp, Roberta; Chang, Yuchiao; Brown, David F M
2012-01-01
Diagnosis of the source of infection in patients with septic shock and severe sepsis needs to be done rapidly and accurately to guide appropriate antibiotic therapy. The purpose of this study is to evaluate the accuracy of two diagnostic studies used in the emergency department (ED) to guide diagnosis of the source of infection in this patient population. This was a retrospective review of ED patients admitted to an intensive care unit with the diagnosis of severe sepsis or septic shock over a 12-month period. We evaluated the accuracy of initial microscopic urine analysis testing and chest radiography in the diagnosis of urinary tract infections and pneumonia, respectively. Of the 1400 patients admitted to intensive care units, 170 patients met criteria for severe sepsis and septic shock. There were a total of 47 patients diagnosed with urinary tract infection, and their initial microscopic urine analysis with counts >10 white blood cells was 80% sensitive (95% confidence interval [CI] .66-.90) and 66% specific (95% CI .52-.77) for a positive final urine culture result. There were 85 patients with a final diagnosis of pneumonia. The sensitivity and specificity of initial chest radiography were, respectively, 58% (95% CI .46-.68) and 91% (95% CI .81-.95) for the diagnosis of pneumonia. In patients with severe sepsis and septic shock, the chest radiograph has a low sensitivity of 58%, whereas urine analysis has a low specificity of 66%. Given the importance of appropriate antibiotic selection and these good but not perfect test characteristics, this population may benefit from broad-spectrum antibiotics rather than antibiotics tailored toward a particular source of infection. Published by Elsevier Inc.
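Sensitivity, specificity, and their 95% CIs follow directly from the 2x2 counts; a sketch using Wilson score intervals. The 2x2 counts below are illustrative values chosen only to echo the reported 80%/66% urine-analysis figures, not the study's actual table.

```python
import math

def sens_spec_ci(tp, fn, tn, fp, z=1.96):
    """Sensitivity and specificity with Wilson score 95% CIs.

    tp/fn: diseased patients whose index test was positive/negative;
    tn/fp: non-diseased patients whose test was negative/positive.
    Returns (point estimate, lower bound, upper bound) for each.
    """
    def wilson(k, n):
        p = k / n
        denom = 1.0 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return p, center - half, center + half
    return {"sensitivity": wilson(tp, tp + fn),
            "specificity": wilson(tn, tn + fp)}

# Illustrative 2x2 counts (hypothetical) echoing the reported figures
res = sens_spec_ci(tp=38, fn=9, tn=81, fp=42)
```

The Wilson interval avoids the impossible bounds (below 0 or above 1) that the simple normal-approximation interval can produce with small counts.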
Homšak, Matjaž; Silar, Mira; Berce, Vojko; Tomazin, Maja; Skerbinjek-Kavalar, Maja; Celesnik, Nina; Košnik, Mitja; Korošec, Peter
2013-01-01
Peanut sensitization is common in children. However, it is difficult to assess which children will react mildly and which severely. This study evaluated the relevance of basophil allergen sensitivity testing for distinguishing the severity of peanut allergy in children. Twenty-seven peanut-sensitized children with symptoms ranging from mild to severe anaphylaxis underwent peanut CD63 dose-response curve analysis with the inclusion of basophil allergen sensitivity calculation (CD-sens) and peanut component immunoglobulin E (IgE) testing. Eleven children who had experienced anaphylaxis to peanuts showed a markedly higher peanut CD63 response at submaximal allergen concentrations and CD-sens (median 1,667 vs. 0.5; p < 0.0001) than 16 children who experienced a milder reaction. Furthermore, a negative or low CD-sens to peanuts unambiguously excluded anaphylactic peanut allergy. Children with anaphylaxis had higher levels of Ara h 1, 2, 3 and 9 IgE, but comparable levels of IgE to Ara h 8 and whole-peanut extract. The diagnostic specificity calculated with a receiver operating characteristic analysis reached 100% for CD-sens and 73% for Ara h 2. We demonstrated that severe peanut allergy is significantly associated with higher basophil allergen sensitivity. This cellular test should facilitate a more accurate diagnosis of peanut allergy. © 2013 S. Karger AG, Basel.
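CD-sens is commonly computed as 100 divided by the allergen concentration giving half the maximal CD63 response. A sketch using log-linear interpolation between tested doses; the dose-response numbers are invented for illustration, not the study's data.

```python
import numpy as np

def cd_sens(conc, cd63_pct):
    """CD-sens sketch: 100 / EC50, where EC50 is the allergen
    concentration giving half-maximal CD63 upregulation, found by
    log-linear interpolation between tested doses."""
    conc = np.asarray(conc, float)
    resp = np.asarray(cd63_pct, float)
    half = resp.max() / 2.0
    i = int(np.argmax(resp >= half))      # first dose at/above half-max
    if i == 0:
        return 100.0 / conc[0]
    frac = (half - resp[i - 1]) / (resp[i] - resp[i - 1])
    log_ec50 = (np.log10(conc[i - 1])
                + frac * (np.log10(conc[i]) - np.log10(conc[i - 1])))
    return 100.0 / 10.0**log_ec50

doses = [0.01, 0.1, 1.0, 10.0, 100.0]   # allergen ng/mL, hypothetical
severe = [5, 35, 60, 70, 72]            # % CD63+: responds at low doses
mild = [0, 2, 8, 30, 55]                # responds only at high doses
```

A "severe" curve that rises at much lower doses yields a far larger CD-sens, matching the order-of-magnitude gap (median 1,667 vs. 0.5) reported above.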