How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related quality of life (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. The objective was to investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature and to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements
NASA Technical Reports Server (NTRS)
Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.
1977-01-01
A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.
Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing
2016-01-01
Increasing evidence indicates that current dynamic global vegetation models (DGVMs) have suffered from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important direction towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
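The classification step described above lends itself to a compact illustration. Below is a minimal sketch, assuming a per-type Gaussian mixture classifier over trait triples (stand-ins for LMA, Nmass, LAI); the trait values, type count, and resulting accuracy are synthetic placeholders, not the study's data or code.

```python
# Illustrative sketch only: per-class GMMs over plant-trait triples,
# loosely mirroring the trait-based approach described above.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Fake trait observations for three hypothetical vegetation types.
n, types = 200, 3
means = np.array([[60.0, 1.5, 1.0],    # grassland-like traits (invented)
                  [110.0, 2.2, 3.5],   # deciduous-forest-like (invented)
                  [180.0, 1.1, 5.0]])  # evergreen-forest-like (invented)
X = np.vstack([m + rng.normal(0, [10, 0.2, 0.5], size=(n, 3)) for m in means])
y = np.repeat(np.arange(types), n)

# One GMM per vegetation type; predict the type whose mixture assigns
# the highest likelihood to a trait observation.
models = [GaussianMixture(n_components=2, random_state=0).fit(X[y == k])
          for k in range(types)]
scores = np.column_stack([m.score_samples(X) for m in models])
pred = scores.argmax(axis=1)
print("training accuracy:", (pred == y).mean())
```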
We present a multi-faceted sensitivity analysis of a spatially explicit, individual-based model (IBM) (HexSim) of a threatened species, the Northern Spotted Owl (Strix occidentalis caurina) on a national forest in Washington, USA. Few sensitivity analyses have been conducted on ...
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
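As a concrete illustration of the approach described above, here is a minimal sketch: a toy two-parameter decision model is run through a PSA, and the outcome is regressed on standardized inputs so that the intercept recovers the base-case outcome and the coefficients rank parameter influence. The model, distributions, and willingness-to-pay value are invented, not the article's cancer cure model.

```python
# Minimal sketch of linear-regression metamodeling of a probabilistic
# sensitivity analysis (PSA); the two-parameter "model" is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Sample uncertain inputs for each simulated cohort.
p_cure = rng.beta(20, 80, n)          # probability of cure
cost_tx = rng.gamma(100, 50, n)       # treatment cost

# Toy decision-model output: net monetary benefit of treating.
wtp = 50_000                          # willingness to pay per QALY (assumed)
nmb = wtp * (2.0 * p_cure) - cost_tx  # 2 QALYs gained per cure (toy)

# Regress the outcome on standardized inputs: the intercept estimates
# the base-case outcome, the coefficients rank parameter influence.
Z = np.column_stack([(p_cure - p_cure.mean()) / p_cure.std(),
                     (cost_tx - cost_tx.mean()) / cost_tx.std()])
X = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(X, nmb, rcond=None)
print("base case (intercept):", beta[0])
print("std. coefficients (p_cure, cost_tx):", beta[1:])
```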
Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands
Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven
2015-01-01
Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was € 9015 per quality-adjusted life-year (QALY). Extensive sensitivity analyses and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive to the response to prucalopride; incremental cost-effectiveness ratios ranged from € 6475 to 15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of € 20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of € 7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and who obtained inadequate relief from laxatives. PMID:25926794
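For readers unfamiliar with the mechanics, a minimal cohort-style Markov sketch with a one-month cycle and one-year horizon follows. The states, transition probabilities, costs, and utilities are invented placeholders, not the inputs of the Dutch model above.

```python
# Toy sketch of a monthly-cycle Markov cohort model of the kind described
# above; all numbers are invented for illustration.
import numpy as np

states = ["response", "no_response"]          # "normal bowel function" or not
P = np.array([[0.85, 0.15],                   # monthly transition matrix
              [0.25, 0.75]])
cost = np.array([40.0, 25.0])                 # cost per state per cycle (EUR)
utility = np.array([0.85, 0.75])              # utility weight per state

cohort = np.array([0.0, 1.0])                 # everyone starts without response
total_cost, total_qaly = 0.0, 0.0
for cycle in range(12):                       # 1-year horizon, 1-month cycles
    cohort = cohort @ P
    total_cost += cohort @ cost
    total_qaly += (cohort @ utility) / 12.0   # monthly utility -> QALYs

print(f"expected cost {total_cost:.0f} EUR, QALYs {total_qaly:.3f}")
```

Running the same loop for a comparator arm and dividing the cost difference by the QALY difference would yield the ICER reported in such analyses.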
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady state analysis, robustness analysis and local and global sensitivity analysis for SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, in addition to its ability to handle systems with discontinuous events and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
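Of the global methods listed, partial rank correlation coefficients (PRCC) are the easiest to sketch compactly. The following is an illustration of the PRCC idea on a toy function; it is not SBML-SAT code, and Python is not necessarily that tool's implementation language.

```python
# Sketch of partial rank correlation coefficients (PRCC), one of the
# global sensitivity measures listed above, applied to a toy model.
import numpy as np
from scipy.stats import rankdata

def prcc(X, y):
    """PRCC of each column of X with y, controlling for the other columns."""
    R = np.column_stack([rankdata(c) for c in X.T])
    ry = rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.column_stack([np.ones(len(ry)), np.delete(R, j, axis=1)])
        # Residualize both the j-th ranked input and the ranked output
        # on the remaining ranked inputs, then correlate the residuals.
        res_x = R[:, j] - others @ np.linalg.lstsq(others, R[:, j], rcond=None)[0]
        res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(1000, 3))                  # sampled parameters
y = 3 * X[:, 0] + np.sin(6 * X[:, 1]) + 0.1 * rng.normal(size=1000)
print(prcc(X, y))   # third parameter should score near zero
```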
Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N
2018-02-01
Introduction Home-based telebehavioural health care improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis where all patients possessed video-conferencing technology that was approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results In the base case analysis, the total direct cost of home-based telebehavioural health care was higher than that of in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly compared to in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base case results. Discussion Home-based telebehavioural health care is dependent on the cost of supplying video-conferencing technology to patients but offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these technologies can be successfully deployed on patients' existing technology.
The Intercultural Sensitivity of Chilean Teachers Serving an Immigrant Population in Schools
ERIC Educational Resources Information Center
Morales Mendoza, Karla; Sanhueza Henríquez, Susan; Friz Carrillo, Miguel; Riquelme Bravo, Paula
2017-01-01
The objective of this article is to evaluate the intercultural sensitivity of teachers working in culturally diverse classrooms, and to analyse differences in intercultural sensitivity based on the gender, age, training (advanced training courses), and intercultural experience of the teachers. A quantitative approach with a comparative descriptive…
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation model of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Individual differences in metacontrast masking regarding sensitivity and response bias.
Albrecht, Thorsten; Mattler, Uwe
2012-09-01
In metacontrast masking target visibility is modulated by the time until a masking stimulus appears. The effect of this temporal delay differs across participants in such a way that individual human observers' performance shows distinguishable types of masking functions which remain largely unchanged for months. Here we examined whether individual differences in masking functions depend on different response criteria in addition to differences in discrimination sensitivity. To this end we reanalyzed previously published data and conducted a new experiment for further data analyses. Our analyses demonstrate that a distinction of masking functions based on the type of masking stimulus is superior to a distinction based on the target-mask congruency. Individually different masking functions are based on individual differences in discrimination sensitivities and in response criteria. Results suggest that individual differences in metacontrast masking result from individually different criterion contents. Copyright © 2012 Elsevier Inc. All rights reserved.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
Effectiveness of Light Sources on In-Office Dental Bleaching: A Systematic Review and Meta-Analyses.
Souto Maior, J R; de Moraes, S L D; Lemos, C A A; Vasconcelos, B C do E; Montes, M A J R; Pellizzer, E P
2018-06-12
A systematic review and meta-analyses were performed to evaluate the efficacy of tooth color change and sensitivity of teeth following in-office bleaching with and without light gel activation in adult patients. This review was registered at PROSPERO (CRD 42017060574) and is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Electronic systematic searches of PubMed/MEDLINE, Web of Science, and the Cochrane Library were conducted for published articles. Only randomized clinical trials among adults that compared in-office bleaching with and without light activation with the same bleaching gel concentrations were selected. The outcomes were tooth color change and tooth sensitivity prevalence and intensity. Twenty-three articles from 1054 data sources met the eligibility criteria. After title and abstract screening, 39 studies remained. Sixteen studies were further excluded. Twenty-three studies remained for qualitative analyses and 20 for meta-analyses of primary and secondary outcomes. No significant differences in tooth color change or tooth sensitivity incidence were found between the compared groups; however, tooth sensitivity intensity decreased when light sources were applied. The use of light sources for in-office bleaching is not imperative to achieve esthetic clinical results.
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis; the second is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results show that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.
Automated haematology analysis to diagnose malaria
2010-01-01
For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for an adequate treatment and to minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in or returning from malaria-endemic regions. Haematology analysers so far studied for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO, and RET-EXT scatter-plots, and pseudoeosinophilia and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy for malaria diagnosis may vary according to species, parasite load, immunity and clinical context where the method is applied. Future developments in new haematology analysers such as considerably simplified, robust and inexpensive devices for malaria detection fitted with an automatically generated alert could improve the detection capacity of these instruments and potentially expand their clinical utility in malaria diagnosis. PMID:21118557
The Negative Affect Hypothesis of Noise Sensitivity
Shepherd, Daniel; Heinonen-Guzejev, Marja; Heikkilä, Kauko; Dirks, Kim N.; Hautus, Michael J.; Welch, David; McBride, David
2015-01-01
Some studies indicate that noise sensitivity is explained by negative affect, a dispositional tendency to negatively evaluate situations and the self. Individuals high in such traits may report a greater sensitivity to other sensory stimuli, such as smell, bright light and pain. However, research investigating the relationship between noise sensitivity and sensitivity to stimuli associated with other sensory modalities has not always supported the notion of a common underlying trait, such as negative affect, driving them. Additionally, other explanations of noise sensitivity based on cognitive processes have existed in the clinical literature for over 50 years. Here, we report on secondary analyses of pre-existing laboratory (n = 74) and epidemiological (n = 1005) data focusing on the relationship between noise sensitivity and annoyance with a variety of olfactory-related stimuli. In the first study, a correlational design examined the relationships between noise sensitivity, noise annoyance, and perceptual ratings of 16 odors. The second study sought differences between mean noise and air pollution annoyance scores across noise sensitivity categories. Results from both analyses failed to support the notion that, by itself, negative affectivity explains sensitivity to noise. PMID:25993104
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, Stefan; Jung, Yoojin; Kowalsky, Michael
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences, reservoir engineering, and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
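The core inverse-modeling loop described above (minimizing a weighted objective of model-data differences) can be sketched generically. The exponential "forward model" and noise level below are toy stand-ins, not TOUGH2 physics or iTOUGH2 code.

```python
# Generic sketch of the inverse-modeling idea: minimize weighted residuals
# between a forward model and observations.
import numpy as np
from scipy.optimize import least_squares

def forward(params, t):
    k, p0 = params
    return p0 * np.exp(-k * t)        # toy pressure-decline model (invented)

t_obs = np.linspace(0.0, 10.0, 25)
true = np.array([0.35, 2.0])
rng = np.random.default_rng(3)
sigma = 0.05                          # assumed observation standard deviation
y_obs = forward(true, t_obs) + rng.normal(0, sigma, t_obs.size)

# Weighted residuals: (model - data) / observation standard deviation.
residuals = lambda p: (forward(p, t_obs) - y_obs) / sigma
fit = least_squares(residuals, x0=[0.1, 1.0])
print("estimated parameters:", fit.x)
```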
Yamamoto, F; Yamamoto, M
2004-07-01
We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.
Reduced size first-order subsonic and supersonic aeroelastic modeling
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Various aeroelastic, aeroservoelastic, dynamic-response, and sensitivity analyses are based on a time-domain first-order (state-space) formulation of the equations of motion. The formulation of this paper is based on the minimum-state (MS) aerodynamic approximation method, which yields a low number of aerodynamic augmenting states. Modifications of the MS and the physical weighting procedures make the modeling method even more attractive. The flexibility of constraint selection is increased without increasing the approximation problem size; the accuracy of dynamic residualization of high-frequency modes is improved; and the resulting model is less sensitive to parametric changes in subsequent analyses. Applications to subsonic and supersonic cases demonstrate the generality, flexibility, accuracy, and efficiency of the method.
The Sensitivity of Genetic Connectivity Measures to Unsampled and Under-Sampled Sites
Koen, Erin L.; Bowman, Jeff; Garroway, Colin J.; Wilson, Paul J.
2013-01-01
Landscape genetic analyses assess the influence of landscape structure on genetic differentiation. It is rarely possible to collect genetic samples from all individuals on the landscape and thus it is important to assess the sensitivity of landscape genetic analyses to the effects of unsampled and under-sampled sites. Network-based measures of genetic distance, such as conditional genetic distance (cGD), might be particularly sensitive to sampling intensity because pairwise estimates are relative to the entire network. We addressed this question by subsampling microsatellite data from two empirical datasets. We found that pairwise estimates of cGD were sensitive to both unsampled and under-sampled sites, and FST, Dest, and deucl were more sensitive to under-sampled than unsampled sites. We found that the rank order of cGD was also sensitive to unsampled and under-sampled sites, but not enough to affect the outcome of Mantel tests for isolation by distance. We simulated isolation by resistance and found that although cGD estimates were sensitive to unsampled sites, by increasing the number of sites sampled the accuracy of conclusions drawn from landscape genetic analyses increased, a feature that is not possible with pairwise estimates of genetic differentiation such as FST, Dest, and deucl. We suggest that users of cGD assess the sensitivity of this measure by subsampling within their own network and use caution when making extrapolations beyond their sampled network. PMID:23409155
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes), as well as their hydrologic indices/attributes (external hydrologic factors) separately, using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification systems (K-Class) are discussed. Within each S-class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferrable. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models. Inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts.
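The PCA-plus-EM classification step reads almost directly as a two-stage pipeline. A minimal sketch follows, with a random sensitivity matrix standing in for the 431-basin-by-parameter table derived in the study; component and cluster counts are assumptions.

```python
# Sketch of the classification step described above: reduce parameter-
# sensitivity patterns with PCA, then cluster with an EM-fitted Gaussian
# mixture. The sensitivity matrix here is random placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
S = rng.normal(size=(431, 12))   # 431 basins x 12 parameter sensitivities

Z = PCA(n_components=3).fit_transform(S)
labels = GaussianMixture(n_components=4, random_state=0).fit_predict(Z)
print("basins per sensitivity class:", np.bincount(labels))
```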
NASA Astrophysics Data System (ADS)
Hou, Pei; Wu, Shiliang; McCarty, Jessica L.; Gao, Yang
2018-06-01
Wet deposition driven by precipitation is an important sink for atmospheric aerosols and soluble gases. We investigate the sensitivity of atmospheric aerosol lifetimes to precipitation intensity and frequency in the context of global climate change. Our sensitivity model simulations, through some simplified perturbations to precipitation in the GEOS-Chem model, show that the removal efficiency and hence the atmospheric lifetime of aerosols have significantly higher sensitivities to precipitation frequencies than to precipitation intensities, indicating that the same amount of precipitation may lead to different removal efficiencies of atmospheric aerosols. Combining the long-term trends of precipitation patterns for various regions with the sensitivities of atmospheric aerosol lifetimes to various precipitation characteristics allows us to examine the potential impacts of precipitation changes on atmospheric aerosols. Analyses based on an observational dataset show that precipitation frequencies in some regions have decreased in the past 14 years, which might increase the atmospheric aerosol lifetimes in those regions. Similar analyses based on multiple reanalysis meteorological datasets indicate that the changes of precipitation intensity and frequency over the past 30 years can lead to perturbations in the atmospheric aerosol lifetimes by 10 % or higher at the regional scale.
New insight on intergenerational attachment from a relationship-based analysis.
Bailey, Heidi N; Tarabulsy, George M; Moran, Greg; Pederson, David R; Bento, Sandi
2017-05-01
Research on attachment transmission has focused on variable-centered analyses, where hypotheses are tested by examining linear associations between variables. The purpose of this study was to apply a relationship-centered approach to data analysis, where adult states of mind, maternal sensitivity, and infant attachment were conceived as being three components of a single, intergenerational relationship. These variables were assessed in 90 adolescent and 99 adult mother-infant dyads when infants were 12 months old. Initial variable-centered analyses replicated the frequently observed associations between these three core attachment variables. Relationship-based, latent class analyses then revealed that the most common pattern among young mother dyads featured maternal unresolved trauma, insensitive interactive behavior, and disorganized infant attachment (61%), whereas the most prevalent adult mother dyad relationship pattern involved maternal autonomy, sensitive maternal behavior, and secure infant attachment (59%). Three less prevalent relationship patterns were also observed. Moderation analyses revealed that the adolescent-adult mother distinction differentiated between secure and disorganized intergenerational relationship patterns, whereas experience of traumatic events distinguished between disorganized and avoidant patterns. Finally, socioeconomic status distinguished between avoidant and secure patterns. Results emphasize the value of a relationship-based approach, adding an angle of understanding to the study of attachment transmission.
Report of the LSPI/NASA Workshop on Lunar Base Methodology Development
NASA Technical Reports Server (NTRS)
Nozette, Stewart; Roberts, Barney
1985-01-01
Groundwork was laid for computer models which will assist in the design of a manned lunar base. The models, herein described, will provide the following functions for the successful conclusion of that task: strategic planning; sensitivity analyses; impact analyses; and documentation. Topics addressed include: upper level model description; interrelationship matrix; user community; model features; model descriptions; system implementation; model management; and plans for future action.
Numerical modelling of distributed vibration sensor based on phase-sensitive OTDR
NASA Astrophysics Data System (ADS)
Masoudi, A.; Newson, T. P.
2017-04-01
A distributed vibration sensor based on phase-sensitive OTDR is numerically modeled. The advantage of modeling the building blocks of the sensor individually and combining the blocks to analyse the behavior of the sensing system is discussed. It is shown that the numerical model can accurately imitate the response of the experimental setup to dynamic perturbations, using a signal processing procedure similar to that used to extract the phase information from the sensing setup.
Akrami, Mohammad; Qian, Zhihui; Zou, Zhemin; Howard, David; Nester, Chris J; Ren, Lei
2018-04-01
The objective of this study was to develop and validate a subject-specific framework for modelling the human foot. This was achieved by integrating medical image-based finite element modelling, individualised multi-body musculoskeletal modelling and 3D gait measurements. A 3D ankle-foot finite element model comprising all major foot structures was constructed based on MRI of one individual. A multi-body musculoskeletal model and 3D gait measurements for the same subject were used to define loading and boundary conditions. Sensitivity analyses were used to investigate the effects of key modelling parameters on model predictions. Prediction errors of average and peak plantar pressures were below 10% in all ten plantar regions at five key gait events with only one exception (lateral heel, in early stance, error of 14.44%). The sensitivity analyses results suggest that predictions of peak plantar pressures are moderately sensitive to material properties, ground reaction forces and muscle forces, and significantly sensitive to foot orientation. The maximum region-specific percentage change ratios (peak stress percentage change over parameter percentage change) were 1.935-2.258 for ground reaction forces, 1.528-2.727 for plantar flexor muscles and 4.84-11.37 for foot orientations. This strongly suggests that loading and boundary conditions need to be very carefully defined based on personalised measurement data.
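One way to read the "percentage change ratio" reported above is as a normalized sensitivity; the formulation below is our interpretation, not notation taken from the paper.

```latex
% A hedged reading of the region-specific "percentage change ratio":
% relative change in peak plantar stress per relative change in input p.
\[
  R \;=\; \frac{\Delta \sigma_{\text{peak}} / \sigma_{\text{peak}}}{\Delta p / p},
\]
% so R = 2 means a 10% change in the input shifts peak stress by about 20%.
```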
NASA Astrophysics Data System (ADS)
Chen, Zhangwei; Wang, Xin; Giuliani, Finn; Atkinson, Alan
2015-01-01
Mechanical properties of porous SOFC electrodes are largely determined by their microstructures. Measurements of the elastic properties and microstructural parameters can be achieved by modelling of the digitally reconstructed 3D volumes based on the real electrode microstructures. However, the reliability of such measurements is greatly dependent on the processing of raw images acquired for reconstruction. In this work, the actual microstructures of La0.6Sr0.4Co0.2Fe0.8O3-δ (LSCF) cathodes sintered at an elevated temperature were reconstructed based on dual-beam FIB/SEM tomography. Key microstructural and elastic parameters were estimated and correlated. Analyses of their sensitivity to the grayscale threshold value applied in the image segmentation were performed. The important microstructural parameters included porosity, tortuosity, specific surface area, particle and pore size distributions, and inter-particle neck size distribution, which may have varying extents of effect on the elastic properties simulated from the microstructures using FEM. Results showed that different threshold value ranges would result in different degrees of sensitivity for a specific parameter. The estimated porosity and tortuosity were more sensitive than the surface-area-to-volume ratio. Pore and neck size were found to be less sensitive than particle size. Results also showed that the modulus was essentially sensitive to the porosity, which was largely controlled by the threshold value.
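The threshold-sensitivity analysis described above can be illustrated in a few lines: segment a grayscale volume at a sweep of thresholds and track the induced change in estimated porosity. The synthetic noise volume below is a stand-in for FIB/SEM data, not the study's images.

```python
# Sketch of grayscale-threshold sensitivity: vary the segmentation
# threshold and observe how the estimated porosity responds.
import numpy as np

rng = np.random.default_rng(5)
vol = rng.normal(0.5, 0.15, size=(64, 64, 64))   # fake grayscale volume

for thr in (0.40, 0.45, 0.50, 0.55, 0.60):
    porosity = (vol < thr).mean()                # voxels below threshold = pore
    print(f"threshold {thr:.2f} -> porosity {porosity:.3f}")
```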
Nerpin, Elisabet; Risérus, Ulf; Ingelsson, Erik; Sundström, Johan; Jobs, Magnus; Larsson, Anders; Basu, Samar; Ärnlöv, Johan
2008-01-01
OBJECTIVE—To investigate the association between insulin sensitivity and glomerular filtration rate (GFR) in the community, with prespecified subgroup analyses in normoglycemic individuals with normal GFR. RESEARCH DESIGN AND METHODS—We investigated the cross-sectional association between insulin sensitivity (M/I, assessed using euglycemic clamp) and cystatin C–based GFR in a community-based cohort of elderly men (Uppsala Longitudinal Study of Adult Men [ULSAM], n = 1,070). We also investigated whether insulin sensitivity predicted the incidence of renal dysfunction at a follow-up examination after 7 years. RESULTS—Insulin sensitivity was directly related to GFR (multivariable-adjusted regression coefficient for 1-unit higher M/I 1.19 [95% CI 0.69–1.68]; P < 0.001) after adjusting for age, glucometabolic variables (fasting plasma glucose, fasting plasma insulin, and 2-h glucose after an oral glucose tolerance test), cardiovascular risk factors (hypertension, dyslipidemia, and smoking), and lifestyle factors (BMI, physical activity, and consumption of tea, coffee, and alcohol). The positive multivariable-adjusted association between insulin sensitivity and GFR also remained statistically significant in participants with normal fasting plasma glucose, normal glucose tolerance, and normal GFR (n = 443; P < 0.02). In longitudinal analyses, higher insulin sensitivity at baseline was associated with lower risk of impaired renal function (GFR <50 ml/min per 1.73 m2) during follow-up independently of glucometabolic variables (multivariable-adjusted odds ratio for 1-unit higher of M/I 0.58 [95% CI 0.40–0.84]; P < 0.004). CONCLUSIONS—Our data suggest that impaired insulin sensitivity may be involved in the development of renal dysfunction at an early stage, before the onset of diabetes or prediabetic glucose elevations. Further studies are needed in order to establish causality. PMID:18509205
Social Regulation of Leukocyte Homeostasis: The Role of Glucocorticoid Sensitivity
Cole, Steve W.
2010-01-01
Recent small-scale genomics analyses suggest that physiologic regulation of pro-inflammatory gene expression by endogenous glucocorticoids may be compromised in individuals who experience chronic social isolation. This could potentially contribute to the elevated prevalence of inflammation-related disease previously observed in social isolates. The present study assessed the relationship between leukocyte distributional sensitivity to glucocorticoid regulation and subjective social isolation in a large population-based sample of older adults. Initial analyses confirmed that circulating neutrophil percentages were elevated, and circulating lymphocyte and monocyte percentages were suppressed, in direct proportion to circulating cortisol levels. However, leukocyte distributional sensitivity to endogenous glucocorticoids was abrogated in individuals reporting either occasional or frequent experiences of subjective social isolation. This finding held in both nonparametric univariate analyses and in multivariate linear models controlling for a variety of biological, social, behavioral, and psychological confounders. The present results suggest that social factors may alter immune cell sensitivity to physiologic regulation by the hypothalamic-pituitary-adrenal axis in ways that could ultimately contribute to the increased physical health risks associated with social isolation. PMID:18394861
Non-allergic cutaneous reactions in airborne chemical sensitivity: a population-based study.
Berg, Nikolaj Drimer; Linneberg, Allan; Thyssen, Jacob Pontoppidan; Dirksen, Asger; Elberling, Jesper
2011-06-01
Multiple chemical sensitivity (MCS) is characterised by adverse effects due to exposure to low levels of chemical substances. The aetiology is unknown, but chemical-related respiratory symptoms have been found associated with positive patch tests. The purpose of this study was to investigate the relationship between cutaneous reactions from patch testing and self-reported severity of chemical sensitivity to common airborne chemicals. A total of 3460 individuals participating in a general health examination, Health 2006, were patch tested with allergens from the European standard series and screened for chemical sensitivity with a standardised questionnaire dividing the participants into four severity groups of chemical sensitivity. Both allergic and non-allergic cutaneous reactions (defined as irritative, follicular, or doubtful allergic reactions) were analysed in relation to severity of chemical sensitivity. Associations were controlled for the possible confounding effects of sex, age, asthma, eczema, atopic dermatitis, psychological and social factors, and smoking habits. In unadjusted analyses, we found associations between allergic and non-allergic cutaneous reactions on patch testing and the two most severe groups of self-reported sensitivity to airborne chemicals. When adjusting for confounding, associations were weakened, and only non-allergic cutaneous reactions were significantly associated with individuals most severely affected by inhalation of airborne chemicals (odds ratio = 2.5, p = 0.006). Our results suggest that individuals with self-reported chemical sensitivity show increased non-allergic cutaneous reactions based on day 2 readings of patch tests. Copyright © 2011 Elsevier GmbH. All rights reserved.
Time to angiographic reperfusion in acute ischemic stroke: decision analysis.
Vagal, Achala S; Khatri, Pooja; Broderick, Joseph P; Tomsick, Thomas A; Yeatts, Sharon D; Eckman, Mark H
2014-12-01
Our objective was to use decision analytic modeling to compare 2 treatment strategies of intravenous recombinant tissue-type plasminogen activator (r-tPA) alone versus combined intravenous r-tPA/endovascular therapy in a subgroup of patients with large vessel (internal carotid artery terminus, M1, and M2) occlusion based on varying times to angiographic reperfusion and varying rates of reperfusion. We developed a decision model using Interventional Management of Stroke (IMS) III trial data and comprehensive literature review. We performed 1-way sensitivity analyses for time to reperfusion and 2-way sensitivity analyses for time to reperfusion and rate of reperfusion success. We also performed probabilistic sensitivity analyses to address uncertainty in total time to reperfusion for the endovascular approach. In the base case, endovascular approach yielded a higher expected utility (6.38 quality-adjusted life years) than the intravenous-only arm (5.42 quality-adjusted life years). One-way sensitivity analyses demonstrated superiority of endovascular treatment to intravenous-only arm unless time to reperfusion exceeded 347 minutes. Two-way sensitivity analysis demonstrated that endovascular treatment was preferred when probability of reperfusion is high and time to reperfusion is small. Probabilistic sensitivity results demonstrated an average gain for endovascular therapy of 0.76 quality-adjusted life years (SD 0.82) compared with the intravenous-only approach. In our post hoc model with its underlying limitations, endovascular therapy after intravenous r-tPA is the preferred treatment as compared with intravenous r-tPA alone. However, if time to reperfusion exceeds 347 minutes, intravenous r-tPA alone is the recommended strategy. This warrants validation in a randomized, prospective trial among patients with large vessel occlusions. © 2014 American Heart Association, Inc.
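The one-way analysis over time to reperfusion amounts to a simple sweep for a crossover point. The linear utility-decay function below is invented (roughly calibrated to the quoted base-case QALYs and the 347-minute threshold) and is not the trial's model.

```python
# Toy sketch of the one-way sensitivity analysis described above: sweep
# time to reperfusion and find where endovascular therapy stops beating
# intravenous r-tPA alone.
qaly_iv = 5.42   # base-case QALYs, IV r-tPA only (from the abstract)

def qaly_endo(minutes):
    # Hypothetical linear decay of endovascular benefit with delay,
    # anchored to the 6.38 base-case QALYs purely for illustration.
    return 6.38 - 0.004 * (minutes - 120)

for m in range(120, 481, 60):
    verdict = "preferred" if qaly_endo(m) > qaly_iv else "not preferred"
    print(f"{m:3d} min: endovascular {verdict}")
```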
Hess, Lisa M; Rajan, Narayan; Winfree, Katherine; Davey, Peter; Ball, Mark; Knox, Hediyyih; Graham, Christopher
2015-12-01
Health technology assessment is not required for regulatory submission or approval in either the United States (US) or Japan. This study was designed as a cross-country evaluation of cost analyses conducted in the US and Japan based on the PRONOUNCE phase III lung cancer trial, which compared pemetrexed plus carboplatin followed by pemetrexed (PemC) versus paclitaxel plus carboplatin plus bevacizumab followed by bevacizumab (PCB). Two cost analyses were conducted in accordance with International Society For Pharmacoeconomics and Outcomes Research good research practice standards. Costs were obtained based on local pricing structures; outcomes were considered equivalent based on the PRONOUNCE trial results. Other inputs were included from the trial data (e.g., toxicity rates) or from local practice sources (e.g., toxicity management). The models were compared across key input and transferability factors. Despite differences in local input data, both models demonstrated a similar direction, with the cost of PemC being consistently lower than the cost of PCB. The variation in individual input parameters did affect some of the specific categories, such as toxicity, and impacted sensitivity analyses, with the cost differential between comparators being greater in Japan than in the US. When economic models are based on clinical trial data, many inputs and outcomes are held consistent. The alterable inputs were not in and of themselves large enough to significantly impact the results between countries, which were directionally consistent with greater variation seen in sensitivity analyses. The factors that vary across jurisdictions, even when minor, can have an impact on trial-based economic analyses. Eli Lilly and Company.
Digital data processing system dynamic loading analysis
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Tucker, A. E.
1976-01-01
Simulation and analysis of the Space Shuttle Orbiter Digital Data Processing System (DDPS) are reported. The mated flight and postseparation flight phases of the space shuttle's approach and landing test configuration were modeled utilizing the Information Management System Interpretative Model (IMSIM) in a computerized simulation modeling of the ALT hardware, software, and workload. System requirements simulated for the ALT configuration were defined. Sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and the sensitivity analyses, a test design is described for adapting, parameterizing, and executing the IMSIM. Varying load and stress conditions for the model execution are given. The analyses of the computer simulation runs were documented as results, conclusions, and recommendations for DDPS improvements.
This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...
Adjoint-Based Sensitivity and Uncertainty Analysis for Density and Composition: A User’s Guide
Favorite, Jeffrey A.; Perko, Zoltan; Kiedrowski, Brian C.; ...
2017-03-01
The ability to perform sensitivity analyses using adjoint-based first-order sensitivity theory has existed for decades. This paper provides guidance on how adjoint sensitivity methods can be used to predict the effect of material density and composition uncertainties in critical experiments, including when these uncertain parameters are correlated or constrained. Two widely used Monte Carlo codes, MCNP6 (Ref. 2) and SCALE 6.2 (Ref. 3), are both capable of computing isotopic density sensitivities in continuous energy and angle. Additionally, Perkó et al. have shown how individual isotope density sensitivities, easily computed using adjoint methods, can be combined to compute constrained first-order sensitivities that may be used in the uncertainty analysis. This paper provides details on how the codes are used to compute first-order sensitivities and how the sensitivities are used in an uncertainty analysis. Constrained first-order sensitivities are computed in a simple example problem.
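As a hedged sketch of how individual density sensitivities combine under a constraint, consider proportional rebalancing of fractions that must keep a fixed sum; this is one common convention, not necessarily the formulation of the cited work.

```latex
% Assumption: proportional rebalancing, our convention. Let S_i be the
% unconstrained relative sensitivity of response R to fraction f_i, with
% the f_i holding a fixed sum. Perturbing f_i by a relative amount eps and
% compensating proportionally with the remaining fractions gives
\[
  \frac{\mathrm{d}R}{R}
  = \varepsilon \left[\, S_i - \frac{f_i}{1 - f_i} \sum_{j \ne i} S_j \,\right]
  \equiv \varepsilon \, S_i^{\mathrm{constrained}},
\]
% i.e., each constrained sensitivity is a linear combination of the
% individually computed isotope sensitivities, as the text describes.
```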
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
The objective was to compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity. Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
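For reference, the bivariate binomial-likelihood model referred to above is conventionally written as follows (a standard formulation; the notation is ours, not the paper's).

```latex
% Bivariate binomial-normal model. For study i, with tp_i true positives
% among n1_i diseased and tn_i true negatives among n2_i non-diseased:
\begin{gather*}
  tp_i \sim \mathrm{Binomial}(n_{1i}, \mathrm{Se}_i), \qquad
  tn_i \sim \mathrm{Binomial}(n_{2i}, \mathrm{Sp}_i), \\
  \begin{pmatrix} \operatorname{logit} \mathrm{Se}_i \\
                  \operatorname{logit} \mathrm{Sp}_i \end{pmatrix}
  \sim \mathcal{N}\!\left(
  \begin{pmatrix} \mu_{\mathrm{Se}} \\ \mu_{\mathrm{Sp}} \end{pmatrix},
  \begin{pmatrix} \sigma_{\mathrm{Se}}^2 & \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} \\
                  \rho\,\sigma_{\mathrm{Se}}\sigma_{\mathrm{Sp}} & \sigma_{\mathrm{Sp}}^2
  \end{pmatrix}\right).
\end{gather*}
% Summary sensitivity and specificity are the inverse logits of mu_Se and
% mu_Sp; rho is the between-study correlation the results note is often
% imprecisely estimated.
```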
msgbsR: An R package for analysing methylation-sensitive restriction enzyme sequencing data.
Mayne, Benjamin T; Leemaqz, Shalem Y; Buckberry, Sam; Rodriguez Lopez, Carlos M; Roberts, Claire T; Bianco-Miotto, Tina; Breen, James
2018-02-01
Genotyping-by-sequencing (GBS) or restriction-site associated DNA marker sequencing (RAD-seq) is a practical and cost-effective method for analysing large genomes from high diversity species. This method of sequencing, coupled with methylation-sensitive enzymes (often referred to as methylation-sensitive restriction enzyme sequencing or MRE-seq), is an effective tool to study DNA methylation in parts of the genome that are inaccessible in other sequencing techniques or are not annotated in microarray technologies. Current software tools do not support all methylation-sensitive restriction sequencing assays for determining differences in DNA methylation between samples. To fill this computational need, we present msgbsR, an R package that contains tools for the analysis of methylation-sensitive restriction enzyme sequencing experiments. msgbsR can be used to identify and quantify read counts at methylated sites directly from alignment files (BAM files) and enables verification of restriction enzyme cut sites with the correct recognition sequence of the individual enzyme. In addition, msgbsR assesses DNA methylation based on read coverage, similar to RNA sequencing experiments, rather than methylation proportion, and is a useful tool for analysing differential methylation in large populations. The package is fully documented and available freely online as a Bioconductor package ( https://bioconductor.org/packages/release/bioc/html/msgbsR.html ).
Nshimyumukiza, Léon; Douville, Xavier; Fournier, Diane; Duplantie, Julie; Daher, Rana K; Charlebois, Isabelle; Longtin, Jean; Papenburg, Jesse; Guay, Maryse; Boissinot, Maurice; Bergeron, Michel G; Boudreau, Denis; Gagné, Christian; Rousseau, François; Reinharz, Daniel
2016-03-01
A point-of-care rapid test (POCRT) may help early and targeted use of antiviral drugs for the management of influenza A infection. The objectives were (i) to determine whether antiviral treatment based on a POCRT for influenza A is cost-effective and (ii) to determine the thresholds of key test parameters (sensitivity, specificity and cost) at which a POCRT-based strategy appears to be cost-effective. A hybrid "susceptible, infected, recovered" (SIR) compartmental transmission and Markov decision analytic model was used to simulate the cost-effectiveness of antiviral treatment based on a POCRT for influenza A from the societal perspective. Data input parameters used were retrieved from peer-reviewed published studies and government databases. The outcome considered was the incremental cost per life-year saved for one seasonal influenza season. In the base-case analysis, the antiviral treatment based on POCRT saves 2 lives/100,000 person-years and costs $7600 less than the empirical antiviral treatment based on clinical judgment alone, which demonstrates that the POCRT-based strategy is dominant. In one- and two-way sensitivity analyses, results were sensitive to the POCRT accuracy and cost, to the vaccination coverage as well as to the prevalence of influenza A. In probabilistic sensitivity analyses, the POCRT strategy is cost-effective in 66% of cases, for a commonly accepted threshold of $50,000 per life-year saved. The influenza antiviral treatment based on POCRT could be cost-effective in specific conditions of performance, price and disease prevalence. © 2015 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.
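The deterministic SIR transmission component named above can be sketched in a few lines; the rates and time step below are illustrative, and the coupled Markov decision layer of the study is not reproduced here.

```python
# Minimal sketch of a deterministic SIR compartmental model, the
# transmission half of the hybrid model described above.
beta, gamma = 0.30, 0.20          # transmission and recovery rates (assumed, per day)
S, I, R = 0.999, 0.001, 0.0       # population fractions
dt, days = 0.1, 120

for _ in range(int(days / dt)):   # forward-Euler integration
    new_inf = beta * S * I * dt
    new_rec = gamma * I * dt
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

print(f"final attack rate: {R:.3f}")
```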
Cost-effectiveness of renin-guided treatment of hypertension.
Smith, Steven M; Campbell, Jonathan D
2013-11-01
A plasma renin activity (PRA)-guided strategy is more effective than standard care in treating hypertension (HTN). However, its clinical implementation has been slow, presumably due in part to economic concerns. We estimated the cost effectiveness of a PRA-guided treatment strategy compared with standard care in a treated but uncontrolled HTN population. We estimated costs, quality-adjusted life years (QALYs), and the incremental cost-effectiveness ratio (ICER) of PRA-guided therapy compared with standard care using a state-transition simulation model with alternate patient characteristic scenarios and sensitivity analyses. Patient-specific inputs for the base-case scenario (males, average age 63 years) reflected the best available data from a recent clinical trial of PRA-guided therapy. Transition probabilities were estimated using Framingham risk equations or derived from the literature; costs and utilities were derived from the literature. In the base-case scenario for males, the lifetime discounted costs and QALYs were $23,648 and 12.727 for PRA-guided therapy and $22,077 and 12.618 for standard care, respectively. The base-case ICER was $14,497/QALY gained. In alternative scenario analyses varying patient input parameters, the results were sensitive to age, gender, baseline systolic blood pressure, and the addition of cardiovascular risk factors. Univariate sensitivity analyses demonstrated that results were most sensitive to varying the treatment effect of PRA-guided therapy and the cost of the PRA test. Our results suggest that PRA-guided therapy compared with standard care increases QALYs and medical costs in most scenarios. PRA-guided therapy appears to be most cost effective in younger persons and those with more cardiovascular risk factors. © American Journal of Hypertension, Ltd 2013. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
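The ICER reported above can be recomputed (to rounding) from the abstract's own cost and QALY figures: incremental cost divided by incremental QALYs.

```python
# Reproducing the base-case ICER from the reported figures.
cost_pra, cost_std = 23648.0, 22077.0
qaly_pra, qaly_std = 12.727, 12.618
icer = (cost_pra - cost_std) / (qaly_pra - qaly_std)
print(f"ICER ~ ${icer:,.0f}/QALY gained")
# ~ $14,413; the abstract reports $14,497, presumably from unrounded model outputs.
```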
Determining the Sensitivity of CAT-ASVAB Scores to Changes in Item Response Curves with the Medium of Administration
Divgi, D. R.
1986-08-01
CNA Research Memorandum 86-189. ...most examinees. Therefore it appears psychometrically acceptable for the CAT-ASVAB project to proceed without item recalibration based on...
Abstract Trichloroethylene (TCE) is an industrial chemical and an environmental contaminant. TCE and its metabolites may be carcinogenic and affect human health. Physiologically based pharmacokinetic (PBPK) models that differ in compartmentalization are developed for TCE metabo...
Mallinckrodt, C H; Lin, Q; Molenberghs, M
2013-01-01
The objective of this research was to demonstrate a framework for drawing inference from sensitivity analyses of incomplete longitudinal clinical trial data via a re-analysis of data from a confirmatory clinical trial in depression. A likelihood-based approach that assumed missing at random (MAR) was the primary analysis. Robustness to departures from MAR was assessed by comparing the primary result to those from a series of analyses that employed varying missing not at random (MNAR) assumptions (selection models, pattern mixture models and shared parameter models) and to MAR methods that used inclusive models. The key sensitivity analysis used multiple imputation assuming that, after dropout, the trajectory of drug-treated patients was that of placebo-treated patients with a similar outcome history (placebo multiple imputation). This result was used as the worst reasonable case to define the lower limit of plausible values for the treatment contrast. The endpoint contrast from the primary analysis was -2.79 (p = 0.013). In placebo multiple imputation, the result was -2.17. Results from the other sensitivity analyses ranged from -2.21 to -3.87 and were symmetrically distributed around the primary result. Hence, no clear evidence of bias from missing not at random data was found. In the worst reasonable case scenario, the treatment effect was 80% of the magnitude of the primary result. Therefore, it was concluded that a treatment effect existed. The structured sensitivity framework of using a worst reasonable case result based on a controlled imputation approach with transparent and debatable assumptions, supplemented by a series of plausible alternative models under varying assumptions, was useful in this specific situation and holds promise as a generally useful framework. Copyright © 2012 John Wiley & Sons, Ltd.
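As a quick check of the "80% of the magnitude" statement, the ratio of the worst-reasonable-case contrast to the primary contrast follows directly from the figures above (notation ours, not the paper's):

```latex
\frac{\hat{\Delta}_{\mathrm{pMI}}}{\hat{\Delta}_{\mathrm{MAR}}}
  = \frac{-2.17}{-2.79} \approx 0.78
```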
Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy
NASA Astrophysics Data System (ADS)
Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng
2018-06-01
To analyse the components of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of fuzzy inputs but also reflect, to a certain degree, the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, which provides a useful reference for engineering design and the optimization of structural systems.
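One plausible way to write the additive decomposition described above, with notation ours rather than the paper's: the total fuzzy output entropy splits into per-input components, which can be normalized into sensitivity indices.

```latex
H(Y) = \sum_{i=1}^{n} H_i , \qquad
S_i = \frac{H_i}{H(Y)} , \qquad
\sum_{i=1}^{n} S_i = 1 ,
```

where $H_i$ is the component of the fuzzy output entropy attributed to fuzzy input $X_i$ and $S_i$ is the corresponding normalized sensitivity index.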
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted through integrating the sensitivity components from each discipline of the coupled system. Numerical results verify accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
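The "complex-variable approach" used for the DYMORE structural sensitivities above is commonly the complex-step derivative, df/dx = Im(f(x + ih))/h, which is accurate to machine precision for tiny h because it involves no subtractive cancellation. A minimal sketch with an arbitrary stand-in function (not the rotorcraft model):

```python
# Complex-step differentiation: a standard technique for verifying
# adjoint-based sensitivities, illustrated on a stand-in analytic function.
import cmath

def complex_step_derivative(f, x, h=1e-30):
    # Im(f(x + ih)) / h approximates f'(x) with no cancellation error.
    return (f(x + 1j * h)).imag / h

f = lambda x: cmath.exp(x) / cmath.sqrt(cmath.sin(x) ** 3 + cmath.cos(x) ** 3)
print(complex_step_derivative(f, 1.5))  # matches the analytic derivative to ~16 digits
```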
Mens, Petra F; Matelon, Raphael J; Nour, Bakri Y M; Newman, Dave M; Schallig, Henk D F H
2010-07-19
This study describes the laboratory evaluation of a novel diagnostic platform for malaria. The Magneto Optical Test (MOT) is based on the bio-physical detection of haemozoin in clinical samples. Having an assay time of around one minute, it offers the potential of high-throughput screening. Blood samples of confirmed malaria patients from different regions of Africa, patients with other diseases and healthy non-endemic controls were used in the present study. The samples were analysed with two reference tests, i.e. a histidine-rich protein-2-based rapid diagnostic test (RDT) and a conventional Pan-Plasmodium PCR, and with the MOT as index test. Data were entered in 2 x 2 tables and analysed for sensitivity and specificity. The agreement between microscopy, RDT and PCR and the MOT assay was determined by calculating Kappa values with a 95% confidence interval. The observed sensitivity/specificity of the MOT test in comparison with clinical description, RDT or PCR ranged from 77.2 - 78.8% (sensitivity) and from 72.5 - 74.6% (specificity). In general, the agreement between MOT and the other assays is around 0.5, indicating moderate agreement between the reference and the index test. However, when RDT and PCR are compared to each other, an almost perfect agreement can be observed (k = 0.97) with a sensitivity and specificity of >95%. Although MOT sensitivity and specificity are currently not yet at a competitive level compared with other diagnostic tests, such as PCR and RDTs, it has the potential to rapidly screen patients for malaria in endemic as well as non-endemic countries.
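The 2 x 2 table statistics used above (sensitivity, specificity, Cohen's kappa) are straightforward to compute; a small sketch with placeholder counts, not the study's data:

```python
# Sensitivity, specificity, and Cohen's kappa from a 2x2 table
# (index test in rows, reference test in columns; placeholder counts).
def two_by_two_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    p_obs = (tp + tn) / n
    # expected agreement under chance, from the marginal totals
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sens, spec, kappa

print(two_by_two_stats(tp=78, fp=27, fn=22, tn=73))
```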
Validity of self-reported stroke in elderly African Americans, Caribbean Hispanics, and Whites.
Reitz, Christiane; Schupf, Nicole; Luchsinger, José A; Brickman, Adam M; Manly, Jennifer J; Andrews, Howard; Tang, Ming X; DeCarli, Charles; Brown, Truman R; Mayeux, Richard
2009-07-01
The validity of a self-reported stroke remains inconclusive. To validate the diagnosis of self-reported stroke using stroke identified by magnetic resonance imaging (MRI) as the standard. Community-based cohort study of nondemented, ethnically diverse elderly persons in northern Manhattan. High-resolution quantitative MRIs were acquired for 717 participants without dementia. Sensitivity and specificity of stroke by self-report were examined using cross-sectional analyses and the χ2 test. Putative relationships between factors potentially influencing the reporting of stroke, including memory performance, cognitive function, and vascular risk factors, were assessed using logistic regression models. Subsequently, all analyses were repeated, stratified by age, sex, ethnic group, and level of education. In analyses of the whole sample, sensitivity of stroke self-report for a diagnosis of stroke on MRI was 32.4%, and specificity was 78.9%. In analyses stratified by median age (80.1 years), the validity between reported stroke and detection of stroke on MRI was significantly better in the younger than the older age group (for all vascular territories: sensitivity and specificity, 36.7% and 81.3% vs 27.6% and 26.2%; P = .02). Impaired memory, cognitive skills, or language ability and the presence of hypertension or myocardial infarction were associated with higher rates of false-negative results. Using brain MRI as the standard, specificity and sensitivity of stroke self-report are low. Accuracy of self-report is influenced by age, presence of vascular disease, and cognitive function. In stroke research, sensitive neuroimaging techniques rather than stroke self-report should be used to determine stroke history.
Using computer-based video analysis in the study of fidgety movements.
Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild
2009-09-01
Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of a computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed from video material of the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using the developed software, called General Movement Toolbox (GMT), with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in displacement of the spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
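A rough sketch of the frame-differencing idea behind such pixel-displacement features: threshold the difference between consecutive grayscale frames, locate the centroid of "active" pixels, and measure its spatial variability (the best-performing feature above). This is an interpretation of the described approach, not GMT's actual implementation; the threshold is illustrative.

```python
# Frame-differencing motion features: variability of the centroid of
# displaced pixels between consecutive frames (illustrative threshold).
import numpy as np

def centroid_variability(frames, thresh=15):
    centroids = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        active = np.abs(curr.astype(int) - prev.astype(int)) > thresh
        ys, xs = np.nonzero(active)
        if len(xs):
            centroids.append((xs.mean(), ys.mean()))
    # std of the centroid position over time, per axis
    return np.array(centroids).std(axis=0)

# frames: list of 2-D grayscale arrays, e.g. decoded from a video file
```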
NASA Astrophysics Data System (ADS)
Li, Qingyun; Karnowski, Karol; Villiger, Martin; Sampson, David D.
2017-04-01
A fibre-based full-range polarisation-sensitive optical coherence tomography system is developed to enable complete capture of the structural and birefringence properties of the anterior segment of the human eye in a single acquisition. The system uses a wavelength swept source centered at 1.3 μm, passively depth-encoded, orthogonal polarisation states in the illumination path and polarisation-diversity detection. Off-pivot galvanometer scanning is used to extend the imaging range and compensate for sensitivity drop-off. A Mueller matrix-based method is used to analyse data. We demonstrate the performance of the system and discuss issues relating to its optimisation.
A value-based medicine cost-utility analysis of idiopathic epiretinal membrane surgery.
Gupta, Omesh P; Brown, Gary C; Brown, Melissa M
2008-05-01
To perform a reference case, cost-utility analysis of epiretinal membrane (ERM) surgery using current literature on outcomes and complications. Computer-based, value-based medicine analysis. Decision analyses were performed under two scenarios: ERM surgery in better-seeing eye and ERM surgery in worse-seeing eye. The models applied long-term published data primarily from the Blue Mountains Eye Study and the Beaver Dam Eye Study. Visual acuity and major complications were derived from 25-gauge pars plana vitrectomy studies. Patient-based, time trade-off utility values, Markov modeling, sensitivity analysis, and net present value adjustments were used in the design and calculation of results. Main outcome measures included the number of discounted quality-adjusted-life-years (QALYs) gained and dollars spent per QALY gained. ERM surgery in the better-seeing eye compared with observation resulted in a mean gain of 0.755 discounted QALYs (3% annual rate) per patient treated. This model resulted in $4,680 per QALY for this procedure. When sensitivity analysis was performed, utility values varied from $6,245 to $3,746/QALY gained, medical costs varied from $3,510 to $5,850/QALY gained, and ERM recurrence rate increased to $5,524/QALY. ERM surgery in the worse-seeing eye compared with observation resulted in a mean gain of 0.27 discounted QALYs per patient treated. The $/QALY was $16,146 with a range of $20,183 to $12,110 based on sensitivity analyses. Utility values ranged from $21,520 to $12,916/QALY and ERM recurrence rate increased to $16,846/QALY based on sensitivity analysis. ERM surgery is a very cost-effective procedure when compared with other interventions across medical subspecialties.
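The discounting step implied by "discounted QALYs (3% annual rate)" above divides each future year's utility gain by (1 + r)^t before summing; a minimal sketch with illustrative inputs, not the study's data:

```python
# Discounted QALY gain and cost per QALY (all numbers illustrative).
def discounted_qalys(annual_utility_gain, years, rate=0.03):
    return sum(annual_utility_gain / (1 + rate) ** t for t in range(1, years + 1))

gain = discounted_qalys(0.06, 20)     # hypothetical 0.06 utility gain over 20 years
print(round(gain, 3), "discounted QALYs")
print(round(3500 / gain), "$/QALY for an illustrative $3,500 total cost")
```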
A Methodological Review of US Budget-Impact Models for New Drugs.
Mauskopf, Josephine; Earnshaw, Stephanie
2016-11-01
A budget-impact analysis is required by many jurisdictions when adding a new drug to the formulary. However, previous reviews have indicated that adherence to methodological guidelines is variable. In this methodological review, we assess the extent to which US budget-impact analyses for new drugs use recommended practices. We describe recommended practice for seven key elements in the design of a budget-impact analysis. Targeted literature searches for US studies reporting estimates of the budget impact of a new drug were performed and we prepared a summary of how each study addressed the seven key elements. The primary finding from this review is that recommended practice is not followed in many budget-impact analyses. For example, we found that growth in the treated population size and/or changes in disease-related costs expected during the model time horizon for more effective treatments was not included in several analyses for chronic conditions. In addition, all drug-related costs were not captured in the majority of the models. Finally, for most studies, one-way sensitivity and scenario analyses were very limited, and the ranges used in one-way sensitivity analyses were frequently arbitrary percentages rather than being data driven. The conclusions from our review are that changes in population size, disease severity mix, and/or disease-related costs should be properly accounted for to avoid over- or underestimating the budget impact. Since each budget holder might have different perspectives and different values for many of the input parameters, it is also critical for published budget-impact analyses to include extensive sensitivity and scenario analyses based on realistic input values.
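The elements the review flags, growth in the treated population and the changing treatment mix over the horizon, can be captured in a very small calculation; the sketch below is a generic illustration with placeholder numbers, not any reviewed model.

```python
# Minimal budget-impact sketch: annual incremental spend from a growing
# treated population and rising uptake of a costlier new drug (placeholders).
def budget_impact(pop0, growth, years, cost_current, cost_new, uptake_by_year):
    impact = []
    pop = pop0
    for yr in range(years):
        uptake = uptake_by_year[yr]
        blended = uptake * cost_new + (1 - uptake) * cost_current
        impact.append(pop * (blended - cost_current))
        pop *= 1 + growth
    return impact

print(budget_impact(10_000, 0.05, 3, 8_000, 12_000, [0.10, 0.20, 0.30]))
```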
Vegter, Stefan; Boersma, Cornelis; Rozenbaum, Mark; Wilffert, Bob; Navis, Gerjan; Postma, Maarten J
2008-01-01
The fields of pharmacogenetics and pharmacogenomics have become important practical tools for advancing goals in medical and pharmaceutical research and development. As more screening tests are being developed, with some already used in clinical practice, consideration of cost-effectiveness implications is important. A systematic review was performed on the content of and adherence to pharmacoeconomic guidelines of recent pharmacoeconomic analyses performed in the field of pharmacogenetics and pharmacogenomics. Economic analyses of screening strategies for genetic variations, which were evidence-based and assumed to be associated with drug efficacy or safety, were included in the review. The 20 papers included cover a variety of healthcare issues, including screening tests for several cytochrome P450 (CYP) enzyme genes, thiopurine S-methyltransferase (TPMT) and angiotensin-converting enzyme insertion/deletion (ACE I/D) polymorphisms. Most economic analyses reported that genetic screening was cost effective and often even clearly dominated existing non-screening strategies. However, we found a lack of standardization regarding aspects such as the perspective of the analysis, factors included in the sensitivity analysis and the applied discount rates. In particular, an important limitation of several studies related to the failure to provide a sufficient evidence-based rationale for an association between genotype and phenotype. Future economic analyses should be conducted utilizing correct methods, with adherence to guidelines and including extensive sensitivity analyses. Most importantly, genetic screening strategies should be based on good evidence-based rationales. For these goals, we provide a list of recommendations for good pharmacoeconomic practice deemed useful in the fields of pharmacogenetics and pharmacogenomics, regardless of country and origin of the economic analysis.
NASA Technical Reports Server (NTRS)
Eder, D.
1992-01-01
Parametric models were constructed for Earth-based laser-powered electric orbit transfer from low Earth orbit to geosynchronous orbit. These models were used to carry out performance, cost/benefit, and sensitivity analyses of laser-powered transfer systems, including end-to-end life cycle cost analyses for complete systems. Comparisons with conventional orbit transfer systems were made, indicating large potential cost savings for laser-powered transfer. Approximate optimization was done to determine the best parameter values for the systems. Orbit transfer flight simulations were conducted to explore effects of parameters not practical to model with a spreadsheet. The simulations considered view factors that determine when power can be transferred from ground stations to an orbit transfer vehicle, and included sensitivity analyses for the number of ground stations, Isp (including dual-Isp transfers), and plane change profiles. Optimal steering laws were used for simultaneous altitude and plane change. Viewing geometry and low-thrust orbit raising were simultaneously simulated. A very preliminary investigation of relay mirrors was made.
Assessing the dependence of sensitivity and specificity on prevalence in meta-analysis
Li, Jialiang; Fine, Jason P.
2011-01-01
We consider modeling the dependence of sensitivity and specificity on the disease prevalence in diagnostic accuracy studies. Many meta-analyses compare test accuracy across studies and fail to incorporate the possible connection between the accuracy measures and the prevalence. We propose a Pearson-type correlation coefficient and an estimating equation-based regression framework to help understand such a practical dependence. The results we derive may then be used to better interpret the results from meta-analyses. In the biomedical examples analyzed in this paper, the diagnostic accuracy of biomarkers is shown to be associated with prevalence, providing insights into the utility of these biomarkers in low- and high-prevalence populations. PMID:21525421
NASA Astrophysics Data System (ADS)
Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.
2018-04-01
Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. By using automatic extraction algorithms, various environmental resource information for a tourist region can be obtained from remote sensing imagery; combined with GIS spatial analysis and landscape pattern analysis, this information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. The automatic ELM (Extreme Learning Machine) classification results were integrated with digital elevation model and slope data, and human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index were used as evaluation factors to construct an eco-sensitivity evaluation index based on AHP and overlay analysis. According to the value of the eco-sensitivity evaluation index, and using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive and insensitive areas. The eco-sensitivity analysis shows that the very sensitive area covered 4577.4378 km2, accounting for about 33.12%; the sensitive area covered 5130.0522 km2, accounting for about 37.12%; the sub-sensitive area covered 3729.9312 km2, accounting for 26.99%; and the insensitive area covered 382.4399 km2, accounting for about 2.77%. At the same time, spatial differences in ecological sensitivity were found across the Chaohu Lake basin. The most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed in gently sloping platform areas; the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and sustainable development of the Chaohu Lake tourist area.
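The AHP-weighted overlay and equal-interval reclassification steps described above reduce to a weighted sum of normalised raster layers followed by binning; a minimal sketch with placeholder weights and random layers standing in for the study's factors:

```python
# AHP-style weighted overlay and equal-interval reclassification
# (weights and layers are placeholders, not the study's values).
import numpy as np

def eco_sensitivity_index(layers, weights):
    layers = [(l - l.min()) / (l.max() - l.min()) for l in layers]  # normalise 0-1
    return sum(w * l for w, l in zip(weights, layers))

def reclassify_equal_interval(index, n_classes=4):
    edges = np.linspace(index.min(), index.max(), n_classes + 1)
    return np.digitize(index, edges[1:-1])  # 0..3: insensitive .. very sensitive

rng = np.random.default_rng(0)
layers = [rng.random((100, 100)) for _ in range(4)]
grades = reclassify_equal_interval(eco_sensitivity_index(layers, [0.4, 0.3, 0.2, 0.1]))
```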
Jozaghi, Ehsan; Jackson, Asheka
2015-01-01
Background: Research predicting the public health and fiscal impact of Supervised Injection Facilities (SIFs) across different cities in Canada has reported positive results on the reduction of HIV cases among People Who Inject Drugs (PWID). Most of the existing studies have focused on the outcomes of Insite, located in the Vancouver Downtown Eastside (DTES). Previous attention has not been afforded to other affected areas of Canada. The current study seeks to address this deficiency by assessing the cost-effectiveness of opening a SIF in Saskatoon, Saskatchewan. Methods: We used two different mathematical models commonly used in the literature, including sensitivity analyses, to estimate the number of HIV infections averted due to the establishment of a SIF in the city of Saskatoon, Saskatchewan. Results: Based on cumulative cost-effectiveness results, SIF establishment is cost-effective. The benefit-to-cost ratio was conservatively estimated to be 1.35 for the first two potential facilities. The sensitivity analyses relied on needle-sharing rates of 34% and 14%. The results for both sensitivity analyses and the baseline estimates indicated positive prospects for the establishment of a SIF in Saskatoon. Conclusion: The opening of a SIF in Saskatoon, Saskatchewan is financially prudent, reducing taxpayers' expenses and averting HIV infections among PWID. PMID:26029896
Multianalyte biosensor based on pH-sensitive ZnO electrolyte–insulator–semiconductor structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haur Kao, Chyuan; Chun Liu, Che; Ueng, Herng-Yih
2014-05-14
Multianalyte electrolyte–insulator–semiconductor (EIS) sensors with a ZnO sensing membrane annealed on a silicon substrate were fabricated for use in pH sensing. Material analyses were conducted using X-ray diffraction and atomic force microscopy to identify optimal treatment conditions. Sensing performance for Na+, K+, urea, and glucose was also tested. Results indicate that an EIS sensor with a ZnO membrane annealed at 600 °C exhibited good performance, with high sensitivity and a low drift rate compared with all other reported ZnO-based pH sensors. Furthermore, based on these well-established pH sensing properties, pH-ion-sensitive field-effect transistor sensors have also been developed for use in detecting urea and glucose. ZnO-based EIS sensors show promise for future industrial biosensing applications.
Lisiak, M; Kłyszejko, C; Marcinkowski, Z; Gwieździński, Z
2000-09-01
The purpose of the study was to analyse the sensitivity of 73 randomly selected Candida albicans strains, isolated from the vagina of pregnant and delivering women, to seven basic antimycotics. The FUNGITEST microtest (Sanofi Diagnostics Pasteur) was applied to assess sensitivity to 5-fluorocytosine, amphotericin B, ketoconazole, fluconazole, itraconazole and miconazole, and the disk-diffusion method with a Casitone base was used for nystatin. Sensitivity to the drugs varied between individual strains of the Candida albicans species. The largest number of strains was resistant to ketoconazole (56.16%), and only 10.96% were resistant to nystatin.
Lau, Brian C; Collins, Michael W; Lovell, Mark R
2011-06-01
Concussions affect an estimated 136,000 high school athletes yearly. Computerized neurocognitive testing has been shown to be appropriately sensitive and specific in diagnosing concussions, but no studies have assessed its utility to predict length of recovery. Determining prognosis during subacute recovery after sports concussion will help clinicians more confidently address return-to-play and academic decisions. To quantify the prognostic ability of computerized neurocognitive testing in combination with symptoms during the subacute recovery phase from sports-related concussion. Cohort study (prognosis); Level of evidence, 2. In total, 108 male high school football athletes completed a computer-based neurocognitive test battery within 2.23 days of injury and were followed until they returned to play as set by international guidelines. Athletes were grouped into protracted recovery (>14 days; n = 50) or short recovery (≤14 days; n = 58). Separate discriminant function analyses were performed using the total symptom score on the Post-Concussion Symptom Scale, symptom clusters (migraine, cognitive, sleep, neuropsychiatric), and Immediate Post-Concussion Assessment and Cognitive Testing neurocognitive scores (verbal memory, visual memory, reaction time, processing speed). Multiple discriminant function analyses revealed that the combination of the 4 symptom clusters and 4 neurocognitive composite scores had the highest sensitivity (65.22%), specificity (80.36%), positive predictive value (73.17%), and negative predictive value (73.80%) in predicting protracted recovery. Discriminant function analyses of total symptoms on the Post-Concussion Symptom Scale alone had a sensitivity of 40.81%; specificity, 79.31%; positive predictive value, 62.50%; and negative predictive value, 61.33%. The 4 symptom clusters alone had a sensitivity of 46.94%; specificity, 77.20%; positive predictive value, 63.90%; and negative predictive value, 62.86%. Discriminant function analyses of the 4 computerized neurocognitive scores alone had a sensitivity of 53.20%; specificity, 75.44%; positive predictive value, 64.10%; and negative predictive value, 66.15%. The use of computerized neurocognitive testing in conjunction with symptom clusters improves the sensitivity, specificity, positive predictive value, and negative predictive value of predicting protracted recovery compared with each used alone. There is also a net increase in sensitivity of 24.41% when using neurocognitive testing and symptom clusters together compared with using total symptoms on the Post-Concussion Symptom Scale alone.
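The discriminant-function workflow above (fit a linear discriminant on symptom and neurocognitive scores, then report sensitivity, specificity, PPV, and NPV) can be sketched as follows; the data are synthetic stand-ins, not the study's measurements.

```python
# Linear discriminant analysis with confusion-matrix metrics (synthetic data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(108, 8))                  # 4 symptom clusters + 4 composites
# contrived outcome with some signal so both classes are predicted:
y = (X[:, 0] + X[:, 1] + rng.normal(scale=1.0, size=108) > 0).astype(int)
pred = LinearDiscriminantAnalysis().fit(X, y).predict(X)

tp = ((pred == 1) & (y == 1)).sum(); tn = ((pred == 0) & (y == 0)).sum()
fp = ((pred == 1) & (y == 0)).sum(); fn = ((pred == 0) & (y == 1)).sum()
print(f"sens={tp/(tp+fn):.2f} spec={tn/(tn+fp):.2f} "
      f"ppv={tp/(tp+fp):.2f} npv={tn/(tn+fn):.2f}")
```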
Anderson, Craig A; Shibuya, Akiko; Ihori, Nobuko; Swing, Edward L; Bushman, Brad J; Sakamoto, Akira; Rothstein, Hannah R; Saleem, Muniba
2010-03-01
Meta-analytic procedures were used to test the effects of violent video games on aggressive behavior, aggressive cognition, aggressive affect, physiological arousal, empathy/desensitization, and prosocial behavior. Unique features of this meta-analytic review include (a) more restrictive methodological quality inclusion criteria than in past meta-analyses; (b) cross-cultural comparisons; (c) longitudinal studies for all outcomes except physiological arousal; (d) conservative statistical controls; (e) multiple moderator analyses; and (f) sensitivity analyses. Social-cognitive models and cultural differences between Japan and Western countries were used to generate theory-based predictions. Meta-analyses yielded significant effects for all 6 outcome variables. The pattern of results for different outcomes and research designs (experimental, cross-sectional, longitudinal) fit theoretical predictions well. The evidence strongly suggests that exposure to violent video games is a causal risk factor for increased aggressive behavior, aggressive cognition, and aggressive affect and for decreased empathy and prosocial behavior. Moderator analyses revealed significant research design effects, weak evidence of cultural differences in susceptibility and type of measurement effects, and no evidence of sex differences in susceptibility. Results of various sensitivity analyses revealed these effects to be robust, with little evidence of selection (publication) bias.
Missing data in trial‐based cost‐effectiveness analysis: An incomplete journey
Gomes, Manuel; Carpenter, James R.
2018-01-01
Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption. PMID:29573044
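The multiple-imputation approach favoured above pools results across imputed datasets with Rubin's rules; a minimal sketch, with the imputation step simplified to noisy mean imputation purely for illustration (a real analysis would draw from a posterior predictive model):

```python
# Multiple imputation with Rubin's rules for a simple cost statistic.
import numpy as np

rng = np.random.default_rng(2)
costs = rng.normal(1000, 200, size=200)
costs[rng.random(200) < 0.3] = np.nan          # ~30% missing, as is common above

m = 20
estimates, variances = [], []
obs = costs[~np.isnan(costs)]
for _ in range(m):
    imputed = costs.copy()
    n_miss = int(np.isnan(costs).sum())
    imputed[np.isnan(costs)] = rng.normal(obs.mean(), obs.std(), size=n_miss)
    estimates.append(imputed.mean())
    variances.append(imputed.var(ddof=1) / len(imputed))

q_bar = np.mean(estimates)                      # pooled point estimate
w = np.mean(variances)                          # within-imputation variance
b = np.var(estimates, ddof=1)                   # between-imputation variance
total_var = w + (1 + 1 / m) * b                 # Rubin's rules
print(q_bar, total_var ** 0.5)
```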
NASA Astrophysics Data System (ADS)
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices expressed in terms of the Pearson correlation, which determine the importance of each member for the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. The estimation of the Pearson correlation coefficient between variables is also analysed. The results indicate that the Pearson correlation coefficient is an efficient method for analysing the vehicle suspension, leading to a better design of the Macpherson suspension system.
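The correlation-based sensitivity measure above amounts to sampling the design parameters, evaluating the output of interest, and ranking inputs by |r|; a minimal sketch in which the response function is an arbitrary stand-in, not the suspension model, and the parameter names are hypothetical:

```python
# Pearson-correlation sensitivity ranking over sampled inputs.
import numpy as np

rng = np.random.default_rng(3)
n = 5000
X = rng.uniform(0.8, 1.2, size=(n, 3))       # e.g. stiffness, damping, arm length
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * X[:, 2] ** 2 + rng.normal(0, 0.05, n)

for name, col in zip(["stiffness", "damping", "arm_length"], X.T):
    r = np.corrcoef(col, y)[0, 1]
    print(f"{name}: r = {r:+.3f}")           # larger |r| -> more influential input
```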
Instrumentation and Performance Analysis Plans for the HIFiRE Flight 2 Experiment
NASA Technical Reports Server (NTRS)
Gruber, Mark; Barhorst, Todd; Jackson, Kevin; Eklund, Dean; Hass, Neal; Storch, Andrea M.; Liu, Jiwen
2009-01-01
Supersonic combustion performance of a bi-component gaseous hydrocarbon fuel mixture is one of the primary aspects under investigation in the HIFiRE Flight 2 experiment. In-flight instrumentation and post-test analyses will be two key elements used to determine the combustion performance. Pre-flight computational fluid dynamics (CFD) analyses provide valuable information that can be used to optimize the placement of a constrained set of wall pressure instrumentation in the experiment. The simulations also allow pre-flight assessments of performance sensitivities leading to estimates of overall uncertainty in the determination of combustion efficiency. Based on the pre-flight CFD results, 128 wall pressure sensors have been located throughout the isolator/combustor flowpath to minimize the error in determining the wall pressure force at Mach 8 flight conditions. Also, sensitivity analyses show that mass capture and combustor exit stream thrust are the two primary contributors to uncertainty in combustion efficiency.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
Kee, C C; Jamaiyah, H; Geeta, A; Ali, Z Ahmad; Safiza, M N Noor; Suzana, S; Khor, G L; Rahmah, R; Jamalludin, A R; Sumarni, M G; Lim, K H; Faudzi, Y Ahmad; Amal, N M
2011-12-01
Generalised obesity and central obesity are risk factors for Type II diabetes mellitus and cardiovascular diseases. Waist circumference (WC) has been suggested as a single screening tool for identification of overweight or obese subjects in lieu of the body mass index (BMI) for weight management in public health programs. Currently, the recommended waist circumference cut-off points of ≥94 cm for men and ≥80 cm for women (waist action level 1) and ≥102 cm for men and ≥88 cm for women (waist action level 2) used for identification of overweight and obesity are based on studies in Caucasian populations. The objective of this study was to assess the sensitivity and specificity of the recommended waist action levels, and to determine optimal WC cut-off points for identification of overweight or obesity with central fat distribution based on BMI for Malaysian adults. Data from 32,773 subjects (14,982 men and 17,791 women) aged 18 and above who participated in the Third National Health and Morbidity Survey in 2006 were analysed. Sensitivity and specificity of WC at waist action level 1 were 48.3% and 97.5% for men, and 84.2% and 80.6% for women, when compared with cut-off points based on BMI ≥25 kg/m2. At waist action level 2, sensitivity and specificity were 52.4% and 98.0% for men, and 79.2% and 85.4% for women, when compared with cut-off points based on BMI ≥30 kg/m2. Receiver operating characteristic analyses showed that the appropriate screening cut-off point for WC to identify subjects with overweight (BMI ≥25 kg/m2) was 86.0 cm (sensitivity = 83.6%, specificity = 82.5%) for men and 79.1 cm (sensitivity = 85.0%, specificity = 79.5%) for women. The waist circumference cut-off point to identify obese subjects (BMI ≥30 kg/m2) was 93.2 cm (sensitivity = 86.5%, specificity = 85.7%) for men and 85.2 cm (sensitivity = 77.9%, specificity = 78.0%) for women. Our findings demonstrate that the currently recommended waist circumference cut-off points have low sensitivity for identification of overweight and obesity in men. We suggest that these newly identified cut-off points be considered.
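A common way to pick such ROC-based cut-offs is to maximise Youden's J = sensitivity + specificity - 1 over candidate thresholds; a minimal sketch with synthetic data standing in for the survey measurements:

```python
# ROC-based optimal cut-off via Youden's J (synthetic data).
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(4)
wc = np.r_[rng.normal(95, 10, 500), rng.normal(80, 10, 500)]  # obese / non-obese
obese = np.r_[np.ones(500), np.zeros(500)]

fpr, tpr, thresholds = roc_curve(obese, wc)
j = tpr - fpr                                  # Youden's J at each threshold
best = np.argmax(j)
print(f"optimal cut-off ~ {thresholds[best]:.1f} cm "
      f"(sens={tpr[best]:.2f}, spec={1 - fpr[best]:.2f})")
```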
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
NASA Astrophysics Data System (ADS)
Park, J.; Yoo, K.
2013-12-01
For groundwater resource conservation, it is important to accurately assess groundwater pollution sensitivity or vulnerability. In this work, we attempted to use a data mining approach to assess groundwater pollution vulnerability at a TCE (trichloroethylene)-contaminated Korean industrial site. The conventional DRASTIC method failed to describe the TCE sensitivity data, with a poor correlation with hydrogeological properties. Among the data mining methods compared, Artificial Neural Network (ANN), Multiple Logistic Regression (MLR), Case-Based Reasoning (CBR), and Decision Tree (DT), the DT showed the best accuracy and consistency. According to subsequent tree analyses with the optimal DT model, the failure of the conventional DRASTIC method to fit the TCE sensitivity data may be due to the use of inaccurate weight values for the hydrogeological parameters of the study site. These findings provide a proof of concept that a DT-based data mining approach can be used for prediction and rule induction of groundwater TCE sensitivity without pre-existing information on the weights of hydrogeological properties.
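The decision-tree alternative to fixed DRASTIC weights, letting the tree learn splits on hydrogeological attributes from observed sensitivity classes, can be sketched as follows. The feature names follow the DRASTIC acronym and the data are synthetic placeholders with a contrived rule, not the site's measurements.

```python
# Decision tree learning site-specific splits instead of fixed DRASTIC weights.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
features = ["depth_to_water", "recharge", "aquifer_media", "soil_media",
            "topography", "impact_vadose", "conductivity"]
X = rng.random((300, 7))
y = (X[:, 0] < 0.4).astype(int)    # contrived rule: shallow water table -> sensitive

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=features))   # human-readable induced rules
```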
Effectiveness of a worksite mindfulness-based multi-component intervention on lifestyle behaviors
2014-01-01
Introduction: Overweight and obesity are associated with an increased risk of morbidity. Mindfulness training could be an effective strategy to optimize lifestyle behaviors related to body weight gain. The aim of this study was to evaluate the effectiveness of a worksite mindfulness-based multi-component intervention on vigorous physical activity in leisure time, sedentary behavior at work, fruit intake and determinants of these behaviors. Methods: In a randomized controlled trial design (n = 257), 129 workers received mindfulness training, followed by e-coaching, lunch walking routes and fruit; the control group received information on existing lifestyle behavior-related facilities that were already available at the worksite. Outcome measures were assessed at baseline and after 6 and 12 months using questionnaires. Physical activity was also measured using accelerometers. Effects were analyzed using linear mixed effect models according to the intention-to-treat principle. Linear regression models (complete case analyses) were used as sensitivity analyses. Results: There were no significant differences in lifestyle behaviors and determinants of these behaviors between the intervention and control group after 6 or 12 months. The sensitivity analyses showed effect modification for gender in sedentary behavior at work at 6-month follow-up, although the main analyses did not. Conclusions: This study did not show an effect of a worksite mindfulness-based multi-component intervention on lifestyle behaviors and behavioral determinants after 6 and 12 months. The effectiveness of a worksite mindfulness-based multi-component intervention as a health promotion intervention for all workers could not be established. PMID:24467802
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
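A minimal sketch of the multivariate idea above, assuming a toy two-state, two-action MDP rather than a clinical model: sample the uncertain transition probabilities, re-solve the MDP by value iteration each time, and report how often the base-case optimal policy is reproduced (a scalar analogue of the policy acceptability curve).

```python
# Probabilistic sensitivity analysis of an MDP's optimal policy (toy model).
import numpy as np

def solve_mdp(P, R, gamma=0.95, iters=500):
    """Value iteration; P[a, s, s'] transition probs, R[a, s] rewards."""
    V = np.zeros(P.shape[1])
    for _ in range(iters):
        Q = R + gamma * P @ V          # Q[a, s]
        V = Q.max(axis=0)
    return Q.argmax(axis=0)            # optimal action per state

rng = np.random.default_rng(6)
R = np.array([[1.0, 0.0], [0.8, 0.3]])
base_P = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.5, 0.5]]])
base_policy = solve_mdp(base_P, R)

agree = 0
for _ in range(1000):
    P = np.empty_like(base_P)
    for a in range(2):
        for s in range(2):
            # Dirichlet draw centred on the base-case transition row
            P[a, s] = rng.dirichlet(base_P[a, s] * 50)
    agree += (solve_mdp(P, R) == base_policy).all()
print(f"policy acceptability ~ {agree / 1000:.2f}")
```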
Herring, William; Pearson, Isobel; Purser, Molly; Nakhaipour, Hamid Reza; Haiderali, Amin; Wolowacz, Sorrel; Jayasundara, Kavisha
2016-01-01
Our objective was to estimate the cost effectiveness of ofatumumab plus chlorambucil (OChl) versus chlorambucil in patients with chronic lymphocytic leukaemia for whom fludarabine-based therapies are considered inappropriate from the perspective of the publicly funded healthcare system in Canada. A semi-Markov model (3-month cycle length) used survival curves to govern progression-free survival (PFS) and overall survival (OS). Efficacy and safety data and health-state utility values were estimated from the COMPLEMENT-1 trial. Post-progression treatment patterns were based on clinical guidelines, Canadian treatment practices and published literature. Total and incremental expected lifetime costs (in Canadian dollars [$Can], year 2013 values), life-years and quality-adjusted life-years (QALYs) were computed. Uncertainty was assessed via deterministic and probabilistic sensitivity analyses. The discounted lifetime health and economic outcomes estimated by the model showed that, compared with chlorambucil, first-line treatment with OChl led to an increase in QALYs (0.41) and total costs ($Can27,866) and to an incremental cost-effectiveness ratio (ICER) of $Can68,647 per QALY gained. In deterministic sensitivity analyses, the ICER was most sensitive to the modelling time horizon and to the extrapolation of OS treatment effects beyond the trial duration. In probabilistic sensitivity analysis, the probability of cost effectiveness at a willingness-to-pay threshold of $Can100,000 per QALY gained was 59 %. Base-case results indicated that improved overall response and PFS for OChl compared with chlorambucil translated to improved quality-adjusted life expectancy. Sensitivity analysis suggested that OChl is likely to be cost effective subject to uncertainty associated with the presence of any long-term OS benefit and the model time horizon.
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
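The bootstrap-based probabilistic sensitivity analysis described above resamples patient-level cost and effect pairs, recomputes the incremental results each time, and summarises the resulting distribution; a minimal sketch with synthetic arm-level data standing in for the H. pylori model inputs:

```python
# Bootstrap probabilistic sensitivity analysis on the cost-effectiveness plane.
import numpy as np

rng = np.random.default_rng(7)
cost_a, eff_a = rng.gamma(2, 500, 300), rng.normal(0.70, 0.1, 300)
cost_b, eff_b = rng.gamma(2, 450, 300), rng.normal(0.67, 0.1, 300)

pairs = []
for _ in range(2000):
    ia = rng.integers(0, 300, 300)             # bootstrap resample per arm
    ib = rng.integers(0, 300, 300)
    d_cost = cost_a[ia].mean() - cost_b[ib].mean()
    d_eff = eff_a[ia].mean() - eff_b[ib].mean()
    pairs.append((d_cost, d_eff))

nb = [50_000 * de - dc for dc, de in pairs]    # net benefit at a $50k threshold
print(f"P(cost-effective at $50k) ~ {np.mean(np.array(nb) > 0):.2f}")
```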
Development of a portable petroleum by-products chemical sensor, phase 1 and 2 report.
DOT National Transportation Integrated Search
2006-07-31
We have proposed to tailor-design nanoparticle-based chemical sensors for the sensitive, selective and field-portable analysis of soil samples for petroleum-spill-indicating hydrocarbons (such as benzene, toluene, ethyl-benzenes, xylenes, PCBs, trich...
Sensitivity analyses for sparse-data problems using weakly informative Bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
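The shrinkage behaviour described above can be illustrated with a MAP estimate under a weakly informative Normal prior on the log odds ratio; a minimal sketch, assuming a toy likelihood that conditions on the unexposed arm, with illustrative counts rather than the head-and-neck cancer data (a prior SD of 1.5 roughly says most ORs lie between ~1/20 and 20):

```python
# Weakly informative prior shrinking a sparse-data log odds ratio (toy model).
import numpy as np
from scipy.optimize import minimize_scalar

# sparse 2x2 table: exposed cases/noncases (a, b), unexposed cases/noncases (c, d)
a, b, c, d = 4, 1, 10, 20

def neg_log_posterior(log_or, prior_sd=1.5):
    # toy likelihood: hold the unexposed log odds fixed at log(c/d)
    p1 = 1 / (1 + np.exp(-(np.log(c / d) + log_or)))
    loglik = a * np.log(p1) + b * np.log(1 - p1)
    logprior = -0.5 * (log_or / prior_sd) ** 2       # Normal(0, 1.5^2) prior
    return -(loglik + logprior)

map_log_or = minimize_scalar(neg_log_posterior, bounds=(-5, 5), method="bounded").x
print(f"crude OR = {(a*d)/(b*c):.1f}, shrunken OR ~ {np.exp(map_log_or):.1f}")
# crude OR = 8.0 shrinks to roughly 4, pulled toward the null by the prior
```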
Lewis, Gregory F.; Furman, Senta A.; McCool, Martha F.; Porges, Stephen W.
2011-01-01
Three frequently used RSA metrics are investigated to document violations of assumptions for parametric analyses, moderation by respiration, influences of nonstationarity, and sensitivity to vagal blockade. Although all metrics are highly correlated, new findings illustrate that the metrics are noticeably different on the above dimensions. Only one method conforms to the assumptions for parametric analyses, is not moderated by respiration, is not influenced by nonstationarity, and reliably generates stronger effect sizes. Moreover, this method is also the most sensitive to vagal blockade. Specific features of this method may provide insights into improving the statistical characteristics of other commonly used RSA metrics. These data provide the evidence to question, based on statistical grounds, published reports using particular metrics of RSA. PMID:22138367
Collimore, Kelsey C; McCabe, Randi E; Carleton, R Nicholas; Asmundson, Gordon J G
2008-08-01
The present investigation examined the impact of anxiety sensitivity (AS) and media exposure on posttraumatic stress disorder (PTSD) symptoms. Reactions from 143 undergraduate students in Hamilton, Ontario were assessed in the Fall of 2003 to gather information on anxiety, media coverage, and PTSD symptoms related to exposure to a remote traumatic event (September 11th). Regression analyses revealed that the Anxiety Sensitivity Index (ASI; [Peterson, R. A., & Reiss, S. (1992). Anxiety Sensitivity Index manual, 2nd ed. Worthington, Ohio: International Diagnostic Systems]) and State-Trait Anxiety Inventory trait form (STAI-T; [Spielberger, C. D., Gorsuch, R. L., & Lushene, R. E. (1970). State-trait anxiety inventory. Palo Alto, California: Consulting Psychologists Press]) total scores were significant predictors of PTSD symptoms in general. The ASI total score was also a significant predictor of hyperarousal and avoidance symptoms. Subsequent analyses further demonstrated differential relationships based on subscales and symptom clusters. Specifically, media exposure and trait anxiety predicted hyperarousal and re-experiencing symptoms, whereas the ASI fear of somatic sensations subscale significantly predicted avoidance and overall PTSD symptoms. Implications and directions for future research are discussed.
A sediment graph model based on SCS-CN method
NASA Astrophysics Data System (ADS)
Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.
2008-01-01
This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the Power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the Power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
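For reference, the standard SCS-CN runoff relation that such sediment graph models couple to, written here in its usual form with initial abstraction Ia = 0.2S (this is the textbook variant, not necessarily the exact parameterisation used in the paper):

```latex
Q = \frac{(P - 0.2S)^2}{P + 0.8S} \quad (P > 0.2S), \qquad
S = \frac{25400}{CN} - 254 \ \text{(mm)}
```

where $P$ is the rainfall depth, $Q$ the direct runoff depth, $S$ the potential maximum retention, and $CN$ the curve number.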
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.
Wilson, Koo; Hettle, Robert; Marbaix, Sophie; Diaz Cerezo, Silvia; Ines, Monica; Santoni, Laura; Annemans, Lieven; Prignot, Jacques; Lopez de Sa, Esteban
2012-10-01
An estimated 17.2% of patients continue to smoke following diagnosis of cardiovascular disease (CVD). In cardiovascular patients, smoking cessation has been shown to reduce the risk of mortality by 36% and of myocardial infarction by 32%. The objective of this study was to evaluate the long-term health and economic consequences of smoking cessation in patients with CVD. Results of a randomized clinical trial comparing varenicline plus counselling vs. placebo plus counselling were extrapolated using a Markov model to simulate the lifetime costs and health consequences of smoking cessation in patients with stable CVD. For the base case, we considered a payer's perspective including direct costs attributed to the healthcare provider, measuring cumulative life years (LYs) and quality-adjusted life years (QALYs) as outcome measures. Secondary analyses were conducted from a societal perspective, evaluating lost productivity due to premature mortality. Sensitivity and subgroup analyses were also undertaken. Results were analysed for Belgium, Spain, Portugal, and Italy. Varenicline plus counselling was associated with a gain in LYs and QALYs across all countries relative to placebo plus counselling. From a payer's perspective, incremental cost-effectiveness ratios were €6120 (Belgium), €5151 (Spain), €5357 (Portugal), and €5433 (Italy) per QALY gained. From a societal perspective, varenicline in addition to counselling was less costly than placebo and counselling in all cases. Sensitivity analyses showed that outcomes were largely insensitive to model assumptions and to uncertainty in model parameters. Varenicline in addition to counselling is cost-effective compared to placebo and counselling in smokers with CVD.
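To illustrate how a Markov extrapolation of trial results produces an ICER, the following sketch runs a three-state cohort model (smoker, quitter, dead). Every number here (transition probabilities, utilities, costs, the one-off drug cost) is an invented placeholder, not an input of the published model.

```python
import numpy as np

def run_cohort(p_quit, years=40, disc=0.03):
    """Discounted QALYs and costs for a cohort starting as smokers."""
    P = np.array([[0.95 - p_quit, p_quit, 0.05],   # smoker  -> smoker/quit/dead
                  [0.00,          0.97,   0.03],   # quitter -> quitter/dead
                  [0.00,          0.00,   1.00]])  # dead is absorbing
    utils = np.array([0.75, 0.85, 0.0])            # hypothetical QALY weights
    costs = np.array([1200.0, 600.0, 0.0])         # hypothetical annual costs (EUR)
    state = np.array([1.0, 0.0, 0.0])
    qalys = cost = 0.0
    for t in range(years):
        df = 1.0 / (1.0 + disc) ** t               # discount factor for cycle t
        qalys += df * state @ utils
        cost += df * state @ costs
        state = state @ P                           # advance the cohort one year
    return qalys, cost

q_var, c_var = run_cohort(p_quit=0.20)   # varenicline + counselling arm
q_plc, c_plc = run_cohort(p_quit=0.08)   # placebo + counselling arm
icer = (c_var + 1500 - c_plc) / (q_var - q_plc)   # +1500 EUR one-off drug cost
print(f"ICER ~ EUR {icer:,.0f} per QALY gained")
```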
Cost-Effectiveness of Diagnostic Strategies for Suspected Scaphoid Fractures.
Yin, Zhong-Gang; Zhang, Jian-Bing; Gong, Ke-Tong
2015-08-01
The aim of this study was to assess the cost-effectiveness of multiple competing diagnostic strategies for suspected scaphoid fractures. Using published data, the authors created a decision-tree model simulating the diagnosis of suspected scaphoid fractures. Clinical outcomes, costs, and cost-effectiveness of immediate computed tomography (CT), day 3 magnetic resonance imaging (MRI), day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, week 2 radiographs-MRI, week 2 radiographs-bone scan, and immediate MRI were evaluated. The primary clinical outcome was the detection of scaphoid fractures. The authors adopted a societal perspective, including both the costs of healthcare and the cost of lost productivity. The incremental cost-effectiveness ratio (ICER), which expresses the incremental cost per incremental scaphoid fracture detected using a strategy, was calculated to compare these diagnostic strategies. Base-case analysis, 1-way sensitivity analyses, and "worst case scenario" and "best case scenario" sensitivity analyses were performed. In the base case, the average cost per scaphoid fracture detected with immediate CT was $2553. The ICERs of immediate MRI and day 3 MRI compared with immediate CT were $7483 and $32,000 per scaphoid fracture detected, respectively. The ICER of week 2 radiographs-MRI was around $170,000. The day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, and week 2 radiographs-bone scan strategies were dominated or extendedly dominated by the MRI strategies. The results were generally robust in multiple sensitivity analyses. Immediate CT and MRI were the most cost-effective strategies for diagnosing suspected scaphoid fractures. Economic and Decision Analyses Level II. See Instructions for Authors for a complete description of levels of evidence.
Chirakup, Suphachai; Chaiyakunapruk, Nathorn; Chaikledkeaw, Usa; Pongcharoensuk, Petcharat; Ongphiphadhanakul, Boonsong; Roze, Stephane; Valentine, William J; Palmer, Andrew J
2008-03-01
The national essential drug committee in Thailand suggested that only one of the thiazolidinediones be included in hospital formularies, but little was known about their cost-effectiveness. This study aims to determine the incremental cost-effectiveness ratio of pioglitazone 45 mg compared with rosiglitazone 8 mg in uncontrolled type 2 diabetic patients receiving sulfonylureas and metformin in Thailand. A Markov diabetes model (Center for Outcome Research model) was used in this study. Baseline characteristics of patients were based on the Thai diabetes registry project. Costs of diabetes were calculated mainly from Buddhachinaraj hospital. The nonspecific mortality rate and transition probabilities of death from renal replacement therapy were obtained from Thai sources. Clinical effectiveness of thiazolidinediones was retrieved from a meta-analysis. All analyses were based on the government hospital policymaker perspective. Both costs and outcomes were discounted at a rate of 3%. Base-case analyses were expressed as incremental cost per quality-adjusted life year (QALY) gained. A series of sensitivity analyses was performed. In the base-case analysis, the pioglitazone group had better clinical outcomes and higher lifetime costs. The incremental cost per QALY gained was 186,246 baht (US$5389). The acceptability curves showed that the probability of pioglitazone being cost-effective was 29% at a willingness to pay of one times the Thai gross domestic product (GDP) per capita. The final outcomes were most sensitive to the effect of pioglitazone on %HbA1c decrease. Our findings showed that in type 2 diabetic patients who cannot control their blood glucose under the combination of a sulfonylurea and metformin, the use of pioglitazone 45 mg fell on average within the cost-effective range recommended by the World Health Organization (one to three times GDP per capita), compared with rosiglitazone 8 mg. Nevertheless, based on the sensitivity analysis, its probability of being cost-effective was quite low. Hospital policymakers may consider our findings as part of the information for the decision-making process.
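The acceptability-curve logic can be reproduced directly from probabilistic sensitivity analysis draws: at each willingness-to-pay threshold, count the share of simulations with positive net monetary benefit. The distributions below are hypothetical, loosely centred on the incremental figures reported above.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical PSA draws of incremental cost (baht) and incremental QALYs;
# chosen only to illustrate how an acceptability curve is traced.
d_cost = rng.normal(10_250, 4_000, size=5_000)    # incremental cost per patient
d_qaly = rng.normal(0.055, 0.030, size=5_000)     # incremental QALYs per patient

for wtp in (120_000, 240_000, 480_000):           # baht per QALY thresholds
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)     # P(net monetary benefit > 0)
    print(f"WTP {wtp:>7,} baht/QALY: P(cost-effective) = {p_ce:.2f}")
```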
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas W. Akers; Edwin A. Harvego
2012-08-01
This paper presents the results of a study to evaluate the feasibility of remotely detecting and quantifying fuel relocation from the core to the lower head, and to regions outside the reactor vessel primary containment, of the Fukushima 1-3 reactors. The goals of this study were to determine measurement conditions and requirements, and to perform initial radiation transport sensitivity analyses for several potential measurement locations inside the reactor building. The radiation transport sensitivity analyses were performed based on reactor design information for boiling water reactors (BWRs) similar to the Fukushima reactors, ORIGEN2 analyses of 3-cycle BWR fuel inventories, and data on previously molten fuel characteristics from TMI-2. A 100 kg mass of previously molten fuel material located on the lower head of the reactor vessel was chosen as a fuel interrogation sensitivity target. Two measurement locations were chosen for the transport analyses, one inside the drywell and one outside the concrete biological shield surrounding the drywell. Results of these initial radiation transport analyses indicate that the 100 kg of previously molten fuel material may be detectable at the measurement location inside the drywell, but that it is highly unlikely that any amount of fuel material inside the RPV will be detectable from a location outside the concrete biological shield surrounding the drywell. Three additional fuel relocation scenarios were also analyzed to assess detection sensitivity for varying amounts of relocated material in the lower head of the reactor vessel, in the control rods perpendicular to the detector system, and on the lower head of the drywell. Results of these analyses, along with an assessment of background radiation effects and a discussion of measurement issues such as the detector/collimator design, are included in the paper.
An Investigation of the Dynamic Response of a Seismically Stable Platform
1982-08-01
The controls on the system are of two types. A low-frequency tilt control uses a 2-axis tiltmeter with 10 arc-second sensitivity as its sensor, while the 6-degree-of-freedom high-frequency controls are based on seismometers as sensors, with the platform controlled to maintain a null position of a sensitive height sensor. Support to this effort includes structural analyses toward the active servo frequency band, performed at Holloman AFB, NM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Mueller, Donald E.; Patton, Bruce W.
2016-08-31
Experiments are being planned at Research Centre Řež (RC Řež) to use the FLiBe (2⁷LiF-BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) to perform reactor physics measurements in the LR-0 low power nuclear reactor. These experiments are intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems utilizing FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL) is performing sensitivity/uncertainty (S/U) analysis of these planned experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objective of these analyses is to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report provides a status update on the S/U analyses of critical experiments at the LR-0 Reactor relevant to fluoride salt-cooled high temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. The S/U analyses will be used to inform the design of FLiBe-based experiments using the salt from MSRE.
Analysis of DNA methylation in Arabidopsis thaliana based on methylation-sensitive AFLP markers.
Cervera, M T; Ruiz-García, L; Martínez-Zapater, J M
2002-12-01
AFLP analysis using restriction enzyme isoschizomers that differ in their sensitivity to methylation of their recognition sites has been used to analyse the methylation state of anonymous CCGG sequences in Arabidopsis thaliana. The technique was modified to improve the quality of fingerprints and to visualise larger numbers of scorable fragments. Sequencing of amplified fragments indicated that detection was generally associated with non-methylation of the cytosine to which the isoschizomer is sensitive. Comparison of EcoRI/HpaII and EcoRI/MspI patterns in different ecotypes revealed that 35-43% of CCGG sites were differentially digested by the isoschizomers. Interestingly, the pattern of digestion among different plants belonging to the same ecotype is highly conserved, with the rate of intra-ecotype methylation-sensitive polymorphisms being less than 1%. However, pairwise comparisons of methylation patterns between samples belonging to different ecotypes revealed differences in up to 34% of the methylation-sensitive polymorphisms. The lack of correlation between inter-ecotype similarity matrices based on methylation-insensitive or methylation-sensitive polymorphisms suggests that whatever the mechanisms regulating methylation may be, they are not related to nucleotide sequence variation.
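Conceptually, scoring differential digestion reduces to comparing two binary band-presence matrices, one per isoschizomer; a fragment counts as methylation-sensitive polymorphic where the EcoRI/HpaII and EcoRI/MspI patterns disagree. A toy sketch with made-up fingerprints rather than real AFLP data:

```python
import numpy as np

# Toy 0/1 band-presence matrices (rows = plants, columns = CCGG fragments);
# real AFLP fingerprints would be scored from gels or capillary traces.
hpaii = np.array([[1, 0, 1, 1, 0, 1],
                  [1, 0, 1, 1, 0, 1]])
mspi  = np.array([[1, 1, 1, 0, 0, 1],
                  [1, 1, 1, 0, 0, 1]])

# A fragment is differentially digested where the isoschizomers disagree
diff = hpaii != mspi
frac_differential = diff.any(axis=0).mean()
print(f"{frac_differential:.0%} of CCGG sites differentially digested")
```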
NASA Astrophysics Data System (ADS)
Luo, Jiannan; Lu, Wenxi
2014-06-01
Sobol' sensitivity analyses based on different surrogates were performed on a trichloroethylene (TCE)-contaminated aquifer to assess the sensitivity of the design variables of remediation duration, surfactant concentration and injection rates at four wells to remediation efficiency. First, the surrogate models of a multi-phase flow simulation model were constructed by applying radial basis function artificial neural network (RBFANN) and Kriging methods, and the two models were then compared. Based on the developed surrogate models, the Sobol' method was used to calculate the sensitivity indices of the design variables which affect the remediation efficiency. The coefficient of determination (R²) and the mean square error (MSE) of these two surrogate models demonstrated that both models had acceptable approximation accuracy; furthermore, the approximation accuracy of the Kriging model was slightly better than that of the RBFANN model. Sobol' sensitivity analysis results demonstrated that the remediation duration was the most important variable influencing remediation efficiency, followed by the rates of injection at wells 1 and 3, while the rates of injection at wells 2 and 4 and the surfactant concentration had negligible influence on remediation efficiency. In addition, high-order sensitivity indices were all smaller than 0.01, indicating that interaction effects of these six factors were practically insignificant. The proposed surrogate-based Sobol' sensitivity analysis is an effective tool for calculating sensitivity indices, because it shows the relative contribution of the design variables (individually and in interaction) to the output performance variability with a limited number of runs of a computationally expensive simulation model. The sensitivity analysis results lay a foundation for optimization of the groundwater remediation process.
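A minimal sketch of the surrogate-plus-Sobol' workflow, assuming a cheap analytic stand-in for the multi-phase flow simulator, scikit-learn's Gaussian process regressor in place of a full Kriging model, and the SALib package for the Saltelli design and Sobol' indices:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from SALib.sample import saltelli   # newer SALib also exposes SALib.sample.sobol
from SALib.analyze import sobol

# Hypothetical stand-in for the expensive multi-phase flow simulator
def simulator(x):
    dur, conc, q1, q2, q3, q4 = x.T
    return 0.6 * dur + 0.02 * conc + 0.2 * q1 + 0.01 * q2 + 0.15 * q3 + 0.01 * q4

problem = {
    "num_vars": 6,
    "names": ["duration", "surf_conc", "q1", "q2", "q3", "q4"],
    "bounds": [[0, 1]] * 6,                     # scaled design-variable ranges
}

rng = np.random.default_rng(1)
X_train = rng.uniform(size=(60, 6))             # small training design
gp = GaussianProcessRegressor().fit(X_train, simulator(X_train))  # surrogate

X = saltelli.sample(problem, 1024)              # Sobol'/Saltelli design
Y = gp.predict(X)                               # cheap surrogate evaluations
Si = sobol.analyze(problem, Y)                  # first-order and total indices
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:9s} S1={s1:5.2f}  ST={st:5.2f}")
```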
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
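The gap between sampling-based screening measures is easy to demonstrate: for a monotone but strongly nonlinear input-output relation, Spearman rank correlation retains sensitivity where Pearson correlation understates it. A toy sketch with an invented test function:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(2)
n = 1_000
X = rng.uniform(size=(n, 3))                    # three hypothetical model inputs
# Output: strongly nonlinear but monotone in input 0, weakly linear in input 1
y = X[:, 0] ** 8 + 0.1 * X[:, 1] + rng.normal(0, 0.05, n)

for j in range(X.shape[1]):
    r, _ = pearsonr(X[:, j], y)
    rho, _ = spearmanr(X[:, j], y)
    print(f"input {j}: Pearson r={r:5.2f}  Spearman rho={rho:5.2f}")
```

For input 0 the Spearman coefficient stays near 1 while the Pearson coefficient drops noticeably, which is why rank-based measures are preferred for screening monotone nonlinear models.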
Fang, Xin-Yu; Li, Wen-Bo; Zhang, Chao-Fan; Huang, Zi-da; Zeng, Hui-Yi; Dong, Zheng; Zhang, Wen-Ming
2018-02-01
To explore the diagnostic efficiency of DNA-based and RNA-based quantitative polymerase chain reaction (qPCR) analyses for periprosthetic joint infection (PJI). To determine the detection limit of DNA-based and RNA-based qPCR in vitro, Staphylococcus aureus and Escherichia coli strains were added to sterile synovial fluid obtained from a patient with knee osteoarthritis. Serial dilutions of samples were analyzed by DNA-based and RNA-based qPCR. Clinically, patients who were suspected of having PJI and eventually underwent revision arthroplasty in our hospital from July 2014 to December 2016 were screened. Preoperative puncture or intraoperative collection was performed on patients who met the inclusion and exclusion criteria to obtain synovial fluid. DNA-based and RNA-based PCR analyses and culture were performed on each synovial fluid sample. The patients' demographic characteristics, medical history, and laboratory test results were recorded. The diagnostic efficiency of both PCR assays was compared with culture methods. The in vitro analysis demonstrated that the DNA-based qPCR assay was highly sensitive, with the detection limit being 1200 colony forming units (CFU)/mL of S. aureus and 3200 CFU/mL of E. coli. Meanwhile, the RNA-based qPCR assay could detect 2300 CFU/mL of S. aureus and 11 000 CFU/mL of E. coli. Clinically, the sensitivity, specificity, and accuracy were 65.7%, 100%, and 81.6%, respectively, for the culture method; 81.5%, 84.8%, and 83.1%, respectively, for DNA-based qPCR; and 73.6%, 100%, and 85.9%, respectively, for RNA-based qPCR. DNA-based qPCR could detect suspected PJI with high sensitivity after antibiotic therapy. RNA-based qPCR could reduce the false positive rates of DNA-based assays. qPCR-based methods could improve the efficiency of PJI diagnosis. © 2018 Chinese Orthopaedic Association and John Wiley & Sons Australia, Ltd.
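For reference, sensitivity, specificity, and accuracy follow directly from confusion-matrix counts. The counts below are illustrative, chosen only to roughly mirror the RNA-based qPCR figures, not the study's raw data:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Illustrative counts for an assay evaluated against a reference standard
sens, spec, acc = diagnostic_metrics(tp=53, fp=0, tn=72, fn=19)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} accuracy={acc:.1%}")
```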
Maintaining gender sensitivity in the family practice: facilitators and barriers.
Celik, Halime; Lagro-Janssen, Toine; Klinge, Ineke; van der Weijden, Trudy; Widdershoven, Guy
2009-12-01
This study aims to identify the facilitators and barriers perceived by General Practitioners (GPs) to maintain a gender perspective in family practice. Nine semi-structured interviews were conducted among nine pairs of GPs. The data were analysed by means of deductive content analysis using theory-based methods to generate facilitators and barriers to gender sensitivity. Gender sensitivity in family practice can be influenced by several factors which ultimately determine the extent to which a gender sensitive approach is satisfactorily practiced by GPs in the doctor-patient relationship. Gender awareness, repetition and reminders, motivation triggers and professional guidelines were found to facilitate gender sensitivity. On the other hand, lacking skills and routines, scepticism, heavy workload and the timing of implementation were found to be barriers to gender sensitivity. While the potential effect of each factor affecting gender sensitivity in family practice has been elucidated, the effects of the interplay between these factors still need to be determined.
Patel, Anik R; Kessler, Jason; Braithwaite, R Scott; Nucifora, Kimberly A; Thirumurthy, Harsha; Zhou, Qinlian; Lester, Richard T; Marra, Carlo A
2017-02-01
A surge in mobile phone availability has fueled low cost short messaging service (SMS) adherence interventions. Multiple systematic reviews have concluded that some SMS-based interventions are effective at improving antiretroviral therapy (ART) adherence, and they are hypothesized to improve retention in care. The objective of this study was to evaluate the cost-effectiveness of SMS-based adherence interventions and explore the added value of retention benefits. We evaluated the cost-effectiveness of weekly SMS interventions compared to standard care among HIV+ individuals initiating ART for the first time in Kenya. We used an individual level micro-simulation model populated with data from two SMS-intervention trials, an East-African HIV+ cohort and published literature. We estimated average quality adjusted life years (QALY) and lifetime HIV-related costs from a healthcare perspective. We explored a wide range of scenarios and assumptions in one-way and multivariate sensitivity analyses. We found that SMS-based adherence interventions were cost-effective by WHO standards, with an incremental cost-effectiveness ratio (ICER) of $1,037/QALY. In the secondary analysis, potential retention benefits improved the cost-effectiveness of SMS intervention (ICER = $864/QALY). In multivariate sensitivity analyses, the interventions remained cost-effective in most analyses, but the ICER was highly sensitive to intervention costs, effectiveness and average cohort CD4 count at ART initiation. SMS interventions remained cost-effective in a test and treat scenario where individuals were assumed to initiate ART upon HIV detection. Effective SMS interventions would likely increase the efficiency of ART programs by improving HIV treatment outcomes at relatively low costs, and they could facilitate achievement of the UNAIDS goal of 90% viral suppression among those on ART by 2020.
Racial and ethnic differences in experimental pain sensitivity: systematic review and meta-analysis.
Kim, Hee Jun; Yang, Gee Su; Greenspan, Joel D; Downton, Katherine D; Griffith, Kathleen A; Renn, Cynthia L; Johantgen, Meg; Dorsey, Susan G
2017-02-01
Our objective was to describe the racial and ethnic differences in experimental pain sensitivity. Four databases (PubMed, EMBASE, the Cochrane Central Register of Controlled Trials, and PsycINFO) were searched for studies examining racial/ethnic differences in experimental pain sensitivity. Thermal-heat, cold-pressor, pressure, ischemic, mechanical cutaneous, electrical, and chemical experimental pain modalities were assessed. Risk of bias was assessed using the Agency for Healthcare Research and Quality guideline. Meta-analysis was used to calculate standardized mean differences (SMDs) by pain sensitivity measures. Studies comparing African Americans (AAs) and non-Hispanic whites (NHWs) were included for meta-analyses because of high heterogeneity in other racial/ethnic group comparisons. Statistical heterogeneity was assessed by subgroup analyses by sex, sample size, sample characteristics, and pain modalities. A total of 41 studies met the review criteria. Overall, AAs, Asians, and Hispanics had higher pain sensitivity compared with NHWs, particularly lower pain tolerance, higher pain ratings, and greater temporal summation of pain. Meta-analyses revealed that AAs had lower pain tolerance (SMD: -0.90, 95% confidence interval [CI]: -1.10 to -0.70) and higher pain ratings (SMD: 0.50, 95% CI: 0.30-0.69) but no significant differences in pain threshold (SMD: -0.06, 95% CI: -0.23 to 0.10) compared with NHWs. Estimates did not vary by pain modalities, nor by other demographic factors; however, SMDs were significantly different based on the sample size. Racial/ethnic differences in experimental pain sensitivity were more pronounced with suprathreshold than with threshold stimuli, which is important in clinical pain treatment. Additional studies examining mechanisms to explain such differences in pain tolerance and pain ratings are needed.
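A compact sketch of the meta-analytic machinery behind such pooled estimates: Hedges-corrected standardized mean differences combined with DerSimonian-Laird random-effects weights. The three study summaries are hypothetical, not data from the reviewed studies.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """SMD with Hedges' small-sample correction and its approximate variance."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    g = (m1 - m2) / sp * (1 - 3 / (4 * (n1 + n2) - 9))
    v = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, v

# Hypothetical pain-tolerance summaries (mean, SD, n) per group, three studies
data = [((55, 20, 40), (75, 22, 42)),
        ((48, 18, 30), (66, 19, 28)),
        ((60, 25, 50), (70, 24, 55))]
g, v = map(np.array, zip(*(hedges_g(*a, *b) for a, b in data)))

w = 1 / v                                        # fixed-effect weights
q = np.sum(w * (g - np.sum(w * g) / w.sum()) ** 2)
tau2 = max(0.0, (q - (len(g) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
w_re = 1 / (v + tau2)                            # DerSimonian-Laird weights
pooled = np.sum(w_re * g) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled SMD = {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```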
Garcia-Aymerich, J; Benet, M; Saeys, Y; Pinart, M; Basagaña, X; Smit, H A; Siroux, V; Just, J; Momas, I; Rancière, F; Keil, T; Hohmann, C; Lau, S; Wahn, U; Heinrich, J; Tischer, C G; Fantini, M P; Lenzi, J; Porta, D; Koppelman, G H; Postma, D S; Berdel, D; Koletzko, S; Kerkhof, M; Gehring, U; Wickman, M; Melén, E; Hallberg, J; Bindslev-Jensen, C; Eller, E; Kull, I; Lødrup Carlsen, K C; Carlsen, K-H; Lambrecht, B N; Kogevinas, M; Sunyer, J; Kauffmann, F; Bousquet, J; Antó, J M
2015-08-01
Asthma, rhinitis and eczema often co-occur in children, but their interrelationships at the population level have been poorly addressed. We assessed co-occurrence of childhood asthma, rhinitis and eczema using unsupervised statistical techniques. We included 17 209 children at 4 years and 14 585 at 8 years from seven European population-based birth cohorts (MeDALL project). At each age period, children were grouped, using partitioning cluster analysis, according to the distribution of 23 variables covering symptoms 'ever' and 'in the last 12 months', doctor diagnosis, age of onset and treatments of asthma, rhinitis and eczema; immunoglobulin E sensitization; weight; and height. We tested the sensitivity of our estimates to subject and variable selections, and to different statistical approaches, including latent class analysis and self-organizing maps. Two groups were identified as the optimal way to cluster the data at both age periods and in all sensitivity analyses. The first (reference) group at 4 and 8 years (including 70% and 79% of children, respectively) was characterized by a low prevalence of symptoms and sensitization, whereas the second (symptomatic) group exhibited more frequent symptoms and sensitization. Ninety-nine percent of children with comorbidities (co-occurrence of asthma, rhinitis and/or eczema) were included in the symptomatic group at both ages. The children's characteristics in both groups were consistent in all sensitivity analyses. At 4 and 8 years, at the population level, asthma, rhinitis and eczema can be classified together as an allergic comorbidity cluster. Future research including time-repeated assessments and biological data will help understand the interrelationships between these diseases. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
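The partitioning step can be imitated with k-means plus a silhouette check on the number of groups; the binary symptom matrix below is simulated to mimic a large low-prevalence reference group and a smaller symptomatic group, not the MeDALL data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(3)
# Toy stand-in for the 23 symptom/sensitization variables
ref = rng.binomial(1, 0.05, size=(700, 23))   # low-prevalence reference group
sym = rng.binomial(1, 0.45, size=(300, 23))   # smaller symptomatic group
X = np.vstack([ref, sym]).astype(float)

for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette = {silhouette_score(X, labels):.2f}")
```

With this construction the silhouette criterion favours two clusters, mirroring the reference/symptomatic split reported above.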
Zhao, Quan-Liang; He, Guang-Ping; Di, Jie-Jian; Song, Wei-Li; Hou, Zhi-Ling; Tan, Pei-Pei; Wang, Da-Wei; Cao, Mao-Sheng
2017-07-26
A flexible semitransparent energy harvester is assembled based on laterally aligned Pb(Zr0.52Ti0.48)O3 (PZT) single-crystal nanowires (NWs). The harvester presents an open-circuit voltage of up to 10 V, the highest reported, and a stable area power density of 0.27 μW/cm². A high pressure sensitivity of 0.14 V/kPa is obtained in dynamic pressure sensing, much larger than the values reported for other energy harvesters based on piezoelectric single-crystal NWs. Furthermore, theoretical and finite element analyses confirm that the piezoelectric voltage constant g33 of the PZT NWs is competitive with lead-based bulk single crystals and ceramics, and that the enhanced pressure sensitivity and power density are substantially linked to the flexible structure with laterally aligned PZT NWs. The energy harvester in this work holds great potential for flexible and transparent sensing and self-powered systems.
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We call models subject to this kind of uncertainty sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define some simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. We propose a novel approach in this paper for investigating the plausibility of each missing-data mechanism model assumption, by comparing the simulated datasets from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory has also been provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely values, instead of considering all proposed values of sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data, and mean estimation with non-ignorable missing data.
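In the same spirit (though far simpler than the paper's method), one can simulate data under candidate MNAR mechanisms and rank sensitivity-parameter values by nearest-neighbour distance to the observed sample. Everything in this sketch, including the selection mechanism and the grid of delta values, is hypothetical:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(4)
observed = rng.normal(0.4, 1.0, size=(300, 1))   # stand-in for observed outcomes

def simulate_mnar(delta, n=300):
    """Hypothetical MNAR mechanism: y is observed only if y + noise > delta."""
    y = rng.normal(0.0, 1.0, size=5 * n)
    seen = y + rng.normal(0, 0.5, size=y.size) > delta
    return y[seen][:n, None]

for delta in (-1.0, -0.5, 0.0, 0.5):
    sim = simulate_mnar(delta)
    d, _ = cKDTree(sim).query(observed, k=1)     # NN distance to simulated data
    print(f"delta={delta:+.1f}: mean NN distance = {d.mean():.3f}")
```

Smaller mean distances flag sensitivity-parameter values under which the simulated and observed data look alike, i.e. the more plausible mechanisms.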
Missing data in trial-based cost-effectiveness analysis: An incomplete journey.
Leurent, Baptiste; Gomes, Manuel; Carpenter, James R
2018-06-01
Cost-effectiveness analyses (CEA) conducted alongside randomised trials provide key evidence for informing healthcare decision making, but missing data pose substantive challenges. Recently, there have been a number of developments in methods and guidelines addressing missing data in trials. However, it is unclear whether these developments have permeated CEA practice. This paper critically reviews the extent of and methods used to address missing data in recently published trial-based CEA. Issues of the Health Technology Assessment journal from 2013 to 2015 were searched. Fifty-two eligible studies were identified. Missing data were very common; the median proportion of trial participants with complete cost-effectiveness data was 63% (interquartile range: 47%-81%). The most common approach for the primary analysis was to restrict analysis to those with complete data (43%), followed by multiple imputation (30%). Half of the studies conducted some sort of sensitivity analyses, but only 2 (4%) considered possible departures from the missing-at-random assumption. Further improvements are needed to address missing data in cost-effectiveness analyses conducted alongside randomised trials. These should focus on limiting the extent of missing data, choosing an appropriate method for the primary analysis that is valid under contextually plausible assumptions, and conducting sensitivity analyses to departures from the missing-at-random assumption. © 2018 The Authors Health Economics published by John Wiley & Sons Ltd.
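A stripped-down illustration of an MNAR sensitivity analysis for trial costs: impute missing values, then shift the imputations by a range of offsets (delta) to see how far the estimate moves away from the missing-at-random answer. A real analysis would use multiple imputation rather than the single mean imputation used here, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
cost = rng.gamma(4, 500, n)                  # true per-patient costs (simulated)
miss = rng.uniform(size=n) < 0.35            # ~35% missing, as is common in CEA
obs = np.where(miss, np.nan, cost)

cc_mean = np.nanmean(obs)                    # complete-case estimate
# Delta-adjustment: assume unobserved costs are delta higher than MAR predicts
for delta in (0, 250, 500, 1000):
    imputed = np.where(np.isnan(obs), cc_mean + delta, obs)
    print(f"delta={delta:>4}: mean cost = {imputed.mean():7.0f}")
```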
Microfluidics-based digital quantitative PCR for single-cell small RNA quantification.
Yu, Tian; Tang, Chong; Zhang, Ying; Zhang, Ruirui; Yan, Wei
2017-09-01
Quantitative analyses of small RNAs at the single-cell level have been challenging because of the limited sensitivity and specificity of conventional real-time quantitative PCR methods. A digital quantitative PCR (dqPCR) method for miRNA quantification has been developed, but it requires the use of proprietary stem-loop primers and only applies to miRNA quantification. Here, we report a microfluidics-based dqPCR (mdqPCR) method, which takes advantage of the Fluidigm BioMark HD system for both template partitioning and the subsequent high-throughput dqPCR. Our mdqPCR method demonstrated excellent sensitivity and reproducibility suitable for quantitative analyses of not only miRNAs but also all other small RNA species at the single-cell level. Using this method, we discovered that each sperm has a unique miRNA profile. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
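Digital PCR quantification rests on a Poisson correction of the positive-partition count, since a positive partition may hold more than one template. A small sketch with hypothetical partition counts and volumes (the actual BioMark array geometry may differ):

```python
import math

def dpcr_copies(total_partitions: int, positive: int, vol_per_partition_nl: float):
    """Poisson-corrected target count from digital PCR partition results."""
    p_neg = (total_partitions - positive) / total_partitions
    lam = -math.log(p_neg)                    # mean copies per partition
    copies = lam * total_partitions           # total copies loaded
    conc = copies / (total_partitions * vol_per_partition_nl)  # copies per nl
    return copies, conc

# Hypothetical run: 770 partitions of 0.85 nl, 312 positives
copies, conc = dpcr_copies(total_partitions=770, positive=312,
                           vol_per_partition_nl=0.85)
print(f"{copies:.0f} copies loaded (~{conc:.2f} copies/nl)")
```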
The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.
Kepes, Sven; McDaniel, Michael A
2015-01-01
Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effects sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
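One standard publication-bias sensitivity check, Egger's regression of standardized effects on precision, can be sketched as follows. The simulated literature, with weak small-sample studies suppressed, is invented to mimic the suppression mechanism the authors describe, not their data:

```python
import numpy as np

rng = np.random.default_rng(6)
# Hypothetical validity literature: true correlation 0.20, varied standard errors
n_stud = 40
se = rng.uniform(0.03, 0.20, n_stud)
r = rng.normal(0.20, se)
# Crude suppression: small studies with weak effects tend not to be published
keep = (r / se > 1.0) | (se < 0.06)
r, se = r[keep], se[keep]

# Egger regression: an intercept far from 0 signals small-study bias
precision = 1 / se
slope, intercept = np.polyfit(precision, r / se, 1)
print(f"Egger intercept = {intercept:.2f} (0 expected without bias)")
print(f"naive mean r = {r.mean():.3f} vs true 0.200")
```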
Lambert, Rod
2015-01-01
This article presents an evidence-based reasoning, focusing on evidence of an Occupational Therapy input to lifestyle behaviour influences on panic disorder that also provides potentially broader application across other mental health problems (MHP). The article begins from the premise that we are all different. It then follows through a sequence of questions, examining incrementally how MHPs are experienced and classified. It analyses the impact of individual sensitivity at different levels of analysis, from genetic and epigenetic individuality, through neurotransmitter and body system sensitivity. Examples are given demonstrating the evidence base behind the logical sequence of investigation. The paper considers the evidence of how everyday routine lifestyle behaviour impacts on occupational function at all levels, and how these behaviours link to individual sensitivity to influence the level of exposure required to elicit symptomatic responses. Occupational Therapists can help patients by adequately assessing individual sensitivity, and through promoting understanding and a sense of control over their own symptoms. It concludes that present clinical guidelines should be expanded to incorporate knowledge of individual sensitivities to environmental exposures and lifestyle behaviours at an early stage. PMID:26095868
Stark, Renee G; John, Jürgen; Leidl, Reiner
2011-01-13
This study's aim was to develop a first quantification of the frequency and costs of adverse drug events (ADEs) originating in ambulatory medical practice in Germany. The frequencies and costs of ADEs were quantified for a base case, building on an existing cost-of-illness model for ADEs. The model originates from the U.S. health care system; its structure of treatment probabilities linked to ADEs was transferred to Germany. Sensitivity analyses based on values determined from a literature review were used to test the postulated results. For Germany, the base case postulated that about 2 million adults taking medications would have an ADE in 2007. Health care costs related to ADEs in this base case totalled 816 million euros; mean costs per case were 381 euros. About 58% of costs resulted from hospitalisations, 11% from emergency department visits and 21% from long-term care. Base-case estimates of the frequency and costs of ADEs were lower than all estimates from the sensitivity analyses. The postulated frequency and costs of ADEs illustrate the possible size of the health problems and economic burden related to ADEs in Germany. The validity of the U.S. treatment structure used remains to be determined for Germany. The sensitivity analysis used assumptions from different studies and thus further quantified the information gap in Germany regarding ADEs. This study found the costs of ADEs in the ambulatory setting in Germany to be significant. Due to data scarcity, the results are only a rough indication.
Statistical Performances of Resistive Active Power Splitter
NASA Astrophysics Data System (ADS)
Lalléchère, Sébastien; Ravelo, Blaise; Thakur, Atul
2016-03-01
In this paper, the synthesis and sensitivity analysis of an active power splitter (PWS) is proposed. It is based on an active cell composed of a field-effect transistor in cascade with a shunted resistor at the input and the output (resistive amplifier topology). The PWS uncertainty with respect to resistance tolerances is assessed using a stochastic method. Furthermore, with the proposed topology, the device gain can easily be controlled by varying a resistance. This provides a useful tool to analyse the statistical sensitivity of the system in an uncertain environment.
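Such a stochastic tolerance analysis amounts to Monte Carlo sampling of the resistor values and propagating them through a gain model. The sketch below assumes a simple gm*R voltage-gain expression and a 5% resistor tolerance, both placeholders rather than the paper's circuit values:

```python
import numpy as np

rng = np.random.default_rng(7)
gm = 40e-3                                   # S, assumed FET transconductance
Rc_nom, tol = 50.0, 0.05                     # 50-ohm control resistor, 5% tolerance

# Sample resistor values uniformly within the tolerance band
Rc = Rc_nom * (1 + tol * rng.uniform(-1, 1, 100_000))
gain_db = 20 * np.log10(gm * Rc)             # simple gm*R voltage-gain model
print(f"gain = {gain_db.mean():.2f} dB, std = {gain_db.std():.3f} dB")
```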
Satoshi Hirabayashi; Chuck Kroll; David Nowak
2011-01-01
The Urban Forest Effects-Deposition model (UFORE-D) was developed with a component-based modeling approach. Functions of the model were separated into components that are responsible for user interface, data input/output, and core model functions. Taking advantage of the component-based approach, three UFORE-D applications were developed: a base application to estimate...
Long term real-time GB_InSAR monitoring of a large rock slide
NASA Astrophysics Data System (ADS)
Crosta, G. B.; Agliardi, F.; Sosio, R.; Rivolta, C.; Mannucci, G.
2011-12-01
We analyze a long-term monitoring dataset collected for a deep-seated rockslide (Ruinon, Lombardy, Italy). The rockslide has been actively monitored since 1997 by means of an in situ monitoring network (topographic benchmarks, GPS, wire extensometers) and since 2006 by a ground-based radar. Monitoring data have been used to set up and update the geological model, to identify the rockslide extent and geometry, and to analyse the sensitivity to seasonal changes and their impact on the reliability and early-warning potential of monitoring data. GB-InSAR data allowed us to identify sectors characterized by different behaviours and associated with outcropping bedrock, thick debris cover, and major structures. GB-InSAR data have been used to set up a "virtual monitoring network" by a posteriori selection of critical locations. Displacement time series extracted from GB-InSAR data provide a large amount of information even in debris-covered areas, where ground-based instrumentation fails. Such spatially distributed, improved information, validated by selected ground-based measurements, allowed us to establish new velocity and displacement thresholds for early-warning purposes. The data are analysed to verify the dependency of the observed displacements on the line-of-sight orientation as well as on that of the framed resolution cell. Relationships with rainfall and morphological slope characteristics have been analysed to verify the sensitivity to rain intensity and amount and to distinguish among the different possible mechanisms.
Responding to nonwords in the lexical decision task: Insights from the English Lexicon Project.
Yap, Melvin J; Sibley, Daragh E; Balota, David A; Ratcliff, Roger; Rueckl, Jay
2015-05-01
Researchers have extensively documented how various statistical properties of words (e.g., word frequency) influence lexical processing. However, the impact of lexical variables on nonword decision-making performance is less clear. This gap is surprising, because a better specification of the mechanisms driving nonword responses may provide valuable insights into early lexical processes. In the present study, item-level and participant-level analyses were conducted on the trial-level lexical decision data for almost 37,000 nonwords in the English Lexicon Project in order to identify the influence of different psycholinguistic variables on nonword lexical decision performance and to explore individual differences in how participants respond to nonwords. Item-level regression analyses reveal that nonword response time was positively correlated with number of letters, number of orthographic neighbors, number of affixes, and base-word number of syllables, and negatively correlated with Levenshtein orthographic distance and base-word frequency. Participant-level analyses also point to within- and between-session stability in nonword responses across distinct sets of items, and intriguingly reveal that higher vocabulary knowledge is associated with less sensitivity to some dimensions (e.g., number of letters) but more sensitivity to others (e.g., base-word frequency). The present findings provide well-specified and interesting new constraints for informing models of word recognition and lexical decision. (c) 2015 APA, all rights reserved.
Tenenbaum, Rachel B; Musser, Erica D; Raiker, Joseph S; Coles, Erika K; Gnagy, Elizabeth M; Pelham, William E
2018-07-01
Attention-deficit/hyperactivity disorder (ADHD) is associated with disruptions in reward sensitivity and regulatory processes. However, it is unclear whether these disruptions are better explained by comorbid disruptive behavior disorder (DBD) symptomology. This study sought to examine this question using multiple levels of analysis (i.e., behavior, autonomic reactivity). One hundred seventeen children (aged 6 to 12 years; 72.6% male; 69 with ADHD) completed the Balloon Analogue Risk Task (BART) to assess external reward sensitivity behaviorally. Sympathetic-based internal reward sensitivity and parasympathetic-based regulation were indexed via cardiac pre-ejection period (PEP) and respiratory sinus arrhythmia (RSA), respectively. Children with ADHD exhibited reduced internal reward sensitivity (i.e., lengthened PEP; F(1,112) = 4.01, p = 0.047) compared to healthy controls and were characterized by greater parasympathetic-based dysregulation (i.e., reduced RSA augmentation; F(1,112) = 10.12, p = 0.002). However, follow-up analyses indicated the ADHD effect was better accounted for by comorbid DBD diagnoses; that is, children with ADHD and comorbid ODD were characterized by reduced internal reward sensitivity (i.e., lengthened PEP; t = 2.47, p = 0.046) and by parasympathetic-based dysregulation (i.e., reduced RSA augmentation; t = 3.51, p = 0.002) in response to reward when compared to typically developing youth. Furthermore, children with ADHD and comorbid CD exhibited greater behaviorally-based external reward sensitivity (i.e., more total pops; F(3,110) = 5.96, p = 0.001) compared to children with ADHD only (t = 3.87, p = 0.001) and children with ADHD and ODD (t = 3.56, p = 0.003). Results suggest that disruptions in sensitivity to reward may be better accounted for, in part, by comorbid DBD. Key words: attention-deficit/hyperactivity disorder, autonomic nervous system, disruptive behavior disorders, reward sensitivity.
Biomolecule detection based on Si single-electron transistors for practical use
NASA Astrophysics Data System (ADS)
Nakajima, Anri; Kudo, Takashi; Furuse, Sadaharu
2013-07-01
Experimental and theoretical analyses demonstrated that ultra-sensitive biomolecule detection can be achieved using a Si single-electron transistor (SET). A multi-island channel structure was used to enable room-temperature operation. Coulomb oscillation increases transconductance without increasing channel width, which increases detection sensitivity to a charged target. A biotin-modified SET biosensor was used to detect streptavidin at a dilute concentration. In addition, an antibody-functionalized SET biosensor was used for immunodetection of prostate-specific antigen, demonstrating its suitability for practical use. The feasibility of ultra-sensitive detection of biomolecules for practical use by using a SET biosensor was clearly proven through this systematic study.
Kao, Chyuan-Haur; Chang, Chia Lung; Su, Wei Ming; Chen, Yu Tzu; Lu, Chien Cheng; Lee, Yu Shan; Hong, Chen Hao; Lin, Chan-Yu; Chen, Hsiang
2017-08-03
Magnesium oxide (MgO) sensing membranes in pH-sensitive electrolyte-insulator-semiconductor structures were fabricated on silicon substrates. To optimize the sensing capability of the membrane, CF4 plasma treatment was incorporated to improve the material quality of the MgO films. Multiple material analyses, including FESEM, XRD, AFM, and SIMS, indicate that plasma treatment might enhance crystallization and increase the grain size. Therefore, the sensing behaviors in terms of sensitivity, linearity, hysteresis effects, and drift rates might be improved. MgO-based EIS membranes with CF4 plasma treatment show promise for future industrial biosensing applications.
Vachhani, Raj; Patel, Toral; Centor, Robert M; Estrada, Carlos A
2017-01-01
Meta-analyses based on peer-reviewed publications report a sensitivity of approximately 85% for rapid antigen streptococcus tests to diagnose group A streptococcal (GAS) pharyngitis. Because these meta-analyses excluded package inserts, we examined the test characteristics of rapid antigen streptococcal tests and molecular methods that manufacturers report in their package inserts. We included tests available in the US market (Food and Drug Administration, period searched 1993-2015) and used package insert data to calculate pooled sensitivity and specificity. To examine quality, we used the Quality Assessment of Diagnostic Accuracy Studies-2. We excluded 26 tests having different trade names but identical methods and data. The study design was prospective in 41.7% (10 of 24). The pooled sensitivity of the most commonly used method, lateral flow/immunochromatographic, was 95% (95% confidence interval [CI] 94-96) and the pooled specificity was 98% (96-98); 7108 patients. The pooled sensitivity of the polymerase chain reaction or molecular methods was 98% (95% CI 96-98) and the pooled specificity was 96% (95% CI 95-97); 5685 patients. Package inserts include sponsored studies that overestimate the sensitivity of rapid tests to diagnose GAS pharyngitis by approximately 10%. Physicians should understand that package inserts overestimate diagnostic test utility; a negative test cannot be used to exclude GAS pharyngitis.
Analyses of procyanidins in foods using Diol phase HPLC
USDA-ARS?s Scientific Manuscript database
Separation of procyanidins using silica-based HPLC suffered from poor resolution for higher oligomers and low sensitivity due to the fluorescence quenching effects of methylene chloride in the mobile phase. Optimization of a published Diol-phase HPLC method resulted in near baseline separation for p...
Li, Xin; Kaattari, Stephen L; Vogelbein, Mary A; Vadas, George G; Unger, Michael A
2016-03-01
Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common environmental methylated derivatives, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3-5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses required only 10 min per sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by the biosensor and by GC-MS were compared. In most cases, the results yielded a good correlation between methods. This indicates that the generic antibody 2G8-based biosensor holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples.
NASA Astrophysics Data System (ADS)
Kumar, Rajeev; Kushwaha, Angad S.; Srivastava, Monika; Mishra, H.; Srivastava, S. K.
2018-03-01
In the present communication, a highly sensitive surface plasmon resonance (SPR) biosensor with the Kretschmann configuration, having alternate layers of prism/zinc oxide/silver/gold/graphene/biomolecules (ss-DNA), is presented. The optimization of the proposed configuration has been accomplished by keeping constant the thicknesses of the zinc oxide (32 nm), silver (32 nm), and graphene (0.34 nm) layers and of the biomolecule layer (100 nm) for different values of gold layer thickness (1, 3 and 5 nm). The sensitivity of the proposed SPR biosensor has been demonstrated for a number of design parameters, such as gold layer thickness, number of graphene layers, refractive index of the biomolecules, and thickness of the biomolecule layer. The SPR biosensor with optimized geometry has greater sensitivity (66 deg/RIU) than the conventional (52 deg/RIU) as well as other graphene-based (53.2 deg/RIU) SPR biosensors. The effect of zinc oxide layer thickness on the sensitivity of the SPR biosensor has also been analysed. From the analysis, it is found that the sensitivity increases significantly with increasing zinc oxide layer thickness, meaning that the zinc oxide intermediate layer plays an important role in improving the sensitivity of the biosensor. The sensitivity of the SPR biosensor also increases with the number of graphene layers (up to nine layers).
Investigation of Navier-Stokes Code Verification and Design Optimization
NASA Technical Reports Server (NTRS)
Vaidyanathan, Rajkumar
2004-01-01
With rapid progress made in employing computational techniques for various complex Navier-Stokes fluid flow problems, design optimization problems traditionally based on empirical formulations and experiments are now being addressed with the aid of computational fluid dynamics (CFD). To be able to carry out an effective CFD-based optimization study, it is essential that the uncertainty and appropriate confidence limits of the CFD solutions be quantified over the chosen design space. The present dissertation investigates the issues related to code verification, surrogate model-based optimization and sensitivity evaluation. For Navier-Stokes (NS) CFD code verification, a least square extrapolation (LSE) method is assessed. This method projects numerically computed NS solutions from multiple, coarser base grids onto a finer grid and improves solution accuracy by minimizing the residual of the discretized NS equations over the projected grid. In this dissertation, the finite volume (FV) formulation is focused on. The interplay between these concepts and the outcome of LSE, and the effects of solution gradients and singularities, nonlinear physics, and coupling of flow variables on the effectiveness of LSE are investigated. A CFD-based design optimization of a single-element liquid rocket injector is conducted with surrogate models developed using response surface methodology (RSM) based on CFD solutions. The computational model consists of the NS equations, finite-rate chemistry, and the k-ε turbulence closure. With the aid of these surrogate models, sensitivity and trade-off analyses are carried out for the injector design, whose geometry (hydrogen flow angle, hydrogen and oxygen flow areas and oxygen post tip thickness) is optimized to attain desirable goals in performance (combustion length) and life/survivability (the maximum temperatures on the oxidizer post tip and injector face and a combustion chamber wall temperature). A preliminary multi-objective optimization study is carried out using a geometric mean approach. Following this, sensitivity analyses with the aid of a variance-based non-parametric approach and partial correlation coefficients are conducted using data available from surrogate models of the objectives and the multi-objective optima, to identify the contribution of the design variables to the objective variability and to analyze the variability of the design variables and the objectives. In summary, the present dissertation offers insight into an improved coarse-to-fine grid extrapolation technique for Navier-Stokes computations and also suggests tools for a designer to conduct design optimization studies and related sensitivity analyses for a given design problem.
Papageorgiou, Spyridon N; Konstantinidis, Ioannis; Papadopoulou, Konstantina; Jäger, Andreas; Bourauel, Christoph
2014-06-01
Fixed-appliance treatment is a major part of orthodontic treatment, but clinical evidence remains scarce. The objective of this systematic review was to investigate how the therapeutic effects and side-effects of brackets used during fixed-appliance orthodontic treatment are affected by their characteristics. SEARCH METHODS AND SELECTION CRITERIA: We searched MEDLINE and 18 other databases through April 2012 without restrictions for randomized controlled trials and quasi-randomized controlled trials investigating any bracket characteristic. After duplicate selection and extraction procedures, risk of bias was assessed in duplicate according to Cochrane guidelines and quality of evidence according to the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach. Random-effects meta-analyses, subgroup analyses, and sensitivity analyses were performed with the corresponding 95 per cent confidence intervals (CIs) and 95 per cent prediction intervals (PIs). We included 25 trials on 1321 patients, most comparing self-ligating (SL) and conventional brackets. Based on the meta-analyses, the duration of orthodontic treatment was on average 2.01 months longer among patients with SL brackets (95 per cent CI: 0.45 to 3.57). The 95 per cent PI for a future trial indicated that the difference could be considerable (-1.46 to 5.47 months). Treatment characteristics, outcomes, and side-effects were clinically similar between SL and conventional brackets. For most bracket characteristics, the evidence is insufficient. Some meta-analyses included trials with high risk of bias, but sensitivity analyses indicated robustness. Based on the existing evidence, no clinical recommendation can be made regarding bracket material or different ligation modules. For SL brackets, no conclusive benefits could be proven, while their use was associated with longer treatment durations.
Rohan, Vinayak S; Taber, David J; Moussa, Omar; Pilch, Nicole A; Denmark, Signe; Meadows, Holly B; McGillicuddy, John W; Chavin, Kenneth D; Baliga, Prabhakar K; Bratton, Charles F
2017-02-01
Elevated panel reactive antibody levels have traditionally been associated with increased acute rejection rates and decreased long-term graft survival after kidney transplant. In this study, our objective was to determine patient and allograft outcomes in sensitized kidney transplant recipients with advanced HLA antibody detection and stringent protein sequence epitope analyses. This was a subanalysis of a prospective, risk-stratified randomized controlled trial that compared interleukin 2 receptor antagonist to rabbit antithymocyte globulin induction in 200 kidney transplant recipients, examining outcomes based on panel reactive antibody levels of < 20% (low) versus ≥ 20% (high, sensitized). The study was conducted between February 2009 and July 2011. All patients underwent solid-phase single antigen bead assays to detect HLA antibodies and stringent HLA epitope analyses with protein sequence alignment for virtual crossmatching. Delayed graft function, acute rejection rates, and graft loss were the main outcomes measured. Both the low (134 patients) and high (66 patients) panel reactive antibody level cohorts had equivalent induction and maintenance immunosuppression. Patients in the high-level group were more likely to be female (P < .001), to be African American (P < .001), and to have received a kidney from a deceased donor (P = .004). Acute rejection rates were similar between the low (rate of 8%) and high (rate of 9%) panel reactive antibody groups (P = .783). Delayed graft function, borderline rejection, graft loss, and death were not different between groups. Multivariate analyses demonstrated delayed graft function to be the strongest predictor of acute rejection (odds ratio, 5.7; P = .005); panel reactive antibody level, as a continuous variable, had no significant correlation with acute rejection (C statistic, 0.48; P = .771). Appropriate biologic matching with single antigen bead assays and stringent epitope analyses provided excellent outcomes in sensitized patients regardless of the induction therapy choice.
Panchal, Mitesh B; Upadhyay, Sanjay H
2014-09-01
The unprecedented dynamic characteristics of nanoelectromechanical systems make them suitable for nanoscale mass sensing applications. Owing to their superior biocompatibility, boron nitride nanotubes (BNNTs) are being increasingly used for such applications. In this study, the feasibility of a single-walled BNNT (SWBNNT)-based biosensor has been explored. A molecular structural mechanics-based finite element (FE) modelling approach has been used to analyse the dynamic behaviour of SWBNNT-based biosensors. The application of SWBNNT-based mass sensing at the zeptogram level has been reported. The effects of nanotube length and of different chiral atomic structures of the SWBNNT on sensor sensitivity have also been analysed. The vibrational behaviour of the SWBNNT has been analysed for higher-order modes of vibration to identify the intermediate landing position of a biological object of zeptogram scale. The present molecular structural mechanics-based FE modelling approach is found to be very effective for incorporating different chiralities of the atomic structures. Different boundary conditions can also be effectively simulated using the present approach to analyse the dynamic behaviour of the SWBNNT-based mass sensor. The study demonstrates the potential of the SWBNNT as a nanobiosensor capable of zeptogram-level mass sensing.
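The principle behind such resonant mass sensing is a frequency shift: attaching a mass lowers the resonant frequency f = (1/2π)√(k/m) because the stiffness is unchanged while the vibrating mass grows. The sketch below works this out for an added zeptogram mass; the frequency and effective modal mass are illustrative order-of-magnitude numbers, not values from the paper's FE model.

```python
import numpy as np

# Illustrative parameters only (not from the paper's FE model).
f0 = 50.0e9        # fundamental frequency of the bare tube [Hz]
m_eff = 1.0e-22    # effective modal mass of the tube [kg]

def freq_with_mass(dm):
    # f = (1/2pi) sqrt(k/m): stiffness k cancels when taking the ratio
    # to the bare-tube frequency, leaving a pure mass-ratio scaling.
    return f0 * np.sqrt(m_eff / (m_eff + dm))

dm = 1.0e-24       # one zeptogram (1 zg = 1e-21 g = 1e-24 kg)
shift = f0 - freq_with_mass(dm)
print(f"frequency shift for 1 zg: {shift / 1e6:.1f} MHz")

# Small-mass limit df ~ (f0/2) * dm/m_eff agrees with the exact result.
print(f"linearized shift: {f0 * dm / (2 * m_eff) / 1e6:.1f} MHz")
```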
NASA Astrophysics Data System (ADS)
Fani, Andrea; Camarri, Simone; Galletti, Chiara; Salvetti, Maria Vittoria
2012-11-01
Recent research in microfluidics has focused on the development of efficient passive micromixers, in which mixing is promoted without the help of any external power. One of the simplest designs of a passive micromixer is the T shape, in which the inlets join the main channel through T-shaped branches. In the range of Reynolds numbers, Re, of interest for practical applications, the flow inside such a mixer is laminar but is characterized by distinctive fluid-dynamic instabilities, which significantly enhance mixing but are poorly investigated in the literature. As Re is increased, the flow goes through a bifurcation which drives the system from a perfectly symmetric flow to a steady but asymmetric state, thus enhancing mixing (engulfment regime). The onset of the engulfment has been found to be influenced by geometrical parameters and by inflow conditions. In the present work we characterize the engulfment instability by a global stability analysis of the 3D base flow in a T-mixer. Sensitivity analyses with respect to a structural perturbation of the linearized flow equations and to a base flow modification were carried out. Finally, we characterize the sensitivity of the considered instability with respect to a perturbation of the inlet velocity profile.
twzPEA: A Topology and Working Zone Based Pathway Enrichment Analysis Framework
USDA-ARS?s Scientific Manuscript database
Sensitive detection of involvement and adaptation of key signaling, regulatory, and metabolic pathways holds the key to deciphering molecular mechanisms such as those in the biomass-to-biofuel conversion process in yeast. Typical gene set enrichment analyses often do not use topology information in...
EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS
The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) database in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plu...
Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases.
Seabroke, Suzie; Candore, Gianmario; Juhlin, Kristina; Quarcoo, Naashika; Wisniewski, Antoni; Arani, Ramin; Painter, Jeffery; Tregunno, Philip; Norén, G Niklas; Slattery, Jim
2016-04-01
Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial. The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics. Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source. Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases. Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.
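To make the crude-versus-subgroup distinction concrete, the sketch below computes the proportional reporting ratio (PRR), a common disproportionality statistic, on pooled counts and within each stratum separately. The 2x2 counts and the age strata are invented and do not come from any of the five databases studied.

```python
def prr(a, b, c, d):
    # Proportional reporting ratio: reporting rate of the event with the
    # drug relative to the reporting rate with all other drugs.
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts per subgroup:
# (drug+event, drug+other events, other drugs+event, other drugs+other events)
subgroups = {"<65": (30, 970, 400, 48600), ">=65": (45, 455, 300, 19200)}

# Crude analysis pools everything; a subgroup analysis scores each stratum
# separately (a signal may surface in one stratum but be diluted overall).
pooled = tuple(sum(s[i] for s in subgroups.values()) for i in range(4))
print(f"crude PRR    = {prr(*pooled):.2f}")
for name, counts in subgroups.items():
    print(f"PRR in {name:>4} = {prr(*counts):.2f}")
```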
Kim, H; Rajagopalan, M S; Beriwal, S; Smith, K J
2017-10-01
Stereotactic radiosurgery (SRS) alone or upfront whole brain radiation therapy (WBRT) plus SRS are the most commonly used treatment options for one to three brain oligometastases. The most recent randomised clinical trial result comparing SRS alone with upfront WBRT plus SRS (NCCTG N0574) has favoured SRS alone for neurocognitive function, whereas treatment options remain controversial in terms of cognitive decline and local control. The aim of this study was to conduct a cost-effectiveness analysis of these two competing treatments. A Markov model was constructed for patients treated with SRS alone or SRS plus upfront WBRT, based largely on randomised clinical trial data. Costs were based on 2016 Medicare reimbursement. Strategies were compared using the incremental cost-effectiveness ratio (ICER) and effectiveness was measured in quality-adjusted life years (QALYs). One-way and probabilistic sensitivity analyses were carried out. Strategies were evaluated from the healthcare payer's perspective with a willingness-to-pay threshold of $100 000 per QALY gained. In the base case analysis, the median survival was 9 months for both arms. SRS alone resulted in an ICER of $9917 per QALY gained. In one-way sensitivity analyses, results were most sensitive to variation in cognitive decline rates for both groups and median survival rates, but SRS alone remained cost-effective for most parameter ranges. Based on the currently available evidence, SRS alone was found to be cost-effective for patients with one to three brain metastases compared with upfront WBRT plus SRS. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
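A minimal sketch of the ICER comparison and a one-way sensitivity loop follows; the costs and QALY values are made up for illustration and are not the study's Markov model outputs.

```python
# ICER = (cost_new - cost_old) / (QALY_new - QALY_old)
WTP = 100_000  # willingness-to-pay threshold, $ per QALY gained

def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

base = dict(cost_srs=58_000, qaly_srs=0.60, cost_wbrt=55_000, qaly_wbrt=0.55)
r0 = icer(base["cost_srs"], base["qaly_srs"], base["cost_wbrt"], base["qaly_wbrt"])
print(f"base-case ICER: ${r0:,.0f}/QALY")

# One-way sensitivity: vary one parameter (here the QALYs of SRS alone)
# over a plausible range while holding everything else at base case.
for q in (0.56, 0.58, 0.60, 0.62, 0.64):
    r = icer(base["cost_srs"], q, base["cost_wbrt"], base["qaly_wbrt"])
    verdict = "cost-effective" if r <= WTP else "not cost-effective"
    print(f"qaly_srs={q:.2f}: ICER=${r:,.0f}/QALY ({verdict})")
```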
Does catastrophic thinking enhance oesophageal pain sensitivity? An experimental investigation.
Martel, M O; Olesen, A E; Jørgensen, D; Nielsen, L M; Brock, C; Edwards, R R; Drewes, A M
2016-09-01
Gastro-oesophageal reflux disease (GORD) is a major health problem that is frequently accompanied by debilitating oesophageal pain symptoms. The first objective of the study was to examine the association between catastrophizing and oesophageal pain sensitivity. The second objective was to examine whether catastrophizing was associated with the magnitude of acid-induced oesophageal sensitization. Twenty-five healthy volunteers (median age: 24.0 years; range: 22-31) were recruited and were asked to complete the Pain Catastrophizing Scale (PCS). During two subsequent study visits, mechanical, thermal, and electrical pain sensitivity in the oesophagus was assessed before and after inducing oesophageal sensitization using a 30-min intraluminal oesophageal acid perfusion procedure. Analyses were conducted based on data averaged across the two study visits. At baseline, catastrophizing was significantly associated with mechanical (r = -0.42, p < 0.05) and electrical (r = -0.60, p < 0.01) pain thresholds. After acid perfusion, catastrophizing was also significantly associated with mechanical (r = -0.58, p < 0.01) and electrical (r = -0.50, p < 0.05) pain thresholds. Catastrophizing was not significantly associated with thermal pain thresholds. Subsequent analyses revealed that catastrophizing was not significantly associated with the magnitude of acid-induced oesophageal sensitization. Taken together, findings from the present study suggest that catastrophic thinking exerts an influence on oesophageal pain sensitivity, but not necessarily on the magnitude of acid-induced oesophageal sensitization. WHAT DOES THIS STUDY ADD?: Catastrophizing is associated with heightened pain sensitivity in the oesophagus. This was substantiated by assessing responses to noxious stimulation of the oesophagus using an experimental paradigm mimicking features and symptoms experienced by patients with gastro-oesophageal reflux disease (GORD). © 2016 European Pain Federation - EFIC®
A serum protein-based algorithm for the detection of Alzheimer disease.
O'Bryant, Sid E; Xiao, Guanghua; Barber, Robert; Reisch, Joan; Doody, Rachelle; Fairchild, Thomas; Adams, Perrie; Waring, Steven; Diaz-Arrastia, Ramon
2010-09-01
To develop an algorithm that separates patients with Alzheimer disease (AD) from controls. Longitudinal case-control study. The Texas Alzheimer's Research Consortium project. Patients: We analyzed serum protein-based multiplex biomarker data from 197 patients diagnosed with AD and 203 controls. Main Outcome Measure: The total sample was randomized equally into training and test sets and random forest methods were applied to the training set to create a biomarker risk score. The biomarker risk score had a sensitivity and specificity of 0.80 and 0.91, respectively, and an area under the curve of 0.91 in detecting AD. When age, sex, education, and APOE status were added to the algorithm, the sensitivity, specificity, and area under the curve were 0.94, 0.84, and 0.95, respectively. These initial data suggest that serum protein-based biomarkers can be combined with clinical information to accurately classify AD. A disproportionate number of inflammatory and vascular markers were weighted most heavily in the analyses. Additionally, these markers consistently distinguished cases from controls in significance analysis of microarray, logistic regression, and Wilcoxon analyses, suggesting the existence of an inflammatory-related endophenotype of AD that may provide targeted therapeutic opportunities for this subset of patients.
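The style of analysis (train a random forest on one half, score the held-out half, report AUC and operating-point sensitivity/specificity) can be sketched as below. The serum biomarker panel is not public, so synthetic stand-in data are used.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the multiplex biomarker panel (cases vs controls).
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
risk_score = rf.predict_proba(X_te)[:, 1]   # the "biomarker risk score"
print(f"AUC on held-out test set: {roc_auc_score(y_te, risk_score):.2f}")

# Sensitivity/specificity at a chosen risk-score cut-off.
pred = risk_score >= 0.5
sens = np.mean(pred[y_te == 1])
spec = np.mean(~pred[y_te == 0])
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```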
Phylogenetic study of Class Armophorea (Alveolata, Ciliophora) based on 18S-rDNA data.
da Silva Paiva, Thiago; do Nascimento Borges, Bárbara; da Silva-Neto, Inácio Domingos
2013-12-01
The 18S rDNA phylogeny of Class Armophorea, a group of anaerobic ciliates, is proposed based on an analysis of 44 sequences (out of 195) retrieved from the NCBI/GenBank database. Emphasis was placed on the use of two nucleotide alignment criteria that involved variation in the gap-opening and gap-extension parameters and the use of rRNA secondary structure to orientate multiple-alignment. A sensitivity analysis of 76 data sets was run to assess the effect of variations in indel parameters on tree topologies. Bayesian inference, maximum likelihood and maximum parsimony phylogenetic analyses were used to explore how different analytic frameworks influenced the resulting hypotheses. A sensitivity analysis revealed that the relationships among higher taxa of the Intramacronucleata were dependent upon how indels were determined during multiple-alignment of nucleotides. The phylogenetic analyses rejected the monophyly of the Armophorea most of the time and consistently indicated that the Metopidae and Nyctotheridae were related to the Litostomatea. There was no consensus on the placement of the Caenomorphidae, which could be a sister group of the Metopidae + Nyctorheridae, or could have diverged at the base of the Spirotrichea branch or the Intramacronucleata tree.
Morselli, Lisa L; Gamazon, Eric R; Tasali, Esra; Cox, Nancy J; Van Cauter, Eve; Davis, Lea K
2018-01-01
Over the past 20 years, a large body of experimental and epidemiologic evidence has linked sleep duration and quality to glucose homeostasis, although the mechanistic pathways remain unclear. The aim of the current study was to determine whether genetic variation influencing both sleep and glucose regulation could underlie their functional relationship. We hypothesized that the genetic regulation of electroencephalographic (EEG) activity during non-rapid eye movement sleep, a highly heritable trait with fingerprint reproducibility, is correlated with the genetic control of metabolic traits including insulin sensitivity and β-cell function. We tested our hypotheses through univariate and bivariate heritability analyses in a three-generation pedigree with in-depth phenotyping of both sleep EEG and metabolic traits in 48 family members. Our analyses accounted for age, sex, adiposity, and the use of psychoactive medications. In univariate analyses, we found significant heritability for measures of fasting insulin sensitivity and β-cell function, for time spent in slow-wave sleep, and for EEG spectral power in the delta, theta, and sigma ranges. Bivariate heritability analyses provided the first evidence for a shared genetic control of brain activity during deep sleep and fasting insulin secretion rate. © 2017 by the American Diabetes Association.
Techno-Economic Evaluation of Biodiesel Production from Waste Cooking Oil—A Case Study of Hong Kong
Karmee, Sanjib Kumar; Patria, Raffel Dharma; Lin, Carol Sze Ki
2015-01-01
Fossil fuel shortage is a major challenge worldwide. Therefore, research is currently underway to investigate potential renewable energy sources. Biodiesel is one of the major renewable energy sources that can be obtained from oils and fats by transesterification. However, biodiesel obtained from vegetable oils as feedstock is expensive. Thus, an alternative and inexpensive feedstock such as waste cooking oil (WCO) can be used as feedstock for biodiesel production. In this project, techno-economic analyses were performed on the biodiesel production in Hong Kong using WCO as a feedstock. Three different catalysts such as acid, base, and lipase were evaluated for the biodiesel production from WCO. These economic analyses were then compared to determine the most cost-effective method for the biodiesel production. The internal rate of return (IRR) sensitivity analyses on the WCO price and biodiesel price variation are performed. Acid was found to be the most cost-effective catalyst for the biodiesel production; whereas, lipase was the most expensive catalyst for biodiesel production. In the IRR sensitivity analyses, the acid catalyst can also acquire acceptable IRR despite the variation of the WCO and biodiesel prices. PMID:25809602
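The IRR sensitivity described above amounts to recomputing the internal rate of return of the project cash flows under different price assumptions. A minimal sketch follows, using the numpy_financial package; the capital cost, throughput and operating cost are invented for the example and are not the paper's techno-economic figures.

```python
import numpy_financial as npf

def annual_cash_flow(wco_price, biodiesel_price, throughput=1000.0):
    # Very simplified plant economics [tonnes/yr]: revenue minus feedstock
    # cost minus a fixed annual operating cost. All numbers are placeholders.
    return throughput * (biodiesel_price - wco_price) - 150_000.0

capex = 1_200_000.0  # hypothetical up-front investment, $
for wco in (200.0, 300.0, 400.0):           # $/tonne feedstock
    for bd in (800.0, 900.0, 1000.0):       # $/tonne biodiesel
        flows = [-capex] + [annual_cash_flow(wco, bd)] * 10  # 10-year life
        print(f"WCO={wco:.0f}, biodiesel={bd:.0f}: IRR={npf.irr(flows):6.1%}")
```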
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength between patient and control groups were observed with Mann-Whitney tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted the seizure onset zone, as shown in ROC curves: discriminant analysis (two-dimensional) predicted the seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
Health economics and outcomes methods in risk-based decision-making for blood safety.
Custer, Brian; Janssen, Mart P
2015-08-01
Analytical methods appropriate for health economic assessments of transfusion safety interventions have not previously been described in ways that facilitate their use. Within the context of risk-based decision-making (RBDM), health economics can be important for optimizing decisions among competing interventions. The objective of this review is to address key considerations and limitations of current methods as they apply to blood safety. Because a voluntary blood supply is an example of a public good, analyses should be conducted from the societal perspective when possible. Two primary study designs are recommended for most blood safety intervention assessments: budget impact analysis (BIA), which measures the cost of implementing an intervention not only to the blood operator but also in a broader context, and cost-utility analysis (CUA), which measures the ratio between costs and health gain achieved, in terms of reduced morbidity and mortality, by use of an intervention. These analyses often have important limitations because data that reflect specific aspects, for example, blood recipient population characteristics or complication rates, are not available. Sensitivity analyses play an important role. The impact of various uncertain factors can be studied conjointly in probabilistic sensitivity analyses. The use of BIA and CUA together provides a comprehensive assessment of the costs and benefits from implementing (or not) specific interventions. RBDM is multifaceted and impacts a broad spectrum of stakeholders. Gathering and analyzing health economic evidence as part of the RBDM process enhances the quality, completeness, and transparency of decision-making. © 2015 AABB.
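A probabilistic sensitivity analysis of the kind mentioned above draws all uncertain parameters jointly and reports the share of simulations that fall below a willingness-to-pay threshold. The sketch below illustrates the mechanics; the distributions and their parameters are invented, not drawn from any blood-safety assessment.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical joint uncertainty: intervention cost and QALYs gained.
cost = rng.gamma(shape=16.0, scale=250_000.0, size=n)            # $, skewed
qalys = rng.normal(loc=60.0, scale=20.0, size=n).clip(min=1.0)   # QALYs gained

icer = cost / qalys
for wtp in (50_000, 100_000, 150_000):
    # Probability the intervention is cost-effective at this threshold,
    # i.e. one point on a cost-effectiveness acceptability curve.
    print(f"P(cost-effective at ${wtp:,}/QALY) = {np.mean(icer <= wtp):.2f}")
```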
Investigating the Group-Level Impact of Advanced Dual-Echo fMRI Combinations
Kettinger, Ádám; Hill, Christopher; Vidnyánszky, Zoltán; Windischberger, Christian; Nagy, Zoltán
2016-01-01
Multi-echo fMRI data acquisition has been widely investigated and suggested to optimize sensitivity for detecting the BOLD signal. Several methods have also been proposed for the combination of data with different echo times. The aim of the present study was to investigate whether these advanced echo combination methods provide advantages over the simple averaging of echoes when state-of-the-art group-level random-effect analyses are performed. Both resting-state and task-based dual-echo fMRI data were collected from 27 healthy adult individuals (14 male, mean age = 25.75 years) using standard echo-planar acquisition methods at 3T. Both resting-state and task-based data were subjected to a standard image pre-processing pipeline. Subsequently the two echoes were combined as a weighted average, using four different strategies for calculating the weights: (1) simple arithmetic averaging, (2) BOLD sensitivity weighting, (3) temporal signal-to-noise ratio (tSNR) weighting and (4) temporal BOLD sensitivity weighting. Our results clearly show that the simple averaging of data with the different echoes is sufficient. Advanced echo combination methods may provide advantages on a single-subject level but when considering random-effects group level statistics they provide no benefit regarding sensitivity (i.e., group-level t-values) compared to the simple echo-averaging approach. One possible reason for the lack of clear advantages may be that apart from increasing the average BOLD sensitivity at the single-subject level, the advanced weighted averaging methods also inflate the inter-subject variance. As the echo combination methods provide very similar results, the recommendation is to choose between them depending on the availability of time for collecting additional resting-state data or whether subject-level or group-level analyses are planned. PMID:28018165
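The sketch below illustrates one of the weighting schemes compared in the study, voxel-wise tSNR weighting, against simple averaging. The two echo time series are synthetic stand-ins, and the signal/noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_t = 1000, 200
# Synthetic stand-ins: shorter TE gives stronger signal, longer TE is noisier.
echo1 = 100.0 + rng.normal(0.0, 2.0, (n_vox, n_t))
echo2 = 60.0 + rng.normal(0.0, 3.0, (n_vox, n_t))

def tsnr(ts):
    # Temporal SNR per voxel: mean over time divided by SD over time.
    return ts.mean(axis=1) / ts.std(axis=1)

# Voxel-wise tSNR weights, then a weighted average of the two echoes.
w1, w2 = tsnr(echo1), tsnr(echo2)
combined = (w1[:, None] * echo1 + w2[:, None] * echo2) / (w1 + w2)[:, None]

# Simple averaging: the baseline the study found sufficient at group level.
averaged = 0.5 * (echo1 + echo2)
print(f"mean tSNR: weighted={tsnr(combined).mean():.1f}, "
      f"simple={tsnr(averaged).mean():.1f}")
```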
Hattis, Dale; Goble, Robert; Chu, Margaret
2005-01-01
In an earlier report we developed a quantitative likelihood-based analysis of the differences in sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth to weaning, and weaning to 60 days) relative to exposures in adult life. Here we draw implications for assessing human risks for full lifetime exposures, taking into account three types of uncertainties in making projections from the rodent data: uncertainty in the central estimates of the life-stage–specific sensitivity factors estimated earlier, uncertainty from chemical-to-chemical differences in life-stage–specific sensitivities for carcinogenesis, and uncertainty in the mapping of rodent life stages to human ages/exposure periods. Among the uncertainties analyzed, the mapping of rodent life stages to human ages/exposure periods is most important quantitatively (a range of several-fold in estimates of the duration of the human equivalent of the highest sensitivity “birth to weaning” period in rodents). The combined effects of these uncertainties are estimated with Monte Carlo analyses. Overall, the estimated population arithmetic mean risk from lifetime exposures at a constant milligrams per kilogram body weight level to a generic mutagenic carcinogen is about 2.8-fold larger than expected from adult-only exposure, with 5–95% confidence limits of 1.5- to 6-fold. The mean estimates for the 0- to 2-year and 2- to 15-year periods are about 35–55% larger than the 10- and 3-fold sensitivity factor adjustments recently proposed by the U.S. Environmental Protection Agency. The present results are based on data for only nine chemicals, including five mutagens. Risk inferences will be altered as data become available for other chemicals. PMID:15811844
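The Monte Carlo combination of uncertainties has the following general shape: draw life-stage sensitivity factors and the uncertain durations they map onto, form the duration-weighted lifetime risk multiplier, and summarize its distribution. The lognormal spreads and duration ranges below are placeholders chosen only to illustrate the calculation, not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Placeholder life-stage sensitivity factors relative to adults.
s_early = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=n)  # early-life analog
s_child = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)   # childhood analog

# Uncertain mapping of rodent life stages onto human exposure durations [yr].
d_early = rng.uniform(1.0, 4.0, size=n)
d_child = rng.uniform(8.0, 15.0, size=n)
lifetime = 70.0

# Duration-weighted lifetime risk multiplier relative to adult sensitivity (=1).
mult = (s_early * d_early + s_child * d_child
        + 1.0 * (lifetime - d_early - d_child)) / lifetime
lo, hi = np.percentile(mult, [5, 95])
print(f"mean multiplier = {mult.mean():.2f}, 5-95% = ({lo:.2f}, {hi:.2f})")
```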
Fox, Annie B; Hamilton, Alison B; Frayne, Susan M; Wiltsey-Stirman, Shannon; Bean-Mayberry, Bevanne; Carney, Diane; Di Leone, Brooke A L; Gierisch, Jennifer M; Goldstein, Karen M; Romodan, Yasmin; Sadler, Anne G; Yano, Elizabeth M; Yee, Ellen F; Vogt, Dawne
2016-01-01
Although providing culturally sensitive health care is vitally important, there is little consensus regarding the most effective strategy for implementing cultural competence trainings in the health care setting. Evidence-based quality improvement (EBQI), which involves adapting evidence-based practices to meet local needs, may improve uptake and effectiveness of a variety of health care innovations. Yet, to our knowledge, EBQI has not yet been applied to cultural competence training. To evaluate whether EBQI could enhance the impact of an evidence-based training intended to improve Veterans Affairs health care staff gender sensitivity and knowledge (Caring for Women Veterans; CWV), we compared the reach and effectiveness of EBQI delivery versus standard web-based implementation strategies of CWV and assessed barriers and facilitators to EBQI implementation. Workgroups at four diverse Veterans Affairs health care sites were randomized to either an EBQI or standard web-based implementation condition (SI). All EBQI sites selected a group-based implementation strategy. Employees (N = 84) completed pretraining and posttraining assessments of gender sensitivity and knowledge, and focus groups/interviews were conducted with leadership and staff before and after implementation. Reach of CWV was greater in the EBQI condition than in the SI condition. Whereas both gender sensitivity and knowledge improved in the EBQI condition, only gender sensitivity improved in the SI condition. Qualitative analyses revealed that the EBQI approach was well received, although a number of barriers were identified. Findings suggest that EBQI can enhance the uptake and effectiveness of employee trainings. However, the decision to pursue EBQI must be informed by a consideration of available resources.
A framework for improving a seasonal hydrological forecasting system using sensitivity analysis
NASA Astrophysics Data System (ADS)
Arnal, Louise; Pappenberger, Florian; Smith, Paul; Cloke, Hannah
2017-04-01
Seasonal streamflow forecasts are of great value for the socio-economic sector, for applications such as navigation, flood and drought mitigation and reservoir management for hydropower generation and water allocation to agriculture and drinking water. However, the performance of dynamical seasonal hydrological forecasting systems (systems based on running seasonal meteorological forecasts through a hydrological model to produce seasonal hydrological forecasts) is still limited in space and time. In this context, ESP (Ensemble Streamflow Prediction) remains an attractive method for seasonal streamflow forecasting, as it relies on forcing a hydrological model (starting from the latest observed or simulated initial hydrological conditions) with historical meteorological observations. This makes it cheaper to run than a standard dynamical seasonal hydrological forecasting system, for which the seasonal meteorological forecasts first have to be produced, while still producing skilful forecasts. There is thus a need to focus resources and time on improvements in dynamical seasonal hydrological forecasting systems which will eventually lead to significant improvements in the skill of the streamflow forecasts generated. Sensitivity analyses are a powerful tool that can be used to disentangle the relative contributions of the two main sources of errors in seasonal streamflow forecasts, namely the initial hydrological conditions (IHC; e.g., soil moisture, snow cover, initial streamflow, among others) and the meteorological forcing (MF; i.e., seasonal meteorological forecasts of precipitation and temperature, input to the hydrological model). Sensitivity analyses are, however, most useful if they inform and change current operational practices. To this end, we propose a method to improve the design of a seasonal hydrological forecasting system. This method is based on sensitivity analyses that inform the forecasters as to which element of the forecasting chain (i.e., IHC or MF) could potentially lead to the highest increase in seasonal hydrological forecasting performance, after each forecast update.
2012-06-02
[Fragmented record; only excerpts are recoverable.] Citation: regional climate model downscaling, J. Geophys. Res., 117, D11103, doi:10.1029/2012JD017692. Introduction excerpt: modeling studies and data analyses based on ground and satellite data have demonstrated that land surface state variables, such as soil moisture, snow, vegetation, and soil temperature … [the study used] downscaling rather than simply applying reanalysis data as lateral boundary conditions (LBC) for both the Eta control and sensitivity experiments, as done in many RCM sensitivity studies.
Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde
2017-01-01
Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed-form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed-form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
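The redistribution idea can be sketched directly: assign a fraction of the missing outcomes to "favorable" under a range of MNAR assumptions and recompute the proportion each time. This is illustrative of the spirit of the approach, with invented counts and a single sensitivity parameter, not the paper's exact estimator.

```python
def adjusted_proportion(n_fav, n_unfav, n_miss, p_miss_fav):
    # Assign a fraction p_miss_fav of the missing outcomes to 'favorable'
    # and the remainder to 'unfavorable', then recompute the proportion.
    total = n_fav + n_unfav + n_miss
    return (n_fav + p_miss_fav * n_miss) / total

n_fav, n_unfav, n_miss = 70, 20, 10     # hypothetical visit-level counts
for p in (0.0, 0.25, 0.5, 0.75, 1.0):   # range of MNAR assumptions
    print(f"missing treated as {p:.0%} favorable -> adjusted proportion = "
          f"{adjusted_proportion(n_fav, n_unfav, n_miss, p):.3f}")
```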
Blázquez-Pérez, Antonio; San Miguel, Ramón; Mar, Javier
2013-10-01
Chronic hepatitis C is the leading cause of chronic liver disease, representing a significant burden in terms of morbidity, mortality and costs. A new scenario of therapy for hepatitis C virus (HCV) genotype 1 infection is being established with the approval of two effective HCV protease inhibitors (PIs) in combination with the standard of care (SOC), peginterferon and ribavirin. Our objective was to estimate the cost effectiveness of combination therapy with the new PIs (boceprevir and telaprevir) plus peginterferon and ribavirin versus SOC in treatment-naive patients with HCV genotype 1, according to data obtained from clinical trials (CTs). A Markov model simulating chronic HCV progression was used to estimate disease treatment costs and effects over patients' lifetimes, in the Spanish national public healthcare system. The target population was treatment-naive patients with chronic HCV genotype 1, whose demographic characteristics were obtained from the published pivotal CTs SPRINT and ADVANCE. Three options were analysed for each PI based on results from the two CTs: universal triple therapy, interleukin (IL)-28B-guided therapy and dual therapy with peginterferon and ribavirin. A univariate sensitivity analysis was performed to evaluate the uncertainty of certain parameters: age at start of treatment, transition probabilities, drug costs, CT efficacy results and a higher hazard ratio for all-cause mortality for patients with chronic HCV. Probabilistic sensitivity analyses were also carried out. Incremental cost-effectiveness ratios (ICERs), expressed in year-2012 euros per quality-adjusted life-year (QALY) gained, were used as outcome measures. According to the base-case analysis, using dual therapy as the comparator, the alternative IL28B-guided therapy presents a more favorable ICER (€18,079/QALY for boceprevir and €25,914/QALY for telaprevir) than the universal triple therapy option (€27,594/QALY for boceprevir and €33,751/QALY for telaprevir), with an ICER clearly below the efficiency threshold for medical interventions in the Spanish setting. Sensitivity analysis showed that age at the beginning of treatment was an important factor that influenced the ICER. A potential reduction in PI costs would also clearly improve the ICER, and transition probabilities influenced the results, but to a lesser extent. Probabilistic sensitivity analyses showed that 95% of the simulations presented an ICER below €40,000/QALY. Post hoc estimations of sustained virological responses for the IL28B-guided therapeutic option represented a limitation of the study. The therapeutic options analysed for the base-case cohort can be considered cost-effective interventions for the Spanish healthcare framework. Sensitivity analysis placed the acceptability threshold for the IL28B-guided strategy at patients younger than 60 years.
Neutrophil/lymphocyte ratio and platelet/lymphocyte ratio in mood disorders: A meta-analysis.
Mazza, Mario Gennaro; Lucchi, Sara; Tringali, Agnese Grazia Maria; Rossetti, Aurora; Botti, Eugenia Rossana; Clerici, Massimo
2018-06-08
The immune and inflammatory system is involved in the etiology of mood disorders. Neutrophil/lymphocyte ratio (NLR), platelet/lymphocyte ratio (PLR) and monocyte/lymphocyte ratio (MLR) are inexpensive and reproducible biomarkers of inflammation. This is the first meta-analysis exploring the role of NLR and PLR in mood disorders. We identified 11 studies according to our inclusion criteria from the main electronic databases. Meta-analyses were carried out generating pooled standardized mean differences (SMDs) between index subjects and healthy controls (HC). Heterogeneity was estimated. Relevant sensitivity and meta-regression analyses were conducted. Subjects with bipolar disorder (BD) had higher NLR and PLR as compared with HC (respectively SMD = 0.672; p < 0.001; I² = 82.4% and SMD = 0.425; p = 0.048; I² = 86.53%). Heterogeneity-based sensitivity analyses confirmed these findings. Subgroup analysis showed an influence of bipolar phase on the overall estimate, with studies including subjects in the manic phase or in any bipolar phase showing significantly higher NLR and PLR as compared with HC, whereas the effect was not significant among studies including only euthymic bipolar subjects. Meta-regression showed that age and sex influenced the relationship between BD and NLR but not the relationship between BD and PLR. Meta-analysis was not carried out for MLR because our search identified only one study comparing BD to HC, and only one study comparing MDD to HC. Subjects with major depressive disorder (MDD) had higher NLR as compared with HC (SMD = 0.670; p = 0.028; I² = 89.931%). Heterogeneity-based sensitivity analyses and meta-regression confirmed these findings. Our meta-analysis supports the hypothesis that an inflammatory activation occurs in mood disorders and that NLR and PLR may be useful to detect this activation. More research, including comparisons of NLR, PLR and MLR between different bipolar phases and between BD and MDD, is needed. Copyright © 2018 Elsevier Inc. All rights reserved.
Rifkin-Graboi, A; Kong, L; Sim, L W; Sanmugam, S; Broekman, B F P; Chen, H; Wong, E; Kwek, K; Saw, S-M; Chong, Y-S; Gluckman, P D; Fortier, M V; Pederson, D; Meaney, M J; Qiu, A
2015-01-01
Mechanisms underlying the profound parental effects on cognitive, emotional and social development in humans remain poorly understood. Studies with nonhuman models suggest variations in parental care affect the limbic system, influential to learning, autobiography and emotional regulation. In some research, nonoptimal care relates to decreases in neurogenesis, although other work suggests early-postnatal social adversity accelerates the maturation of limbic structures associated with emotional learning. We explored whether maternal sensitivity predicts human limbic system development and functional connectivity patterns in a small sample of human infants. When infants were 6 months of age, 20 mother–infant dyads attended a laboratory-based observational session and the infants underwent neuroimaging at the same age. After considering age at imaging, household income and postnatal maternal anxiety, regression analyses demonstrated significant indirect associations between maternal sensitivity and bilateral hippocampal volume at six months, with the majority of associations between sensitivity and the amygdala demonstrating similar indirect, but not significant results. Moreover, functional analyses revealed direct associations between maternal sensitivity and connectivity between the hippocampus and areas important for emotional regulation and socio-emotional functioning. Sensitivity additionally predicted indirect associations between limbic structures and regions related to autobiographical memory. Our volumetric results are consistent with research indicating accelerated limbic development in response to early social adversity, and in combination with our functional results, if replicated in a larger sample, may suggest that subtle, but important, variations in maternal care influence neuroanatomical trajectories important to future cognitive and emotional functioning. PMID:26506054
Adrion, Christine; Mansmann, Ulrich
2012-09-10
A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
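The model-ranking step based on the mean logarithmic score reduces to a simple calculation once the leave-one-out predictive densities are available. The sketch below shows that step in isolation; the density values are invented, whereas in the paper they come from INLA's leave-one-out predictive distributions.

```python
import numpy as np

# Hypothetical predictive density of each left-out observation under two
# candidate GLMMs for the same count data.
loo_density_poisson = np.array([0.12, 0.08, 0.20, 0.05, 0.15])
loo_density_negbin = np.array([0.18, 0.14, 0.22, 0.11, 0.19])

def mean_log_score(dens):
    # Negative mean log predictive density; smaller is better.
    return -np.mean(np.log(dens))

for name, d in [("Poisson GLMM", loo_density_poisson),
                ("Neg. binomial GLMM", loo_density_negbin)]:
    print(f"{name}: mean log score = {mean_log_score(d):.3f}")
```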
USDA-ARS?s Scientific Manuscript database
As the sophistication and sensitivity of chemical instrumentation increases so do the number of applications. Correspondingly, new questions and opportunities for systems previously studied also arise. As with most plants, the emission of volatiles from agricultural products is complex and varies am...
Disability Rights, Gender, and Development: A Resource Tool for Action. Full Report
ERIC Educational Resources Information Center
de Silva de Alwis, Rangita
2008-01-01
This resource tool builds a normative framework to examine the intersections of disability rights and gender in the human rights based approach to development. Through case studies, good practices and analyses the research tool makes recommendations and illustrates effective tools for the implementation of gender and disability sensitive laws,…
An earlier paper (Hattis et al., 2003) developed a quantitative likelihood-based statistical analysis of the differences in apparent sensitivity of rodents to mutagenic carcinogens across three life stages (fetal, birth-weaning, and weaning-60 days) relative to exposures in adult...
NASA Astrophysics Data System (ADS)
Leckebusch, G. C.; Kirchner-Bossi, N. O.; Befort, D. J.; Ulbrich, U.
2015-12-01
Time-clustered mid-latitude winter storms are responsible for a large portion of the overall windstorm-related damage in Europe. Their study is thus of considerable meteorological interest, and its outcomes are of practical value to the (re)insurance industry. In addition to existing cyclone-based studies, here we use an event identification approach based on near-surface wind speeds only, to investigate windstorm clustering and compare it to cyclone clustering. Specifically, cyclone and windstorm tracks are identified for winter 1979-2013 (Oct-Mar), to perform two sensitivity analyses on event clustering in the North Atlantic using the ERA-Interim Reanalysis. First, the link between clustering and cyclone intensity is analysed and compared to windstorms. Secondly, the sensitivity of clustering on intra-seasonal time scales is investigated, for both cyclones and windstorms. The wind-based approach reveals additional regions of clustering over Western Europe, which could be related to extreme damages, showing the added value of investigating wind-field-derived tracks in addition to cyclone tracks. Previous studies indicate a higher degree of clustering for stronger cyclones. However, our results show that this assumption is not always met. Although a positive relationship is confirmed for the clustering centre located over Iceland, clustering off the coast of the Iberian Peninsula behaves in the opposite way. Even though this region shows the highest clustering, most of its signal is due to cyclones with intensities below the 70th percentile of the Laplacian of MSLP. Results on the sensitivity of clustering to the time of the winter season (Oct-Mar) show a temporal evolution of the clustering patterns, for both windstorms and cyclones. Clustering of windstorms and of the strongest cyclones culminates around February, while clustering of all cyclones peaks in December to January.
Novel Primer Sets for Next Generation Sequencing-Based Analyses of Water Quality
Lee, Elvina; Khurana, Maninder S.; Whiteley, Andrew S.; Monis, Paul T.; Bath, Andrew; Gordon, Cameron; Ryan, Una M.; Paparini, Andrea
2017-01-01
Next generation sequencing (NGS) has rapidly become an invaluable tool for the detection, identification and relative quantification of environmental microorganisms. Here, we demonstrate two new 16S rDNA primer sets, which are compatible with NGS approaches and are primarily for use in water quality studies. Compared to 16S rRNA gene based universal primers, in silico and experimental analyses demonstrated that the new primers showed increased specificity for the Cyanobacteria and Proteobacteria phyla, allowing increased sensitivity for the detection, identification and relative quantification of toxic bloom-forming microalgae, microbial water quality bioindicators and common pathogens. Significantly, Cyanobacterial and Proteobacterial sequences accounted for ca. 95% of all sequences obtained within NGS runs (when compared to ca. 50% with standard universal NGS primers), providing higher sensitivity and greater phylogenetic resolution of key water quality microbial groups. The increased selectivity of the new primers allow the parallel sequencing of more samples through reduced sequence retrieval levels required to detect target groups, potentially reducing NGS costs by 50% but still guaranteeing optimal coverage and species discrimination. PMID:28118368
Hypoglycemia alarm enhancement using data fusion.
Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W
2010-01-01
The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model developed used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. Positive predictive values on the test set improved from 61 to 66% with negative predictive values improving from 96 to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.
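The abstract's CGMS measurement model (a delayed copy of the blood glucose profile plus per-sample noise and a per-night calibration offset, using its stated parameters of 7.1 min, SD 4.5 mg/dl and SD 19.8 mg/dl) can be sketched as below. The underlying true BG trace is synthetic, and the alarm rule is a naive fixed threshold rather than the trained thresholds of the study.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 480, 5)                             # overnight, 5-min grid [min]
bg_true = 110 + 30 * np.sin(2 * np.pi * t / 480.0)   # synthetic BG [mg/dl]

delay = 7.1                                    # mean sensor delay [min]
bg_delayed = np.interp(t - delay, t, bg_true)  # shift the profile in time
calibration_offset = rng.normal(0.0, 19.8)     # per-night vertical shift [mg/dl]
noise = rng.normal(0.0, 4.5, size=t.size)      # per-sample error [mg/dl]
cgms = bg_delayed + calibration_offset + noise

alarm = cgms < 70.0                            # naive hypoglycemia threshold
print(f"simulated alarms fired in {alarm.mean():.0%} of samples")
```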
Actors and networks in resource conflict resolution under climate change in rural Kenya
NASA Astrophysics Data System (ADS)
Ngaruiya, Grace W.; Scheffran, Jürgen
2016-05-01
The change from consensual decision-making arrangements into centralized hierarchical chieftaincy schemes through colonization disrupted many rural conflict resolution mechanisms in Africa. In addition, climate change impacts on land use have introduced additional socio-ecological factors that complicate rural conflict dynamics. Despite the current urgent need for conflict-sensitive adaptation, resolution efficiency of these fused rural institutions has hardly been documented. In this context, we analyse the Loitoktok network for implemented resource conflict resolution structures and identify potential actors to guide conflict-sensitive adaptation. This is based on social network data and processes that are collected using the saturation sampling technique to analyse mechanisms of brokerage. We find that there are three different forms of fused conflict resolution arrangements that integrate traditional institutions and private investors in the community. To effectively implement conflict-sensitive adaptation, we recommend the extension officers, the council of elders, local chiefs and private investors as potential conduits of knowledge in rural areas. In conclusion, efficiency of these fused conflict resolution institutions is aided by the presence of holistic resource management policies and diversification in conflict resolution actors and networks.
The lymphocyte transformation test for the diagnosis of drug allergy: sensitivity and specificity.
Nyfeler, B; Pichler, W J
1997-02-01
The diagnosis of a drug allergy is mainly based upon a very detailed history and the clinical findings. In addition, several in vitro or in vivo tests can be performed to demonstrate a sensitization to a certain drug. One of the in vitro tests is the lymphocyte transformation test (LTT), which can reveal a sensitization of T-cells by an enhanced proliferative response of peripheral blood mononuclear cells to a certain drug. To evaluate the sensitivity and specificity of the LTT, 923 case histories of patients with suspected drug allergy in whom a LTT was performed were retrospectively analysed. Based on the history and provocation tests, the probability (P) of a drug allergy was estimated to be > 0.9, 0.5-0.9, 0.1-0.5 or < 0.1, and was put in relation to a positive or negative LTT. Seventy-eight of 100 patients with a very likely drug allergy (P > 0.9) had a positive LTT, which indicates a sensitivity of 78%. If allergies to beta-lactam antibiotics were analysed separately, the sensitivity was 74.4%. Fifteen of 102 patients in whom a classical drug allergy could be excluded (P < 0.1) nevertheless had a positive LTT (specificity thus 85%). The majority of these cases were classified as so-called pseudo-allergic reactions to NSAIDs. Patients with a clear history and clinical findings for a cotrimoxazole-related allergy all had a positive LTT (6/6), and in patients who reacted to drugs containing proteins (e.g. hen's egg lysozyme, 7/7), sensitization could be demonstrated as well. In 632 of the 923 cases, skin tests were also performed (scratch and/or epicutaneous), for which we found a lower sensitivity than for the LTT (64%), while the specificity was the same (85%). Although our data are somewhat biased by the high number of penicillin allergies and cannot be generalized to drug allergies caused by other compounds, we conclude that the LTT is a useful diagnostic test in drug allergies, able to support the diagnosis of a drug allergy and to pinpoint the relevant drug.
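The headline accuracy figures follow directly from the standard 2x2 definitions; a quick check using the counts reported above (variable names are mine):

```python
# Sensitivity: positive LTT among patients with a very likely allergy (P > 0.9).
# Specificity: negative LTT among patients in whom allergy was excluded (P < 0.1).
ltt_pos_allergic, n_allergic = 78, 100
ltt_pos_nonallergic, n_nonallergic = 15, 102

sens = ltt_pos_allergic / n_allergic                          # 0.78
spec = (n_nonallergic - ltt_pos_nonallergic) / n_nonallergic  # 87/102, ~0.85
print(f"LTT sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```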
Li, Xin; Kaattari, Stephen L.; Vogelbein, Mary A.; Vadas, George G.; Unger, Michael A.
2016-01-01
Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common environmental methylated PAHs, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3-5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses required only 10 min per sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by biosensor and GC-MS were compared. In most cases, the results yielded a good correlation between methods. This indicates that the generic antibody 2G8-based biosensor holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples. PMID:26925369
Validation of a portable nitric oxide analyzer for screening in primary ciliary dyskinesias.
Harris, Amanda; Bhullar, Esther; Gove, Kerry; Joslin, Rhiannon; Pelling, Jennifer; Evans, Hazel J; Walker, Woolf T; Lucas, Jane S
2014-02-10
Nasal nitric oxide (nNO) levels are very low in primary ciliary dyskinesia (PCD), and nNO is used as a screening test. We assessed the reliability and usability of a hand-held analyser in comparison to a stationary nitric oxide (NO) analyser in 50 participants (15 healthy, 13 PCD, 22 other respiratory diseases; age 6-79 years). Nasal NO was measured using a stationary NO analyser during a breath-holding maneuver, and using a hand-held analyser during tidal breathing, sampling at 2 ml/sec or 5 ml/sec. The three methods were compared for their specificity and sensitivity as a screen for PCD, their success rate in different age groups, within-subject repeatability, and acceptability. Correlation between methods was assessed. Valid nNO measurements were obtained in 94% of participants using the stationary analyser, 96% using the hand-held analyser at 5 ml/sec, and 76% at 2 ml/sec. The hand-held device at 5 ml/sec had excellent sensitivity and specificity as a screening test for PCD during tidal breathing (cut-off of 30 nL/min, 100% sensitivity, >95% specificity). The cut-off using the stationary analyser during breath-hold was 38 nL/min (100% sensitivity, 95% specificity). The stationary and hand-held analyser (5 ml/sec) showed reasonable within-subject repeatability (coefficient of variation = 15%). The hand-held NO analyser provides a promising screening tool for PCD.
A hybrid cost-sensitive ensemble for imbalanced breast thermogram classification.
Krawczyk, Bartosz; Schaefer, Gerald; Woźniak, Michał
2015-11-01
Early recognition of breast cancer, the most commonly diagnosed form of cancer in women, is of crucial importance, given that it leads to significantly improved chances of survival. Medical thermography, which uses an infrared camera for thermal imaging, has been demonstrated as a particularly useful technique for early diagnosis, because it detects smaller tumors than the standard modality of mammography. In this paper, we analyse breast thermograms by extracting features describing bilateral symmetries between the two breast areas, and present a classification system for decision making. Clearly, the costs associated with missing a cancer case are much higher than those for mislabelling a benign case. At the same time, datasets contain significantly fewer malignant cases than benign ones. Standard classification approaches fail to consider either of these aspects. In this paper, we introduce a hybrid cost-sensitive classifier ensemble to address this challenging problem. Our approach entails a pool of cost-sensitive decision trees which assign a higher misclassification cost to the malignant class, thereby boosting its recognition rate. A genetic algorithm is employed for simultaneous feature selection and classifier fusion. As an optimisation criterion, we use a combination of misclassification cost and diversity to achieve both a high sensitivity and a heterogeneous ensemble. Furthermore, we prune our ensemble by discarding classifiers that contribute minimally to the decision making. For a challenging dataset of about 150 thermograms, our approach achieves an excellent sensitivity of 83.10%, while maintaining a high specificity of 89.44%. This not only signifies improved recognition of malignant cases but also statistically outperforms other state-of-the-art algorithms designed for imbalanced classification, and hence provides an effective approach for analysing breast thermograms. Our proposed hybrid cost-sensitive ensemble can facilitate highly accurate early diagnosis of breast cancer based on thermogram features. It overcomes the difficulties posed by the imbalanced distribution of patients in the two analysed groups. Copyright © 2015 Elsevier B.V. All rights reserved.
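The core building block of such an ensemble, a decision tree that penalises a missed malignancy more heavily than a false alarm, can be sketched with scikit-learn's class_weight option. The data and the 5:1 cost ratio below are placeholders, not the paper's settings:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Placeholder imbalanced data: 120 benign (0) vs 30 malignant (1) "thermogram features".
X = np.vstack([rng.normal(0.0, 1.0, (120, 8)), rng.normal(0.8, 1.0, (30, 8))])
y = np.array([0] * 120 + [1] * 30)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

# Misclassifying a malignant case costs 5x more than a benign one (illustrative).
tree = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, random_state=0).fit(Xtr, ytr)
pred = tree.predict(Xte)
sens = (pred[yte == 1] == 1).mean()   # recognition rate on malignant cases
spec = (pred[yte == 0] == 0).mean()   # recognition rate on benign cases
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```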
Time-series analyses of air pollution and mortality in the United States: a subsampling approach.
Moolgavkar, Suresh H; McClellan, Roger O; Dewanji, Anup; Turim, Jay; Luebeck, E Georg; Edwards, Melanie
2013-01-01
Hierarchical Bayesian methods have been used in previous papers to estimate national mean effects of air pollutants on daily deaths in time-series analyses. We obtained maximum likelihood estimates of the common national effects of the criteria pollutants on mortality based on time-series data from up to 108 metropolitan areas in the United States. We used a subsampling bootstrap procedure to obtain the maximum likelihood estimates and confidence bounds for common national effects of the criteria pollutants, as measured by the percentage increase in daily mortality associated with a unit increase in daily 24-hr mean pollutant concentration on the previous day, while controlling for weather and temporal trends. We considered five pollutants [PM10, ozone (O3), carbon monoxide (CO), nitrogen dioxide (NO2), and sulfur dioxide (SO2)] in single- and multipollutant analyses. Flexible ambient concentration-response models for the pollutant effects were considered as well. We performed limited sensitivity analyses with different degrees of freedom for time trends. In single-pollutant models, we observed significant associations of daily deaths with all pollutants. The O3 coefficient was highly sensitive to the degree of smoothing of time trends. Among the gases, SO2 and NO2 were most strongly associated with mortality. The flexible ambient concentration-response curve for O3 showed evidence of nonlinearity and a threshold at about 30 ppb. Differences between the results of our analyses and those reported from using the Bayesian approach suggest that estimates of the quantitative impact of pollutants depend on the choice of statistical approach, although results are not directly comparable because they are based on different data. In addition, the estimate of the O3-mortality coefficient depends on the amount of smoothing of time trends.
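The resampling idea is straightforward to sketch: re-estimate the pooled national effect on random draws of cities and read confidence bounds off the resulting distribution. The sketch below uses synthetic per-city estimates, inverse-variance pooling in place of the full likelihood, and an ordinary bootstrap standing in for the paper's subsampling scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic per-city effect estimates (% mortality increase per unit pollutant)
# with their standard errors; stands in for 108 metropolitan areas.
n_cities = 108
se = rng.uniform(0.1, 0.5, n_cities)
est = rng.normal(0.3, se)               # true common effect of 0.3 plus noise

def pooled(idx):
    """Inverse-variance-weighted pooled effect over the selected cities."""
    w = 1.0 / se[idx] ** 2
    return np.sum(w * est[idx]) / np.sum(w)

# Bootstrap over cities: resample city indices, re-pool, take percentiles.
boot = np.array([pooled(rng.integers(0, n_cities, n_cities)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"pooled effect = {pooled(np.arange(n_cities)):.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```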
Rudolph, Abby E; Young, April M; Havens, Jennifer R
2017-11-01
Analyses that link contextual factors with individual-level data can improve our understanding of the "risk environment"; however, the accuracy of information provided by participants about locations where illegal/stigmatized behaviors occur may be influenced by privacy/confidentiality concerns that may vary by setting and/or data collection approach. We recruited thirty-five persons who use drugs from a rural Appalachian town and a Mid-Atlantic city to participate in in-depth interviews. Through thematic analyses, we identified and compared privacy/confidentiality concerns associated with two survey methods that (1) collect self-reported addresses/cross-streets and (2) use an interactive web-based map to find/confirm locations in rural and urban settings. Concerns differed more by setting than between methods. For example, (1) rural participants valued interviewer rapport and protections provided by the Certificate of Confidentiality more; (2) locations considered to be sensitive differed in rural (i.e., others' homes) and urban (i.e., where drugs were used) settings; and (3) urban participants were more likely to view providing cross-streets as an acceptable alternative to providing exact addresses for sensitive locations and to prefer the web-based map approach. Rural-urban differences in privacy/confidentiality concerns reflect contextual differences (i.e., where drugs are used/purchased, population density, and prior drug-related arrests). Strategies to alleviate concerns include: (1) obtain a Certificate of Confidentiality, (2) collect geographic data at the scale necessary for proposed analyses, and (3) permit participants to provide intersections/landmarks in close proximity to actual locations rather than exact addresses or to skip questions where providing an intersection/landmark would not obfuscate the actual address. Copyright © 2017 Elsevier Ltd. All rights reserved.
Whitty, Jennifer A; Crosland, Paul; Hewson, Kaye; Narula, Rajan; Nathan, Timothy R; Campbell, Peter A; Keller, Andrew; Scuffham, Paul A
2014-03-01
To compare the costs of photoselective vaporisation (PVP) and transurethral resection of the prostate (TURP) for management of symptomatic benign prostatic hyperplasia (BPH) from the perspective of a Queensland public hospital provider. A decision-analytic model was used to compare the costs of PVP and TURP. Cost inputs were sourced from an audit of patients undergoing PVP or TURP across three hospitals. The probability of re-intervention was obtained from secondary literature sources. Probabilistic and multi-way sensitivity analyses were used to account for uncertainty and test the impact of varying key assumptions. In the base case analysis, which included equipment, training and re-intervention costs, PVP was AU$ 739 (95% credible interval [CrI] -12 187 to 14 516) more costly per patient than TURP. The estimate was most sensitive to changes in procedural costs, fibre costs and the probability of re-intervention. Sensitivity analyses based on data from the most favourable site or excluding equipment and training costs reduced the point estimate to favour PVP (incremental cost AU$ -684, 95% CrI -8319 to 5796 and AU$ -100, 95% CrI -13 026 to 13 678, respectively). However, CrIs were wide for all analyses. In this cost minimisation analysis, there was no significant cost difference between PVP and TURP, after accounting for equipment, training and re-intervention costs. However, PVP was associated with a shorter length of stay and lower procedural costs during audit, indicating PVP potentially provides comparatively good value for money once the technology is established. © 2013 The Authors. BJU International © 2013 BJU International.
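Probabilistic sensitivity analysis of this kind draws each cost input from a distribution and summarises the incremental cost with a credible interval. A minimal sketch, with all distributions and figures invented rather than taken from the audit:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Illustrative input distributions (AU$); not the audit's values.
proc_pvp = rng.gamma(shape=20, scale=250, size=n)    # PVP procedural cost
proc_turp = rng.gamma(shape=20, scale=230, size=n)   # TURP procedural cost
fibre = rng.gamma(shape=10, scale=120, size=n)       # single-use fibre cost
p_reint_pvp = rng.beta(8, 92, size=n)                # re-intervention probabilities
p_reint_turp = rng.beta(5, 95, size=n)
reint_cost = 4000.0

inc_cost = (proc_pvp + fibre + p_reint_pvp * reint_cost) \
         - (proc_turp + p_reint_turp * reint_cost)
lo, hi = np.percentile(inc_cost, [2.5, 97.5])
print(f"mean incremental cost = AU${inc_cost.mean():,.0f} "
      f"(95% CrI {lo:,.0f} to {hi:,.0f})")
```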
Sensitivity to volcanic field boundary
NASA Astrophysics Data System (ADS)
Runge, Melody; Bebbington, Mark; Cronin, Shane; Lindsay, Jan; Rashad Moufti, Mohammed
2016-04-01
Volcanic hazard analyses are desirable where there is potential for future volcanic activity to affect a proximal population. This is frequently the case for volcanic fields (regions of distributed volcanism), where low eruption rates, fertile soil, and attractive landscapes draw populations to live close by. Forecasting future activity in volcanic fields almost invariably uses spatial or spatio-temporal point processes, with model selection and development based on exploratory analyses of previous eruption data. For identifiability reasons, spatio-temporal processes (and, in practice, spatial processes as well) require the definition of a spatial region to which volcanism is confined. However, because the sub-surface processes driving volcanic eruptions are complex and predominantly unknown, defining a region based solely on geological information is currently impossible. Thus, the current approach is to fit a shape to the known previous eruption sites. The class of boundary shape is an unavoidable subjective decision taken by the forecaster that is often overlooked during subsequent analysis of results. This study shows the substantial effect that this choice may have on even the simplest exploratory methods for hazard forecasting, illustrated using four commonly used exploratory statistical methods and two very different regions: the Auckland Volcanic Field, New Zealand, and Harrat Rahat, Kingdom of Saudi Arabia. For Harrat Rahat, the sensitivity of results to boundary definition is substantial. For the Auckland Volcanic Field, the range of options resulted in similar shapes; nevertheless, some of the statistical tests still showed substantial variation in results. This work highlights the fact that, when carrying out any hazard analysis on volcanic fields, it is vital to specify how the volcanic field boundary has been defined, to assess the sensitivity of results to the boundary choice, and to carry these assumptions and related uncertainties through to estimates of future activity and hazard analyses.
Cost Effectiveness of Influenza Vaccine Choices in Children Aged 2–8 Years in the U.S.
Smith, Kenneth J.; Raviotta, Jonathan M.; DePasse, Jay V.; Brown, Shawn T.; Shim, Eunha; Nowalk, Mary Patricia; Zimmerman, Richard K.
2015-01-01
Introduction: Prior evidence found live attenuated influenza vaccine (LAIV) more effective than inactivated influenza vaccine (IIV) in children aged 2–8 years, leading CDC in 2014 to prefer LAIV use in this group. However, since 2013, LAIV has not proven superior, leading CDC in 2015 to rescind their LAIV preference statement. Here, the cost effectiveness of preferred LAIV use compared with IIV in children aged 2–8 years is estimated. Methods: A Markov model estimated vaccination strategy cost effectiveness in terms of cost per quality-adjusted life year gained. Base case assumptions were: equal vaccine uptake, IIV use when LAIV was not indicated (in 11.7% of the cohort), and no indirect vaccination effects. Sensitivity analyses included estimates of indirect effects from both equation- and agent-based models. Analyses were performed in 2014–2015. Results: Using prior effectiveness data in children aged 2–8 years (LAIV=83%, IIV=64%), preferred LAIV use was less costly and more effective than IIV (dominant), with results sensitive only to LAIV and IIV effectiveness variation. Using 2014–2015 U.S. effectiveness data (LAIV=0%, IIV=15%), IIV was dominant. In two-way sensitivity analyses, LAIV use was cost saving over the entire range of IIV effectiveness (0%–81%) when absolute LAIV effectiveness was >7.1% higher than IIV, but never cost saving when absolute LAIV effectiveness was <3.5% higher than IIV. Conclusions: Results support CDC’s decision to no longer prefer LAIV use and provide guidance on effectiveness differences between influenza vaccines that might lead to preferential LAIV recommendation for children aged 2–8 years. PMID:26868283
DOE Office of Scientific and Technical Information (OSTI.GOV)
Onoufriou, T.; Simpson, R.J.; Protopapas, M.
This paper presents the development and application of reliability-based inspection planning techniques for floaters. Based on previous experience from jacket structure applications, optimized inspection planning (OIP) techniques for floaters are developed. The differences between floaters and jacket structures in relation to fatigue damage, redundancy levels and inspection practice are examined and reflected in the proposed methodology. The application and benefits of these techniques are demonstrated through representative analyses, and important trends are highlighted through the results of a parametric sensitivity study.
Efficient Gradient-Based Shape Optimization Methodology Using Inviscid/Viscous CFD
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1997-01-01
The formerly developed preconditioned-biconjugate-gradient (PBCG) solvers for the analysis and the sensitivity equations had resulted in very large error reductions per iteration; quadratic convergence was achieved whenever the solution entered the domain of attraction to the root. Its memory requirement was also lower than that of a direct inversion solver. However, this memory requirement was high enough to preclude the realistic, high-grid-density design of a practical 3D geometry. This limitation served as the impetus for the first-year activity (March 9, 1995 to March 8, 1996). Therefore, the major activity for this period was the development of the low-memory methodology for discrete-sensitivity-based shape optimization. This was accomplished by solving all the resulting sets of equations using an alternating-direction-implicit (ADI) approach. The results indicated that shape optimization problems which required large numbers of grid points could be resolved with a gradient-based approach. Therefore, to better utilize the computational resources, it was recommended that a number of coarse-grid cases, using the PBCG method, should initially be conducted to better define the optimization problem and the design space, and obtain an improved initial shape. Subsequently, a fine-grid shape optimization, which necessitates using the ADI method, should be conducted to accurately obtain the final optimized shape. The other activity during this period was the interaction with the members of the Aerodynamic and Aeroacoustic Methods Branch of Langley Research Center during one stage of their investigation to develop an adjoint-variable sensitivity method using the viscous flow equations. This method had algorithmic similarities to the variational sensitivity methods and the control-theory approach. However, unlike the prior studies, it was considered for the three-dimensional, viscous flow equations. The major accomplishment in the second period of this project (March 9, 1996 to March 8, 1997) was the extension of the shape optimization methodology to the Thin-Layer Navier-Stokes (TLNS) equations. Both the Euler-based and the TLNS-based analyses compared well with analyses obtained using the CFL3D code. The sensitivities, again from both levels of the flow equations, also compared very well with the finite-differenced sensitivities. A fairly large set of shape optimization cases was conducted to study a number of issues previously not well understood. The testbed for these cases was the shaping of an arrow wing in Mach 2.4 flow. All the final shapes, obtained from either a coarse-grid-based or a fine-grid-based optimization, using either an Euler-based or a TLNS-based analysis, were re-analyzed using a fine-grid TLNS solution for their function evaluations. This allowed a fairer comparison of their relative merits. From the aerodynamic performance standpoint, the fine-grid TLNS-based optimization produced the best shape, and the fine-grid Euler-based optimization produced the lowest cruise efficiency.
Alexander, Paul E; Bonner, Ashley J; Agarwal, Arnav; Li, Shelly-Anne; Hariharan, Abishek; Izhar, Zain; Bhatnagar, Neera; Alba, Carolina; Akl, Elie A; Fei, Yutong; Guyatt, Gordon H; Beyene, Joseph
2016-06-01
Prior studies regarding whether single-center trial estimates are larger than multi-center estimates are equivocal. We examined the extent to which single-center trials yield systematically larger effects than multi-center trials. We searched the 119 core clinical journals and the Cochrane Database of Systematic Reviews for meta-analyses (MAs) of randomized controlled trials (RCTs) published during 2012. In this meta-epidemiologic study, we computed the pooled ratio of odds ratios (RORs) for binary outcomes and the mean difference in standardized mean differences (SMDs) for continuous outcomes, using weighted random-effects meta-regression and random-effects MA modeling. Our primary analyses were restricted to MAs that included at least five RCTs and in which at least 25% of the studies used each of the single-center (SC) and multi-center (MC) designs. We identified 81 MAs for the odds ratio (OR) and 43 for the SMD outcome measures. Per our analytic plan, our primary (core) analysis was based on 25 MAs/241 RCTs (binary outcome) and 18 MAs/173 RCTs (continuous outcome). Based on the core analysis, we found no difference in magnitude of effect between SC and MC for binary outcomes [ROR: 1.02; 95% confidence interval (CI): 0.83, 1.24; I² = 20.2%]. Effect sizes were systematically larger for SC than MC for the continuous outcome measure (mean difference in SMDs: -0.13; 95% CI: -0.21, -0.05; I² = 0%). Our results do not support prior findings of larger effects in SC than MC trials addressing binary outcomes, but show a small increase in effect in SC relative to MC trials addressing continuous outcomes. Authors of systematic reviews would be wise to include all trials irrespective of SC vs. MC design and address SC vs. MC status as a possible explanation of heterogeneity (and consider sensitivity analyses). Copyright © 2015 Elsevier Inc. All rights reserved.
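The headline statistic is a ratio of odds ratios: within each meta-analysis, the pooled OR of the single-center trials divided by that of the multi-center trials. A fixed-effect sketch on synthetic data (the study itself used random-effects meta-regression):

```python
import numpy as np

rng = np.random.default_rng(4)

def pool_log_or(log_ors, variances):
    """Fixed-effect inverse-variance pooling of log odds ratios."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * log_ors) / np.sum(w)
    return est, 1.0 / np.sum(w)        # pooled estimate and its variance

# One synthetic meta-analysis: 4 single-center and 6 multi-center trials.
sc = rng.normal(-0.5, 0.2, 4)          # log ORs, single-center trials
mc = rng.normal(-0.4, 0.2, 6)          # log ORs, multi-center trials
v_sc, v_mc = np.full(4, 0.04), np.full(6, 0.04)

est_sc, var_sc = pool_log_or(sc, v_sc)
est_mc, var_mc = pool_log_or(mc, v_mc)
log_ror = est_sc - est_mc              # log ratio of odds ratios
se = np.sqrt(var_sc + var_mc)
ci = np.exp([log_ror - 1.96 * se, log_ror + 1.96 * se])
print(f"ROR = {np.exp(log_ror):.2f} (95% CI {ci[0]:.2f}, {ci[1]:.2f})")
```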
Which Measures of Online Control Are Least Sensitive to Offline Processes?
de Grosbois, John; Tremblay, Luc
2018-02-28
A major challenge to the measurement of online control is the contamination by offline, planning-based processes. The current study examined the sensitivity of four measures of online control to offline changes in reaching performance induced by prism adaptation and terminal feedback. These measures included the squared Z scores (Z²) of correlations of limb position at 75% movement time versus movement end, variable error, time after peak velocity, and a frequency-domain analysis (pPower). The results indicated that variable error and time after peak velocity were sensitive to the prism adaptation. Furthermore, only the Z² values were biased by the terminal feedback. Ultimately, the current study has demonstrated the sensitivity of limb kinematic measures to offline control processes and that pPower analyses may yield the most suitable measure of online control.
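How a squared-Z measure of the early-position/endpoint correlation might be computed is sketched below; treating Z as the Fisher r-to-z transform is my reading of the measure, and the data are synthetic:

```python
import numpy as np

def z_squared(pos_75, pos_end):
    """Squared Fisher-z of the correlation between limb position at 75% of
    movement time and at movement end; lower values are conventionally read
    as more online correction (endpoint less determined by early position)."""
    r = np.corrcoef(pos_75, pos_end)[0, 1]
    z = np.arctanh(r)                    # Fisher r-to-z transform
    return z ** 2

rng = np.random.default_rng(5)
endpoint = rng.normal(300.0, 10.0, 40)             # 40 reaches (mm)
early = 0.75 * endpoint + rng.normal(0, 8.0, 40)   # position at 75% movement time
print(f"Z^2 = {z_squared(early, endpoint):.2f}")
```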
Personal use of hair dyes and the risk of bladder cancer: results of a meta-analysis.
Huncharek, Michael; Kupelnick, Bruce
2005-01-01
OBJECTIVE: This study examined the methodology of observational studies that explored an association between personal use of hair dye products and the risk of bladder cancer. METHODS: Data were pooled from epidemiological studies using a general variance-based meta-analytic method that employed confidence intervals. The outcome of interest was a summary relative risk (RR) reflecting the risk of bladder cancer development associated with use of hair dye products vs. non-use. Sensitivity analyses were performed to explain any observed statistical heterogeneity and to explore the influence of specific study characteristics on the summary estimate of effect. RESULTS: Initially combining homogeneous data from six case-control studies and one cohort study yielded a non-significant RR of 1.01 (0.92, 1.11), suggesting no association between hair dye use and bladder cancer development. Sensitivity analyses examining the influence of hair dye type, color, and study design on this suspected association showed that uncontrolled confounding and design limitations contributed to a spurious non-significant summary RR. The sensitivity analyses yielded statistically significant RRs ranging from 1.22 (1.11, 1.51) to 1.50 (1.30, 1.98), indicating that personal use of hair dye products increases bladder cancer risk by 22% to 50% vs. non-use. CONCLUSION: The available epidemiological data suggest an association between personal use of hair dye products and increased risk of bladder cancer. PMID:15736329
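The general variance-based method pools log relative risks with inverse-variance weights, recovering each study's variance from its confidence interval. A minimal sketch with invented per-study estimates, not the studies pooled here:

```python
import numpy as np

def pooled_rr(rrs, ci_los, ci_his):
    """Inverse-variance (fixed-effect) pooled RR from per-study RRs and 95% CIs."""
    log_rr = np.log(rrs)
    se = (np.log(ci_his) - np.log(ci_los)) / (2 * 1.96)  # SE from CI width
    w = 1.0 / se ** 2
    est = np.sum(w * log_rr) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    return (np.exp(est),
            np.exp(est - 1.96 * se_pooled),
            np.exp(est + 1.96 * se_pooled))

# Invented per-study estimates, not the studies pooled in the paper.
rr, lo, hi = pooled_rr(
    rrs=[0.95, 1.10, 1.25, 0.90, 1.05, 1.15, 1.02],
    ci_los=[0.70, 0.85, 0.95, 0.60, 0.80, 0.88, 0.85],
    ci_his=[1.29, 1.42, 1.64, 1.35, 1.38, 1.50, 1.22],
)
print(f"summary RR = {rr:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```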
Economic evaluation of ezetimibe treatment in combination with statin therapy in the United States.
Davies, Glenn M; Vyas, Ami; Baxter, Carl A
2017-07-01
This study assessed the cost-effectiveness of ezetimibe with statin therapy vs statin monotherapy from a US payer perspective, taking into account the impending patent expiration of ezetimibe. A Markov-like economic model consisting of 28 distinct health states was used. Model population data were obtained from US linked claims and electronic medical records, with inclusion criteria based on diagnostic guidelines. Inputs came from recent clinical trials, meta-analyses, and cost-effectiveness analyses. The base-case scenario was used to evaluate the cost-effectiveness of adding ezetimibe 10 mg to statin in patients aged 35-74 years with a history of coronary heart disease (CHD) and/or stroke, and with low-density lipoprotein cholesterol (LDL-C) levels ≥70 mg/dL over a lifetime horizon, assuming a 90% price reduction of ezetimibe after 1 year to take into account the impending patent expiration in the second quarter of 2017. Sub-group analyses included patients with LDL-C levels ≥100 mg/dL and patients with diabetes with LDL-C levels ≥70 mg/dL. The lifetime discounted incremental cost-effectiveness ratio (ICER) for ezetimibe added to statin was $9,149 per quality-adjusted life year (QALY) for the base-case scenario. For patients with LDL-C levels ≥100 mg/dL, the ICER was $839/QALY; for those with diabetes and LDL-C levels ≥70 mg/dL, it was $560/QALY. One-way sensitivity analyses showed that the model was sensitive to changes in the cost of ezetimibe, the rate reduction of non-fatal CHD, and the utility weight for non-fatal CHD in the base-case and sub-group analyses. Indirect costs and estimates of treatment discontinuation were not included. Compared with statin monotherapy, ezetimibe with statin therapy was cost-effective for secondary prevention of CHD and stroke and for primary prevention of these conditions in patients whose LDL-C levels are ≥100 mg/dL and in patients with diabetes, taking into account a 90% cost reduction for ezetimibe.
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), to compare the results from these two models, and to explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and fourfold-table data were extracted. Descriptive analysis was conducted to investigate the prevalence of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by between-study variability of sensitivity and specificity, expressed by I², was explored. Fifty-five systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models to pool sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of using the Meta-disc software wrongly. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences were found as the I² of sensitivity or specificity became larger, especially when I² > 75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. Meta-disc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors believe it can implement a random-effects model. Simple pooling tends to underestimate results compared with the bivariate model; the greater the between-study variance, the larger the deviation from simple pooling. It is necessary to raise the level of knowledge of statistical methods and software for meta-analyses of DTA data.
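The gap the authors quantify is easy to reproduce in miniature: simple pooling collapses all 2x2 tables into one, while study-level approaches respect between-study variation (the bivariate model additionally models the sensitivity-specificity correlation, which this sketch omits). Counts are invented:

```python
import numpy as np

# Invented per-study 2x2 counts: (TP, FN, TN, FP).
studies = np.array([
    (45, 5, 80, 20),
    (20, 20, 90, 10),
    (60, 15, 40, 10),
    (10, 2, 150, 30),
])
tp, fn, tn, fp = studies.T

# Simple pooling: collapse all studies into one table.
sens_pooled = tp.sum() / (tp.sum() + fn.sum())

# Study-level summary: mean of per-study logit sensitivities, back-transformed.
def logit(p): return np.log(p / (1 - p))
def inv_logit(x): return 1 / (1 + np.exp(-x))
sens_study = inv_logit(logit(tp / (tp + fn)).mean())

print(f"simple pooling: {sens_pooled:.3f}   study-level mean: {sens_study:.3f}")
```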
Power and sensitivity of alternative fit indices in tests of measurement invariance.
Meade, Adam W; Johnson, Emily C; Braddy, Phillip W
2008-05-01
Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known to not be invariant. The results indicate that AFIs are much less sensitive to sample size and are more sensitive to a lack of invariance than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general value of change in CFI (.002) seemed to perform well in the analyses, condition-specific change in McDonald's NCI values exhibited better performance than a single change in McDonald's NCI value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.
Zhang, Xianxia; Xiao, Kunyi; Cheng, Liwei; Chen, Hui; Liu, Baohong; Zhang, Song; Kong, Jilie
2014-06-03
Rapid and efficient detection of cancer cells at their earliest stages is one of the central challenges in cancer diagnostics. We developed a simple, cost-effective, and highly sensitive colorimetric method for visually detecting rare cancer cells based on cell-triggered cyclic enzymatic signal amplification (CTCESA). In the absence of target cells, hairpin aptamer probes (HAPs) and linker DNAs stably coexist in solution, and the linker DNA assembles DNA-AuNPs, producing a purple solution. In the presence of target cells, the specific binding of HAPs to the target cells triggers a conformational switch that results in linker DNA hybridization and cleavage by nicking endonuclease-strand scission cycles. Consequently, the cleaved fragments of linker DNA can no longer assemble into DNA-AuNPs, resulting in a red color. UV-vis spectrometry and photograph analyses demonstrated that this CTCESA-based method exhibited selective and sensitive colorimetric responses to the presence of target CCRF-CEM cells, which could be detected by the naked eye. A linear response for CCRF-CEM cells was obtained in a concentration range from 10² to 10⁴ cells, with a detection limit of 40 cells, which is approximately 20 times lower than the detection limit of normal AuNP-based methods without amplification. Given the high specificity and sensitivity of CTCESA, this colorimetric method provides a sensitive, label-free, and cost-effective approach for early cancer diagnosis and point-of-care applications.
Catherine A. Eyre; Melina Kozanitas; Matteo Garbelotto
2013-01-01
We present a study of the epidemiology of sudden oak death (SOD) in California within a watershed based on temporally and spatially replicated surveys of symptoms, viability of the pathogen from symptomatic leaves, and genetic analyses using polymorphic SSR markers. Phytophthora ramorum is sensitive to climate; its...
Enhanced Lipidome Coverage in Shotgun Analyses by using Gas-Phase Fractionation
NASA Astrophysics Data System (ADS)
Nazari, Milad; Muddiman, David C.
2016-11-01
A high resolving power shotgun lipidomics strategy using gas-phase fractionation and data-dependent acquisition (DDA) was applied toward comprehensive characterization of lipids in hen ovarian tissue in an untargeted fashion. Using this approach, a total of 822 unique lipids across a diverse range of lipid categories and classes were identified based on their MS/MS fragmentation patterns. Classes of glycerophospholipids and glycerolipids, such as glycerophosphocholines (PC), glycerophosphoethanolamines (PE), and triglycerides (TG), are often the most abundant peaks observed in shotgun lipidomics analyses. These ions suppress the signal from low-abundance ions and hinder the chances of characterizing low-abundance lipids when DDA is used. These issues were circumvented by utilizing gas-phase fractionation, where DDA was performed on narrow m/z ranges instead of a broad m/z range. Employing gas-phase fractionation resulted in an increase in sensitivity by more than an order of magnitude in both positive- and negative-ion modes. Furthermore, the enhanced sensitivity increased the number of lipids identified by a factor of ≈4, and facilitated identification of low-abundance lipids from classes such as cardiolipins that are often difficult to observe in untargeted shotgun analyses and require sample-specific preparation steps prior to analysis. This method serves as a resource for comprehensive profiling of lipids from many different categories and classes in an untargeted manner, as well as for targeted and quantitative analyses of individual lipids. Furthermore, this comprehensive analysis of the lipidome can serve as a species- and tissue-specific database for confident identification of other MS-based datasets, such as mass spectrometry imaging.
Zhan, Mei; Zheng, Hanrui; Xu, Ting; Yang, Yu; Li, Qiu
2017-08-01
Malignant pleural mesothelioma (MPM) is a rare malignancy, and pemetrexed/cisplatin (PC) is the gold-standard first-line regimen. This study evaluated the cost-effectiveness of the addition of bevacizumab to PC (with maintenance bevacizumab) for unresectable MPM based on a phase III trial that showed a survival benefit compared with chemotherapy alone. To estimate the incremental cost-effectiveness ratio (ICER) of the incorporation of bevacizumab, a Markov model based on the MAPS trial, including the disease states of progression-free survival, progressive disease and death, was used. Total costs were calculated from a Chinese payer perspective, and health outcomes were converted into quality-adjusted life years (QALYs). Model robustness was explored in sensitivity analyses. The addition of bevacizumab to PC was estimated to increase the cost by $81,446.69, with a gain of 0.112 QALYs, resulting in an ICER of $727,202.59 per QALY. In both one-way sensitivity and probabilistic sensitivity analyses, the ICER exceeded the commonly accepted willingness-to-pay threshold of 3 times the gross domestic product per capita of China ($23,970.00 per QALY). The cost of bevacizumab had the most important impact on the ICER. The combination of bevacizumab with PC chemotherapy is not a cost-effective treatment option for MPM in China. Given its positive clinical value and the extremely low incidence of MPM, an appropriate price discount, assistance programs and medical insurance should be considered to make bevacizumab more affordable for this rare patient population. Copyright © 2017 Elsevier B.V. All rights reserved.
Weinstein, Yael; Levav, Itzhak; Gelkopf, Marc; Roe, David; Yoffe, Rinat; Pugachova, Inna; Levine, Stephen Z
2018-04-20
This study tested the hypothesis that maternal exposure to terror attacks during pregnancy is associated with the risk of schizophrenia in the offspring. A population-based study was conducted of Israeli children born between 1975 and 1995 who were registered with the Ministry of Interior and followed up by the Ministry of Health from birth to 2015 for the risk of schizophrenia (N = 201,048). The association between maternal exposure to terror attacks during pregnancy and the risk of schizophrenia in the offspring was quantified with relative risks (RR) and their 95% confidence intervals (CI), fitting Cox regression models unadjusted and adjusted for confounders. Sensitivity analyses were performed to test the robustness of the results. The RRs of schizophrenia in offspring of mothers exposed to terror attacks during pregnancy, compared to offspring of mothers not exposed during pregnancy, were estimated unadjusted (RR = 2.51, 95% CI, 1.33, 4.74) and adjusted (RR = 2.53, 95% CI, 1.63, 3.91). In the sensitivity analyses, adjusted RRs were estimated using a sibling-based study design (2.85, 95% CI: 1.31-6.21) and propensity matching (2.45, 95% CI: 1.58-3.81). Maternal exposure to terror attacks during pregnancy was associated with an increased risk of schizophrenia in the offspring, possibly indicating a critical period of neurodevelopment that is sensitive to the stress of terror attacks and affected by epigenetic modifications. Copyright © 2018 Elsevier B.V. All rights reserved.
Greenland Regional and Ice Sheet-wide Geometry Sensitivity to Boundary and Initial conditions
NASA Astrophysics Data System (ADS)
Logan, L. C.; Narayanan, S. H. K.; Greve, R.; Heimbach, P.
2017-12-01
Ice sheet and glacier model outputs require inputs from uncertainly known initial and boundary conditions, and other parameters. Conservation and constitutive equations formalize the relationship between model inputs and outputs, and the sensitivity of model-derived quantities of interest (e.g., ice sheet volume above floatation) to model variables can be obtained via the adjoint model of an ice sheet. We show how one particular ice sheet model, SICOPOLIS (SImulation COde for POLythermal Ice Sheets), depends on these inputs through comprehensive adjoint-based sensitivity analyses. SICOPOLIS discretizes the shallow-ice and shallow-shelf approximations for ice flow, and is well-suited for paleo-studies of Greenland and Antarctica, among other computational domains. The adjoint model of SICOPOLIS was developed via algorithmic differentiation, facilitated by the source transformation tool OpenAD (developed at Argonne National Lab). While model sensitivity to various inputs can be computed by costly methods involving input perturbation simulations, the time-dependent adjoint model of SICOPOLIS delivers model sensitivities to initial and boundary conditions throughout time at lower cost. Here, we explore the sensitivities of the Greenland Ice Sheet's entire and regional volumes to initial ice thickness, precipitation, basal sliding, and geothermal flux over the Holocene epoch. Sensitivity studies such as those described here are now accessible to the modeling community, based on the latest version of SICOPOLIS that has been adapted for OpenAD to generate correct and efficient adjoint code.
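The mechanics of adjoint-based sensitivity, one reverse sweep returning the gradient of a scalar output with respect to every input field, can be shown on a toy thickness-evolution model with an algorithmic-differentiation tool. This sketch uses JAX rather than OpenAD/Fortran, and its "physics" is a placeholder, not SICOPOLIS:

```python
import jax
import jax.numpy as jnp

def final_volume(h0, accumulation, n_steps=100, dt=0.1):
    """Toy 1-D 'ice thickness' evolution (diffusion plus accumulation).
    Returns total volume after n_steps, standing in for a scalar QoI."""
    def step(h, _):
        left = jnp.concatenate([h[:1], h[:-1]])     # reflected boundaries
        right = jnp.concatenate([h[1:], h[-1:]])
        lap = left - 2.0 * h + right                # discrete Laplacian
        return jnp.maximum(h + dt * (0.5 * lap + accumulation), 0.0), None
    h, _ = jax.lax.scan(step, h0, None, length=n_steps)
    return jnp.sum(h)

h0 = 100.0 * jnp.ones(50)    # initial thickness per cell
acc = 0.3 * jnp.ones(50)     # accumulation rate per cell
# One reverse (adjoint) sweep gives the gradient of volume w.r.t. every input.
dV_dh0, dV_dacc = jax.grad(final_volume, argnums=(0, 1))(h0, acc)
print(dV_dh0.shape, dV_dacc.shape)    # (50,) (50,)
```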
Kilonzo, Mary M; Brown, Steven R; Bruhn, Hanne; Cook, Jonathan A; Hudson, Jemma; Norrie, John; Watson, Angus J M; Wood, Jessica
2017-08-25
Our objective was to compare the cost effectiveness of stapled haemorrhoidopexy (SH) and traditional haemorrhoidectomy (TH) in the treatment of grade II-IV haemorrhoidal disease from the perspective of the UK national health service. An economic evaluation was conducted alongside an open, two-arm, parallel-group, pragmatic, multicentre, randomised controlled trial conducted in several hospitals in the UK. Patients were randomised to either SH or TH surgery between January 2011 and August 2014 and were followed up for 24 months. Intervention and subsequent resource use data were collected using case review forms and questionnaires. Benefits were collected using the EQ-5D-3L (EuroQoL-five dimensions-three levels) instrument. The primary economic outcome was incremental cost measured in pounds (£), year 2016 values, relative to the incremental benefit, which was estimated using quality-adjusted life-years (QALYs). Costs and benefits accrued in the second year were discounted at 3.5%. The base-case analysis was based on imputed data. Uncertainty was explored using univariate sensitivity analyses. Participants (n = 777) were randomised to SH (n = 389) or TH (n = 388). The mean cost of SH was £337 (95% confidence interval [CI] 251-423) higher than that of TH, and the mean QALY difference was -0.070 (95% CI -0.127 to -0.011) relative to TH. The base-case cost-utility analysis indicated that SH has zero probability of being cost effective at both the £20,000 and the £30,000 threshold. Results from the sensitivity analyses were similar to those from the base-case analysis. The evidence suggests that, on average, total mean costs over the 24-month follow-up period were significantly higher for the SH arm than for the TH arm, and QALYs were, on average, significantly lower for the SH arm. These results were supported by the sensitivity analyses. Therefore, in terms of cost effectiveness, TH is a superior surgical treatment for the management of grade II-IV haemorrhoids when compared with SH.
Kishikawa, Naoya
2010-10-01
Quinones are compounds that have various characteristics such as a biological electron transporter, an industrial product and a harmful environmental pollutant. Therefore, an effective determination method for quinones is required in many fields. This review describes the development of sensitive and selective determination methods for quinones based on some detection principles and their application to analyses in environmental, pharmaceutical and biological samples. Firstly, a fluorescence method was developed based on fluorogenic derivatization of quinones and applied to environmental analysis. Secondly, a luminol chemiluminescence method was developed based on generation of reactive oxygen species through the redox cycle of quinone and applied to pharmaceutical analysis. Thirdly, a photo-induced chemiluminescence method was developed based on formation of reactive oxygen species and fluorophore or chemiluminescence enhancer by the photoreaction of quinones and applied to biological and environmental analyses.
Goldstein, Daniel A.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose: The addition of bevacizumab to fluorouracil-based chemotherapy is a standard of care for previously untreated metastatic colorectal cancer. Continuation of bevacizumab beyond progression is an accepted standard of care based on a 1.4-month increase in median overall survival observed in a randomized trial. No United States–based cost-effectiveness modeling analyses are currently available addressing the use of bevacizumab in metastatic colorectal cancer. Our objective was to determine the cost effectiveness of bevacizumab in the first-line setting and when continued beyond progression from the perspective of US payers. Methods: We developed two Markov models to compare the cost and effectiveness of fluorouracil, leucovorin, and oxaliplatin with or without bevacizumab in the first-line treatment and subsequent fluorouracil, leucovorin, and irinotecan with or without bevacizumab in the second-line treatment of metastatic colorectal cancer. Model robustness was addressed by univariable and probabilistic sensitivity analyses. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Results: Using bevacizumab in first-line therapy provided an additional 0.10 QALYs (0.14 life-years) at a cost of $59,361. The incremental cost-effectiveness ratio was $571,240 per QALY. Continuing bevacizumab beyond progression provided an additional 0.11 QALYs (0.16 life-years) at a cost of $39,209. The incremental cost-effectiveness ratio was $364,083 per QALY. In univariable sensitivity analyses, the variables with the greatest influence on the incremental cost-effectiveness ratio were bevacizumab cost, overall survival, and utility. Conclusion: Bevacizumab provides minimal incremental benefit at high incremental cost per QALY in both the first- and second-line settings of metastatic colorectal cancer treatment. PMID:25691669
ERIC Educational Resources Information Center
Arnau, Randolph C.; Broman-Fulks, Joshua J.; Green, Bradley A.; Berman, Mitchell E.
2009-01-01
The most commonly used measure of anxiety sensitivity is the 36-item Anxiety Sensitivity Index--Revised (ASI-R). Exploratory factor analyses have produced several different factors structures for the ASI-R, but an acceptable fit using confirmatory factor analytic approaches has only been found for a 21-item version of the instrument. We evaluated…
Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln
2014-01-01
Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in unresectable complex, Bismuth type II-IV, HCA patients. A decision-analytic (Markov) model was used to evaluate the costs and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment were retrieved from hospital charges, and utilities for each Markov state were obtained from unresectable HCA patients at a tertiary care hospital in Thailand. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent; the incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared with plastic stent drainage in unresectable complex HCA.
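The probability-of-cost-effectiveness figures come from a cost-effectiveness acceptability curve: over probabilistic draws, the fraction in which the metal stent's incremental net monetary benefit is positive at a given willingness-to-pay. A sketch with invented draws, not the study's distributions:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10_000

# Invented PSA draws: incremental cost (baht) and incremental QALYs
# of metal vs plastic stent.
d_cost = rng.normal(25_000, 8_000, n)
d_qaly = rng.normal(0.13, 0.06, n)

for wtp in (158_000, 474_000):   # 1x and 3x GDP per capita, as in the study
    nmb = wtp * d_qaly - d_cost  # incremental net monetary benefit per draw
    print(f"WTP {wtp:,} baht/QALY: P(cost-effective) = {(nmb > 0).mean():.1%}")
```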
Lionetti, Francesca; Aron, Arthur; Aron, Elaine N; Burns, G Leonard; Jagiellowicz, Jadzia; Pluess, Michael
2018-01-22
According to empirical studies and recent theories, people differ substantially in their reactivity or sensitivity to environmental influences with some being generally more affected than others. More sensitive individuals have been described as orchids and less-sensitive ones as dandelions. Applying a data-driven approach, we explored the existence of sensitivity groups in a sample of 906 adults who completed the highly sensitive person (HSP) scale. According to factor analyses, the HSP scale reflects a bifactor model with a general sensitivity factor. In contrast to prevailing theories, latent class analyses consistently suggested the existence of three rather than two groups. While we were able to identify a highly sensitive (orchids, 31%) and a low-sensitive group (dandelions, 29%), we also detected a third group (40%) characterised by medium sensitivity, which we refer to as tulips in keeping with the flower metaphor. Preliminary cut-off scores for all three groups are provided. In order to characterise the different sensitivity groups, we investigated group differences regarding the Big Five personality traits, as well as experimentally assessed emotional reactivity in an additional independent sample. According to these follow-up analyses, the three groups differed in neuroticism, extraversion and emotional reactivity to positive mood induction with orchids scoring significantly higher in neuroticism and emotional reactivity and lower in extraversion than the other two groups (dandelions also differed significantly from tulips). Findings suggest that environmental sensitivity is a continuous and normally distributed trait but that people fall into three distinct sensitive groups along a sensitivity continuum.
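A latent class analysis over a single score can be approximated with a finite Gaussian mixture, letting BIC pick the number of groups. The paper's analysis worked over scale items; this univariate sketch with synthetic scores is a simplified stand-in:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Synthetic HSP-scale scores for three latent groups (low/medium/high sensitivity).
scores = np.concatenate([
    rng.normal(3.0, 0.5, 260),   # dandelions
    rng.normal(4.4, 0.4, 360),   # tulips
    rng.normal(5.6, 0.4, 280),   # orchids
]).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(scores)
        for k in range(1, 5)}
bics = {k: m.bic(scores) for k, m in fits.items()}
best = min(bics, key=bics.get)   # lowest BIC wins
print("BIC by k:", {k: round(v) for k, v in bics.items()}, "-> best k =", best)
print("group means:", np.sort(fits[best].means_.ravel()).round(2))
```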
Nooshadokht, Maryam; Kalantari-Khandani, Behjat; Sharifi, Iraj; Kamyabi, Hossein; Liyanage, Namal P M; Lagenaur, Laurel A; Kagnoff, Martin F; Singer, Steven M; Babaei, Zahra; Solaymani-Mohammadi, Shahram
2017-10-01
Human infection with the protozoan parasite Giardia duodenalis is one of the most common parasitic diseases worldwide. Higher incidence rates of giardiasis have been reported in human subjects with multiple debilitating chronic conditions, including hypogammaglobulinemia and common variable immunodeficiency (CVID). In the current study, stool specimens were collected from 199 individuals diagnosed with HIV or cancer and immunocompetent subjects. The sensitivities of microscopy-based detection on fresh stool preparations, trichrome staining and stool antigen immunodetection for the diagnosis of G. duodenalis were 36%, 45.5% and 100%, respectively, when compared with a highly sensitive stool-based PCR method as the gold standard. Further multilocus molecular analyses using the glutamate dehydrogenase (gdh) and triose phosphate isomerase (tpi) loci demonstrated that the AI genotype of G. duodenalis was the most prevalent, followed by the AII genotype and mixed (AI+B) infections. We concluded that stool antigen immunoassays and stool-based PCR amplification had comparable sensitivity and specificity for the diagnosis of G. duodenalis infections in these populations. Stool antigen detection-based diagnostic modalities are rapid and accurate and may offer alternatives to conventional microscopy and PCR-based diagnostic methods for the diagnosis of G. duodenalis in human subjects living with HIV or cancer. Copyright © 2017. Published by Elsevier B.V.
Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses.
Syrowatka, Ania; Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn
2016-01-26
Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than others. Integration of content control improved quality of decision making (SMD 0.59 vs 0.23 for knowledge; SMD 0.39 vs 0.29 for decisional conflict). In contrast, tailoring reduced quality of decision making (SMD 0.40 vs 0.71 for knowledge; SMD 0.25 vs 0.52 for decisional conflict). Similarly, patient narratives also reduced quality of decision making (SMD 0.43 vs 0.65 for knowledge; SMD 0.17 vs 0.46 for decisional conflict). Results were varied for different types of explicit values clarification, feedback, and social support. Integration of media-rich or interactive features into computer-based decision aids can improve quality of preference-sensitive decision making. However, this is an emerging field with limited evidence to guide use. The systematic review and thematic synthesis identified features that have been integrated into available computer-based decision aids, in an effort to facilitate reporting of these features and to promote integration of such features into decision aids. The meta-analyses and associated subgroup analyses provide preliminary evidence to support integration of specific features into future decision aids. Further research can focus on clarifying independent contributions of specific features through experimental designs and refining the designs of features to improve effectiveness.
Blurring the Inputs: A Natural Language Approach to Sensitivity Analysis
NASA Technical Reports Server (NTRS)
Kleb, William L.; Thompson, Richard A.; Johnston, Christopher O.
2007-01-01
To document model parameter uncertainties and to automate sensitivity analyses for numerical simulation codes, a natural-language-based method to specify tolerances has been developed. With this new method, uncertainties are expressed in a natural manner, i.e., as one would on an engineering drawing, namely, 5.25 +/- 0.01. This approach is robust and readily adapted to various application domains because it does not rely on parsing the particular structure of input file formats. Instead, tolerances of a standard format are added to existing fields within an input file. As a demonstration of the power of this simple, natural language approach, a Monte Carlo sensitivity analysis is performed for three disparate simulation codes: fluid dynamics (LAURA), radiation (HARA), and ablation (FIAT). Effort required to harness each code for sensitivity analysis was recorded to demonstrate the generality and flexibility of this new approach.
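The essence of the approach, scanning an input file for numbers annotated with a tolerance and emitting perturbed copies for a Monte Carlo run, fits in a few lines. This sketch assumes only the "value +/- tol" notation quoted above; the real tool's grammar and options are not reproduced:

```python
import re
import random

# Matches 'value +/- tolerance' with optional signs and exponents.
TOL = re.compile(r"([-+]?\d+\.?\d*(?:[eE][-+]?\d+)?)\s*\+/-\s*(\d+\.?\d*(?:[eE][-+]?\d+)?)")

def perturb(text, rng):
    """Replace every 'value +/- tol' with a draw from N(value, tol/2)."""
    def repl(m):
        val, tol = float(m.group(1)), float(m.group(2))
        return repr(rng.gauss(val, tol / 2.0))   # tolerance read as ~2 sigma
    return TOL.sub(repl, text)

# Hypothetical input-deck fields, not taken from LAURA/HARA/FIAT.
deck = "wall_temperature = 5.25 +/- 0.01\nemissivity = 0.89 +/- 0.05\n"
rng = random.Random(0)
for i in range(3):                               # three Monte Carlo realizations
    print(f"--- sample {i} ---\n{perturb(deck, rng)}")
```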
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
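The first-order moment method referenced here approximates the output variance from input variances and sensitivity derivatives, sigma_f^2 ≈ sum_i (df/dx_i)^2 sigma_i^2. A sketch comparing that approximation against Monte Carlo on a toy function (not the Euler code; derivatives here come from central differences, whereas the paper computes them from the discretized equations):

```python
import numpy as np

def f(x):
    """Toy stand-in for a CFD output as a function of two input variables."""
    return x[0] ** 2 * np.sin(x[1])

mean = np.array([1.5, 0.8])    # input means
sigma = np.array([0.05, 0.02]) # input standard deviations

# First-order sensitivity derivatives at the mean (central differences).
eps = 1e-6
grad = np.array([(f(mean + eps * e) - f(mean - eps * e)) / (2 * eps)
                 for e in np.eye(2)])

var_moment = np.sum(grad ** 2 * sigma ** 2)   # first-order moment estimate

# Monte Carlo reference for the output variance.
rng = np.random.default_rng(8)
samples = rng.normal(mean, sigma, size=(200_000, 2))
var_mc = f(samples.T).var()

print(f"first-order variance: {var_moment:.3e}   Monte Carlo: {var_mc:.3e}")
```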
Juffer, Femmie; Bakermans-Kranenburg, Marian J; van IJzendoorn, Marinus H
2017-06-01
Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) is a social-learning and attachment-based intervention using video feedback to support sensitive parenting and at the same time setting firm limits. Empirical studies and meta-analyses have shown that sensitive parenting is the key determinant to promote secure child-parent attachment relationships and that adequate parental discipline contributes to fewer behavior problems in children. Building on this evidence, VIPP-SD has been tested in various populations of at-risk parents and vulnerable children (in the age range of zero to six years), as well as in the context of child care. In twelve randomized controlled trials including 1116 parents and caregivers, VIPP-SD proved to be effective in promoting sensitive caregiving, while positive social-emotional child outcomes were also found. Copyright © 2017 Elsevier Ltd. All rights reserved.
Administrative database code accuracy did not vary notably with changes in disease prevalence.
van Walraven, Carl; English, Shane; Austin, Peter C
2016-11-01
Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly with disease prevalence. This study determined if the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably across clinically sensible disease prevalences. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
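The study's contrast falls straight out of Bayes' rule: sensitivity and specificity condition on true disease status and so are unaffected by the base rate, while predictive values mix the base rate in. A small sketch with illustrative accuracy figures:

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV implied by fixed sensitivity/specificity at a prevalence."""
    tp, fp = sens * prev, (1 - spec) * (1 - prev)
    tn, fn = spec * (1 - prev), (1 - sens) * prev
    return tp / (tp + fp), tn / (tn + fn)

sens, spec = 0.80, 0.95  # hypothetical diagnostic-code accuracy
for prev in (0.01, 0.05, 0.20, 0.50):
    ppv, npv = predictive_values(sens, spec, prev)
    print(f"prevalence {prev:4.0%}: PPV={ppv:.2f}  NPV={npv:.2f}")
```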
Discussions On Worst-Case Test Condition For Single Event Burnout
NASA Astrophysics Data System (ADS)
Liu, Sandra; Zafrani, Max; Sherman, Phillip
2011-10-01
This paper discusses the failure characteristics of single-event burnout (SEB) in power MOSFETs based on analysis of quasi-stationary avalanche simulation curves. The analyses show that the worst-case test condition for SEB would be to use the highest-mass ion, which would result in the highest transient current due to charge deposition and displacement damage. The analyses also show that it is possible to build power MOSFETs that will not exhibit SEB even when tested with the heaviest ion, as has been verified by heavy-ion test data on SEB-sensitive and SEB-immune devices.
Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses
Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David
2016-01-01
Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution, and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing the face of how complex analyses are accomplished. First is the ability to determine, quickly and with high mass accuracy, the presence of unknown chemical residues in samples. For years, the field had been limited by mass spectrometric methods that depended on knowing in advance what the compounds of interest were. Second, by utilizing the high resolution capabilities coupled with the low detection limits of FTMS, analysts can also dilute the sample sufficiently to minimize the ionization changes from varied matrices. PMID:26784175
Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.
Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi
2017-05-01
Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
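One of the pragmatic checks the authors recommend can be sketched in a few lines: re-pool the meta-analysis leaving out one study at a time, with dependent effect sizes grouped by their parent study so they leave together. The numbers are invented for illustration:

```python
import numpy as np

def pooled(es, var):
    """Inverse-variance fixed-effect pooled estimate."""
    w = 1.0 / var
    return np.sum(w * es) / np.sum(w)

# Illustrative effect sizes; effects 2 and 3 come from the same study
es = np.array([0.30, 0.85, 0.80, 0.25, 0.35])
var = np.array([0.02, 0.01, 0.01, 0.03, 0.02])
study = np.array([1, 2, 2, 3, 4])  # parent study of each effect size

print(f"all effects: {pooled(es, var):.3f}")
for s in np.unique(study):  # leave out one *study* at a time, not one effect
    keep = study != s
    print(f"without study {s}: {pooled(es[keep], var[keep]):.3f}")
```

The jump when study 2 leaves shows how a single study contributing several dependent effects can dominate a pooled estimate, which is exactly the kind of limitation such sensitivity analyses are meant to expose.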
Stanczyk, Nicola E.; Smit, Eline S.; Schulz, Daniela N.; de Vries, Hein; Bolman, Catherine; Muris, Jean W. M.; Evers, Silvia M. A. A.
2014-01-01
Background Although evidence exists for the effectiveness of web-based smoking cessation interventions, information about the cost-effectiveness of these interventions is limited. Objective The study investigated the cost-effectiveness and cost-utility of two web-based computer-tailored (CT) smoking cessation interventions (video- vs. text-based CT) compared to a control condition that received general text-based advice. Methods In a randomized controlled trial, respondents were allocated to the video-based condition (N = 670), the text-based condition (N = 708) or the control condition (N = 721). Societal costs, smoking status, and quality-adjusted life years (QALYs; EQ-5D-3L) were assessed at baseline and at six- and twelve-month follow-up. The incremental costs per abstinent respondent and per QALYs gained were calculated. To account for uncertainty, bootstrapping techniques and sensitivity analyses were carried out. Results No significant differences were found in the three conditions regarding demographics, baseline values of outcomes and societal costs over the three months prior to baseline. Analyses using prolonged abstinence as outcome measure indicated that from a willingness to pay of €1,500, the video-based intervention was likely to be the most cost-effective treatment, whereas from a willingness to pay of €50,400, the text-based intervention was likely to be the most cost-effective. With regard to cost-utilities, when quality of life was used as outcome measure, the control condition had the highest probability of being the most preferable treatment. Sensitivity analyses yielded comparable results. Conclusion The video-based CT smoking cessation intervention was the most cost-effective treatment for smoking abstinence after twelve months, varying the willingness to pay per abstinent respondent from €0 up to €80,000. With regard to cost-utility, the control condition seemed to be the most preferable treatment. Probably, more time will be required to assess changes in quality of life. Future studies with longer follow-up periods are needed to investigate whether cost-utility results regarding quality of life may change in the long run. Trial Registration Nederlands Trial Register NTR3102 PMID:25310007
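The bootstrap handling of joint cost-effect uncertainty in trials like this is conventionally summarised as a cost-effectiveness acceptability curve: the probability that an arm yields the higher net benefit as willingness to pay varies. A sketch on synthetic data; the arm names, costs, and QALYs are invented, not the trial's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-respondent societal costs (EUR) and QALYs for two arms
cost_vid, qaly_vid = rng.normal(600, 150, 300), rng.normal(0.80, 0.10, 300)
cost_ctl, qaly_ctl = rng.normal(450, 120, 300), rng.normal(0.79, 0.10, 300)

def p_cost_effective(c1, e1, c0, e0, wtp, n_boot=2000):
    """Bootstrap probability the intervention has the higher net benefit."""
    wins = 0
    for _ in range(n_boot):
        i = rng.integers(0, len(c1), len(c1))
        j = rng.integers(0, len(c0), len(c0))
        wins += (wtp * e1[i].mean() - c1[i].mean()
                 > wtp * e0[j].mean() - c0[j].mean())
    return wins / n_boot

for wtp in (0, 1_500, 20_000, 50_400, 80_000):
    p = p_cost_effective(cost_vid, qaly_vid, cost_ctl, qaly_ctl, wtp)
    print(f"WTP EUR {wtp:>6}: P(intervention cost-effective) = {p:.2f}")
```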
Zhang, Haitao; Wu, Chenxue; Chen, Zewei; Liu, Zhao; Zhu, Yunhong
2017-01-01
Analyzing large-scale spatial-temporal k-anonymity datasets recorded in location-based service (LBS) application servers can benefit some LBS applications. However, such analyses can allow adversaries to make inference attacks that cannot be handled by spatial-temporal k-anonymity methods or other methods for protecting sensitive knowledge. In response to this challenge, first we defined a destination location prediction attack model based on privacy-sensitive sequence rules mined from large scale anonymity datasets. Then we proposed a novel on-line spatial-temporal k-anonymity method that can resist such inference attacks. Our anti-attack technique generates new anonymity datasets with awareness of privacy-sensitive sequence rules. The new datasets extend the original sequence database of anonymity datasets to hide the privacy-sensitive rules progressively. The process includes two phases: off-line analysis and on-line application. In the off-line phase, sequence rules are mined from an original sequence database of anonymity datasets, and privacy-sensitive sequence rules are developed by correlating privacy-sensitive spatial regions with spatial grid cells among the sequence rules. In the on-line phase, new anonymity datasets are generated upon LBS requests by adopting specific generalization and avoidance principles to hide the privacy-sensitive sequence rules progressively from the extended sequence anonymity datasets database. We conducted extensive experiments to test the performance of the proposed method, and to explore the influence of the parameter K value. The results demonstrated that our proposed approach is faster and more effective at hiding privacy-sensitive sequence rules, in terms of the ratio of sensitive rules hidden, thereby eliminating inference attacks. Our method also had fewer side effects than the traditional spatial-temporal k-anonymity method in terms of the ratio of new sensitive rules generated, and basically the same side effects in terms of the variation ratio of non-sensitive rules. Furthermore, we characterized how performance varies with the parameter K value, which can help achieve the goal of hiding the maximum number of original sensitive rules while generating a minimum of new sensitive rules and affecting a minimum number of non-sensitive rules.
Monahan, M; Ensor, J; Moore, D; Fitzmaurice, D; Jowett, S
2017-08-01
Essentials Correct duration of treatment after a first unprovoked venous thromboembolism (VTE) is unknown. We assessed when restarting anticoagulation was worthwhile based on patient risk of recurrent VTE. When the risk over a one-year period is 17.5%, restarting is cost-effective. However, sensitivity analyses indicate large uncertainty in the estimates. Background Following at least 3 months of anticoagulation therapy after a first unprovoked venous thromboembolism (VTE), there is uncertainty about the duration of therapy. Further anticoagulation therapy reduces the risk of having a potentially fatal recurrent VTE but at the expense of a higher risk of bleeding, which can also be fatal. Objective An economic evaluation sought to estimate the long-term cost-effectiveness of using a decision rule for restarting anticoagulation therapy vs. no extension of therapy in patients based on their risk of a further unprovoked VTE. Methods A Markov patient-level simulation model was developed, which adopted a lifetime time horizon with monthly time cycles and was from a UK National Health Service (NHS)/Personal Social Services (PSS) perspective. Results Base-case model results suggest that treating patients with a predicted 1 year VTE risk of 17.5% or higher may be cost-effective if decision makers are willing to pay up to £20 000 per quality adjusted life year (QALY) gained. However, probabilistic sensitivity analysis shows that the model was highly sensitive to overall parameter uncertainty and caution is warranted in selecting the optimal decision rule on cost-effectiveness grounds. Univariate sensitivity analyses indicate variables such as anticoagulation therapy disutility and mortality risks were very influential in driving model results. Conclusion This represents the first economic model to consider the use of a decision rule for restarting therapy for unprovoked VTE patients. Better data are required to predict long-term bleeding risks during therapy in this patient group. © 2017 International Society on Thrombosis and Haemostasis.
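Threshold-based rules like this one are usually compared on net monetary benefit at the stated willingness to pay. A minimal sketch; the lifetime costs and QALYs below are invented placeholders, not outputs of the authors' model:

```python
def nmb(qalys, cost, wtp=20_000):
    """Net monetary benefit in GBP at a given willingness to pay per QALY."""
    return wtp * qalys - cost

# Hypothetical discounted lifetime outcomes per patient
restart = nmb(qalys=11.08, cost=6_800)  # extend anticoagulation
stop = nmb(qalys=10.98, cost=5_400)     # no extension
print(f"incremental NMB of restarting: GBP {restart - stop:,.0f}")
```

A positive increment favours restarting at that WTP; repeating the comparison across recurrence-risk strata is what turns this calculation into a decision rule.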
Warmerdam, P G; de Koning, H J; Boer, R; Beemsterboer, P M; Dierks, M L; Swart, E; Robra, B P
1997-01-01
STUDY OBJECTIVE: To estimate quantitatively the impact of the quality of mammographic screening (in terms of sensitivity and specificity) on the effects and costs of nationwide breast cancer screening. DESIGN: Three plausible "quality" scenarios for a biennial breast cancer screening programme for women aged 50-69 in Germany were analysed in terms of costs and effects using the Microsimulation Screening Analysis model on breast cancer screening and the natural history of breast cancer. Firstly, sensitivity and specificity in the expected situation (or "baseline" scenario) were estimated from a model-based analysis of empirical data from 35,000 screening examinations in two German pilot projects. In the second "high quality" scenario, these properties were based on the more favourable diagnostic results from breast cancer screening projects and the nationwide programme in The Netherlands. Thirdly, a worst-case, "low quality" hypothetical scenario with a 25% lower sensitivity than that experienced in The Netherlands was analysed. SETTING: The epidemiological and social situation in Germany in relation to mass screening for breast cancer. RESULTS: In the "baseline" scenario, an 11% reduction in breast cancer mortality was expected in the total German female population, i.e., 2100 breast cancer deaths would be prevented per year. It was estimated that the "high quality" scenario, based on Dutch experience, would lead to the prevention of an additional 200 deaths per year and would also cut the number of false positive biopsy results by half. The cost per life year gained varied from Deutsche mark (DM) 15,000 in the "high quality" scenario to DM 21,000 in the "low quality" setting. CONCLUSIONS: Up to 20% of the total costs of a screening programme can be spent on quality improvement in order to achieve a substantially higher reduction in mortality and reduce undesirable side effects while retaining the same cost effectiveness ratio as that estimated from the German data. PMID:9196649
Ou, Huang-Tz; Lee, Tsung-Ying; Chen, Yee-Chun; Charbonneau, Claudie
2017-07-10
Cost-effectiveness studies of echinocandins for the treatment of invasive candidiasis, including candidemia, are rare in Asia. No study has determined whether echinocandins are cost-effective for both Candida albicans and non-albicans Candida species. There have been no economic evaluations that compare non-echinocandins with the three available echinocandins. This study aimed to assess the cost-effectiveness of individual echinocandins, namely caspofungin, micafungin, and anidulafungin, versus non-echinocandins for C. albicans and non-albicans Candida species, respectively. A decision tree model was constructed to assess the cost-effectiveness of echinocandins and non-echinocandins for invasive candidiasis. The probability of treatment success, mortality rate, and adverse drug events were extracted from published clinical trials. The cost variables (i.e., drug acquisition) were based on Taiwan's healthcare system from the perspective of a medical payer. One-way sensitivity analyses and probabilistic sensitivity analyses were conducted. For treating invasive candidiasis (all species), as compared to fluconazole, micafungin and caspofungin are dominated (less effective, more expensive), whereas anidulafungin is cost-effective (more effective, more expensive), costing US$3666.09 for each life-year gained, which was below the implicit threshold of the incremental cost-effectiveness ratio in Taiwan. For C. albicans, echinocandins are cost-saving as compared to non-echinocandins. For non-albicans Candida species, echinocandins are cost-effective as compared to non-echinocandins, costing US$652 for each life-year gained. The results were robust over a wide range of sensitivity analyses and were most sensitive to the clinical efficacy of antifungal treatment. Echinocandins, especially anidulafungin, appear to be cost-effective for invasive candidiasis caused by C. albicans and non-albicans Candida species in Taiwan.
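A stripped-down version of such an analysis: a two-branch decision tree per strategy, an ICER against the comparator, and a one-way sensitivity analysis over treatment efficacy, the parameter the authors found most influential. All numbers are hypothetical:

```python
def tree(p_success, cost_drug, cost_failure=8_000,
         ly_success=6.0, ly_failure=1.5):
    """Expected cost and life-years for a two-branch decision tree."""
    cost = cost_drug + (1 - p_success) * cost_failure
    ly = p_success * ly_success + (1 - p_success) * ly_failure
    return cost, ly

def icer(c1, e1, c0, e0):
    return (c1 - c0) / (e1 - e0)

c0, e0 = tree(0.60, 300)    # comparator (e.g. an azole), hypothetical inputs
c1, e1 = tree(0.75, 5_000)  # echinocandin, hypothetical inputs
print(f"base case: US${icer(c1, e1, c0, e0):,.0f} per life-year gained")

# One-way sensitivity analysis on efficacy of the new drug
for p in (0.65, 0.70, 0.75, 0.80, 0.85):
    c, e = tree(p, 5_000)
    print(f"p_success={p:.2f}: ICER US${icer(c, e, c0, e0):,.0f}")
```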
Dalziel, Kim; Round, Ali; Garside, Ruth; Stein, Ken
2005-01-01
To evaluate the cost utility of imatinib compared with interferon (IFN)-alpha or hydroxycarbamide (hydroxyurea) for first-line treatment of chronic myeloid leukaemia. A cost-utility (Markov) model within the setting of the UK NHS and viewed from a health system perspective was adopted. Transition probabilities and relative risks were estimated from published literature. Costs of drug treatment, outpatient care, bone marrow biopsies, radiography, blood transfusions and inpatient care were obtained from the British National Formulary and local hospital databases. Costs (£, year 2001-03 values) were discounted at 6%. Quality-of-life (QOL) data were obtained from the published literature and discounted at 1.5%. The main outcome measure was cost per QALY gained. Extensive one-way sensitivity analyses were performed along with probabilistic (stochastic) analysis. The incremental cost-effectiveness ratio (ICER) of imatinib, compared with IFN-alpha, was £26,180 per QALY gained (one-way sensitivity analyses ranged from £19,449 to £51,870) and compared with hydroxycarbamide was £86,934 per QALY (one-way sensitivity analyses ranged from £69,701 to £147,095) [£1 = $US1.691 = €1.535 as at 31 December 2002]. Based on the probabilistic sensitivity analysis, 50% of the ICERs for imatinib, compared with IFN-alpha, fell below a threshold of approximately £31,000 per QALY gained. Fifty percent of ICERs for imatinib, compared with hydroxycarbamide, fell below approximately £95,000 per QALY gained. This model suggests, given its underlying data and assumptions, that imatinib may be moderately cost effective when compared with IFN-alpha but considerably less cost effective when compared with hydroxycarbamide. There are, however, many uncertainties due to the lack of long-term data.
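Note the differential discounting, 6% for costs and 1.5% for QOL, in line with UK guidance of the period. A short sketch of constant-rate discounting on invented annual streams:

```python
def present_value(stream, rate):
    """Discount a yearly stream at a constant annual rate (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

annual_cost, annual_qaly = [12_000] * 10, [0.8] * 10  # illustrative only
print(f"discounted costs (6%)  : £{present_value(annual_cost, 0.06):,.0f}")
print(f"discounted QALYs (1.5%): {present_value(annual_qaly, 0.015):.2f}")
```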
Gandhoke, Gurpreet S; Pease, Matthew; Smith, Kenneth J; Sekula, Raymond F
2017-09-01
To perform a cost-minimization study comparing the supraorbital and endoscopic endonasal (EEA) approach with or without craniotomy for the resection of olfactory groove meningiomas (OGMs). We built a decision tree using probabilities of gross total resection (GTR) and cerebrospinal fluid (CSF) leak rates with the supraorbital approach versus EEA with and without additional craniotomy. The cost (not charge or reimbursement) at each "stem" of this decision tree for both surgical options was obtained from our hospital's finance department. After a base case calculation, we applied plausible ranges to all parameters and carried out multiple 1-way sensitivity analyses. Probabilistic sensitivity analyses confirmed our results. The probabilities of GTR (0.8) and CSF leak (0.2) for the supraorbital craniotomy were obtained from our series of 5 patients who underwent a supraorbital approach for the resection of an OGM. The mean tumor volume was 54.6 cm³ (range, 17-94.2 cm³). Literature-reported rates of GTR (0.6) and CSF leak (0.3) with EEA were applied to our economic analysis. Supraorbital craniotomy was the preferred strategy, with an expected value of $29,423, compared with an EEA cost of $83,838. On multiple 1-way sensitivity analyses, supraorbital craniotomy remained the preferred strategy, with a minimum cost savings of $46,000 and a maximum savings of $64,000. Probabilistic sensitivity analysis found the lowest cost difference between the 2 surgical options to be $37,431. Compared with EEA, supraorbital craniotomy provides substantial cost savings in the treatment of OGMs. Given the potential differences in effectiveness between approaches, a cost-effectiveness analysis should be undertaken. Copyright © 2017 Elsevier Inc. All rights reserved.
1-D grating based SPR biosensor for the detection of lung cancer biomarkers using Vroman effect
NASA Astrophysics Data System (ADS)
Teotia, Pradeep Kumar; Kaler, R. S.
2018-01-01
A grating-based surface plasmon resonance waveguide biosensor is reported for the detection of lung cancer biomarkers using the Vroman effect. The proposed grating-based multilayered biosensor is designed for high detection accuracy for epidermal growth factor receptor (EGFR) and is also analysed to show high detection accuracy with acceptable sensitivity for both cancer biomarkers. The introduction of a periodic grating with multilayer metals generates a good resonance that makes early detection of cancerous cells possible. Using the finite-difference time-domain method, it is observed that the resonance wavelength of the biosensor is red-shifted by variations in refractive index due to the presence of both cancer biomarkers. The reported detection accuracy and sensitivity of the proposed biosensor are acceptable for both lung cancer biomarkers, i.e., carcinoembryonic antigen (CEA) and epidermal growth factor receptor (EGFR), offering label-free early detection of lung cancer using these biomarkers.
Structural health monitoring based on sensitivity vector fields and attractor morphing.
Yin, Shih-Hsun; Epureanu, Bogdan I
2006-09-15
The dynamic responses of a thermo-shielding panel forced by unsteady aerodynamic loads and a classical Duffing oscillator are investigated to detect structural damage. A nonlinear aeroelastic model is obtained for the panel by using third-order piston theory to model the unsteady supersonic flow, which interacts with the panel. To identify damage, we analyse the morphology (deformation and movement) of the attractor of the dynamics of the aeroelastic system and the Duffing oscillator. Damages of various locations, extents and levels are shown to be revealed by the attractor-based analysis. For the panel, the type of damage considered is a local reduction in the bending stiffness. For the Duffing oscillator, variations in the linear and nonlinear stiffnesses and damping are considered as damage. Present studies of such problems are based on linear theories. In contrast, the presented approach using nonlinear dynamics has the potential to enhance the accuracy and sensitivity of detection.
NASA Astrophysics Data System (ADS)
Wang, Zixiao; Tan, Zhongwei; Xing, Rui; Liang, Linjun; Qi, Yanhui; Jian, Shuisheng
2016-10-01
A novel reflective liquid-level sensor based on a single-mode-offset coreless-single-mode (SOCS) fiber structure is proposed and experimentally demonstrated. Theoretical analyses and experimental results indicate that offset fusion can remarkably enhance the sensitivity of the sensor. The end-reflecting structure makes the sensor compact and easy to deploy. Meanwhile, we propose a laser sensing system in which the SOCS structure is used as the sensing head and laser filter simultaneously. Experimental results show that laser spectra with high optical signal-to-noise ratio (-30 dB) and narrow 3-dB bandwidth (<0.15 nm) are achieved. Various liquids with different indices are used for liquid-level sensing, and the refractive-index sensitivity is also investigated. Within the measurement range, the sensing system presents stable laser output.
Highly sensitive biological sensor based on photonic crystal fiber
NASA Astrophysics Data System (ADS)
Azzam, Shaimaa I. H.; Hameed, Mohamed F.; Obayya, S. S. A.
2014-05-01
A photonic crystal fiber (PCF) surface plasmon resonance (SPR) based sensor is proposed and analysed. The proposed sensor consists of microfluidic slots enclosing a dodecagonal layer of air-hole cladding and a central air hole. The sensor can perform analyte detection using both the HEx11 and HEy11 modes, with relatively high sensitivities of up to 4000 nm/RIU and 3000 nm/RIU and resolutions of 2.5×10⁻⁵ RIU⁻¹ and 3.33×10⁻⁵ RIU⁻¹ for HEx11 and HEy11, respectively, under spectral interrogation, which to our knowledge are higher than those reported in the literature. Moreover, the structure of the suggested sensor is simple, with no fabrication complexities, which makes it easy to fabricate with standard PCF fabrication technologies.
Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.
Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J
2016-03-01
The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
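The triangulation logic is easy to simulate: run a high-sensitivity and a high-specificity algorithm, accept their agreements, and send only the disagreements to medical record review (treated here as perfectly accurate). The error rates below are invented and do not reproduce the Chubak algorithms:

```python
import numpy as np

def metrics(truth, pred):
    tp, tn = np.sum(truth & pred), np.sum(~truth & ~pred)
    fp, fn = np.sum(~truth & pred), np.sum(truth & ~pred)
    return dict(sens=tp / (tp + fn), spec=tn / (tn + fp),
                ppv=tp / (tp + fp), npv=tn / (tn + fn))

rng = np.random.default_rng(7)
n = 2000
truth = rng.random(n) < 0.15  # true recurrence status
# Two imperfect algorithms with complementary error profiles
hi_sens = (truth & (rng.random(n) < 0.95)) | (~truth & (rng.random(n) < 0.08))
hi_spec = (truth & (rng.random(n) < 0.70)) | (~truth & (rng.random(n) < 0.005))

disagree = hi_sens != hi_spec
final = np.where(disagree, truth, hi_sens)  # MR review resolves disagreements
print(f"fraction sent to MR review: {disagree.mean():.3f}")
print({k: round(v, 3) for k, v in metrics(truth, final).items()})
```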
Wysokińska, A.; Kondracki, S.; Iwanina, M.
2015-01-01
The present work describes experiments undertaken to evaluate the usefulness of selected physicochemical indices of semen, cell membrane integrity and sperm chromatin structure for the assessment of boar semen sensitivity to processes connected with pre-insemination procedures. The experiments were carried out on 30 boars: 15 regarded as providers of sensitive semen and 15 regarded as providers of semen with low sensitivity to laboratory processing. The selection of boars for both groups was based on sperm morphology analyses, assuming secondary morphological change incidence in spermatozoa as the criterion. Two ejaculates were manually collected from each boar at an interval of 3 to 4 months. The following analyses were carried out for each ejaculate: sperm motility assessment, sperm pH measurement, sperm morphology assessment, sperm chromatin structure evaluation and cell membrane integrity assessment. The analyses were performed three times. Semen storage did not cause an increase in the incidence of secondary morphological changes in the group of boars considered to provide sperm of low sensitivity. On the other hand, with continued storage there was a marked increase in the incidence of spermatozoa with secondary morphological changes in the group of boars regarded as producing more sensitive semen. Ejaculates of group I boars evaluated directly after collection had an approximately 6% smaller share of spermatozoa with undamaged cell membranes than the ejaculates of boars in group II (p≤0.05). Over time, the percentage of spermatozoa with undamaged cell membranes decreased. The sperm of group I boars was characterised by lower sperm motility than the semen of group II boars. After 1 hour of storing diluted semen, the sperm motility of boars producing highly sensitive semen was already 4% lower (p≤0.05), and after 24 hours of storage it was 6.33% lower, than that of the boars that produced semen with low sensitivity. Factors that confirm the accuracy of insemination male selection include a low rate of sperm motility decrease during storage of diluted semen, a low and stable incidence of secondary morphological changes in spermatozoa during semen storage, and a high frequency of spermatozoa with undamaged cell membranes. PMID:26580438
Rivera, Fernando; Valladares, Manuel; Gea, Salvador; López-Martínez, Noemí
2017-06-01
To assess the cost-effectiveness of panitumumab in combination with mFOLFOX6 (oxaliplatin, 5-fluorouracil, and leucovorin) vs bevacizumab in combination with mFOLFOX6 as first-line treatment of patients with wild-type RAS metastatic colorectal cancer (mCRC) in Spain. A semi-Markov model was developed including the following health states: Progression free; Progressive disease: Treat with best supportive care; Progressive disease: Treat with subsequent active therapy; Attempted resection of metastases; Disease free after metastases resection; Progressive disease: after resection and relapse; and Death. Parametric survival analyses of patient-level progression-free survival and overall survival data from the PEAK Phase II clinical trial were used to estimate health state transitions. Additional data from the PEAK trial were considered for the dose and duration of therapy, the use of subsequent therapy, the occurrence of adverse events, and the incidence and probability of time to metastasis resection. Utility weightings were calculated from patient-level data from panitumumab trials evaluating first-, second-, and third-line treatments. The study was performed from the Spanish National Health System (NHS) perspective including only direct costs. A lifetime horizon was applied. Probabilistic sensitivity analyses and scenario sensitivity analyses were performed to assess the robustness of the model. Based on the PEAK trial, which demonstrated greater efficacy of panitumumab vs bevacizumab, both in combination with mFOLFOX6 first-line in wild-type RAS mCRC patients, the estimated incremental cost per life-year gained was €16,567 and the estimated incremental cost per quality-adjusted life year gained was €22,794. The sensitivity analyses showed the model was robust to alternative parameters and assumptions. The analysis was based on a simulation model and, therefore, the results should be interpreted cautiously. Based on the PEAK Phase II clinical trial and taking into account Spanish costs, the results of the analysis showed that first-line treatment of mCRC with panitumumab + mFOLFOX6 could be considered a cost-effective option compared with bevacizumab + mFOLFOX6 for the Spanish NHS.
A multivariate twin study of trait mindfulness, depressive symptoms, and anxiety sensitivity.
Waszczuk, Monika A; Zavos, Helena M S; Antonova, Elena; Haworth, Claire M; Plomin, Robert; Eley, Thalia C
2015-04-01
Mindfulness-based therapies have been shown to be effective in treating depression and reducing cognitive biases. Anxiety sensitivity is one cognitive bias that may play a role in the association between mindfulness and depressive symptoms. It refers to an enhanced sensitivity toward symptoms of anxiety, with a belief that these are harmful. Currently, little is known about the mechanisms underpinning the association between mindfulness, depression, and anxiety sensitivity. The aim of this study was to examine the role of genetic and environmental factors in trait mindfulness, and its genetic and environmental overlap with depressive symptoms and anxiety sensitivity. Over 2,100 16-year-old twins from a population-based study rated their mindfulness, depressive symptoms, and anxiety sensitivity. Twin modeling analyses revealed that mindfulness is 32% heritable and 66% due to nonshared environmental factors, with no significant influence of shared environment. Genetic influences explained over half of the moderate phenotypic associations between low mindfulness, depressive symptoms, and anxiety sensitivity. About two-thirds of genetic influences and almost all nonshared environmental influences on mindfulness were independent of depression and anxiety sensitivity. This is the first study to show that both genes and environment play an important role in the etiology of mindfulness in adolescence. Future research should identify the specific environmental factors that influence trait mindfulness during development to inform targeted treatment and resilience interventions. Shared genetic liability underpinning the co-occurrence of low mindfulness, depression, and anxiety sensitivity suggests that the biological pathways shared between these traits should also be examined. © 2015 The Authors. Depression and Anxiety published by Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Mitchell, James K.; Carter, William E.
2000-01-01
Describes using a computer statistical software package called Minitab to model the sensitivity of several microbes to the disinfectant NaOCl (Clorox®) using the Kirby-Bauer technique. Each group of students collects data from one microbe, conducts regression analyses, then chooses the best-fit model based on the highest r-values obtained.…
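The exercise ports readily out of Minitab. A Python stand-in that fits a few candidate models to hypothetical zone-of-inhibition data and ranks them by correlation, in the spirit of the classroom activity (the data are made up):

```python
import numpy as np

# Hypothetical Kirby-Bauer data: NaOCl concentration (%) vs inhibition zone (mm)
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
zone = np.array([6.1, 8.9, 12.2, 15.8, 18.4])

candidates = {
    "linear  zone ~ conc": conc,
    "log     zone ~ ln(conc)": np.log(conc),
    "sqrt    zone ~ sqrt(conc)": np.sqrt(conc),
}
for name, x in candidates.items():
    r = np.corrcoef(x, zone)[0, 1]
    print(f"{name}: r = {r:.3f}")  # students pick the model with the highest r
```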
Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer
Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
Characterizing Uncertainty and Variability in PBPK Models ...
Mode-of-action based risk and safety assessments can rely upon tissue dosimetry estimates in animals and humans obtained from physiologically-based pharmacokinetic (PBPK) modeling. However, risk assessment also increasingly requires characterization of uncertainty and variability; such characterization for PBPK model predictions represents a continuing challenge to both modelers and users. Current practices show significant progress in specifying deterministic biological models and the non-deterministic (often statistical) models, estimating their parameters using diverse data sets from multiple sources, and using them to make predictions and characterize uncertainty and variability. The International Workshop on Uncertainty and Variability in PBPK Models, held Oct 31-Nov 2, 2006, sought to identify the state-of-the-science in this area and recommend priorities for research and changes in practice and implementation. For the short term, these include: (1) multidisciplinary teams to integrate deterministic and non-deterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through more complete documentation of the model structure(s) and parameter values, the results of sensitivity and other analyses, and supporting, discrepant, or excluded data. Longer-term needs include: (1) theoretic and practical methodological improvements
Coal gasification systems engineering and analysis, volume 2
NASA Technical Reports Server (NTRS)
1980-01-01
The major design-related features of each generic plant system were characterized in a catalog. Based on the catalog and requirements data, approximately 17 designs and cost estimates were developed for MBG and alternate products. A series of generic trade studies was conducted to support all of the design studies. A set of cost and programmatic analyses was conducted to supplement the designs. The cost methodology employed for the design and sensitivity studies was documented and implemented in a computer program. Plant design and construction schedules were developed for the K-T, Texaco, and B&W MBG plant designs. A generic work breakdown structure was prepared, based on the K-T design, to coincide with TVA's planned management approach. An extensive set of cost sensitivity analyses was completed for the K-T, Texaco, and B&W designs. Product price competitiveness was evaluated for MBG and the alternate products. A draft management policy and procedures manual was evaluated. A supporting technology development plan was developed to address high-technology-risk issues. The issues were identified and ranked in terms of importance and tractability, and a plan developed for obtaining data or developing technology required to mitigate the risk.
Functional-diversity indices can be driven by methodological choices and species richness.
Poos, Mark S; Walker, Steven C; Jackson, Donald A
2009-02-01
Functional diversity is an important concept in community ecology because it captures information on functional traits absent in measures of species diversity. One popular method of measuring functional diversity is the dendrogram-based method, FD. To calculate FD, a variety of methodological choices are required, and it has been debated whether biological conclusions are sensitive to such choices. We studied the probability that conclusions regarding FD were sensitive to such choices, and whether patterns in sensitivity were related to the alpha and beta components of species richness. We developed a randomization procedure that iteratively calculated FD by assigning species into two assemblages and calculating the probability that the community with higher FD varied across methods. We found evidence of sensitivity in all five communities we examined, ranging from a probability of sensitivity of 0 (no sensitivity) to 0.976 (almost completely sensitive). Variations in these probabilities were driven by differences in alpha diversity between assemblages and not by beta diversity. Importantly, FD was most sensitive when it was most useful (i.e., when differences in alpha diversity were low). We demonstrate that trends in functional-diversity analyses can be largely driven by methodological choices or species richness, rather than functional trait information alone.
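The randomization procedure can be sketched compactly if one accepts a crude proxy for dendrogram-based FD; here the sum of merge heights of a hierarchical clustering stands in for total branch length, and the traits are random. The quantity of interest is the sensitivity estimate itself: how often do different method choices disagree about which assemblage has the higher FD?

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

def fd_proxy(traits, method, metric):
    """Sum of dendrogram merge heights: a rough stand-in for FD."""
    return 0.0 if len(traits) < 2 else linkage(
        traits, method=method, metric=metric)[:, 2].sum()

rng = np.random.default_rng(3)
traits = rng.random((20, 4))  # 20 species x 4 made-up traits
choices = [("average", "euclidean"), ("single", "euclidean"),
           ("complete", "euclidean"), ("average", "cityblock")]

n_iter, flips = 500, 0
for _ in range(n_iter):
    mask = rng.random(20) < 0.5  # random split into two assemblages
    signs = {np.sign(fd_proxy(traits[mask], m, d)
                     - fd_proxy(traits[~mask], m, d)) for m, d in choices}
    flips += len(signs) > 1  # methods disagree on which assemblage is richer
print(f"P(conclusion sensitive to method choice) = {flips / n_iter:.2f}")
```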
Girman, Cynthia J; Faries, Douglas; Ryan, Patrick; Rotelli, Matt; Belger, Mark; Binkowitz, Bruce; O'Neill, Robert
2014-05-01
The use of healthcare databases for comparative effectiveness research (CER) is increasing exponentially despite its challenges. Researchers must understand their data source and whether outcomes, exposures and confounding factors are captured sufficiently to address the research question. They must also assess whether bias and confounding can be adequately minimized. Many study design characteristics may affect the results; however, few, if any, sensitivity analyses are typically conducted, and those performed are post hoc. We propose pre-study steps for CER feasibility assessment and to identify sensitivity analyses that might be most important to pre-specify to help ensure that CER produces valid interpretable results.
Image encryption based on a delayed fractional-order chaotic logistic system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na
2012-05-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
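Key sensitivity is conventionally demonstrated by perturbing the key far below any physical measurement precision and counting how much of the ciphertext changes. A sketch with a plain logistic map standing in for the paper's delayed fractional-order system (the delay and fractional derivative are omitted here):

```python
import numpy as np

def keystream(x0, r=3.99, n=10_000, burn=500):
    """Byte keystream from the logistic map x -> r*x*(1-x)."""
    x, out = x0, np.empty(n, np.uint8)
    for _ in range(burn):  # discard transient
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = int(x * 256) & 0xFF
    return out

img = np.random.default_rng(5).integers(0, 256, 10_000, dtype=np.uint8)
c1 = img ^ keystream(0.123456789)
c2 = img ^ keystream(0.123456789 + 1e-10)  # key nudged in the 10th decimal
print(f"fraction of differing ciphertext bytes: {(c1 != c2).mean():.4f}")
```

For a sensitive map the fraction approaches 255/256, i.e. the two keystreams are statistically unrelated, which is the behaviour a key sensitivity analysis looks for.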
Using archived ITS data for sensitivity analyses in the estimation of mobile source emissions
DOT National Transportation Integrated Search
2000-12-01
The study described in this paper demonstrates the use of archived ITS data from San Antonio's TransGuide traffic management center (TMC) for sensitivity analyses in the estimation of on-road mobile source emissions. Because of the stark comparison b...
Andronis, L; Barton, P; Bryan, S
2009-06-01
To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
Skin sensitizer identification by IL-8 secretion and CD86 expression on THP-1 cells.
Parise, Carolina Bellini; Sá-Rocha, Vanessa Moura; Moraes, Jane Zveiter
2015-12-25
Substantial progress has been made in the development of alternative methods for skin sensitization in the last decade in several countries around the world. Brazil has seen increasing concern about the use of animals in product development since the publication of Law 9605/1998, which prohibits the use of animals when an alternative method is available. An in vitro test to evaluate allergenic potential is therefore a pressing need. This preliminary study began by establishing the use of the myelomonocytic THP-1 cell line, according to the human cell line activation test (h-CLAT), already under validation. We found that 48-h chemical exposure was necessary to identify 22 out of 23 sensitizers by analysis of CD86 expression. In addition, the CD54 expression analyses discriminated sensitizers from non-sensitizers poorly under our conditions. In view of these results, we looked for changes in the pro-inflammatory interleukin profile. IL-8 secretion analysis after 24-h chemical incubation appeared to be an alternative to assessing CD54 expression. Altogether, our findings showed that combining the analyses of CD86 expression and IL-8 secretion allowed allergenicity to be predicted.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Assessing change in sensitivity of tropical vegetation to climate based on wavelet analysis
NASA Astrophysics Data System (ADS)
Claessen, J.; Martens, B.; Verhoest, N.; Molini, A.; Miralles, D. G.
2017-12-01
Vegetation dynamics are driven by climate, and at the same time they play a key role in forcing the different bio-geochemical cycles. As climate change leads to an increase in frequency and intensity of hydro-meteorological extremes, vegetation is expected to respond to these changes, and subsequently feed back on their occurrence. Future responses can be better understood by analysing the past using time series of different vegetation diagnostics observed from space, in both the optical and microwave domains. In this contribution, the climatic drivers (air temperature, precipitation, and incoming radiation) of these different vegetation diagnostics are analysed using a monthly global data-cube of 32 years at a 0.25° resolution. To do so, we analyse the wavelet coherence between each vegetation index and the climatic drivers of vegetation. The use of wavelet coherence allows unveiling the different response and sensitivity of the diverse vegetation indices to their climatic drivers, simultaneously in the time and frequency domains. Our results show that the wavelet-based statistics are suitable for extracting information from the different vegetation indices. Areas of high rainfall volumes are characterised by a strong control of radiation and temperature over vegetation. At higher latitudes, the positive trends in all vegetation diagnostics agree with the hypothesis of a greening pattern, which is coherent with the increase in temperature. At the same time, substantial differences can be observed between the responses of the different vegetation indices as well. As an example, the VOD, thought to be a close proxy for vegetation water content, shows a larger sensitivity to precipitation than traditional optical indices such as the NDVI. Further, important temporal changes in the wavelet coherence between vegetation and climate are identified. For instance, the Amazonian rainforest shows an increased correspondence with precipitation dynamics, indicating positive shifts in ecosystem sensitivity to water availability, which can arguably be related to an increase in the amplitude of the seasonal cycle in rainfall. These results are in line with the expected intensification of the water cycle due to climate change and point to the complex response of the biosphere to climatic changes.
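Wavelet coherence needs a dedicated package, but the underlying idea of frequency-resolved co-variation can be previewed with ordinary magnitude-squared coherence from SciPy, at the cost of the time localization the authors exploit. The monthly series below are synthetic, not the study's data:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(9)
months = np.arange(384)  # 32 years of monthly data
precip = np.sin(2 * np.pi * months / 12) + 0.5 * rng.standard_normal(384)
ndvi = np.roll(precip, 2) + 0.8 * rng.standard_normal(384)  # lagged response

f, cxy = coherence(precip, ndvi, fs=12, nperseg=96)  # fs = 12 samples/year
for freq, c in zip(f, cxy):
    if 0.8 < freq < 1.3:  # frequencies near the annual cycle
        print(f"{freq:.2f} cycles/yr: coherence {c:.2f}")
```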
Froud, Robert; Abel, Gary
2014-01-01
Background: Receiver Operating Characteristic (ROC) curves are used to identify Minimally Important Change (MIC) thresholds on scales that measure a change in health status. In quasi-continuous patient-reported outcome measures, such as those that measure changes in chronic diseases with variable clinical trajectories, sensitivity and specificity are often valued equally. Although methodologists agree that these should be valued equally, different approaches have been taken to estimating MIC thresholds using ROC curves. Aims and objectives: We aimed to compare the existing approaches with a new approach, exploring the extent to which the methods choose different thresholds, and considering the effect of any differences on conclusions in responder analyses. Methods: Using graphical methods, hypothetical data, and data from a large randomised controlled trial of manual therapy for low back pain, we compared two existing approaches with a new approach based on the sum of squares of 1-sensitivity and 1-specificity. Results: There can be divergence in the thresholds chosen by different estimators. The cut-point selected by a given estimator depends on the relationship between the cut-points in ROC space and the different contours described by the estimators. In particular, asymmetry and the number of possible cut-points affect threshold selection. Conclusion: Choice of MIC estimator is important. Different methods for choosing cut-points can lead to materially different MIC thresholds and thus affect the results of responder analyses and trial conclusions. An estimator based on the smallest sum of squares of 1-sensitivity and 1-specificity is preferable when sensitivity and specificity are valued equally. Unlike other methods currently in use, the cut-point chosen by the sum of squares method always and efficiently chooses the cut-point closest to the top-left corner of ROC space, regardless of the shape of the ROC curve. PMID:25474472
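The proposed estimator is easy to implement. The sketch below, with hypothetical data, selects the cut-point minimising (1 - sensitivity)^2 + (1 - specificity)^2 from an ROC curve computed with scikit-learn; the variable names are illustrative, not from the trial.

```python
# Sum-of-squares MIC estimator: the ROC cut-point closest to the
# top-left corner of ROC space.
import numpy as np
from sklearn.metrics import roc_curve

def mic_threshold(improved, change_score):
    """improved: 1 if an external anchor says the patient improved;
    change_score: observed change on the outcome scale."""
    fpr, tpr, thresholds = roc_curve(improved, change_score)
    d2 = (1 - tpr) ** 2 + fpr ** 2   # fpr = 1 - specificity
    return thresholds[np.argmin(d2)]

# Hypothetical data: 200 patients, larger change scores among improvers
rng = np.random.default_rng(0)
improved = rng.integers(0, 2, 200)
change = rng.normal(loc=5 * improved, scale=4)
print(mic_threshold(improved, change))
```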
Zhang, Xinke; Hay, Joel W; Niu, Xiaoli
2015-01-01
The aim of the study was to compare the cost effectiveness of fingolimod, teriflunomide, dimethyl fumarate, and intramuscular (IM) interferon (IFN)-β(1a) as first-line therapies in the treatment of patients with relapsing-remitting multiple sclerosis (RRMS). A Markov model was developed to evaluate the cost effectiveness of disease-modifying drugs (DMDs) from a US societal perspective. The time horizon in the base case was 5 years. The primary outcome was incremental net monetary benefit (INMB), and the secondary outcome was incremental cost-effectiveness ratio (ICER). The base case INMB willingness-to-pay (WTP) threshold was assumed to be US$150,000 per quality-adjusted life year (QALY), and the costs were in 2012 US dollars. One-way sensitivity analyses and probabilistic sensitivity analysis were conducted to test the robustness of the model results. Dimethyl fumarate dominated all other therapies over the range of WTPs, from US$0 to US$180,000. Compared with IM IFN-β(1a), at a WTP of US$150,000, INMBs were estimated at US$36,567, US$49,780, and US$80,611 for fingolimod, teriflunomide, and dimethyl fumarate, respectively. The ICER of fingolimod versus teriflunomide was US$3,201,672. One-way sensitivity analyses demonstrated the model results were sensitive to the acquisition costs of DMDs and the time horizon, but in most scenarios, cost-effectiveness rankings remained stable. Probabilistic sensitivity analysis showed that for more than 90% of the simulations, dimethyl fumarate was the optimal therapy across all WTP values. The three oral therapies were favored in the cost-effectiveness analysis. Of the four DMDs, dimethyl fumarate was a dominant therapy to manage RRMS. Apart from dimethyl fumarate, teriflunomide was the most cost-effective therapy compared with IM IFN-β(1a), with an ICER of US$7,115.
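The two decision metrics used throughout this abstract reduce to one-line formulas: INMB = WTP × ΔQALY - ΔCost and ICER = ΔCost / ΔQALY. A minimal sketch with placeholder numbers (not the study's inputs):

```python
# Incremental net monetary benefit and incremental cost-effectiveness
# ratio; inputs below are illustrative only.
def inmb(d_cost, d_qaly, wtp=150_000):
    return wtp * d_qaly - d_cost

def icer(d_cost, d_qaly):
    return d_cost / d_qaly

# e.g. a therapy costing $30,000 more but yielding 0.5 extra QALYs
print(inmb(30_000, 0.5))   # 45000.0 -> positive, cost-effective at this WTP
print(icer(30_000, 0.5))   # 60000.0 $/QALY
```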
Schwarz, Harald; Schmittner, Maria; Duschl, Albert; Horejs-Hoeck, Jutta
2014-01-01
Many commercially available recombinant proteins are produced in Escherichia coli, and most suppliers guarantee contamination levels of less than 1 endotoxin unit (EU). When we analysed commercially available proteins for their endotoxin content, we found contamination levels in the same range as generally stated in the data sheets, but also some that were higher. To analyse whether these low levels of contamination have an effect on immune cells, we stimulated the monocytic cell line THP-1, primary human monocytes, in vitro differentiated human monocyte-derived dendritic cells, and primary human CD1c+ dendritic cells (DCs) with very low concentrations of lipopolysaccharide (LPS; ranging from 0.002–2 ng/ml). We show that CD1c+ DCs especially can be activated by minimal amounts of LPS, equivalent to the levels of endotoxin contamination we detected in some commercially available proteins. Notably, the enhanced endotoxin sensitivity of CD1c+ DCs was closely correlated with high CD14 expression levels observed in CD1c+ DCs that had been maintained in cell culture medium for 24 hours. When working with cells that are particularly sensitive to LPS, even low endotoxin contamination may generate erroneous data. We therefore recommend that recombinant proteins be thoroughly screened for endotoxin contamination using the limulus amebocyte lysate test, fluorescence-based assays, or a luciferase based NF-κB reporter assay involving highly LPS-sensitive cells overexpressing TLR4, MD-2 and CD14. PMID:25478795
Németh, Bertalan; Józwiak-Hagymásy, Judit; Kovács, Gábor; Kovács, Attila; Demjén, Tibor; Huber, Manuel B; Cheung, Kei-Long; Coyle, Kathryn; Lester-George, Adam; Pokhrel, Subhash; Vokó, Zoltán
2018-01-25
To evaluate potential health and economic returns from implementing smoking cessation interventions in Hungary. The EQUIPTMOD, a Markov-based economic model, was used to assess the cost-effectiveness of three implementation scenarios: (a) introducing a social marketing campaign; (b) doubling the reach of existing group-based behavioural support therapies and proactive telephone support; and (c) a combination of the two scenarios. All three scenarios were compared with current practice. The scenarios were chosen as feasible options available for Hungary based on the outcome of interviews with local stakeholders. Life-time costs and quality-adjusted life years (QALYs) were calculated from a health-care perspective. The analyses used various return on investment (ROI) estimates, including incremental cost-effectiveness ratios (ICERs), to compare the scenarios. Probabilistic sensitivity analyses assessed the extent to which the estimated mean ICERs were sensitive to the model input values. Introducing a social marketing campaign resulted in 0.3014 additional quitters per 1 000 smokers, translating to health-care cost savings of €0.6495 per smoker compared with current practice. When the value of QALY gains was considered, cost savings increased to €14.1598 per smoker. Doubling the reach of existing group-based behavioural support therapies and proactive telephone support resulted in health-care savings of €0.2539 per smoker (€3.9620 with the value of QALY gains), compared with current practice. The respective figures for the combined scenario were €0.8960 and €18.0062. Results were sensitive to model input values. According to the EQUIPTMOD modelling tool, it would be cost-effective for the Hungarian authorities to introduce a social marketing campaign and to double the reach of existing group-based behavioural support therapies and proactive telephone support. Such policies would more than pay for themselves in the long term. © 2018 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
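Sobol's Variance Decomposition is widely available in open-source form. As a hedged illustration (the authors' own implementation is not described here), the SALib package can be used as follows; the three-parameter model and unit bounds are placeholders:

```python
# Minimal Sobol' variance decomposition with SALib; the model below is
# a stand-in, not the rainfall-runoff fuzzy model of the study.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    'num_vars': 3,
    'names': ['p1', 'p2', 'p3'],        # hypothetical model parameters
    'bounds': [[0, 1], [0, 1], [0, 1]],
}
X = saltelli.sample(problem, 1024)       # Saltelli cross-sampling design
Y = np.array([x[0] + x[1] * x[2] for x in X])  # stand-in for a model run
Si = sobol.analyze(problem, Y)
print(Si['S1'], Si['ST'])                # first-order and total indices
```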
Berggrund, Malin; Ekman, Daniel; Gustavsson, Inger; Sundfeldt, Karin; Olovsson, Matts; Enroth, Stefan; Gyllensten, Ulf
2016-01-01
The indicating FTA elute micro card™ has been developed to collect and stabilize the nucleic acid in biological samples and is widely used in human and veterinary medicine and other disciplines. This card is not recommended for protein analyses, since surface treatment may denature proteins. We studied the ability to analyse proteins in human plasma and vaginal fluid as applied to the indicating FTA elute micro card™ using the sensitive proximity extension assay (PEA). Among 92 proteins in the Proseek Multiplex Oncology Iv2 panel, 87 were above the limit of detection (LOD) in liquid plasma and 56 of 92 were above LOD in plasma applied to FTA cards. Washing and protein elution protocols were compared to identify an optimal method. Liquid-based cytology samples showed a lower number of proteins above LOD than FTA cards with vaginal fluid samples applied. Our results demonstrate that samples applied to the indicating FTA elute micro card™ are amenable to protein analyses, given that a sensitive protein detection assay is used. The results imply that biological samples applied to FTA cards can be used for DNA, RNA and protein detection. PMID:28936257
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolaczkowski, A.M.; Lambright, J.A.; Ferrell, W.L.
This document contains the internal event initiated accident sequence analyses for Peach Bottom, Unit 2; one of the reference plants being examined as part of the NUREG-1150 effort by the Nuclear Regulatory Commission. NUREG-1150 will document the risk of a selected group of nuclear power plants. As part of that work, this report contains the overall core damage frequency estimate for Peach Bottom, Unit 2, and the accompanying plant damage state frequencies. Sensitivity and uncertainty analyses provided additional insights regarding the dominant contributors to the Peach Bottom core damage frequency estimate. The mean core damage frequency at Peach Bottom was calculated to be 8.2E-6. Station blackout type accidents (loss of all ac power) were found to dominate the overall results. Anticipated Transient Without Scram accidents were also found to be non-negligible contributors. The numerical results are largely driven by common mode failure probability estimates and, to some extent, human error. Because of significant data and analysis uncertainties in these two areas (important, for instance, to the most dominant scenario in this study), it is recommended that the results of the uncertainty and sensitivity analyses be considered before any actions are taken based on this analysis.
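As background for readers outside probabilistic risk assessment, a core damage frequency of this kind is typically rolled up from minimal cut sets. The toy sketch below uses the rare-event approximation with invented numbers; it is in no way a reproduction of the NUREG-1150 quantification:

```python
# Toy illustration only: rare-event approximation for a top-event
# frequency from minimal cut sets (products of basic-event values).
# The cut sets and numbers are invented, not from NUREG-1150.
from math import prod

cut_sets = [
    [1.0e-5, 0.02],          # e.g. initiator frequency x failure prob.
    [1.0e-5, 0.001, 0.5],
    [3.0e-6, 0.01],
]
cdf = sum(prod(cs) for cs in cut_sets)
print(f"approximate core damage frequency: {cdf:.2e} per year")
```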
Tucker, Raymond P; Lengel, Greg J; Smith, Caitlin E; Capron, Dan W; Mullins-Sweatt, Stephanie N; Wingate, LaRicka R
2016-12-30
The current study investigated the relationship between maladaptive Five-Factor Model (FFM) personality traits, anxiety sensitivity cognitive concerns, and suicide ideation in a sample of 131 undergraduate students who were selected based on their scores on a screening questionnaire regarding Borderline Personality Disorder (BPD) symptoms. Those who endorsed elevated BPD symptoms in pre-screen analyses completed at the beginning of each semester were oversampled in comparison to those with low or moderate symptoms. Indirect effect (mediation) results indicated that the maladaptive personality traits of anxious/uncertainty, dysregulated anger, self-disturbance, behavioral dysregulation, dissociative tendencies, distrust, manipulativeness, oppositional, and rashness had indirect effects on suicide ideation through anxiety sensitivity cognitive concerns. All of these personality traits also correlated with suicide ideation. The maladaptive personality traits of despondence, affective dysregulation, and fragility were positive correlates of suicide ideation and predicted suicide ideation when all traits were entered in one linear regression model, but were not indirectly related through anxiety sensitivity cognitive concerns. The implications of targeting anxiety sensitivity cognitive concerns in evidence-based practices for reducing suicide risk in those with BPD are discussed. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Uncertainty quantification and global sensitivity analysis of the Los Alamos sea ice model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, Jorge R.; Urban, Nathan M.; Hunke, Elizabeth C.; Turner, Adrian K.; Jeffery, Nicole
2016-04-01
Changes in the high-latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with midlatitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. We present a quantitative way to assess uncertainty in complex computer models, which is a new approach in the analysis of sea ice models. We characterize parametric uncertainty in the Los Alamos sea ice model (CICE) in a standalone configuration and quantify the sensitivity of sea ice area, extent, and volume with respect to uncertainty in 39 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol' sequences are used to efficiently sample the full 39-dimensional parameter space. We implement a fast emulator of the sea ice model whose predictions of sea ice extent, area, and volume are used to compute the Sobol' sensitivity indices of the 39 parameters. Main effects and interactions among the most influential parameters are also estimated by a nonparametric regression technique based on generalized additive models. A ranking based on the sensitivity indices indicates that model predictions are most sensitive to snow parameters such as snow conductivity and grain size, and the drainage of melt ponds. It is recommended that research be prioritized toward more accurately determining these most influential parameter values by observational studies or by improving parameterizations in the sea ice model.
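The sampling step described above is straightforward to reproduce in outline. A sketch using SciPy's quasi-Monte Carlo module follows; the unit bounds are placeholders for the physical parameter ranges, and the emulator and CICE runs are not shown:

```python
# Sobol' sequence over a 39-dimensional parameter space, as scipy
# exposes it; bounds are placeholders, not the CICE parameter ranges.
import numpy as np
from scipy.stats import qmc

sampler = qmc.Sobol(d=39, scramble=True, seed=0)
unit = sampler.random_base2(m=10)            # 2**10 = 1024 points in [0,1)^39
lo, hi = np.zeros(39), np.ones(39)           # replace with physical bounds
X = qmc.scale(unit, lo, hi)
print(X.shape)                               # (1024, 39)
```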
Sleep System Sensitization: Evidence for Changing Roles of Etiological Factors in Insomnia
Kalmbach, David A.; Pillai, Vivek; Arnedt, J. Todd; Anderson, Jason R.; Drake, Christopher L.
2016-01-01
Objectives To test for sensitization of the sleep system in response to insomnia development and major life stress. In addition, to evaluate the impact on depression and anxiety associated with sleep system sensitization. Methods A longitudinal study with three annual assessments. The community-based sample included 262 adults with no history of insomnia or depression who developed insomnia 1 year after baseline (67.6% female; 44.0±13.4y). Measures included the Ford Insomnia Response to Stress Test to assess sleep reactivity, Quick Inventory of Depressive Symptomatology, and Beck Anxiety Inventory. Insomnia classification was based on DSM-IV criteria. Sleep system sensitization was operationally defined as significant increases in sleep reactivity. Results Sensitization of the sleep system was observed from baseline to insomnia onset at 1-y follow-up among insomniacs with low premorbid vulnerability (p<.001), resulting in 68.3% of these individuals re-classified as highly sleep reactive. Major life stress was associated with greater sleep system sensitization (p=.02). Results showed that sleep reactivity at 2-y follow-up remained elevated among those with low premorbid vulnerability, even after insomnia remission (p<.01). Finally, analyses revealed that increases in sleep reactivity predicted greater depression (p<.001) and anxiety (p<.001) at insomnia onset. The impact of sensitization on depression was stable at 2-y follow-up (p=.01). Conclusions Evidence supports sensitization of the sleep system as consequence of insomnia development and major life stress among individuals with low premorbid sleep reactivity. Sleep system sensitization may serve as a mechanism by which insomnia is perpetuated. Harmful effects of the sensitization process may increase risk for insomnia-related depression and anxiety. PMID:27448474
Optimizing chronic disease management mega-analysis: economic evaluation.
2013-01-01
As Ontario's population ages, chronic diseases are becoming increasingly common. There is growing interest in services and care models designed to optimize the management of chronic disease. To evaluate the cost-effectiveness and expected budget impact of interventions in chronic disease cohorts evaluated as part of the Optimizing Chronic Disease Management mega-analysis. Sector-specific costs, disease incidence, and mortality were calculated for each condition using administrative databases from the Institute for Clinical Evaluative Sciences. Intervention outcomes were based on literature identified in the evidence-based analyses. Quality-of-life and disease prevalence data were obtained from the literature. Analyses were restricted to interventions that showed significant benefit for resource use or mortality in the evidence-based analyses. An Ontario cohort of patients with each chronic disease was constructed and followed over 5 years (2006-2011). A phase-based approach was used to estimate costs across all sectors of the health care system. Utility values identified in the literature and effect estimates for resource use and mortality obtained from the evidence-based analyses were applied to calculate incremental costs and quality-adjusted life-years (QALYs). Given uncertainty about how many patients would benefit from each intervention, a system-wide budget impact was not determined. Instead, the difference in lifetime cost between an individual-administered intervention and no intervention was presented. Of 70 potential cost-effectiveness analyses, 8 met our inclusion criteria. All were found to result in QALY gains and cost savings compared with usual care. The models were robust to the majority of sensitivity analyses undertaken, but due to structural limitations and time constraints, few sensitivity analyses were conducted. Incremental cost savings per patient who received an intervention ranged from $15 per diabetic patient with specialized nursing to $10,665 per patient with congestive heart failure receiving in-home care. Evidence used to inform estimates of effect was often limited to a single trial with limited generalizability across populations, interventions, and health care systems. Because of the low clinical fidelity of health administrative data sets, intermediate clinical outcomes could not be included. Cohort costs included an average of all health care costs and were not restricted to costs associated with the disease. Intervention costs were based on resource use specified in clinical trials. Applying estimates of effect from the evidence-based analyses to real-world resource use resulted in cost savings for all interventions. On the basis of quality-of-life data identified in the literature, all interventions were found to result in a greater QALY gain than usual care would. Implementation of all interventions could offer significant cost reductions. However, this analysis was subject to important limitations. Chronic diseases are the leading cause of death and disability in Ontario. They account for a third of direct health care costs across the province. This study aims to evaluate the cost-effectiveness of health care interventions that might improve the management of chronic diseases. The evaluated interventions led to lower costs and better quality of life than usual care. Offering these options could reduce costs per patient. However, the studies used in this analysis were of medium to very low quality, and the methods had many limitations.
Analysis of D-penicillamine by gas chromatography utilizing nitrogen–phosphorus detection.
Rushing, L G; Hansen, E B; Thompson, H C
1985-01-11
A method is presented for the analysis of the "orphan" drug D-penicillamine (D-Pa), which is used for the treatment of the rare inherited copper-metabolism dysfunction known as Wilson's disease, by assaying a derivative of the compound by gas chromatography employing a rubidium-sensitized nitrogen–phosphorus detector. Analytical procedures are described for the analyses of residues of the D-Pa·HCl salt in animal feed and for the analyses of the salt or free base from aqueous solutions by utilizing a single-step double derivatization with diazomethane–acetone. Stability data for D-Pa·HCl in animal feed and for the free base in water are presented. An ancillary fluorescence derivatization procedure for the analysis of D-Pa in water is also reported.
Zeng, Xiaohui; Li, Jianhe; Peng, Liubao; Wang, Yunhua; Tan, Chongqing; Chen, Gannong; Wan, Xiaomin; Lu, Qiong; Yi, Lidan
2014-01-01
Maintenance gefitinib significantly prolonged progression-free survival (PFS) compared with placebo in patients from eastern Asia with locally advanced/metastatic non-small-cell lung cancer (NSCLC) who had completed four chemotherapeutic cycles (21 days per cycle) of first-line platinum-based combination chemotherapy without disease progression. The objective of the current study was to evaluate the cost-effectiveness of maintenance gefitinib therapy after four cycles of standard first-line platinum-based chemotherapy for patients with locally advanced or metastatic NSCLC with unknown EGFR mutation status, from a Chinese health care system perspective. A semi-Markov model was designed to evaluate the cost-effectiveness of maintenance gefitinib treatment. Two-parameter Weibull and log-logistic distributions were fitted to the PFS and overall survival curves independently. One-way and probabilistic sensitivity analyses were conducted to assess the stability of the model. The base-case analysis suggested that maintenance gefitinib would increase benefits over a 1-, 3-, 6- or 10-year time horizon, at incremental costs of $184,829, $19,214, $19,328, and $21,308 per quality-adjusted life-year (QALY) gained, respectively. The most sensitive variable in the cost-effectiveness analysis was the utility of PFS plus rash, followed by the utility of PFS plus diarrhoea, the utility of progressed disease, the price of gefitinib, the cost of follow-up treatment in the progressed survival state, and the utility of PFS on oral therapy. The price of gefitinib is the parameter that could most reduce the incremental cost per QALY. Probabilistic sensitivity analysis indicated that the probability of maintenance gefitinib being cost effective was zero at a willingness-to-pay (WTP) threshold of $16,349 (3 × the per-capita gross domestic product of China). The sensitivity analyses all suggested that the model was robust. Maintenance gefitinib following first-line platinum-based chemotherapy for patients with locally advanced/metastatic NSCLC with unknown EGFR mutation status is not cost-effective. Decreasing the price of gefitinib may be a preferential choice for meeting wide treatment demands in China.
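The survival-curve fitting step named above can be sketched with the lifelines package (one common choice; the study does not state its fitting tool). Durations and censoring flags below are invented:

```python
# Two-parameter Weibull fit to (possibly censored) PFS times.
import numpy as np
from lifelines import WeibullFitter

rng = np.random.default_rng(1)
pfs_months = rng.weibull(1.3, 200) * 10 + 0.1   # hypothetical durations
observed = rng.integers(0, 2, 200)              # 1 = progression observed
wf = WeibullFitter().fit(pfs_months, event_observed=observed)
print(wf.lambda_, wf.rho_)                      # scale and shape estimates
wf.print_summary()
```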
Zeng, Xiaohui; Peng, Liubao; Li, Jianhe; Chen, Gannong; Tan, Chongqing; Wang, Siying; Wan, Xiaomin; Ouyang, Lihui; Zhao, Ziying
2013-01-01
Continuation maintenance treatment with pemetrexed is approved by current clinical guidelines as a category 2A recommendation after induction therapy with cisplatin and pemetrexed chemotherapy (CP strategy) for patients with advanced nonsquamous non-small-cell lung cancer (NSCLC). However, the cost-effectiveness of the treatment remains unclear. We completed a trial-based assessment, from the perspective of the Chinese health care system, of the cost-effectiveness of maintenance pemetrexed treatment after a CP strategy for patients with advanced nonsquamous NSCLC. A Markov model was developed to estimate costs and benefits. It was based on a clinical trial that compared continuation maintenance pemetrexed therapy plus best supportive care (BSC) versus placebo plus BSC after a CP strategy for advanced nonsquamous NSCLC. Sensitivity analyses were conducted to assess the stability of the model. The model base case analysis suggested that continuation maintenance pemetrexed therapy after a CP strategy would increase benefits in a 1-, 2-, 5-, or 10-year time horizon, with incremental costs of $183,589.06, $126,353.16, $124,766.68, and $124,793.12 per quality-adjusted life-year gained, respectively. The most sensitive influential variable in the cost-effectiveness analysis was the utility of the progression-free survival state, followed by proportion of patients with postdiscontinuation therapy in both arms, proportion of BSC costs for PFS versus progressed survival state, and cost of pemetrexed. Probabilistic sensitivity analysis indicated that the cost-effective probability of adding continuation maintenance pemetrexed therapy to BSC was zero. One-way and probabilistic sensitivity analyses revealed that the Markov model was robust. Continuation maintenance of pemetrexed after a CP strategy for patients with advanced nonsquamous NSCLC is not cost-effective based on a recent clinical trial. Decreasing the price or adjusting the dosage of pemetrexed may be a better option for meeting the treatment demands of Chinese patients. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.
Latham, Nancy K.; Jette, Alan M.; Wagenaar, Robert C.; Ni, Pengsheng; Slavin, Mary D.; Bean, Jonathan F.
2012-01-01
Background Impaired balance has a significant negative impact on mobility, functional independence, and fall risk in older adults. Although several, well-respected balance measures are currently in use, there is limited evidence regarding the most appropriate measure to assess change in community-dwelling older adults. Objective The aim of this study was to compare floor and ceiling effects, sensitivity to change, and responsiveness across the following balance measures in community-dwelling elderly people with functional limitations: Berg Balance Scale (BBS), Performance-Oriented Mobility Assessment total scale (POMA-T), POMA balance subscale (POMA-B), and Dynamic Gait Index (DGI). Design Retrospective data from a 16-week exercise trial were used. Secondary analyses were conducted on the total sample and by subgroups of baseline functional limitation or baseline balance scores. Methods Participants were 111 community-dwelling older adults 65 years of age or older, with functional limitations. Sensitivity to change was assessed using effect size, standardized response mean, and paired t tests. Responsiveness was assessed using minimally important difference (MID) estimates. Results No floor effects were noted. Ceiling effects were observed on all measures, including in people with moderate to severe functional limitations. The POMA-T, POMA-B, and DGI showed significantly larger ceiling effects compared with the BBS. All measures had low sensitivity to change in total sample analyses. Subgroup analyses revealed significantly better sensitivity to change in people with lower compared with higher baseline balance scores. Although both the total sample and lower baseline balance subgroups showed statistically significant improvement from baseline to 16 weeks on all measures, only the lower balance subgroup showed change scores that consistently exceeded corresponding MID estimates. Limitations This study was limited to comparing 4 measures of balance, and anchor-based methods for assessing MID could not be reported. Conclusions Important limitations, including ceiling effects and relatively low sensitivity to change and responsiveness, were noted across all balance measures, highlighting their limited utility across the full spectrum of the community-dwelling elderly population. New, more challenging measures are needed for better discrimination of balance ability in community-dwelling elderly people at higher functional levels. PMID:22114200
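The two distribution-based indices of sensitivity to change used in this study have simple closed forms: effect size ES = mean change / SD of baseline scores, and standardized response mean SRM = mean change / SD of change scores. A minimal sketch with hypothetical scores:

```python
# Effect size and standardized response mean; scores are invented.
import numpy as np

baseline = np.array([45.0, 50, 38, 52, 41, 47])
followup = np.array([48.0, 53, 40, 56, 42, 50])
change = followup - baseline
es = change.mean() / baseline.std(ddof=1)
srm = change.mean() / change.std(ddof=1)
print(f"ES={es:.2f}, SRM={srm:.2f}")
```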
Dijkstra, Siebren; Govers, Tim M; Hendriks, Rianne J; Schalken, Jack A; Van Criekinge, Wim; Van Neste, Leander; Grutters, Janneke P C; Sedelaar, John P Michiel; van Oort, Inge M
2017-11-01
To assess the cost-effectiveness of a new urinary biomarker-based risk score (SelectMDx; MDxHealth, Inc., Irvine, CA, USA) to identify patients for transrectal ultrasonography (TRUS)-guided biopsy and to compare this with the current standard of care (SOC), using only prostate-specific antigen (PSA) to select for TRUS-guided biopsy. A decision tree and Markov model were developed to evaluate the cost-effectiveness of SelectMDx as a reflex test vs SOC in men with a PSA level of >3 ng/mL. Transition probabilities, utilities and costs were derived from the literature and expert opinion. Cost-effectiveness was expressed in quality-adjusted life years (QALYs) and healthcare costs of both diagnostic strategies, simulating the course of patients over a time horizon representing 18 years. Deterministic sensitivity analyses were performed to address uncertainty in assumptions. A diagnostic strategy including SelectMDx with a cut-off chosen at a sensitivity of 95.7% for high-grade prostate cancer resulted in savings of €128 and a gain of 0.025 QALYs per patient compared with the SOC strategy. The sensitivity analyses showed that the disutility assigned to active surveillance had a high impact on the QALYs gained, whereas the disutility attributed to TRUS-guided biopsy only slightly influenced the outcome of the model. Based on the currently available evidence, the reduction of overdiagnosis and overtreatment due to the use of the SelectMDx test in men with PSA levels of >3 ng/mL may lead to a reduction in total costs per patient and a gain in QALYs. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo
2013-07-01
The goal of this study was to perform an economic analysis of a primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and population-based data from which a retrospective database analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010 was performed. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.
Huppertz-Hauss, Gert; Aas, Eline; Lie Høivik, Marte; Langholz, Ebbe; Odes, Selwyn; Småstuen, Milada; Stockbrugger, Reinhold; Hoff, Geir; Moum, Bjørn; Bernklev, Tomm
2016-01-01
Background. The treatment of chronic inflammatory bowel disease (IBD) is costly, and limited resources call for analyses of the cost effectiveness of therapeutic interventions. The present study evaluated the equivalency of the Short Form 6D (SF-6D) and the Euro QoL (EQ-5D), two preference-based HRQoL instruments that are broadly used in cost-effectiveness analyses, in an unselected IBD patient population. Methods. IBD patients from seven European countries were invited to a follow-up visit ten years after their initial diagnosis. Clinical and demographic data were assessed, and the Short Form 36 (SF-36) was employed. Utility scores were obtained by calculating the SF-6D index values from the SF-36 data for comparison with the scores obtained with the EQ-5D questionnaire. Results. The SF-6D and EQ-5D provided good sensitivities for detecting disease activity-dependent utility differences. However, the single-measure intraclass correlation coefficient was 0.58, and the Bland-Altman plot indicated numerous values beyond the limits of agreement. Conclusions. There was poor agreement between the measures retrieved from the EQ-5D and the SF-6D utility instruments. Although both instruments may provide good sensitivity for the detection of disease activity-dependent utility differences, the instruments cannot be used interchangeably. Cost-utility analyses performed with only one utility instrument must be interpreted with caution.
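The agreement analysis reported here is easy to reproduce in outline: Bland-Altman limits of agreement are the mean paired difference plus or minus 1.96 standard deviations of the differences. A sketch with invented paired utilities:

```python
# Bland-Altman bias and 95% limits of agreement; the paired
# SF-6D/EQ-5D utilities below are invented, not study data.
import numpy as np

sf6d = np.array([0.71, 0.64, 0.80, 0.59, 0.77, 0.68])
eq5d = np.array([0.76, 0.60, 0.88, 0.52, 0.81, 0.73])
diff = sf6d - eq5d
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias={bias:.3f}, limits of agreement=({bias-loa:.3f}, {bias+loa:.3f})")
```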
Development of the multiple sclerosis (MS) early mobility impairment questionnaire (EMIQ).
Ziemssen, Tjalf; Phillips, Glenn; Shah, Ruchit; Mathias, Adam; Foley, Catherine; Coon, Cheryl; Sen, Rohini; Lee, Andrew; Agarwal, Sonalee
2016-10-01
The Early Mobility Impairment Questionnaire (EMIQ) was developed to facilitate early identification of mobility impairments in multiple sclerosis (MS) patients. We describe the initial development of the EMIQ with a focus on the psychometric evaluation of the questionnaire using classical and item response theory methods. The initial 20-item EMIQ was constructed by clinical specialists and qualitatively tested among people with MS and physicians via cognitive interviews. Data from an observational study was used to make additional updates to the instrument based on exploratory factor analysis (EFA) and item response theory (IRT) analysis, and psychometric analyses were performed to evaluate the reliability and validity of the final instrument's scores and screening properties (i.e., sensitivity and specificity). Based on qualitative interview analyses, a revised 15-item EMIQ was included in the observational study. EFA, IRT and item-to-item correlation analyses revealed redundant items which were removed leading to the final nine-item EMIQ. The nine-item EMIQ performed well with respect to: test-retest reliability (ICC = 0.858); internal consistency (α = 0.893); convergent validity; and known-groups methods for construct validity. A cut-point of 41 on the 0-to-100 scale resulted in sufficient sensitivity and specificity statistics for viably identifying patients with mobility impairment. The EMIQ is a content valid and psychometrically sound instrument for capturing MS patients' experience with mobility impairments in a clinical practice setting. Additional research is suggested to further confirm the EMIQ's screening properties over time.
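Of the psychometric statistics reported, internal consistency is the simplest to illustrate. A minimal Cronbach's alpha implementation follows; the 50 × 9 item-score matrix is random placeholder data, not EMIQ responses:

```python
# Cronbach's alpha for a k-item scale.
import numpy as np

def cronbach_alpha(items):            # items: (n_respondents, k_items)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scores = np.random.default_rng(2).integers(1, 6, size=(50, 9))  # 9 items
print(cronbach_alpha(scores))
```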
Cost of Equity Estimation in Fuel and Energy Sector Companies Based on CAPM
NASA Astrophysics Data System (ADS)
Kozieł, Diana; Pawłowski, Stanisław; Kustra, Arkadiusz
2018-03-01
The article presents the cost of equity estimation of capital groups from the fuel and energy sector, listed on the Warsaw Stock Exchange, based on the Capital Asset Pricing Model (CAPM). The objective of the article was to perform a valuation of equity with the application of the CAPM, based on actual financial and stock exchange data, and to carry out a sensitivity analysis of that cost depending on the financing structure of the entity. The objective formulated in this manner determined the article's structure. It focuses on substantive analyses related to the nature of equity and the methods of estimating its cost, with special attention given to the CAPM. In the practical section, the cost of equity was estimated according to the CAPM methodology for leading fuel and energy companies such as Tauron GE and PGE. Simultaneously, a sensitivity analysis of that cost was performed depending on the structure of financing the companies' operations.
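The CAPM relation at the heart of the article is r_e = r_f + beta (E[r_m] - r_f), with beta estimated from the covariance of stock and market returns. A minimal sketch with invented return series (not Tauron GE or PGE data):

```python
# CAPM cost of equity with beta from a return regression.
import numpy as np

def capm_cost_of_equity(stock_ret, market_ret, rf, market_premium):
    beta = (np.cov(stock_ret, market_ret, ddof=1)[0, 1]
            / np.var(market_ret, ddof=1))
    return rf + beta * market_premium, beta

rng = np.random.default_rng(3)
mkt = rng.normal(0.005, 0.04, 60)            # hypothetical monthly returns
stock = 0.9 * mkt + rng.normal(0, 0.02, 60)
re, beta = capm_cost_of_equity(stock, mkt, rf=0.03, market_premium=0.055)
print(f"beta={beta:.2f}, cost of equity={re:.2%}")
```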
Trends in mass spectrometry instrumentation for proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Richard D.
2002-12-01
Mass spectrometry has become a primary tool for proteomics due to its capabilities for rapid and sensitive protein identification and quantitation. It is now possible to identify thousands of proteins from microgram sample quantities in a single day and to quantify relative protein abundances. However, the needs for increased capabilities for proteome measurements are immense and are now driving both new strategies and instrument advances. These developments include those based on integration with multi-dimensional liquid separations and high accuracy mass measurements, and promise more than order-of-magnitude improvements in sensitivity, dynamic range, and throughput for proteomic analyses in the near future.
Ropars, Pascale; Angers-Blondin, Sandra; Gagnon, Marianne; Myers-Smith, Isla H; Lévesque, Esther; Boudreau, Stéphane
2017-08-01
Shrub densification has been widely reported across the circumpolar arctic and subarctic biomes in recent years. Long-term analyses based on dendrochronological techniques applied to shrubs have linked this phenomenon to climate change. However, the multi-stemmed structure of shrubs makes them difficult to sample and therefore leads to non-uniform sampling protocols among shrub ecologists, who will favor either root collars or stems to conduct dendrochronological analyses. Through a comparative study of the use of root collars and stems of Betula glandulosa, a common North American shrub species, we evaluated the relative sensitivity of each plant part to climate variables and assessed whether this sensitivity is consistent across three different types of environments in northwestern Québec, Canada (terrace, hilltop and snowbed). We found that root collars had greater sensitivity to climate than stems and that these differences were maintained across the three types of environments. Growth at the root collar was best explained by spring precipitation and summer temperature, whereas stem growth showed weak and inconsistent responses to climate variables. Moreover, sensitivity to climate was not consistent among plant parts, as individuals having climate-sensitive root collars did not tend to have climate-sensitive stems. These differences in sensitivity of shrub parts to climate highlight the complexity of resource allocation in multi-stemmed plants. Whereas stem initiation and growth are driven by microenvironmental variables such as light availability and competition, root collars integrate the growth of all plant parts instead, rendering them less affected by mechanisms such as competition and more responsive to signals of global change. Although further investigations are required to determine the degree to which these findings are generalizable across the tundra biome, our results indicate that consistency and caution in the choice of plant parts are a key consideration for the success of future dendroclimatological studies on shrubs. © 2017 John Wiley & Sons Ltd.
Sensitivity analysis of static resistance of slender beam under bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valeš, Jan
2016-06-08
The paper deals with static and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the programme Ansys. The beams are modelled with initial geometrical imperfections following the first eigenmode of buckling. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to perform the statistical and sensitivity analyses of the resistance.
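Latin Hypercube Sampling itself is available off the shelf. A hedged sketch using SciPy follows; the four sampled quantities and their bounds are placeholders, not the paper's imperfection and material statistics:

```python
# Latin Hypercube design for random model inputs; bounds are invented.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=4, seed=0)    # 4 random quantities
unit = sampler.random(n=100)                 # 100 simulation runs
lo = np.array([-5e-3, 0.95, 0.95, 0.9])      # hypothetical lower bounds
hi = np.array([5e-3, 1.05, 1.05, 1.1])       # hypothetical upper bounds
X = qmc.scale(unit, lo, hi)                  # one row per FE model run
```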
Prediction of coefficients of thermal expansion for unidirectional composites
NASA Technical Reports Server (NTRS)
Bowles, David E.; Tompkins, Stephen S.
1989-01-01
Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other, and with experimental data on different graphite fiber reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2), than the less rigorous analyses. All of the analyses predicted similar values of alpha(1), and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1), and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties and the prediction of alpha(2) was most sensitive to matrix properties.
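For concreteness, one common closed-form estimate of the kind compared in the paper is the rule-of-mixtures value for alpha(1) together with a Schapery-type approximation for alpha(2). The sketch below uses placeholder constituent properties, not the paper's graphite/resin data:

```python
# Rule-of-mixtures alpha(1) and a Schapery-type alpha(2) estimate for a
# unidirectional composite; constituent values are placeholders.
def cte_unidirectional(af, am, Ef, Em, nuf, num, Vf):
    Vm = 1.0 - Vf
    a1 = (af * Ef * Vf + am * Em * Vm) / (Ef * Vf + Em * Vm)
    nu12 = nuf * Vf + num * Vm
    a2 = (1 + nuf) * af * Vf + (1 + num) * am * Vm - a1 * nu12
    return a1, a2

# hypothetical constituents: stiff low-CTE fiber, compliant high-CTE matrix
a1, a2 = cte_unidirectional(af=-0.5e-6, am=50e-6, Ef=230e9, Em=3.5e9,
                            nuf=0.2, num=0.35, Vf=0.6)
print(a1, a2)
```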
NASA Astrophysics Data System (ADS)
Guo, Wenzhang; Wang, Hao; Wu, Zhengping
2018-03-01
Most existing cascading failure mitigation strategies for power grids based on complex network theory ignore the impact of electrical characteristics on dynamic performance. In this paper, the robustness of the power grid under a power decentralization strategy is analysed through cascading failure simulation based on AC flow theory. The flow-sensitive (FS) centrality is introduced by integrating topological features and electrical properties to help determine the siting of the generation nodes. The simulation results of the IEEE-bus systems show that the flow-sensitive centrality method is a more stable and accurate approach and can enhance the robustness of the network remarkably. Through the study of the optimal flow-sensitive centrality selection for different networks, we find that the robustness of a network with a pronounced small-world effect depends more on the contribution of generation nodes detected via community structure, whereas otherwise the contribution of generation nodes with an important influence on the power flow is more critical. In addition, community structure plays a significant role in balancing the power flow distribution and further slowing the propagation of failures. These results are useful in power grid planning and cascading failure prevention.
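The paper's FS centrality is not reproduced here, but its flavour can be suggested with standard tools: networkx's current-flow betweenness is itself an electrically motivated (resistor-network) measure, and blending it with a topological centrality gives a composite ranking. Everything below (graph, blending weight) is illustrative:

```python
# Illustrative stand-in for a topology-plus-electrical composite score;
# not the paper's FS centrality definition.
import networkx as nx

G = nx.connected_watts_strogatz_graph(30, 4, 0.1, seed=0)
flow = nx.current_flow_betweenness_centrality(G)   # resistor-network measure
topo = nx.degree_centrality(G)                     # purely topological
w = 0.5                                            # blending weight (assumed)
score = {n: w * flow[n] + (1 - w) * topo[n] for n in G}
generators = sorted(score, key=score.get, reverse=True)[:5]
print(generators)                                  # candidate generation nodes
```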
Application of micro-X-ray fluorescence to chemical mapping of polar ice
NASA Astrophysics Data System (ADS)
Fourcade, M. C. Morel; Barnola, J. M.; Susini, J.; Baker, R.; Durand, G.; de Angelis, M.; Duval, P.
Synchrotron-based micro-X-ray fluorescence (μXRF) equipment has been used to analyze impurities in polar ice. A customized sample holder has been developed and the μXRF equipment has been adapted with a thermal control system to keep samples unaltered during analyses. Artificial ice samples prepared from ultra-pure water were analyzed to investigate possible contamination and/or experimental artefacts. Analyses of polar ice from Antarctica (Dome C and Vostok) confirm that this μXRF technique is non-destructive and sensitive. Experiments can be reproduced to confirm or refine results by focusing on interesting spots such as crystal grain boundaries or specific inclusions. Integration times and resolution can be adjusted to optimize sensitivity. Investigation of unstable particles is possible due to the short analysis time. In addition to identifying the elements present in impurities, μXRF is able to determine their speciation. The accuracy and reliability of the results confirm the potential of this technique for research in glaciology.
Influence of Cobalt on the Properties of Load-Sensitive Magnesium Alloys
Klose, Christian; Demminger, Christian; Mroz, Gregor; Reimche, Wilfried; Bach, Friedrich-Wilhelm; Maier, Hans Jürgen; Kerber, Kai
2013-01-01
In this study, magnesium is alloyed with varying amounts of the ferromagnetic alloying element cobalt in order to obtain lightweight load-sensitive materials with sensory properties which allow an online-monitoring of mechanical forces applied to components made from Mg-Co alloys. An optimized casting process with the use of extruded Mg-Co powder rods is utilized which enables the production of magnetic magnesium alloys with a reproducible Co concentration. The efficiency of the casting process is confirmed by SEM analyses. Microstructures and Co-rich precipitations of various Mg-Co alloys are investigated by means of EDS and XRD analyses. The Mg-Co alloys' mechanical strengths are determined by tensile tests. Magnetic properties of the Mg-Co sensor alloys depending on the cobalt content and the acting mechanical load are measured utilizing the harmonic analysis of eddy-current signals. Within the scope of this work, the influence of the element cobalt on magnesium is investigated in detail and an optimal cobalt concentration is defined based on the performed examinations. PMID:23344376
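The harmonic analysis mentioned for the eddy-current signals can be sketched generically: extract the amplitude of a chosen harmonic from the spectrum of a periodic response. The synthetic signal below stands in for measured data:

```python
# Extracting the 3rd-harmonic amplitude of a periodic signal with an
# FFT; the signal is synthetic, not an eddy-current measurement.
import numpy as np

fs, f0 = 10_000.0, 100.0                     # sample rate, excitation freq
t = np.arange(0, 0.1, 1 / fs)
sig = np.sin(2 * np.pi * f0 * t) + 0.05 * np.sin(2 * np.pi * 3 * f0 * t)
spec = np.abs(np.fft.rfft(sig)) * 2 / sig.size
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
k3 = np.argmin(np.abs(freqs - 3 * f0))
print(f"3rd-harmonic amplitude ~ {spec[k3]:.3f}")
```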
Can we use high precision metal isotope analysis to improve our understanding of cancer?
Larner, Fiona
2016-01-01
High precision natural isotope analyses are widely used in geosciences to trace elemental transport pathways. The use of this analytical tool is increasing in nutritional and disease-related research. In recent months, a number of groups have shown the potential this technique has in providing new observations for various cancers when applied to trace metal metabolism. The deconvolution of isotopic signatures, however, relies on mathematical models and geochemical data, which are not representative of the system under investigation. In addition to relevant biochemical studies of protein-metal isotopic interactions, technological development both in terms of sample throughput and detection sensitivity of these elements is now needed to translate this novel approach into a mainstream analytical tool. Following this, essential background healthy population studies must be performed, alongside observational, cross-sectional disease-based studies. Only then can the sensitivity and specificity of isotopic analyses be tested alongside currently employed methods, and important questions such as the influence of cancer heterogeneity and disease stage on isotopic signatures be addressed.
Davies, E Bethan; Morriss, Richard; Glazebrook, Cris
2014-05-16
Depression and anxiety are common mental health difficulties experienced by university students and can impair academic and social functioning. Students often do not seek help from professionals. As university students are highly connected to digital technologies, Web-based and computer-delivered interventions could be used to improve students' mental health. The effectiveness of these intervention types requires investigation to identify whether these are viable prevention strategies for university students. The intent of the study was to systematically review and analyze trials of Web-based and computer-delivered interventions to improve depression, anxiety, psychological distress, and stress in university students. Several databases were searched using keywords relating to higher education students, mental health, and eHealth interventions. The eligibility criteria for studies included in the review were: (1) the study aimed to improve symptoms relating to depression, anxiety, psychological distress, and stress, (2) the study involved computer-delivered or Web-based interventions accessed via computer, laptop, or tablet, (3) the study was a randomized controlled trial, and (4) the study was trialed on higher education students. Trials were reviewed and outcome data analyzed through random effects meta-analyses for each outcome and each type of trial arm comparison. The Cochrane Collaboration risk of bias tool was used to assess study quality. A total of 17 trials were identified, of which seven evaluated the same three interventions on separate samples; 14 reported sufficient information for meta-analysis. The majority (n=13) were website-delivered and nine interventions were based on cognitive behavioral therapy (CBT). A total of 1795 participants were randomized and 1480 analyzed. Risk of bias was considered moderate, as many publications did not sufficiently report their methods and seven explicitly conducted completers' analyses. In comparison to the inactive control, sensitivity meta-analyses supported intervention in improving anxiety (pooled standardized mean difference [SMD] -0.56; 95% CI -0.77 to -0.35, P<.001), depression (pooled SMD -0.43; 95% CI -0.63 to -0.22, P<.001), and stress (pooled SMD -0.73; 95% CI -1.27 to -0.19, P=.008). In comparison to active controls, sensitivity analyses did not support either condition for anxiety (pooled SMD -0.18; 95% CI -0.98 to 0.62, P=.66) or depression (pooled SMD -0.28; 95% CI -0.75 to 0.20, P=.25). In contrast to a comparison intervention, neither condition was supported in sensitivity analyses for anxiety (pooled SMD -0.10; 95% CI -0.39 to 0.18, P=.48) or depression (pooled SMD 0.33; 95% CI -0.43 to 1.09, P=.40). The findings suggest Web-based and computer-delivered interventions can be effective in improving students' depression, anxiety, and stress outcomes when compared to inactive controls, but some caution is needed when compared to other trial arms and methodological issues were noticeable. Interventions need to be trialed on more heterogeneous student samples and would benefit from user evaluation. Future trials should address methodological considerations to improve reporting of trial quality and address post-intervention skewed data.
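As a rough illustration of the pooling step described above, the following is a minimal DerSimonian-Laird random-effects sketch in Python, using invented per-trial SMDs and standard errors rather than the review's data; the review's actual software and settings are not specified here.

```python
import numpy as np

def pooled_smd_random_effects(smd, se):
    """DerSimonian-Laird random-effects pooling of standardized mean differences."""
    smd, se = np.asarray(smd, float), np.asarray(se, float)
    w = 1.0 / se**2                              # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * smd) / np.sum(w)
    q = np.sum(w * (smd - fixed) ** 2)           # Cochran's Q heterogeneity statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(smd) - 1)) / c)    # between-study variance estimate
    w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
    pooled = np.sum(w_re * smd) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

# Invented per-trial SMDs and standard errors (three hypothetical trials).
est, lo, hi = pooled_smd_random_effects([-0.70, -0.50, -0.40], [0.20, 0.15, 0.25])
print(f"pooled SMD {est:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```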
Schreurs, Bert; Guenter, Hannes; Hülsheger, Ute; van Emmerik, Hetty
2014-01-01
In this diary study, we tested the possibility that dispositional reward and punishment sensitivity, two central constructs of reinforcement sensitivity theory, would modify the relationship between emotional labor and job-related well-being (i.e., work engagement, emotional exhaustion, depersonalization). Specifically, based on a social functional account of emotion, we hypothesized that surface acting entails the risk of social disapproval and therefore may be more detrimental for high than for low punishment-sensitive individuals. In contrast, deep acting is hypothesized to hold the promise of social approval and therefore may be more beneficial for high than for low reward-sensitive individuals. Hypotheses were tested in a sample of 237 service workers (N = 1,584 daily reports) who completed a general survey and daily surveys over the course of 10 working days. Multilevel analyses showed that surface acting was detrimental to well-being, and more strongly so for high than for low punishment-sensitive individuals. The results are consistent with the idea that heightened sensitivity to social disapproval aggravates the negative effects of surface acting. PsycINFO Database Record (c) 2014 APA, all rights reserved.
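Diary designs of this kind are typically analysed with a multilevel (mixed-effects) model in which daily reports are nested within persons. The sketch below is a hypothetical statsmodels version with invented column names (`surface_acting`, `punishment_sens`, `exhaustion`, `person_id`) and an assumed data file; it is not the authors' code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical diary dataset: one row per person-day; column names are invented.
df = pd.read_csv("diary_data.csv")  # person_id, day, surface_acting, punishment_sens, exhaustion

# Random-intercept multilevel model: daily surface acting predicts daily
# emotional exhaustion, moderated by person-level punishment sensitivity.
model = smf.mixedlm("exhaustion ~ surface_acting * punishment_sens",
                    data=df, groups=df["person_id"])
result = model.fit()
# A positive surface_acting:punishment_sens coefficient would mirror the reported
# pattern: surface acting is more detrimental for punishment-sensitive workers.
print(result.summary())
```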
Dolk, Christiaan; Eichner, Martin; Welte, Robert; Anastassopoulou, Anastassia; Van Bellinghen, Laure-Anne; Poulsen Nautrup, Barbara; Van Vlaenderen, Ilse; Schmidt-Ott, Ruprecht; Schwehm, Markus; Postma, Maarten
2016-12-01
Seasonal influenza infection is primarily caused by circulation of two influenza A strain subtypes and strains from two B lineages that vary each year. Trivalent influenza vaccine (TIV) contains only one of the two B-lineage strains, resulting in mismatches between vaccine strains and the predominant circulating B lineage. Quadrivalent influenza vaccine (QIV) includes both B-lineage strains. The objective was to estimate the cost-utility of introducing QIV to replace TIV in Germany. An individual-based dynamic transmission model (4Flu) using German data was used to provide realistic estimates of the impact of TIV and QIV on age-specific influenza infections. Cases were linked to health and economic outcomes to calculate the cost-utility of QIV versus TIV, from both a societal and a payer perspective. Costs and effects were discounted at 3.0% and 1.5% per year, respectively, with 2014 as the base year. Univariate and probabilistic sensitivity analyses were conducted. Using QIV instead of TIV resulted in additional quality-adjusted life-years (QALYs) and cost savings from the societal perspective (i.e. it represents the dominant strategy) and an incremental cost-utility ratio (ICUR) of €14,461 per QALY from a healthcare payer perspective. In all univariate analyses, QIV remained cost-effective (ICUR <€50,000). In probabilistic sensitivity analyses, QIV was cost-effective in >98% and >99% of the simulations from the societal and payer perspectives, respectively. This analysis suggests that QIV in Germany would provide additional health gains while being cost-saving to society, or costing €14,461 per QALY gained from the healthcare payer perspective, compared with TIV.
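The core ICUR arithmetic with the differential discounting used above (costs at 3.0% per year, effects at 1.5%) can be sketched as follows; the cost and QALY streams are invented placeholders, not outputs of the 4Flu model.

```python
import numpy as np

def discounted_total(annual_values, rate):
    """Present value of a yearly stream, discounting from the base year (t = 0)."""
    t = np.arange(len(annual_values))
    return float(np.sum(np.asarray(annual_values, float) / (1.0 + rate) ** t))

# Illustrative 10-year cost (EUR) and QALY streams per strategy -- not model outputs.
costs_qiv, costs_tiv = [1.2e6] * 10, [1.0e6] * 10
qalys_qiv, qalys_tiv = [130.0] * 10, [120.0] * 10

# Convention used in the study: discount costs at 3.0%/yr and effects at 1.5%/yr.
d_cost = discounted_total(costs_qiv, 0.030) - discounted_total(costs_tiv, 0.030)
d_qaly = discounted_total(qalys_qiv, 0.015) - discounted_total(qalys_tiv, 0.015)
icur = d_cost / d_qaly  # incremental cost-utility ratio, EUR per QALY gained
print(f"ICUR: {icur:,.0f} EUR/QALY")
```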
Papaioannou, A.; Thompson, M. F.; Pasquale, M. K.; Adachi, J. D.
2016-01-01
Summary The RisedronatE and ALendronate (REAL) study provided a unique opportunity to conduct cost-effectiveness analyses based on effectiveness data from real-world clinical practice. Using a published osteoporosis model, the researchers found risedronate to be cost-effective compared to generic or brand alendronate for the treatment of postmenopausal osteoporosis in Canadian patients aged 65 years or older. Introduction The REAL study provides robust data on the real-world performance of risedronate and alendronate. The study used these data to assess the cost-effectiveness of brand risedronate versus generic or brand alendronate for the treatment of Canadian postmenopausal osteoporosis patients aged 65 years or older. Methods A previously published osteoporosis model was populated with Canadian cost and epidemiological data, and the estimated fracture risk was validated. Effectiveness data were derived from REAL and utility data from published sources. The incremental cost per quality-adjusted life-year (QALY) gained was estimated from a Canadian public payer perspective, and comprehensive sensitivity analyses were conducted. Results The base case analysis found fewer fractures and more QALYs in the risedronate cohort, yielding an incremental cost per QALY gained of $3,877 for risedronate compared to generic alendronate. The results were most sensitive to treatment duration and effectiveness. Conclusions The REAL study provided a unique opportunity to conduct cost-effectiveness analyses based on effectiveness data taken from real-world clinical practice. The analysis supports the cost-effectiveness of risedronate compared to generic or brand alendronate and the use of risedronate for the treatment of osteoporotic Canadian women aged 65 years or older with a BMD T-score ≤−2.5. PMID:18008100
Stephanie K. Moore; Nathan J. Mantua; Barbara M. Hickey; Vera L. Trainer
2009-01-01
Temporal and spatial trends in paralytic shellfish toxins (PSTs) in Puget Sound shellfish and their relationships with climate are investigated using long-term monitoring data since 1957. Data are selected for trend analyses based on the sensitivity of shellfish species to PSTs and their depuration rates, and the frequency of sample collection at individual sites....
Data to Decisions: Terminate, Tolerate, Transfer, or Treat
2016-07-25
and patching, a risk-based cyber-security decision model that enables a predictive capability to respond to impending cyber-attacks is needed...States. This sensitive data includes business proprietary information on key programs of record and infrastructure, including government documents at...leverage nationally. The Institute for Defense Analyses (IDA) assisted the DoD CIO in formalizing a proof of concept for cyber initiatives and
Cognitive and Neural Bases of Skilled Performance.
1987-10-04
advantage is that this method is not computationally demanding, and model-specific analyses such as high-precision source localization with realistic...and a two-high-threshold model satisfy theoretical and pragmatic independence. Discrimination and bias measures from these two models comparing...recognition memory of patients with dementing diseases, amnesics, and normal controls. We found the two-high-threshold model to be more sensitive
Symmetric encryption algorithms using chaotic and non-chaotic generators: A review
Radwan, Ahmed G.; AbdElHaleem, Sherif H.; Abd-El-Hafiz, Salwa K.
2015-01-01
This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold’s cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractals and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP-800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using substitution only (fractals), permutation only (chess-based), or both. Moreover, two different permutation scenarios are presented, in which the permutation phase either has or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities, from short to long keys, are provided to resist brute-force attacks. In addition, the sensitivities of these different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus many recent studies, with respect to the generators used, type of encryption, and analyses, is presented to highlight the strengths and added contribution of this paper. PMID:26966561
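Two of the evaluation metrics named above, adjacent-pixel correlation and Shannon entropy, are straightforward to reproduce. The following self-contained sketch applies them to a synthetic plain/cipher image pair, not to any of the 27 reviewed algorithms.

```python
import numpy as np

def adjacent_correlation(img, axis=1):
    """Correlation between horizontally (axis=1) or vertically (axis=0) adjacent pixels."""
    a = img.take(range(img.shape[axis] - 1), axis=axis).ravel().astype(float)
    b = img.take(range(1, img.shape[axis]), axis=axis).ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

def shannon_entropy(img):
    """Entropy in bits per pixel; 8.0 is the ideal for an 8-bit cipher image."""
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
plain = np.tile((np.arange(256) // 4).astype(np.uint8), (256, 1))   # smooth, highly correlated
cipher = rng.integers(0, 256, (256, 256), dtype=np.uint8)           # stand-in for a cipher image

# A good cipher image shows near-zero adjacent correlation and entropy close to 8.
for name, im in (("plain", plain), ("cipher", cipher)):
    print(f"{name}: corr={adjacent_correlation(im):+.3f}, entropy={shannon_entropy(im):.3f}")
```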
A traits-based approach for prioritizing species for monitoring and surrogacy selection
Pracheil, Brenda M.; McManamay, Ryan A.; Bevelhimer, Mark S.; ...
2016-11-28
The bar for justifying the use of vertebrate animals for study is being increasingly raised, thus requiring increased rigor for species selection and study design. Although we have power analyses to provide quantitative backing for the numbers of organisms used, quantitative backing for selection of study species is not frequently employed. This can be especially important when measuring the impacts of ecosystem alteration, when study species must be chosen that are both sensitive to the alteration and of sufficient abundance for study. Just as important is providing justification for designation of surrogate species for study, especially when the species of interest is rare or of conservation concern and selection of an appropriate surrogate can have legal implications. In this study, we use a combination of GIS, a fish traits database and multivariate statistical analyses to quantitatively prioritize species for study and to determine potential study surrogate species. We provide two case studies to illustrate our quantitative, traits-based approach for designating study species and surrogate species. In the first case study, we select broadly representative fish species to understand the effects of turbine passage on adult fishes based on traits that suggest sensitivity to turbine passage. In our second case study, we present a framework for selecting a surrogate species for an endangered species. Lastly, we suggest that our traits-based framework can provide quantitative backing and added justification to selection of study species while expanding the inference space of study results.
Liccioli, Stefano; Catalano, Stefano; Kutz, Susan J; Lejeune, Manigandan; Verocai, Guilherme G; Duignan, Padraig J; Fuentealba, Carmen; Ruckstuhl, Kathreen E; Massolo, Alessandro
2012-07-01
Fecal analysis is commonly used to estimate the prevalence and intensity of intestinal helminths in wild carnivores, but few studies have assessed the reliability of fecal flotation compared to analysis of intestinal tracts. We investigated the sensitivity of the double-centrifugation sugar fecal flotation and the kappa agreement between fecal flotation and postmortem examination of intestines for helminths of coyotes (Canis latrans). We analyzed 57 coyote carcasses that were collected between October 2010 and March 2011 in the metropolitan areas of Calgary and Edmonton, Alberta, Canada. Before analyses, intestines and feces were frozen at -80 C for 72 hr to inactivate Echinococcus eggs, protecting operators from potential exposure. Five species of helminths were found by postmortem examination, including Toxascaris leonina, Uncinaria stenocephala, Ancylostoma caninum, Taenia sp., and Echinococcus multilocularis. Sensitivity of fecal flotation was high (0.84) for detection of T. leonina but low for Taenia sp. (0.27), E. multilocularis (0.46), and U. stenocephala (0.00). Good kappa agreement between techniques was observed only for T. leonina (0.64), for which we also detected a significant correlation between adult female parasite intensity and fecal egg counts (R(s)=0.53, P=0.01). Differences in sensitivity may be related to parasite characteristics that affect recovery of eggs on flotation. Fecal parasitologic analyses are highly applicable to studying the disease ecology of urban carnivores, and they often provide important information on environmental contamination and potential zoonotic risks. However, fecal-based parasitologic surveys should first assess the sensitivity of the techniques to understand their biases and limitations.
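The two headline statistics above, diagnostic sensitivity against necropsy as the reference standard and Cohen's kappa agreement, can be computed as in this sketch; the simulated flotation results (roughly 84% sensitivity, no false positives) are illustrative, not the study's data.

```python
import numpy as np

def sensitivity(test_pos, ref_pos):
    """Proportion of reference-positive animals that also test positive."""
    test_pos, ref_pos = np.asarray(test_pos, bool), np.asarray(ref_pos, bool)
    return (test_pos & ref_pos).sum() / ref_pos.sum()

def cohens_kappa(a, b):
    """Chance-corrected agreement between two binary classifications."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    po = np.mean(a == b)                                        # observed agreement
    pe = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # agreement expected by chance
    return (po - pe) / (1 - pe)

# Invented results for one parasite across 57 carcasses.
rng = np.random.default_rng(1)
necropsy = rng.random(57) < 0.5                   # reference standard (postmortem exam)
flotation = necropsy & (rng.random(57) < 0.84)    # imperfect test, no false positives assumed
print(f"sensitivity={sensitivity(flotation, necropsy):.2f}, "
      f"kappa={cohens_kappa(flotation, necropsy):.2f}")
```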
Adkins, Daniel E.; McClay, Joseph L.; Vunck, Sarah A.; Batman, Angela M.; Vann, Robert E.; Clark, Shaunna L.; Souza, Renan P.; Crowley, James J.; Sullivan, Patrick F.; van den Oord, Edwin J.C.G.; Beardsley, Patrick M.
2014-01-01
Behavioral sensitization has been widely studied in animal models and is theorized to reflect neural modifications associated with human psychostimulant addiction. While the mesolimbic dopaminergic pathway is known to play a role, the neurochemical mechanisms underlying behavioral sensitization remain incompletely understood. In the present study, we conducted the first metabolomics analysis to globally characterize neurochemical differences associated with behavioral sensitization. Methamphetamine-induced sensitization measures were generated by statistically modeling longitudinal activity data for eight inbred strains of mice. Subsequent to behavioral testing, nontargeted liquid and gas chromatography-mass spectrometry profiling was performed on 48 brain samples, yielding 301 metabolite levels per sample after quality control. Association testing between metabolite levels and three primary dimensions of behavioral sensitization (total distance, stereotypy and margin time) showed four robust, significant associations at a stringent metabolome-wide significance threshold (false discovery rate < 0.05). Results implicated homocarnosine, a dipeptide of GABA and histidine, in total distance sensitization, GABA metabolite 4-guanidinobutanoate and pantothenate in stereotypy sensitization, and myo-inositol in margin time sensitization. Secondary analyses indicated that these associations were independent of concurrent methamphetamine levels and, with the exception of the myo-inositol association, suggest a mechanism whereby strain-based genetic variation produces specific baseline neurochemical differences that substantially influence the magnitude of MA-induced sensitization. These findings demonstrate the utility of mouse metabolomics for identifying novel biomarkers, and developing more comprehensive neurochemical models, of psychostimulant sensitization. PMID:24034544
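The metabolome-wide threshold cited above (false discovery rate < 0.05) is conventionally applied with the Benjamini-Hochberg step-up procedure; a minimal sketch on simulated p-values follows (the study's exact FDR implementation is not specified here).

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of discoveries at FDR <= alpha (BH step-up procedure)."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    ranked = p[order]
    m = len(p)
    thresholds = alpha * np.arange(1, m + 1) / m     # BH critical values
    below = ranked <= thresholds
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True                           # reject the k smallest p-values
    return mask

# Toy screen over 301 metabolite association p-values (simulated, not the study's).
rng = np.random.default_rng(2)
pvals = np.concatenate([rng.uniform(0, 1, 297), [1e-6, 5e-5, 2e-4, 8e-4]])
print("discoveries at FDR<0.05:", benjamini_hochberg(pvals).sum())
```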
Havas, Magda; Marrongelle, Jeffrey
2013-06-01
This is a replication of a study that we previously conducted in Colorado with 25 subjects, designed to test the effect of electromagnetic radiation generated by the base station of a cordless phone on heart rate variability (HRV). In this study, we analyzed the response of 69 subjects between the ages of 26 and 80 in both Canada and the USA. Subjects were exposed for 3-min intervals to radiation generated by a 2.4-GHz cordless phone base station (3-8 μW/cm²). A few participants had a severe reaction to the radiation, with an increase in heart rate and altered HRV indicative of an alarm response to stress. Based on the HRV analyses of the 69 subjects, 7% were classified as being "moderately to very" sensitive, 29% were "little to moderately" sensitive, 30% were "not to little" sensitive and 6% were "unknown". These results are not psychosomatic and are not due to electromagnetic interference. Twenty-five percent of the subjects' self-proclaimed sensitivity corresponded to that based on the HRV analysis, while 32% overestimated their sensitivity and 42% did not know whether or not they were electrically sensitive. Of the 39 participants who claimed to experience some electrical hypersensitivity, 36% claimed they also reacted to a cordless phone and experienced heart symptoms and, of these, 64% were classified as having some degree of electrohypersensitivity (EHS) based on their HRV response. Novel findings include documentation of a delayed response to radiation. Orthostatic HRV testing combined with provocation testing may provide a diagnostic tool for some sufferers of EHS when they are exposed to electromagnetic emitting devices. The protocol used underestimates reaction to electromagnetic radiation for those who have a delayed autonomic nervous system reaction, and it may underdiagnose those who have adrenal exhaustion, as their ability to mount a response to a stressor is diminished.
Timmermans, Erik J; van der Pas, Suzan; Schaap, Laura A; Sánchez-Martínez, Mercedes; Zambon, Sabina; Peter, Richard; Pedersen, Nancy L; Dennison, Elaine M; Denkinger, Michael; Castell, Maria Victoria; Siviero, Paola; Herbolsheimer, Florian; Edwards, Mark H; Otero, Angel; Deeg, Dorly J H
2014-03-05
People with osteoarthritis (OA) frequently report that their joint pain is influenced by weather conditions. This study aimed to examine whether there are differences in perceived joint pain between older people with OA who reported to be weather-sensitive versus those who did not in six European countries with different climates and to identify characteristics of older persons with OA that are most predictive of perceived weather sensitivity. Baseline data from the European Project on OSteoArthritis (EPOSA) were used. ACR classification criteria were used to determine OA. Participants with OA were asked about their perception of weather as influencing their pain. Using a two-week follow-up pain calendar, average self-reported joint pain was assessed (range: 0 (no pain)-10 (greatest pain intensity)). Linear regression analyses, logistic regression analyses and an independent t-test were used. Analyses were adjusted for several confounders. The majority of participants with OA (67.2%) perceived the weather as affecting their pain. Weather-sensitive participants reported more pain than non-weather-sensitive participants (M = 4.1, SD = 2.4 versus M = 3.1, SD = 2.4; p < 0.001). After adjusting for several confounding factors, the association between self-perceived weather sensitivity and joint pain remained present (B = 0.37, p = 0.03). Logistic regression analyses revealed that women and more anxious people were more likely to report weather sensitivity. Older people with OA from Southern Europe were more likely to indicate themselves as weather-sensitive persons than those from Northern Europe. Weather (in)stability may have a greater impact on joint structures and pain perception in people from Southern Europe. The results emphasize the importance of considering weather sensitivity in daily life of older people with OA and may help to identify weather-sensitive older people with OA.
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
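A stripped-down version of the smoothed-threshold idea can be sketched as follows: the hard indicator I(score > c) in the Cox linear predictor is replaced by a normal-CDF smooth, and the resulting partial likelihood is maximized numerically. This toy version uses a single biomarker score and omits the paper's penalty term and single-index estimation, so it is a conceptual sketch rather than the proposed estimator.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_partial_likelihood(params, time, event, score, trt, h=0.1):
    """Cox partial likelihood with a smoothed threshold: the treatment effect
    gamma applies only when the biomarker score exceeds c, with the hard
    indicator I(score > c) replaced by Phi((score - c) / h)."""
    beta, gamma, c = params
    eta = beta * score + gamma * trt * norm.cdf((score - c) / h)
    order = np.argsort(-time)                  # sort by descending time: risk sets become prefixes
    eta_o, event_o = eta[order], event[order]
    log_risk = np.logaddexp.accumulate(eta_o)  # log-sum-exp over each risk set
    return -np.sum(event_o * (eta_o - log_risk))

# Simulated data in which treatment only helps when the score exceeds 0.5.
rng = np.random.default_rng(3)
n = 400
score, trt = rng.uniform(0, 1, n), rng.integers(0, 2, n)
hazard = np.exp(0.5 * score - 1.0 * trt * (score > 0.5))
time = rng.exponential(1.0 / hazard)
event = np.ones(n)                             # no censoring in this toy example
fit = minimize(neg_log_partial_likelihood, x0=[0.0, 0.0, 0.4],
               args=(time, event, score, trt), method="Nelder-Mead")
print("estimated beta, gamma, c:", np.round(fit.x, 2))
```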
Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses
Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn
2016-01-01
Background Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. Objective The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Methods Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Results Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than others. Integration of content control improved quality of decision making (SMD 0.59 vs 0.23 for knowledge; SMD 0.39 vs 0.29 for decisional conflict). In contrast, tailoring reduced quality of decision making (SMD 0.40 vs 0.71 for knowledge; SMD 0.25 vs 0.52 for decisional conflict). Similarly, patient narratives also reduced quality of decision making (SMD 0.43 vs 0.65 for knowledge; SMD 0.17 vs 0.46 for decisional conflict). Results were varied for different types of explicit values clarification, feedback, and social support. Conclusions Integration of media rich or interactive features into computer-based decision aids can improve quality of preference-sensitive decision making. However, this is an emerging field with limited evidence to guide use. The systematic review and thematic synthesis identified features that have been integrated into available computer-based decision aids, in an effort to facilitate reporting of these features and to promote integration of such features into decision aids. The meta-analyses and associated subgroup analyses provide preliminary evidence to support integration of specific features into future decision aids. Further research can focus on clarifying independent contributions of specific features through experimental designs and refining the designs of features to improve effectiveness. PMID:26813512
NASA Astrophysics Data System (ADS)
Sharma, Dheeraj; Singh, Deepika; Pandey, Sunil; Yadav, Shivendra; Kondekar, P. N.
2017-11-01
In this work, we present a comprehensive comparison of full-gate and short-gate dielectrically modulated (DM) electrically doped tunnel field-effect transistor (SGDM-EDTFET) based biosensors of equivalent dimensions. In both structures, the dielectric constant and charge density are used as sensing parameters for detecting charged and non-charged biomolecules in the given solution. In the SGDM-EDTFET architecture, the reduction in gate length significantly improves the tunneling current owing to the strong coupling between the gate and the channel region, which ensures higher drain-current sensitivity for detection of the biomolecules. Moreover, the sensitivity of the dual-metal SGDM-EDTFET is compared with that of the single-metal SGDM-EDTFET to assess the sensing capability of both devices for biosensor applications. Further, the effect on the sensing parameters, i.e., the ON-current (ION) and the ION/IOFF ratio, is analysed for the dual-metal SGDM-EDTFET in comparison with the dual-metal SGDM-EDFET. From this comparison, it is found that the dual-metal SGDM-EDTFET based biosensor attains relatively better sensitivity and can serve as a suitable candidate for biosensing applications.
Yu, Cui; Yang, Cuiyun; Song, Shaoyi; Yu, Zixiang; Zhou, Xueping; Wu, Jianxiang
2018-04-04
Iris yellow spot virus (IYSV) is an Orthotospovirus that infects most Allium species. Very few approaches for specific detection of IYSV in infected plants are available to date. We report the development of a highly sensitive Luminex xMAP-based microsphere immunoassay (MIA) for specific detection of IYSV. The nucleocapsid (N) gene of IYSV was cloned and expressed in Escherichia coli to produce the His-tagged recombinant N protein. A panel of monoclonal antibodies (MAbs) against IYSV was generated by immunizing mice with the recombinant N protein. Five specific MAbs (16D9, 11C6, 7F4, 12C10, and 14H12) were identified and used to develop the Luminex xMAP-based MIA systems along with a polyclonal antibody against IYSV. Comparative analyses of their sensitivity and specificity in detecting IYSV in infected tobacco leaves identified 7F4 as the best-performing MAb in MIA. We then optimized the working conditions of the Luminex xMAP-based MIA for specific detection of IYSV in infected tobacco leaves by using an appropriate blocking buffer and proper concentrations of biotin-labeled antibodies, as well as a suitable ratio between the antibodies and the streptavidin R-phycoerythrin (SA-RPE). Under the optimized conditions, the Luminex xMAP-based MIA was able to specifically detect IYSV with much higher sensitivity than conventional enzyme-linked immunosorbent assay (ELISA). Importantly, the Luminex xMAP-based MIA is time-saving: the whole procedure can be completed within 2.5 h. We generated five specific MAbs against IYSV and developed the Luminex xMAP-based MIA method for specific detection of IYSV in plants. This assay provides a sensitive, highly specific, easy-to-perform and likely cost-effective approach for IYSV detection in infected plants, suggesting the potential broad usefulness of MIA in plant virus diagnosis.
Tan, Chongqing; Peng, Liubao; Zeng, Xiaohui; Li, Jianhe; Wan, Xiaomin; Chen, Gannong; Yi, Lidan; Luo, Xia; Zhao, Ziying
2013-01-01
First-line postoperative adjuvant chemotherapies with S-1 and with capecitabine and oxaliplatin (XELOX) were first recommended for resectable gastric cancer patients in the 2010 and 2011 Chinese NCCN Clinical Practice Guidelines in Oncology: Gastric Cancer; however, their economic impact in China is unknown. The aim of this study was to compare the cost-effectiveness of adjuvant chemotherapy with XELOX, adjuvant chemotherapy with S-1, and no treatment after a gastrectomy with extended (D2) lymph-node dissection among patients with stage II-IIIB gastric cancer. A Markov model, based on data from two clinical phase III trials, was developed to analyse the cost-effectiveness of the XELOX, S-1 and surgery-only (SO) groups. The costs were estimated from the perspective of the Chinese healthcare system. The utilities were assumed on the basis of previously published reports. Costs, quality-adjusted life-years (QALYs) and incremental cost-effectiveness ratios (ICERs) were calculated over a lifetime horizon. One-way and probabilistic sensitivity analyses were performed. For the base case, XELOX had the lowest total cost ($44,568) and cost-effectiveness ratio ($7,360/QALY). The scenario analyses showed that SO was dominated by XELOX and that the ICER of S-1 was $58,843/QALY compared with XELOX. The one-way sensitivity analysis showed that the most influential parameter was the utility of disease-free survival. The probabilistic sensitivity analysis predicted a 75.8% likelihood that the ICER for XELOX would be less than $13,527 compared with S-1. When the willingness-to-pay threshold exceeded $38,000, the likelihood of cost-effectiveness achieved by the S-1 group was greater than 50%. Our results suggest that, for patients in China with resectable disease, first-line adjuvant chemotherapy with XELOX after a D2 gastrectomy is the best option compared with S-1 and SO. In addition, S-1 might be a better choice at higher willingness-to-pay thresholds.
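The probabilistic sensitivity analysis summary quoted above (e.g., a 75.8% likelihood that the ICER for XELOX is below $13,527) corresponds to evaluating net monetary benefit across simulated parameter draws. The sketch below uses invented incremental cost and QALY distributions, not draws from the Markov model.

```python
import numpy as np

# Illustrative probabilistic sensitivity analysis: draw incremental costs and
# QALYs for XELOX vs S-1 from assumed distributions (invented parameters).
rng = np.random.default_rng(4)
n = 10_000
d_cost = rng.normal(-5_000, 4_000, n)   # incremental cost, USD (XELOX minus S-1)
d_qaly = rng.normal(0.30, 0.15, n)      # incremental QALYs

# Probability cost-effective = share of draws with positive net monetary
# benefit at a given willingness-to-pay (WTP) threshold: NMB = WTP*dQALY - dCost.
for wtp in (13_527, 38_000, 50_000):
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)
    print(f"WTP ${wtp:,}/QALY -> P(cost-effective) = {p_ce:.2f}")
```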
Characterizing grazing disturbance in semiarid ecosystems across broad scales, using diverse indices
Beever, E.A.; Tausch, R.J.; Brussard, P.F.
2003-01-01
Although management and conservation strategies continue to move toward broader spatial scales and consideration of many taxonomic groups simultaneously, researchers have struggled to characterize responses to disturbance at these scales. Most studies of disturbance by feral grazers investigate effects on only one or two ecosystem elements across small spatial scales, limiting their applicability to ecosystem-level management. To address this inadequacy, in 1997 and 1998 we examined disturbance created by feral horses (Equus caballus) in nine mountain ranges of the western Great Basin, USA, using plants, small mammals, ants, and soil compaction as indicators. Nine horse-occupied and 10 horse-removed sites were stratified into high- and low-elevation groups, and all sites at each elevation had similar vegetation type, aspect, slope gradient, and recent (≥15-yr) fire and livestock-grazing histories. Using reciprocal averaging and TWINSPAN analyses, we compared relationships among sites using five data sets: abiotic variables, percent cover by plant species, an index of abundance by plant species, 10 disturbance-sensitive response variables, and grass and shrub species considered “key” indicators by land managers. Although reciprocal averaging and TWINSPAN analyses of percent cover, abiotic variables, and key species suggested relationships between sites influenced largely by biogeography (i.e., mountain range), disturbance-sensitive variables clearly segregated horse-occupied and horse-removed sites. These analyses suggest that the influence of feral horses on many Great Basin ecosystem attributes is not being detected by monitoring only palatable plant species. We recommend development of an expanded monitoring strategy based not only on established vegetation measurements investigating forage consumption, but also including disturbance-sensitive variables (e.g., soil surface hardness, abundance of ant mounds) that more completely reflect the suite of effects that a large-bodied grazer may impose on mountain ecosystems, independent of vegetation differences. By providing a broader-based mechanism for detection of adverse effects, this strategy would provide management agencies with defensible data in a sociopolitical arena that has been embroiled in conflict for several decades.
Oparaji, Uchenna; Sheu, Rong-Jiun; Bankhead, Mark; Austin, Jonathan; Patelli, Edoardo
2017-12-01
Artificial Neural Networks (ANNs) are commonly used in place of expensive models to reduce the computational burden required for uncertainty quantification, reliability and sensitivity analyses. An ANN with a selected architecture is trained with the back-propagation algorithm from a few data representatives of the input/output relationship of the underlying model of interest. However, ANNs with different performance might be obtained from the same training data as a result of the random initialization of the weight parameters in each network, leading to uncertainty in selecting the best-performing ANN. On the other hand, using cross-validation to select the ANN with the highest R² value can lead to bias in the prediction, because R² cannot determine whether the prediction made by an ANN is biased. Additionally, R² does not indicate whether a model is adequate, as it is possible to have a low R² for a good model and a high R² for a bad model. Hence, in this paper, we propose an approach to improve the robustness of predictions made by ANNs. The approach is based on a systematic combination of identically trained ANNs, coupling the Bayesian framework and model averaging. Additionally, the uncertainties of the robust prediction derived from the approach are quantified in terms of confidence intervals. To demonstrate the applicability of the proposed approach, two synthetic numerical examples are presented. Finally, the proposed approach is used to perform reliability and sensitivity analyses on a process simulation model of a UK nuclear effluent treatment plant developed by the National Nuclear Laboratory (NNL) and treated in this study as a black box, employing a set of training data as a test case. This model has been extensively validated against plant and experimental data and used to support the UK effluent discharge strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
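A minimal version of the ensemble idea, identically configured networks differing only in their random weight initialization, averaged into a robust prediction with an interval reflecting initialization uncertainty, might look as follows. Equal weights stand in for the paper's Bayesian model-averaging weights, and the data are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (200, 1))
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)   # stand-in for expensive-model data

# Identically configured ANNs differing only in random_state (weight initialization).
nets = [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                     random_state=seed).fit(X, y) for seed in range(10)]
X_new = np.linspace(-3, 3, 50).reshape(-1, 1)
preds = np.stack([net.predict(X_new) for net in nets])   # shape (n_nets, n_points)

robust = preds.mean(axis=0)                          # ensemble (robust) prediction
lo, hi = np.percentile(preds, [2.5, 97.5], axis=0)   # spread due to initialization
print(f"mean interval width from initialization uncertainty: {np.mean(hi - lo):.3f}")
```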
Boundary Layer Depth In Coastal Regions
NASA Astrophysics Data System (ADS)
Porson, A.; Schayes, G.
The results of earlier studies of sea breeze simulations have shown that this is a relevant feature of the Planetary Boundary Layer that atmospheric models still struggle to diagnose properly. Based on the observations made during the ESCOMPTE campaign over the Mediterranean Sea, different CBL and SBL height estimation processes have been tested with a meso-scale model, TVM. The aim was to compare the critical points of the BL height determination computed using the turbulent kinetic energy profile with some other standard evaluations. Moreover, these results have been analysed with different mixing-length formulations, and the sensitivity to the formulation is also analysed with a simple coastal configuration.
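One common TKE-based diagnosis of boundary-layer height, of the kind compared above, takes the lowest level where TKE drops below a fraction of its near-surface value. The sketch below applies this criterion to an idealized profile; the threshold fraction and the profile itself are assumptions, not TVM's settings.

```python
import numpy as np

def bl_height_from_tke(z, tke, frac=0.05):
    """Diagnose boundary-layer height as the lowest level where TKE falls
    below a fraction of its near-surface value (one common TKE criterion)."""
    threshold = frac * tke[0]
    above = np.nonzero(tke < threshold)[0]
    return z[above[0]] if above.size else z[-1]

# Idealized convective profile: TKE decays exponentially with height (synthetic).
z = np.arange(0, 3000, 50.0)        # height above ground, m
tke = 1.5 * np.exp(-z / 400.0)      # turbulent kinetic energy, m^2 s^-2
print(f"diagnosed BL height: {bl_height_from_tke(z, tke):.0f} m")
```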
A conflict model for the international hazardous waste disposal dispute.
Hu, Kaixian; Hipel, Keith W; Fang, Liping
2009-12-15
A multi-stage conflict model is developed to analyze international hazardous waste disposal disputes. More specifically, the ongoing toxic waste conflicts are divided into two stages consisting of the dumping prevention and dispute resolution stages. The modeling and analyses, based on the methodology of graph model for conflict resolution (GMCR), are used in both stages in order to grasp the structure and implications of a given conflict from a strategic viewpoint. Furthermore, a specific case study is investigated for the Ivory Coast hazardous waste conflict. In addition to the stability analysis, sensitivity and attitude analyses are conducted to capture various strategic features of this type of complicated dispute.
Liippo, Jussi; Lammintausta, Kaija
2009-03-01
Contact sensitization to local anaesthetics often results from topical medicaments. Occupational sensitization to topical anaesthetics may occur in certain occupations. The aim of the study was to analyse the occurrence of contact sensitization to topical anaesthetics in general dermatology patients. Patch testing with topical anaesthetics was carried out in 620 patients. Possible sources of sensitization and the clinical histories of the patients were analysed. Positive patch test reactions to one or more topical anaesthetics were seen in 25/620 patients. Dibucaine reactions were most common (20/25), and lidocaine sensitization was seen in two patients. Six patients had concurrent reactions to ester-type and/or amide-type anaesthetics. Local preparations for perianal conditions were the most common sensitizers. One patient had developed occupational sensitization to procaine, with multiple cross-reactions and with concurrent penicillin sensitization from procaine penicillin. Dibucaine-containing perianal medicaments are the major source of contact sensitization to topical anaesthetics. Although concurrent sensitization to multiple anaesthetics can be seen, cross-reactions are also possible. Contact sensitization to lidocaine is not common, and possible cross-reactions should be considered when reactions to lidocaine are seen. Occupational procaine sensitization from veterinary medicaments is a risk among animal workers.
Nolan, John P.; Mandy, Francis
2008-01-01
While the term flow cytometry refers to the measurement of cells, the approach of making sensitive multiparameter optical measurements in a flowing sample stream is a very general analytical approach. The past few years have seen an explosion in the application of flow cytometry technology for molecular analysis and measurements using micro-particles as solid supports. While microsphere-based molecular analyses using flow cytometry date back three decades, the need for highly parallel quantitative molecular measurements that has arisen from various genomic and proteomic advances has driven the development of particle encoding technology to enable highly multiplexed assays. Multiplexed particle-based immunoassays are now commonplace, and new assays to study genes, protein function, and molecular assembly are being developed. Numerous efforts are underway to extend the multiplexing capabilities of microparticle-based assays through new approaches to particle encoding and analyte reporting. The impact of these developments will be seen in the basic research and clinical laboratories, as well as in drug development. PMID:16604537
Lee, Min-Young; Park, Sun-Kyeong; Park, Sun-Young; Byun, Ji-Hye; Lee, Sang-Min; Ko, Su-Kyoung; Lee, Eui-Kyung
2015-08-01
This study evaluated the cost-effectiveness of introducing tofacitinib, an oral Janus kinase inhibitor, to the treatment of Korean patients with rheumatoid arthritis (RA) and an inadequate response to conventional disease-modifying antirheumatic drugs. In this cost-utility analysis model, patients transitioned through treatment sequences based on Korean guidelines for RA patients with inadequate response to conventional disease-modifying antirheumatic drugs. Lifetime health-related quality of life and costs were evaluated. Characteristics of the model cohort were based on those reported by the Oral Rheumatoid Arthritis phase 3 triaL (ORAL) Standard randomized Controlled trial of tofacitinib or adalimumab versus placebo. Efficacy was assessed using American College of Rheumatology response rates, converted to the changes in Health Assessment Questionnaire-Disability Index (HAQ-DI) scores, based on tofacitinib clinical trials data. Published clinical trial data on discontinuation rates of the indicated drugs were incorporated in the model. The HAQ-DI scores were mapped onto utility values to calculate outcomes in terms of quality-adjusted life-years (QALYs); HAQ-DI-to-utility (EuroQoL 5D) mapping was based on data from 5 tofacitinib clinical trials. Costs were analyzed from a societal perspective, with values expressed in 2013 Korean won (KRW). Cost-effectiveness is presented in terms of incremental cost-effectiveness ratios (ICERs). One-way sensitivity analyses were performed to assess the robustness of the model. First-line tofacitinib used before the standard of care (base-case analysis) increased both treatment costs and QALYs gained versus the standard-of-care treatment sequence, resulting in an ICER of KRW 13,228,910 per QALY. Tofacitinib also increased costs and QALYs gained when incorporated as a second-, third-, or fourth-line therapy. The inclusion of first-line tofacitinib increased the duration of active immunomodulatory therapy from 9.4 to 13.2 years. Tofacitinib-associated increases in costs were attributable to the increased lifetime drug costs. In sensitivity analyses, variations in input parameters and assumptions yielded ICERs in the range of KRW 6,995,719 per QALY to KRW 37,450,109 per QALY. From a societal perspective, the inclusion of tofacitinib as a treatment strategy for moderate to severe RA is cost-effective; this conclusion was considered robust based on multiple sensitivity analyses. The study was limited by the lack of clinical data on follow-up therapy after tofacitinib administration and a lack of long-term data on discontinuation of drug use. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
A nanocluster-based fluorescent sensor for sensitive hemoglobin detection.
Yang, Dongqin; Meng, Huijie; Tu, Yifeng; Yan, Jilin
2017-08-01
In this report, a fluorescence sensor for sensitive detection of hemoglobin was developed. Gold nanoclusters were first synthesized with bovine serum albumin. It was found that both hydrogen peroxide and hemoglobin could weakly quench the fluorescence of the gold nanoclusters, but when the two were applied to the nanoclusters simultaneously, much stronger quenching resulted. This enhancing effect was shown to arise from the catalytic generation of hydroxyl radicals by hemoglobin. Under optimized conditions, the quenching was linearly related to the concentration of hemoglobin in the range of 1-250 nM, and a limit of detection as low as 0.36 nM could be obtained. This provides a sensitive means for the quantification of hemoglobin (Hb). The sensor was then successfully applied to blood analyses with simple sample pretreatment. Copyright © 2017 Elsevier B.V. All rights reserved.
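The reported linear range and sub-nanomolar detection limit follow from a standard calibration-curve treatment. The sketch below fits a line to synthetic quenching data and applies the common 3σ/slope LOD convention; the paper's exact method may differ.

```python
import numpy as np

# Illustrative calibration: fluorescence quenching vs hemoglobin concentration
# (synthetic points; the paper reports linearity over 1-250 nM).
conc = np.array([1, 10, 50, 100, 150, 200, 250], float)                # nM
quench = 0.002 * conc + 0.01 + np.random.default_rng(6).normal(0, 0.005, 7)

slope, intercept = np.polyfit(conc, quench, 1)   # linear least-squares fit
resid = quench - (slope * conc + intercept)
sigma = resid.std(ddof=2)                        # residual standard deviation

# A common LOD convention: 3 * sigma / slope (an assumption, not the paper's method).
lod = 3 * sigma / slope
print(f"slope={slope:.4f} per nM, LOD ~ {lod:.2f} nM")
```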
Hsu, Chung-Jen; Jones, Elizabeth G
2017-02-01
This paper performs sensitivity analyses of stopping distance for connected vehicles (CVs) at active highway-rail grade crossings (HRGCs). Stopping distance is the major safety factor at active HRGCs. A sensitivity analysis is performed for each variable in the function of stopping distance. The formulation of stopping distance treats each variable as a probability density function for implementing Monte Carlo simulations. The result of the sensitivity analysis shows that the initial speed is the most sensitive factor to stopping distances of CVs and non-CVs. The safety of CVs can be further improved by the early provision of onboard train information and warnings to reduce the initial speeds. Copyright © 2016 Elsevier Ltd. All rights reserved.
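The Monte Carlo treatment described above amounts to propagating input distributions through the stopping-distance formula d = v0·t_r + v0²/(2a). The sketch below does this with assumed distributions (invented means and spreads, not the paper's calibrated values) and ranks inputs by their correlation with the output as a crude sensitivity measure.

```python
import numpy as np

def stopping_distance(v0, t_react, decel):
    """Distance covered during perception-reaction plus braking: v0*t + v0^2/(2a)."""
    return v0 * t_react + v0**2 / (2.0 * decel)

rng = np.random.default_rng(7)
n = 100_000
v0 = rng.normal(25.0, 3.0, n)       # initial speed, m/s (illustrative distribution)
t_react = rng.normal(1.5, 0.3, n)   # perception-reaction time, s
decel = rng.normal(4.5, 0.5, n)     # braking deceleration, m/s^2

d = stopping_distance(v0, t_react, decel)
# Crude sensitivity ranking: correlation of each input with the output.
for name, x in (("initial speed", v0), ("reaction time", t_react), ("deceleration", decel)):
    print(f"{name}: r = {np.corrcoef(x, d)[0, 1]:+.2f}")
```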
Balancing data sharing requirements for analyses with data sensitivity
Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.
2007-01-01
Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions gained through data sharing is critical for the creation of watch lists and an early warning/rapid response system, and for generating models of the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations remain available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
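The fuzzing mechanism can be as simple as snapping a point to the center of its grid cell. In the sketch below a quarter-quadrangle is approximated as a 3.75-arc-minute cell, which is an assumption about the system's tiling rather than its documented implementation.

```python
import math

def fuzz_to_grid(lat, lon, cell_deg=3.75 / 60.0):
    """Snap a sensitive occurrence record to the center of its grid cell.
    A 3.75-arc-minute cell approximates a quarter-quadrangle (an assumption,
    not the system's documented tiling)."""
    fuzzed_lat = (math.floor(lat / cell_deg) + 0.5) * cell_deg
    fuzzed_lon = (math.floor(lon / cell_deg) + 0.5) * cell_deg
    return fuzzed_lat, fuzzed_lon

# The precise coordinates stay in the database for analyses; only the fuzzed
# point appears in maps and downloads.
print(fuzz_to_grid(40.5861, -105.0844))
```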
Economic evaluation of pneumococcal conjugate vaccination in The Gambia.
Kim, Sun-Young; Lee, Gene; Goldie, Sue J
2010-09-03
The Gambia is the second GAVI support-eligible country to introduce the 7-valent pneumococcal conjugate vaccine (PCV7), but a country-specific cost-effectiveness analysis of the vaccine is not available. Our objective was to assess the potential impact of PCVs of different valences in The Gambia. We synthesized the best available epidemiological and cost data using a state-transition model to simulate the natural histories of various pneumococcal diseases. For the base case, we estimated the incremental cost (in 2005 US dollars) per disability-adjusted life year (DALY) averted under routine vaccination using PCV9 compared to no vaccination. We extended the base-case results for PCV9 to estimate the cost-effectiveness of PCV7, PCV10, and PCV13, each compared to no vaccination. To explore parameter uncertainty, we performed both deterministic and probabilistic sensitivity analyses. We also explored the impact of vaccine efficacy waning, herd immunity, and serotype replacement, as part of the uncertainty analyses, by assuming alternative scenarios and extrapolating empirical results from different settings. Assuming 90% coverage, a program using a 9-valent PCV (PCV9) would prevent approximately 630 hospitalizations, 40 deaths, and 1000 DALYs over the first 5 years of life of a birth cohort. Under base-case assumptions ($3.5 per vaccine), compared to no intervention, a PCV9 vaccination program would cost $670 per DALY averted in The Gambia. The corresponding values for PCV7, PCV10, and PCV13 were $910, $670, and $570 per DALY averted, respectively. Sensitivity analyses that explored the implications of the uncertain key parameters showed that model outcomes were most sensitive to vaccine price per dose, discount rate, case-fatality rate of primary endpoint pneumonia, and vaccine efficacy against primary endpoint pneumonia. Based on the information available now, infant PCV vaccination would be expected to reduce pneumococcal disease caused by S. pneumoniae in The Gambia. Assuming a cost-effectiveness threshold of three times GDP per capita, all PCVs examined would be cost-effective at the tentative Advance Market Commitment (AMC) price of $3.5 per dose. Because the cost-effectiveness of a PCV program could be affected by potential serotype replacement or herd immunity effects that may not be known until after a large-scale introduction, type-specific surveillance and iterative evaluation will be critical.
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
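The uncertainty-analysis step described above, quasi-random sampling of the input space followed by inspection of the outcome distribution including its tails, can be sketched as follows. The three-input response surface is a toy stand-in for an ABM run, and the parameter names are invented.

```python
import numpy as np
from scipy.stats import qmc

def abm_proxy(x):
    """Stand-in for one ABM run: maps three inputs on [0, 1] to a
    conserved-farmland outcome (toy response surface, not the authors' model)."""
    budget, zoning, outreach = x
    return 100 * budget * (1 + 0.5 * zoning) + 20 * outreach**2

# Quasi-random (Sobol' sequence) sampling of the unit-cube input space.
sampler = qmc.Sobol(d=3, scramble=True, seed=8)
X = sampler.random_base2(m=10)              # 2^10 = 1024 input vectors
Y = np.array([abm_proxy(x) for x in X])

# Uncertainty analysis: expected outcome, spread, and extreme (tail) results.
print(f"mean={Y.mean():.1f}, sd={Y.std():.1f}, "
      f"5th/95th pct = {np.percentile(Y, 5):.1f}/{np.percentile(Y, 95):.1f}")
```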
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the influence of model parameters and inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how well the methods cohere in ranking these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, it is found that the FAST method is sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
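A minimal sketch of the kind of Sobol'/FAST cross-check described above, again using SALib; the toy function and parameter names are placeholders, not actual SAC-SMA parameters.

```python
import numpy as np
from SALib.sample import fast_sampler, saltelli
from SALib.analyze import fast, sobol

# Hypothetical parameter names and bounds, standing in for SAC-SMA inputs.
problem = {"num_vars": 3,
           "names": ["capacity", "melt_rate", "init_storage"],
           "bounds": [[100, 400], [0.1, 5.0], [0, 50]]}

def toy_runoff(x):  # cheap stand-in for one hydrologic model run
    cap, melt, s0 = x
    return melt * np.sqrt(cap) + 0.1 * s0 + 0.05 * melt * s0

Xf = fast_sampler.sample(problem, 1000)
S_fast = fast.analyze(problem, np.apply_along_axis(toy_runoff, 1, Xf))

Xs = saltelli.sample(problem, 1024)
S_sob = sobol.analyze(problem, np.apply_along_axis(toy_runoff, 1, Xs))

# Agreement check: do both methods rank the same inputs as most influential?
for n, f1, s1 in zip(problem["names"], S_fast["S1"], S_sob["S1"]):
    print(f"{n:12s} FAST S1={f1:.2f}  Sobol' S1={s1:.2f}")
```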
Childhood Leukemia and 50 Hz Magnetic Fields: Findings from the Italian SETIL Case-Control Study
Salvan, Alberto; Ranucci, Alessandra; Lagorio, Susanna; Magnani, Corrado
2015-01-01
We report on an Italian case-control study on childhood leukemia and exposure to extremely low frequency magnetic fields (ELF-MF). Eligible for inclusion were 745 leukemia cases, aged 0–10 years at diagnosis in 1998–2001, and 1475 sex- and age-matched population controls. Parents of 683 cases and 1044 controls (92% vs. 71%) were interviewed. ELF-MF measurements (24–48 h), in the child’s bedroom of the dwelling inhabited one year before diagnosis, were available for 412 cases and 587 controls included in the main conditional regression analyses. The magnetic field induction was 0.04 μT on average (geometric mean), with 0.6% of cases and 1.6% of controls exposed to >0.3 μT. The impact of changes in the statistical model, exposure metric, and data-set restriction criteria was explored via sensitivity analyses. No exposure-disease association was observed in analyses based on continuous exposure, while analyses based on categorical variables were characterized by incoherent exposure-outcome relationships. In conclusion, our results may be affected by several sources of bias and they are noninformative at exposure levels >0.3 μT. Nonetheless, the study may contribute to future meta- or pooled analyses. Furthermore, exposure levels among population controls are useful to estimate attributable risk. PMID:25689995
Uncertainty Quantification and Sensitivity Analysis in the CICE v5.1 Sea Ice Model
NASA Astrophysics Data System (ADS)
Urrego-Blanco, J. R.; Urban, N. M.
2015-12-01
Changes in the high latitude climate system have the potential to affect global climate through feedbacks with the atmosphere and connections with mid latitudes. Sea ice and climate models used to understand these changes have uncertainties that need to be characterized and quantified. In this work we characterize parametric uncertainty in the Los Alamos Sea Ice model (CICE) and quantify the sensitivity of sea ice area, extent and volume with respect to uncertainty in about 40 individual model parameters. Unlike common sensitivity analyses conducted in previous studies where parameters are varied one at a time, this study uses a global variance-based approach in which Sobol sequences are used to efficiently sample the full 40-dimensional parameter space. This approach requires a very large number of model evaluations, which are expensive to run. A more computationally efficient approach is implemented by training and cross-validating a surrogate (emulator) of the sea ice model with model output from 400 model runs. The emulator is used to make predictions of sea ice extent, area, and volume at several model configurations, which are then used to compute the Sobol sensitivity indices of the 40 parameters. A ranking based on the sensitivity indices indicates that model output is most sensitive to snow parameters such as conductivity and grain size, and the drainage of melt ponds. The main effects and interactions among the most influential parameters are also estimated by a non-parametric regression technique based on generalized additive models. It is recommended that research be prioritized toward determining the values of these most influential parameters more accurately, through observational studies or by improving existing parameterizations in the sea ice model.
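The emulator-based workflow can be sketched as follows, assuming a Gaussian-process surrogate from scikit-learn and SALib for the Sobol indices; the two parameters and the fake "sea ice extent" response are illustrative only (the actual study spans roughly 40 parameters and 400 CICE runs).

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern
from SALib.sample import saltelli
from SALib.analyze import sobol

rng = np.random.default_rng(0)

# Stand-in for 400 expensive model runs, over 2 hypothetical parameters.
problem = {"num_vars": 2,
           "names": ["snow_conductivity", "pond_drainage"],
           "bounds": [[0.1, 0.5], [0.0, 1.0]]}
X_train = rng.uniform([0.1, 0.0], [0.5, 1.0], size=(400, 2))
y_train = 12 - 8 * X_train[:, 0] + 3 * X_train[:, 1] ** 2  # fake "extent"

# Train the cheap surrogate (emulator) on the expensive runs.
emu = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
emu.fit(X_train, y_train)

# Sobol indices are then computed on the emulator, not the full model.
X = saltelli.sample(problem, 1024)
Si = sobol.analyze(problem, emu.predict(X))
print(dict(zip(problem["names"], Si["ST"])))
```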
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a fruitful method for studying the relative importance of predictor variables and (3) the relationships among the variables involved in the development of burnout and its consequences are non-linear to differing degrees. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
Hospital Standardized Mortality Ratios: Sensitivity Analyses on the Impact of Coding
Bottle, Alex; Jarman, Brian; Aylin, Paul
2011-01-01
Introduction Hospital standardized mortality ratios (HSMRs) are derived from administrative databases and cover 80 percent of in-hospital deaths with adjustment for available case mix variables. They have been criticized for being sensitive to issues such as clinical coding but on the basis of limited quantitative evidence. Methods In a set of sensitivity analyses, we compared regular HSMRs with HSMRs resulting from a variety of changes, such as a patient-based measure, not adjusting for comorbidity, not adjusting for palliative care, excluding unplanned zero-day stays ending in live discharge, and using more or fewer diagnoses. Results Overall, regular and variant HSMRs were highly correlated (ρ > 0.8), but differences of up to 10 points were common. Two hospitals were particularly affected when palliative care was excluded from the risk models. Excluding unplanned stays ending in same-day live discharge had the least impact despite their high frequency. The largest impacts were seen when capturing postdischarge deaths and using just five high-mortality diagnosis groups. Conclusions HSMRs in most hospitals changed by only small amounts from the various adjustment methods tried here, though small-to-medium changes were not uncommon. However, the position relative to funnel plot control limits could move in a significant minority even with modest changes in the HSMR. PMID:21790587
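For readers unfamiliar with the metric, here is a minimal sketch of the HSMR computation and the funnel-plot control limits mentioned in the conclusions, under a standard Poisson approximation; the counts are hypothetical.

```python
# Hypothetical figures; the ratio and limit formulas follow standard
# HSMR practice (Poisson approximation for the control limits).
import math

def hsmr(observed_deaths, expected_deaths):
    """Hospital standardized mortality ratio, scaled so 100 = as expected."""
    return 100 * observed_deaths / expected_deaths

def funnel_limits(expected_deaths, z=1.96):
    """Approximate funnel-plot control limits around 100."""
    half_width = z * 100 / math.sqrt(expected_deaths)
    return 100 - half_width, 100 + half_width

obs, exp = 530, 480.2   # deaths observed vs. case-mix-adjusted expectation
lo, hi = funnel_limits(exp)
print(f"HSMR = {hsmr(obs, exp):.1f}, 95% limits = ({lo:.1f}, {hi:.1f})")
```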
NASA Technical Reports Server (NTRS)
Adams, J. J.
1980-01-01
A study of the use of conventional general aviation instruments by general aviation pilots in a six-degree-of-freedom, fixed-base simulator was conducted. The tasks performed were tracking a VOR radial and making an ILS approach to landing. A special feature of the tests was that the sensitivity of the displacement-indicating instruments (the RMI, CDI, and HSI) was kept constant at values corresponding to 5 n. mi. and 1.25 n. mi. from the station. Both statistical and pilot-model analyses of the data were made. The results show that performance in path following improved with increases in display sensitivity up to the highest sensitivity tested. At this maximum test sensitivity, which corresponds to the sensitivity existing at 1.25 n. mi. for the ILS glide slope transmitter, tracking accuracy was no better than it was at 5 n. mi. from the station, and the pilot-aircraft system exhibited a marked reduction in damping. In some cases, a pilot-induced, long-period unstable oscillation occurred.
Kidd, Chloe; Loxton, Natalie J
2018-05-01
The current study aimed to identify how underlying individual differences increase vulnerability to television food advertising. In particular, this study examined how reward sensitivity, a biologically based predisposition to approach rewards (such as appetitive foods) in the environment, influenced participants' vulnerability to television food advertising and subsequent food consumption. Ninety-eight participants were randomly assigned to a cue condition (food cues versus non-food cues) and then viewed a 30 min documentary interrupted by advertising featuring a mix of food and neutral advertising (food cue condition) or only neutral advertising (non-food cue condition). Participants' reward sensitivity, approach motivation measured as urge to eat, and food consumption were recorded. Moderated mediation regression analyses revealed that the positive association between reward sensitivity and food consumption was mediated by an increase in urge to eat, but only when participants were exposed to food advertising. These findings suggest heightened reward sensitivity, exposure to appetitive food cues, and approach motivation are key interacting mechanisms that may lead to maladaptive eating behaviours.
Heidt, Sebastiaan; Haasnoot, Geert W; Claas, Frans H J
2018-05-24
Highly sensitized patients awaiting a renal transplant have a low chance of receiving an organ offer. Defining acceptable antigens and using this information for allocation purposes can vastly enhance transplantation of this subgroup of patients, which is the essence of the Eurotransplant Acceptable Mismatch program. Acceptable antigens can be determined by extensive laboratory testing, as well as on the basis of human leukocyte antigen (HLA) epitope analyses. Within the Acceptable Mismatch program, there is no effect of HLA mismatches on long-term graft survival. Furthermore, patients transplanted through the Acceptable Mismatch program have similar long-term graft survival to nonsensitized patients transplanted through regular allocation. Although HLA epitope analysis is already being used for defining acceptable HLA antigens for highly sensitized patients in the Acceptable Mismatch program, increasing knowledge of HLA antibody-epitope interactions will pave the way toward the definition of acceptable epitopes for highly sensitized patients in the future. Allocation based on acceptable antigens can facilitate transplantation of highly sensitized patients with excellent long-term graft survival.
NASA Astrophysics Data System (ADS)
Zboril, Ondrej; Nedoma, Jan; Cubik, Jakub; Novak, Martin; Bednarek, Lukas; Fajkus, Marcel; Vasinek, Vladimir
2016-04-01
Interferometric sensors are very accurate and sensitive sensors whose extreme sensitivity allows the sensing of vibration and acoustic signals. This paper describes a new method of implementing a Mach-Zehnder interferometer for sensing vibrations caused by touching window panes. The window panes are part of plastic windows: the reference arm of the interferometer is mounted and isolated inside the frame, while the measuring arm is fixed to the window pane and mounted under the cover of the window frame. This keeps the optical fiber out of sight, and the arrangement forms the basis for a security system. The vibration sensor is constructed from standard telecommunication network elements: optical fiber according to G.652D and 1x2 splitters with a 1:1 dividing ratio. The interferometer operated at a wavelength of 1550 nm. The paper analyses the sensitivity of the window over a 12x12 matrix of measuring points and maps the sensitivity distribution of the window pane.
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2015-02-25
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated whether BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index on 2 October 2014. We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. Based on the combined Cochrane Hepato-Biliary Group score, we classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56, 95% CI 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters.
We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
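The random-effects pooling used throughout this review can be sketched with the DerSimonian-Laird estimator; the per-trial log risk ratios and variances below are invented, not the review's data.

```python
import numpy as np

def dersimonian_laird(log_rr, var):
    """Random-effects pooled risk ratio via the DerSimonian-Laird estimator."""
    w = 1.0 / var                                   # inverse-variance weights
    fixed = np.sum(w * log_rr) / np.sum(w)          # fixed-effect estimate
    q = np.sum(w * (log_rr - fixed) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)    # between-trial variance
    w_star = 1.0 / (var + tau2)                     # random-effects weights
    pooled = np.sum(w_star * log_rr) / np.sum(w_star)
    se = 1.0 / np.sqrt(np.sum(w_star))
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

# Hypothetical per-trial log risk ratios and variances.
log_rr = np.log(np.array([0.70, 0.85, 0.60, 0.95, 0.78]))
var = np.array([0.04, 0.02, 0.09, 0.03, 0.05])
print("pooled RR (95%% CI): %.2f (%.2f-%.2f)" % dersimonian_laird(log_rr, var))
```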
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
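A minimal sketch of the first-order moment-matching idea, with a toy function standing in for the CFD output and a Monte Carlo check mirroring the paper's validation step; the second-order correction used in the paper is omitted here.

```python
import numpy as np

def f(x):  # toy stand-in for the CFD output of interest
    return x[0] ** 2 + 3.0 * np.sin(x[1])

def moment_propagation(f, mu, sigma, h=1e-5):
    """First-order moment matching: mean and variance of f from sensitivities."""
    mu = np.asarray(mu, dtype=float)
    grad = np.array([(f(mu + h * e) - f(mu - h * e)) / (2 * h)
                     for e in np.eye(len(mu))])      # central finite differences
    mean_f = f(mu)                                   # first-order mean estimate
    var_f = np.sum((grad * np.asarray(sigma)) ** 2)  # independent inputs assumed
    return mean_f, var_f

mu, sigma = [1.0, 0.5], [0.1, 0.05]
m, v = moment_propagation(f, mu, sigma)

# Monte Carlo comparison, as in the paper's validation step.
rng = np.random.default_rng(1)
samples = rng.normal(mu, sigma, size=(100_000, 2))
mc = np.array([f(s) for s in samples])
print(m, v, "vs MC:", mc.mean(), mc.var())
```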
Hanisch, Karen; Küster, Eberhard; Altenburger, Rolf; Gündel, Ulrike
2010-01-01
Studies using embryos of the zebrafish Danio rerio (DarT) instead of adult fish for characterising the (eco-) toxic potential of chemicals have been proposed as animal replacing methods. Effect analysis at the molecular level might enhance sensitivity, specificity, and predictive value of the embryonal studies. The present paper aimed to test the potential of toxicoproteomics with zebrafish eleutheroembryos for sensitive and specific toxicity assessment. 2-DE-based toxicoproteomics was performed applying low-dose (EC10) exposure for 48 h with three-model substances Rotenone, 4,6-dinitro-o-cresol (DNOC) and Diclofenac. By multivariate “pattern-only” PCA and univariate statistical analyses, alterations in the embryonal proteome were detectable in nonetheless visibly intact organisms and treatment with the three substances was distinguishable at the molecular level. Toxicoproteomics enabled the enhancement of sensitivity and specificity of the embryonal toxicity assay and bear the potency to identify protein markers serving as general stress markers and early diagnosis of toxic stress. PMID:22084678
A Novel Membrane-Based Anti-Diabetic Action of Atorvastatin
Horvath, Emily M.; Tackett, Lixuan; Elmendorf, Jeffrey S.
2008-01-01
We recently found that chromium picolinate (CrPic), a nutritional supplement thought to improve insulin sensitivity in individuals with impaired glucose tolerance, enhances insulin action by lowering plasma membrane (PM) cholesterol. Recent in vivo studies suggest that cholesterol-lowering statin drugs benefit insulin sensitivity in insulin-resistant patients, yet a mechanism is unknown. We report here that atorvastatin (ATV) diminished PM cholesterol by 22% (P<0.05) in 3T3-L1 adipocytes. As documented for CrPic, this small reduction in PM cholesterol enhanced insulin action. Replenishment of cholesterol mitigated the positive effects of ATV on insulin sensitivity. Co-treatment with CrPic and ATV did not amplify the extent of PM cholesterol loss or insulin sensitivity gain. In addition, analyses of insulin signal transduction suggest a non-signaling basis of both therapies. Our data reveal an unappreciated beneficial non-hepatic effect of statin action and highlight a novel mechanistic similarity between two recently recognized therapies of impaired glucose tolerance. PMID:18514061
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chęcińska, Agata; Heaney, Libby; Pollock, Felix A.
Motivated by a proposed olfactory mechanism based on a vibrationally activated molecular switch, we study electron transport within a donor-acceptor pair that is coupled to a vibrational mode and embedded in a surrounding environment. We derive a polaron master equation with which we study the dynamics of both the electronic and vibrational degrees of freedom beyond previously employed semiclassical (Marcus-Jortner) rate analyses. We show (i) that in the absence of explicit dissipation of the vibrational mode, the semiclassical approach is generally unable to capture the dynamics predicted by our master equation, due to both its assumption of one-way (exponential) electron transfer from donor to acceptor and its neglect of the spectral details of the environment; (ii) that by additionally allowing strong dissipation to act on the odorant vibrational mode, we can recover exponential electron transfer, though typically at a rate that differs from that given by the Marcus-Jortner expression; (iii) that the ability of the molecular switch to discriminate between the presence and absence of the odorant, and its sensitivity to the odorant vibrational frequency, is enhanced significantly in this strong dissipation regime, when compared to the case without mode dissipation; and (iv) that details of the environment absent from previous Marcus-Jortner analyses can also dramatically alter the sensitivity of the molecular switch, in particular, allowing its frequency resolution to be improved. Our results thus demonstrate the constructive role dissipation can play in facilitating sensitive and selective operation in molecular switch devices, as well as the inadequacy of semiclassical rate equations in analysing such behaviour over a wide range of parameters.
Effectiveness of regional DTI measures in distinguishing Alzheimer's disease, MCI, and normal aging
Nir, Talia M.; Jahanshad, Neda; Villalon-Reina, Julio E.; Toga, Arthur W.; Jack, Clifford R.; Weiner, Michael W.; Thompson, Paul M.
2013-01-01
The Alzheimer's Disease Neuroimaging Initiative (ADNI) recently added diffusion tensor imaging (DTI), among several other new imaging modalities, in an effort to identify sensitive biomarkers of Alzheimer's disease (AD). While anatomical MRI is the main structural neuroimaging method used in most AD studies and clinical trials, DTI is sensitive to microscopic white matter (WM) changes not detectable with standard MRI, offering additional markers of neurodegeneration. Prior DTI studies of AD report lower fractional anisotropy (FA), and increased mean, axial, and radial diffusivity (MD, AxD, RD) throughout WM. Here we assessed which DTI measures may best identify differences among AD, mild cognitive impairment (MCI), and cognitively healthy elderly control (NC) groups, in region of interest (ROI) and voxel-based analyses (VBA) of 155 ADNI participants (mean age: 73.5 ± 7.4; 90 M/65 F; 44 NC, 88 MCI, 23 AD). Both VBA and ROI analyses revealed widespread group differences in FA and all diffusivity measures. DTI maps were strongly correlated with widely used clinical ratings (MMSE, CDR-sob, and ADAS-cog). When effect sizes were ranked, FA analyses were least sensitive for picking up group differences. Diffusivity measures could detect more subtle MCI differences, where FA could not. ROIs showing strongest group differentiation (lowest p-values) included tracts that pass through the temporal lobe, and posterior brain regions. The left hippocampal component of the cingulum showed consistently high effect sizes for distinguishing groups, across all diffusivity and anisotropy measures, and in correlations with cognitive scores. PMID:24179862
Herpes zoster vaccine: A health economic evaluation for Switzerland.
Blank, Patricia R; Ademi, Zanfina; Lu, Xiaoyan; Szucs, Thomas D; Schwenkglenks, Matthias
2017-07-03
Herpes zoster (HZ) or "shingles" results from a reactivation of the varicella zoster virus (VZV) acquired during primary infection (chickenpox) and surviving in the dorsal root ganglia. In about 20% of cases, a complication occurs, known as post-herpetic neuralgia (PHN). A live attenuated vaccine against VZV is available for the prevention of HZ and subsequent PHN. The present study aims to update an earlier evaluation estimating the cost-effectiveness of the HZ vaccine from a Swiss third party payer perspective. It takes into account updated vaccine prices, a different age cohort, latest clinical data and burden of illness data. A Markov model was developed to simulate the lifetime consequences of vaccinating 15% of the Swiss population aged 65-79 y. Information from sentinel data, official statistics and published literature were used. Endpoints assessed were number of HZ and PHN cases, quality-adjusted life years (QALYs), costs of hospitalizations, consultations and prescriptions. Based on a vaccine price of CHF 162, the vaccination strategy accrued additional costs of CHF 17,720,087 and gained 594 QALYs. The incremental cost-effectiveness ratio (ICER) was CHF 29,814 per QALY gained. Sensitivity analyses showed that the results were most sensitive to epidemiological inputs, utility values, discount rates, duration of vaccine efficacy, and vaccine price. Probabilistic sensitivity analyses indicated a more than 99% chance that the ICER was below 40,000 CHF per QALY. Findings were in line with existing cost-effectiveness analyses of HZ vaccination. This updated study supports the value of an HZ vaccination strategy targeting the Swiss population aged 65-79 y.
Missing data handling in non-inferiority and equivalence trials: A systematic review.
Rabe, Brooke A; Day, Simon; Fiero, Mallorie H; Bell, Melanie L
2018-05-25
Non-inferiority (NI) and equivalence clinical trials test whether a new treatment is therapeutically no worse than, or equivalent to, an existing standard of care. Missing data in clinical trials have been shown to reduce statistical power and potentially bias estimates of effect size; however, in NI and equivalence trials, they present additional issues. For instance, they may decrease sensitivity to differences between treatment groups and bias toward the alternative hypothesis of NI (or equivalence). Our primary aim was to review the extent of and methods for handling missing data (model-based methods, single imputation, multiple imputation, complete case), the analysis sets used (Intention-To-Treat, Per-Protocol, or both), and whether sensitivity analyses were used to explore departures from assumptions about the missing data. We conducted a systematic review of NI and equivalence trials published between May 2015 and April 2016 by searching the PubMed database. Articles were reviewed primarily by 2 reviewers, with 6 articles reviewed by both reviewers to establish consensus. Of 109 selected articles, 93% reported some missing data in the primary outcome. Among those, 50% reported complete case analysis, and 28% reported single imputation approaches for handling missing data. Only 32% reported conducting analyses of both intention-to-treat and per-protocol populations. Only 11% conducted any sensitivity analyses to test assumptions with respect to missing data. Missing data are common in NI and equivalence trials, and they are often handled by methods which may bias estimates and lead to incorrect conclusions.
Sensitivity and Specificity of Polysomnographic Criteria for Defining Insomnia
Edinger, Jack D.; Ulmer, Christi S.; Means, Melanie K.
2013-01-01
Study Objectives: In recent years, polysomnography-based eligibility criteria have been increasingly used to identify candidates for insomnia research, and this has been particularly true of studies evaluating pharmacologic therapy for primary insomnia. However, the sensitivity and specificity of PSG for identifying individuals with insomnia is unknown, and there is no consensus on the criteria sets which should be used for participant selection. In the current study, an archival data set was used to test the sensitivity and specificity of PSG measures for identifying individuals with primary insomnia in both home and lab settings. We then evaluated the sensitivity and specificity of the eligibility criteria employed in a number of recent insomnia trials for identifying primary insomnia sufferers in our sample. Design: Archival data analysis. Settings: Study participants' homes and a clinical sleep laboratory. Participants: Adults: 76 with primary insomnia and 78 non-complaining normal sleepers. Measurements and Results: ROC and cross-tabs analyses were used to evaluate the sensitivity and specificity of PSG-derived total sleep time, latency to persistent sleep, wake after sleep onset, and sleep efficiency for discriminating adults with primary insomnia from normal sleepers. None of the individual criteria accurately discriminated PI from normal sleepers, and none of the criteria sets used in recent trials demonstrated acceptable sensitivity and specificity for identifying primary insomnia. Conclusions: The use of quantitative PSG-based selection criteria in insomnia research may exclude many who meet current diagnostic criteria for an insomnia disorder. Citation: Edinger JD; Ulmer CS; Means MK. Sensitivity and specificity of polysomnographic criteria for defining insomnia. J Clin Sleep Med 2013;9(5):481-491. PMID:23674940
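The sensitivity/specificity evaluation of a PSG cutoff can be sketched directly; the WASO values and cutoffs below are hypothetical, not the study's criteria sets.

```python
import numpy as np

def sens_spec(values, labels, cutoff):
    """Sensitivity and specificity of a 'value >= cutoff flags insomnia' rule."""
    pred = values >= cutoff
    tp = np.sum(pred & (labels == 1)); fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0)); fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical wake-after-sleep-onset minutes: insomnia (1) vs. normal (0).
waso = np.array([75, 40, 95, 30, 60, 20, 15, 55, 10, 85])
grp  = np.array([ 1,  1,  1,  0,  1,  0,  0,  1,  0,  0])

for cut in (30, 45, 60):
    se, sp = sens_spec(waso, grp, cut)
    print(f"WASO >= {cut} min: sensitivity {se:.2f}, specificity {sp:.2f}")
```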
Kaufmann, Liane; Huber, Stefan; Mayer, Daniel; Moeller, Korbinian; Marksteiner, Josef
2018-04-01
Adverse effects of heavy drinking on cognition have frequently been reported. In the present study, we systematically examined for the first time whether clinical neuropsychological assessments may be sensitive to alcohol abuse in elderly patients with suspected minor neurocognitive disorder. A total of 144 elderly patients with and without alcohol abuse (each group n=72; mean age 66.7 years) were selected from a patient pool of n=738 by applying propensity score matching (a statistical method that matches participants in the experimental and control groups by balancing covariates, thereby reducing selection bias). Accordingly, study groups were almost perfectly matched regarding age, education, gender, and Mini Mental State Examination score. Neuropsychological performance was measured using the CERAD (Consortium to Establish a Registry for Alzheimer's Disease). Classification analyses (i.e., decision tree and boosted trees models) were conducted to examine whether CERAD variables or the total score contributed to group classification. Decision tree models disclosed that groups could be reliably classified based on the CERAD variables "Word List Discriminability" (tapping verbal recognition memory, 64% classification accuracy) and "Trail Making Test A" (measuring visuo-motor speed, 59% classification accuracy). Boosted tree analyses further indicated the sensitivity of "Word List Recall" (measuring free verbal recall) for discriminating elderly with versus without a history of alcohol abuse. This indicates that specific CERAD variables seem to be sensitive to alcohol-related cognitive dysfunctions in elderly patients with suspected minor neurocognitive disorder. (JINS, 2018, 24, 360-371).
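A minimal sketch of the classification step, assuming scikit-learn's decision tree on a synthetic feature table; the features stand in for CERAD subtest scores and are not the study's data.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 144
# Synthetic features, e.g. word-list discriminability, TMT-A, word-list recall.
X = rng.normal(size=(n, 3))
# Synthetic labels: 1 = history of alcohol abuse, driven mainly by feature 0.
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
acc = cross_val_score(tree, X, y, cv=5, scoring="accuracy")
print(f"cross-validated classification accuracy: {acc.mean():.2f}")
```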
Aeroelastic optimization methodology for viscous and turbulent flows
NASA Astrophysics Data System (ADS)
Barcelos Junior, Manuel Nascimento Dias
2007-12-01
In recent years, the development of faster computers and parallel processing has allowed the application of high-fidelity analysis methods to the aeroelastic design of aircraft. However, these methods are restricted to final design verification, mainly due to the computational cost involved in iterative design processes. Therefore, this work is concerned with the creation of a robust and efficient aeroelastic optimization methodology for inviscid, viscous and turbulent flows using high-fidelity analysis and sensitivity analysis techniques. Most research in aeroelastic optimization, for practical reasons, treats the aeroelastic system as a quasi-static inviscid problem. In this work, as a first step toward the creation of a more complete aeroelastic optimization methodology for realistic problems, an analytical sensitivity computation technique was developed and tested for quasi-static aeroelastic viscous and turbulent flow configurations. Viscous and turbulent effects are included by using an averaged discretization of the Navier-Stokes equations, coupled with an eddy viscosity turbulence model. For quasi-static aeroelastic problems, the traditional staggered solution strategy has unsatisfactory performance when applied to cases where there is strong fluid-structure coupling. Consequently, this work also proposes a solution methodology for aeroelastic and sensitivity analyses of quasi-static problems, which is based on the fixed point of an iterative nonlinear block Gauss-Seidel scheme. The methodology can also be interpreted as the solution of the Schur complement of the linearized systems of equations for the aeroelastic and sensitivity analyses. The methodologies developed in this work are tested and verified using realistic aeroelastic systems.
Benedetti, Maura; Lanzoni, Ilaria; Nardi, Alessandro; d'Errico, Giuseppe; Di Carlo, Marta; Fattorini, Daniele; Nigro, Marco; Regoli, Francesco
2016-10-01
High-latitude marine ecosystems are ranked among the regions most sensitive to climate change, since highly stenothermal and specially adapted organisms might be seriously affected by global warming and ocean acidification. The present investigation aimed to provide new insights into the sensitivity to such environmental stressors of the key Antarctic species Adamussium colbecki, focussing also on their synergistic effects with cadmium exposure, naturally abundant in this area owing to upwelling phenomena. Scallops were exposed for 2 weeks to various combinations of Cd (0 and 40 μg L-1), pH (8.05 and 7.60) and temperature (-1 and +1 °C). Besides Cd bioaccumulation, a wide panel of early warning biomarkers was analysed in digestive glands and gills, including levels of metallothioneins, individual antioxidants and total oxyradical scavenging capacity, onset of oxidative cell damage such as lipid peroxidation, lysosomal stability, DNA integrity and peroxisomal proliferation. Results indicated reciprocal interactions between multiple stressors, and their elaboration by a quantitative hazard model based on the relevance and magnitude of effects highlighted a different sensitivity of the analysed tissues. Owing to cellular adaptations to a high basal Cd content, the digestive gland appeared more tolerant toward other prooxidant stressors, but sensitive to variations of the metal. On the other hand, gills were more affected by various combinations of stressors occurring at higher temperature.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
NASA Astrophysics Data System (ADS)
Chen, Feifei; Jiang, Yi; Zhang, Liuchao; Jiang, Lan; Wang, Sumei
2018-04-01
A compact microhole-induced fiber optic inline Mach-Zehnder interferometer (MZI) is demonstrated for measurements of refractive index (RI) and magnetic field. Inline MZIs with different etched diameters, different interaction lengths and different sizes of microholes are fabricated and assessed. The optical transmission spectra of the inline MZIs immersed into a series of liquids are characterized and analysed. Experimental results show that a liquid RI sensitivity as high as 539.8436 nm RIU-1 is achieved in the RI range of 1.3352-1.4113 RIU, with good linearity (correlation coefficient >93%). An inline MZI is also fabricated as a magnetic field sensor by using magnetic fluid material. The experimental results show that this magnetic field sensor has a high sensitivity of -275.6 pm Oe-1. The inline MZI-based fiber optic sensors possess many advantages, such as small size, simple fabrication, high sensitivity and good linearity, and have wide application potential in chemical, biological and environmental sensing fields.
NASA Astrophysics Data System (ADS)
Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar
2018-07-01
This study discusses a solution methodology for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as distribution of the new products. This supply chain is presented as representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC with three objective functions: maximizing profit and minimizing the total risk and product shortages. Since three objective functions are considered, a multi-objective solution methodology is advantageous. Therefore, several approaches are studied: an NSGA-II algorithm is first utilized, and the results are then validated using MOSA and MOPSO algorithms. Priority-based encoding, which is used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated by four criteria: mean ideal distance, spread of non-dominated solutions, number of Pareto solutions, and CPU time. To enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses are performed on the tuned parameters and the computational results are presented.
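A hedged sketch of NSGA-II on a toy two-objective stand-in for the CLSC problem, using the pymoo library (our choice of implementation, not the authors'); the real model has three objectives and priority-based encoding.

```python
# Toy two-objective problem (profit vs. risk); everything here is invented.
# pymoo >= 0.5 import paths assumed.
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize

class ToyCLSC(ElementwiseProblem):
    def __init__(self):
        # x = shipped quantities on two illustrative routes
        super().__init__(n_var=2, n_obj=2, xl=0.0, xu=100.0)

    def _evaluate(self, x, out, *args, **kwargs):
        profit = 5 * x[0] + 3 * x[1] - 0.02 * (x[0] ** 2 + x[1] ** 2)
        risk = 0.1 * x[0] + 0.4 * x[1]
        out["F"] = [-profit, risk]     # pymoo minimizes, so negate profit

res = minimize(ToyCLSC(), NSGA2(pop_size=50), ("n_gen", 100),
               seed=1, verbose=False)
print(f"{len(res.F)} Pareto solutions; best profit = {-res.F[:, 0].min():.1f}")
```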
Liu, Chun; Bridges, Melissa E; Kaundun, Shiv S; Glasgow, Les; Owen, Micheal Dk; Neve, Paul
2017-02-01
Simulation models are useful tools for predicting and comparing the risk of herbicide resistance in weed populations under different management strategies. Most existing models assume a monogenic mechanism governing herbicide resistance evolution. However, growing evidence suggests that herbicide resistance is often inherited in a polygenic or quantitative fashion. Therefore, we constructed a generalised modelling framework to simulate the evolution of quantitative herbicide resistance in summer annual weeds. Real-field management parameters based on Amaranthus tuberculatus (Moq.) Sauer (syn. rudis) control with glyphosate and mesotrione in Midwestern US maize-soybean agroecosystems demonstrated that the model can represent evolved herbicide resistance in realistic timescales. Sensitivity analyses showed that genetic and management parameters had a strong impact on the rate of quantitative herbicide resistance evolution, whilst biological parameters such as emergence and seed bank mortality were less important. The simulation model provides a robust and widely applicable framework for predicting the evolution of quantitative herbicide resistance in summer annual weed populations. The sensitivity analyses identified weed characteristics that would favour herbicide resistance evolution, including high annual fecundity, large resistance phenotypic variance and pre-existing herbicide resistance. Implications for herbicide resistance management and potential use of the model are discussed.
Review of Statistical Methods for Analysing Healthcare Resources and Costs
Mihaylova, Borislava; Briggs, Andrew; O'Hagan, Anthony; Thompson, Simon G
2011-01-01
We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their ease for general use. We aim to provide guidance on analysing resource use and costs focusing on randomised trials, although methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work. PMID:20799344
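Category (III) above, a single-distribution GLM for right-skewed costs, can be sketched with statsmodels; the data, formula, and coefficients are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({"arm": rng.integers(0, 2, n),
                   "age": rng.normal(65, 10, n)})
# Simulated right-skewed costs with a multiplicative treatment effect.
mean_cost = np.exp(7 + 0.3 * df["arm"]).to_numpy()
df["cost"] = rng.gamma(shape=2.0, scale=mean_cost / 2.0)

# Gamma family with log link: effects are multiplicative on mean cost.
# (links.Log() is the current statsmodels spelling.)
model = smf.glm("cost ~ arm + age", data=df,
                family=sm.families.Gamma(link=sm.families.links.Log()))
res = model.fit()
print(np.exp(res.params["arm"]))   # estimated cost ratio for treatment arm
```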
Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M
2016-09-01
To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution.
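A rough sketch of the time-trend check: cumulative inverse-variance pooling by publication order followed by a weighted regression of the summary estimate on rank. The data are invented, and this fixed-effect shortcut stands in for the random-effects cumulative meta-analysis actually used.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study logit sensitivities, in publication-date order.
logit_sens = np.array([1.8, 1.5, 1.6, 1.2, 1.1, 1.0])
var = np.array([0.10, 0.08, 0.12, 0.05, 0.07, 0.06])

w = 1 / var
cum = np.array([np.sum(w[:k] * logit_sens[:k]) / np.sum(w[:k])
                for k in range(1, len(w) + 1)])   # cumulative summary estimate

# Weighted regression of the cumulative estimate on publication rank;
# cumulative weight sums approximate each estimate's precision.
rank = np.arange(1, len(cum) + 1)
fit = sm.WLS(cum, sm.add_constant(rank), weights=np.cumsum(w)).fit()
print(f"slope per added study: {fit.params[1]:+.3f}")
```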
Blanco, S; Bécares, E
2010-03-01
Biotic indices based on macro-invertebrates and diatoms are frequently used to diagnose ecological quality in watercourses, but few published works have assessed their effectiveness as biomonitors of the concentration of micropollutants. A biological survey was performed at 188 sites in the basin of the River Duero in north-western Spain. Nineteen diatom and six macro-invertebrate indices were calculated and compared with the concentrations of 37 different toxicants by means of correlation analysis. Several of the chemical variables analysed correlated significantly with at least one biotic index. Sládecek's diatom index and the number of macro-invertebrate families exhibited particularly high correlation coefficients. Methods based on macro-invertebrates performed better in detecting biocides, while diatom indices showed stronger correlations with potentially toxic elements such as heavy metals. All biotic indices, and particularly diatom indices, were especially sensitive to the concentration of fats and oils and trichloroethene.
Economic evaluation of algae biodiesel based on meta-analyses
NASA Astrophysics Data System (ADS)
Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.
2017-08-01
The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that the improvements in the yield, quality, and cost of algae feedstock could be the key factors to make algae-derived biodiesel economically viable.
Optimizing Chronic Disease Management Mega-Analysis
PATH-THETA Collaboration
2013-01-01
Background As Ontario’s population ages, chronic diseases are becoming increasingly common. There is growing interest in services and care models designed to optimize the management of chronic disease. Objective To evaluate the cost-effectiveness and expected budget impact of interventions in chronic disease cohorts evaluated as part of the Optimizing Chronic Disease Management mega-analysis. Data Sources Sector-specific costs, disease incidence, and mortality were calculated for each condition using administrative databases from the Institute for Clinical Evaluative Sciences. Intervention outcomes were based on literature identified in the evidence-based analyses. Quality-of-life and disease prevalence data were obtained from the literature. Methods Analyses were restricted to interventions that showed significant benefit for resource use or mortality from the evidence-based analyses. An Ontario cohort of patients with each chronic disease was constructed and followed over 5 years (2006–2011). A phase-based approach was used to estimate costs across all sectors of the health care system. Utility values identified in the literature and effect estimates for resource use and mortality obtained from the evidence-based analyses were applied to calculate incremental costs and quality-adjusted life-years (QALYs). Given uncertainty about how many patients would benefit from each intervention, a system-wide budget impact was not determined. Instead, the difference in lifetime cost between an individual-administered intervention and no intervention was presented. Results Of 70 potential cost-effectiveness analyses, 8 met our inclusion criteria. All were found to result in QALY gains and cost savings compared with usual care. The models were robust to the majority of sensitivity analyses undertaken, but due to structural limitations and time constraints, few sensitivity analyses were conducted. Incremental cost savings per patient who received intervention ranged from $15 per diabetic patient with specialized nursing to $10,665 per patient with congestive heart failure receiving in-home care. Limitations Evidence used to inform estimates of effect was often limited to a single trial with limited generalizability across populations, interventions, and health care systems. Because of the low clinical fidelity of health administrative data sets, intermediate clinical outcomes could not be included. Cohort costs included an average of all health care costs and were not restricted to costs associated with the disease. Intervention costs were based on resource use specified in clinical trials. Conclusions Applying estimates of effect from the evidence-based analyses to real-world resource use resulted in cost savings for all interventions. On the basis of quality-of-life data identified in the literature, all interventions were found to result in a greater QALY gain than usual care would. Implementation of all interventions could offer significant cost reductions. However, this analysis was subject to important limitations. Plain Language Summary Chronic diseases are the leading cause of death and disability in Ontario. They account for a third of direct health care costs across the province. This study aims to evaluate the cost-effectiveness of health care interventions that might improve the management of chronic diseases. The evaluated interventions led to lower costs and better quality of life than usual care. Offering these options could reduce costs per patient.
However, the studies used in this analysis were of medium to very low quality, and the methods had many limitations. PMID:24228076
Systematic review with meta-analysis: the effects of rifaximin in hepatic encephalopathy.
Kimer, N; Krag, A; Møller, S; Bendtsen, F; Gluud, L L
2014-07-01
Rifaximin is recommended for prevention of hepatic encephalopathy (HE). The effects of rifaximin on overt and minimal HE are debated. To perform a systematic review and meta-analysis of randomised controlled trials (RCTs) on rifaximin for HE. We performed electronic and manual searches, gathered information from the U.S. Food and Drug Administration Home Page, and obtained unpublished information on trial design and outcome measures from authors and pharmaceutical companies. Meta-analyses were performed and results presented as risk ratios (RR) with 95% confidence intervals (CI) and the number needed to treat. Subgroup, sensitivity, regression and sequential analyses were performed to evaluate the risk of bias and sources of heterogeneity. We included 19 RCTs with 1370 patients. Outcomes were recalculated based on unpublished information of 11 trials. Overall, rifaximin had a beneficial effect on secondary prevention of HE (RR: 1.32; 95% CI 1.06-1.65), but not in a sensitivity analysis on rifaximin after TIPSS (RR: 1.27; 95% CI 1.00-1.53). Rifaximin increased the proportion of patients who recovered from HE (RR: 0.59; 95% CI: 0.46-0.76) and reduced mortality (RR: 0.68, 95% CI 0.48-0.97). The results were robust to adjustments for bias control. No small study effects were identified. The sequential analyses only confirmed the results of the analysis on HE recovery. Rifaximin has a beneficial effect on hepatic encephalopathy and may reduce mortality. The combined evidence suggests that rifaximin may be considered in the evidence-based management of hepatic encephalopathy. © 2014 John Wiley & Sons Ltd.
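To make the arithmetic behind such pooled estimates concrete, here is a minimal sketch of a risk ratio with its 95% CI and the number needed to treat for a single two-arm trial; the counts are hypothetical placeholders, not data from the review.

```python
import math

# Hypothetical counts for a single two-arm trial (not the review's data)
events_rx, n_rx = 14, 70        # events / patients on rifaximin
events_ctl, n_ctl = 28, 70      # events / patients on control

risk_rx, risk_ctl = events_rx / n_rx, events_ctl / n_ctl
rr = risk_rx / risk_ctl

# Standard error of log(RR) and a 95% confidence interval
se = math.sqrt(1/events_rx - 1/n_rx + 1/events_ctl - 1/n_ctl)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

nnt = 1.0 / abs(risk_ctl - risk_rx)   # number needed to treat
print(f"RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f}), NNT {nnt:.1f}")
```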
Alduraywish, S A; Lodge, C J; Campbell, B; Allen, K J; Erbas, B; Lowe, A J; Dharmage, S C
2016-01-01
There is growing evidence for an increase in food allergies. The question of whether early life food sensitization, a primary step in food allergies, leads to other allergic disease is a controversial but important issue. Birth cohorts are an ideal design to answer this question. We aimed to systematically investigate and meta-analyse the evidence for associations between early food sensitization and allergic disease in birth cohorts. MEDLINE and SCOPUS databases were searched for birth cohorts that have investigated the association between food sensitization in the first 2 years and subsequent wheeze/asthma, eczema and/or allergic rhinitis. We performed meta-analyses using random-effects models to obtain pooled estimates, stratified by age group. The search yielded fifteen original articles representing thirteen cohorts. Early life food sensitization was associated with an increased risk of infantile eczema, childhood wheeze/asthma, eczema and allergic rhinitis and young adult asthma. Meta-analyses demonstrated that early life food sensitization is related to an increased risk of wheeze/asthma (pooled OR 2.9; 95% CI 2.0-4.0), eczema (pooled OR 2.7; 95% CI 1.7-4.4) and allergic rhinitis (pooled OR 3.1; 95% CI 1.9-4.9) from 4 to 8 years. Food sensitization in the first 2 years of life can identify children at high risk of subsequent allergic disease who may benefit from early life preventive strategies. However, due to potential residual confounding in the majority of studies combined with lack of follow-up into adolescence and adulthood, further research is needed. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
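A hedged sketch of the random-effects pooling (DerSimonian-Laird) that produces such combined odds ratios; the per-study estimates below are invented placeholders, not the birth cohorts analysed above.

```python
import math

# Hypothetical per-study odds ratios with 95% CIs: pool on the log scale
# with DerSimonian-Laird random-effects weights.
studies = [(2.5, 1.4, 4.5), (3.6, 1.9, 6.8), (2.1, 1.1, 4.0)]   # OR, lo, hi

y = [math.log(or_) for or_, _, _ in studies]                    # log-OR
se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for _, lo, hi in studies]
w = [1.0 / s**2 for s in se]                                    # fixed-effect weights

# Between-study variance tau^2 from Cochran's Q (DerSimonian-Laird)
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(y) - 1)) / c)

w_re = [1.0 / (s**2 + tau2) for s in se]                        # random-effects weights
pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_p = math.sqrt(1.0 / sum(w_re))
print(f"pooled OR {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96 * se_p):.2f}"
      f"-{math.exp(pooled + 1.96 * se_p):.2f})")
```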
Modeled and observed ozone sensitivity to mobile-source emissions in Mexico City
NASA Astrophysics Data System (ADS)
Zavala, M.; Lei, W.; Molina, M. J.; Molina, L. T.
2009-01-01
The emission characteristics of mobile sources in the Mexico City Metropolitan Area (MCMA) have changed significantly over the past few decades in response to emission control policies, advancements in vehicle technologies and improvements in fuel quality, among others. Along with these changes, concurrent non-linear changes in photochemical levels and criteria pollutants have been observed, providing a unique opportunity to understand the effects of perturbations of mobile emission levels on the photochemistry in the region using observational and modeling approaches. The observed historical trends of ozone (O3), carbon monoxide (CO) and nitrogen oxides (NOx) suggest that ozone production in the MCMA has changed from a low to a high VOC-sensitive regime over a period of 20 years. Comparison of the historical emission trends of CO, NOx and hydrocarbons derived from mobile-source emission studies in the MCMA from 1991 to 2006 with the trends of the concentrations of CO, NOx, and the CO/NOx ratio during peak traffic hours also indicates that fuel-based fleet average emission factors have significantly decreased for CO and VOCs during this period whereas NOx emission factors do not show any strong trend, effectively reducing the ambient VOC/NOx ratio. This study presents the results of model analyses on the sensitivity of the observed ozone levels to the estimated historical changes in its precursors. The model sensitivity analyses used a well-validated base case simulation of a high pollution episode in the MCMA with the mathematical Decoupled Direct Method (DDM) and the standard Brute Force Method (BFM) in the 3-D CAMx chemical transport model. The model reproduces adequately the observed historical trends and current photochemical levels. Comparison of the BFM and the DDM sensitivity techniques indicates that the model yields ozone values that increase linearly with NOx emission reductions and decrease linearly with VOC emission reductions only up to 30% from the base case. We further performed emissions perturbations from the gasoline fleet, diesel fleet, all mobile (gasoline plus diesel) and all emission sources (anthropogenic plus biogenic). The results suggest that although large ozone reductions obtained in the past were from changes in emissions from gasoline vehicles, currently significant benefits could be achieved with additional emission control policies directed to regulation of VOC emissions from diesel and area sources that are high emitters of alkenes, aromatics and aldehydes.
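For illustration, the Brute Force Method amounts to finite-difference sensitivities from paired model runs. The sketch below uses a toy ozone surrogate; the function, exponents, and the 120 ppb scale are invented stand-ins for a full chemical transport model run.

```python
# ozone_model is a toy surrogate for a full CTM simulation; the power-law
# form and coefficients are assumptions for illustration only.
def ozone_model(nox_scale: float, voc_scale: float) -> float:
    return 120.0 * voc_scale**0.6 * nox_scale**-0.2   # VOC-sensitive regime

def bfm_sensitivity(param: str, delta: float = 0.1) -> float:
    """Central-difference dO3/d(scale factor) for 'nox' or 'voc'."""
    up = {"nox_scale": 1.0, "voc_scale": 1.0}
    dn = dict(up)
    up[param + "_scale"] += delta
    dn[param + "_scale"] -= delta
    return (ozone_model(**up) - ozone_model(**dn)) / (2.0 * delta)

print("dO3/dVOC ~", round(bfm_sensitivity("voc"), 1), "ppb per unit scale")
print("dO3/dNOx ~", round(bfm_sensitivity("nox"), 1), "ppb per unit scale")
```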
Attomole quantitation of protein separations with accelerator mass spectrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogel, J S; Grant, P G; Buccholz, B A
2000-12-15
Quantification of specific proteins depends on separation by chromatography or electrophoresis followed by chemical detection schemes such as staining and fluorophore adhesion. Chemical exchange of short-lived isotopes, particularly sulfur, is also prevalent despite the inconveniences of counting radioactivity. Physical methods based on isotopic and elemental analyses offer highly sensitive protein quantitation that has linear response over wide dynamic ranges and is independent of protein conformation. Accelerator mass spectrometry quantifies long-lived isotopes such as ¹⁴C to sub-attomole sensitivity. We quantified protein interactions with small molecules such as toxins, vitamins, and natural biochemicals at precisions of 1-5%. Micro-proton-induced X-ray emission quantifies elemental abundances in separated metalloprotein samples to nanogram amounts and is capable of quantifying phosphorylated loci in gels. Accelerator-based quantitation is a possible tool for quantifying the translation of the genome into the proteome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Nicholas R.; Powers, Jeffrey J.; Mueller, Don
In September 2016, reactor physics measurements were conducted at Research Centre Rez (RC Rez) using the FLiBe (2 ⁷LiF + BeF₂) salt from the Molten Salt Reactor Experiment (MSRE) in the LR-0 low-power nuclear reactor. These experiments were intended to inform on neutron spectral effects and nuclear data uncertainties for advanced reactor systems using FLiBe salt in a thermal neutron energy spectrum. Oak Ridge National Laboratory (ORNL), in collaboration with RC Rez, performed sensitivity/uncertainty (S/U) analyses of these experiments as part of the ongoing collaboration between the United States and the Czech Republic on civilian nuclear energy research and development. The objectives of these analyses were (1) to identify potential sources of bias in fluoride salt-cooled and salt-fueled reactor simulations resulting from cross section uncertainties, and (2) to produce the sensitivity of neutron multiplication to cross section data on an energy-dependent basis for specific nuclides. This report presents the final S/U analyses of critical experiments at the LR-0 reactor relevant to fluoride salt-cooled high-temperature reactor (FHR) and liquid-fueled molten salt reactor (MSR) concepts. In the future, these S/U analyses could be used to inform the design of additional FLiBe-based experiments using the salt from MSRE. The key finding of this work is that, for both solid- and liquid-fueled fluoride salt reactors, radiative capture in ⁷Li is the most significant contributor to potential bias in neutronics calculations within the FLiBe salt.
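For reference, the energy-dependent relative sensitivity coefficient conventionally produced by such S/U analyses has the standard form (the textbook definition, not a formula quoted from this report):

\[ S_{k,\sigma_x} \;=\; \frac{\sigma_x}{k}\,\frac{\partial k}{\partial \sigma_x}, \]

i.e. the fractional change in the multiplication factor k per fractional change in the cross section \(\sigma_x\), tabulated per nuclide, reaction, and energy group.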
Persy, B; Ieven, M
2013-01-01
Peritonitis related to peritoneal dialysis increases morbidity and mortality and is the main reason for switching to haemodialysis. In this study, we analysed the dialysate of 164 peritoneal dialysis patients that was sent to our laboratory between January 2005 and August 2009. There were 196 peritonitis episodes identified in 78 patients. For all episodes, microbial aetiologies and in-vitro antimicrobial sensitivities were determined, in addition to parameters such as the leukocyte count (WBC) and the result of the Gram stain. Results in children were analysed separately because of previously described differences in aetiology. In both groups, Gram-positives were most commonly isolated, followed by Gram-negatives and fungi or yeasts. In children, the proportion of coagulase-negative staphylococci compared to S. aureus is smaller than in adults. Gram stain showed predominant morphotypes concordant with culture results in 28% of episodes. A significant difference in WBC count was found between culture-positive (mean = 3117 × 10⁹/L) and culture-negative (mean = 981 × 10⁹/L) episodes in adults (p=0.001). The WBC count in episodes caused exclusively by CNS (mean = 1502 × 10⁹/L) was also on average significantly lower (p=0.001) compared to all culture-positive episodes. Resistance to methicillin was registered in 33% of cultures positive for staphylococci. All Gram-positives were sensitive to vancomycin. Coverage of Gram-negatives by ceftazidime and quinolones was excellent (89%). Based on local sensitivity data and known characteristics of antimicrobials, a first-line empirical combination of intraperitoneal vancomycin with orally administered ciprofloxacin seems indicated in our population. Pathogens of positive aerobic cultures were sensitive in-vitro to their combined antimicrobial effect in 90% of cases.
Shima, Jun; Ando, Akira; Takagi, Hiroshi
2008-03-01
Yeasts used in bread making are exposed to air-drying stress during dried yeast production processes. To clarify the genes required for air-drying tolerance, we performed genome-wide screening using the complete deletion strain collection of diploid Saccharomyces cerevisiae. The screening identified 278 gene deletions responsible for air-drying sensitivity. These genes were classified based on their cellular function and on the localization of their gene products. The results showed that the genes required for air-drying tolerance were frequently involved in mitochondrial functions and in connection with vacuolar H(+)-ATPase, which plays a role in vacuolar acidification. To determine the role of vacuolar acidification in air-drying stress tolerance, we monitored intracellular pH. The results showed that intracellular acidification was induced during air-drying and that this acidification was amplified in a deletion mutant of the VMA2 gene encoding a component of vacuolar H(+)-ATPase, suggesting that vacuolar H(+)-ATPase helps maintain intracellular pH homeostasis, which is affected by air-drying stress. To determine the effects of air-drying stress on mitochondria, we analysed the mitochondrial membrane potential under air-drying stress conditions using MitoTracker. The results showed that mitochondria were extremely sensitive to air-drying stress, suggesting that a mitochondrial function is required for tolerance to air-drying stress. We also analysed the correlation between oxidative-stress sensitivity and air-drying-stress sensitivity. The results suggested that oxidative stress is a critical determinant of sensitivity to air-drying stress, although ROS-scavenging systems are not necessary for air-drying stress tolerance. (c) 2008 John Wiley & Sons, Ltd.
Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John
2016-07-01
Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. To evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the cost-effectiveness ratio of the two strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a one-way sensitivity analysis, screening is cost-saving if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at a willingness-to-pay threshold of $60,000/QALY in three clinical scenarios, progesterone therapy has to reduce PTB by 60%, 34%, and 93%, respectively. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective. Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than has been suggested by available data for US women. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, J.V. III; Bartine, D.E.; Mynatt, F.R.
1976-01-01
Two-dimensional neutron and secondary gamma-ray transport calculations and cross-section sensitivity analyses have been performed to determine the effects of varying source heights and cross sections on calculated doses. The air-over-ground calculations demonstrate the existence of an optimal height of burst for a specific ground range and indicate under what conditions they are conservative with respect to infinite air calculations. The air-over-seawater calculations showed the importance of hydrogen and chlorine in gamma production. Additional sensitivity analyses indicated the importance of water in the ground, the amount of reduction in ground thickness for calculational purposes, and the effect of the degree of Legendre angular expansion of the scattering cross sections (P_l) on the calculated dose.
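For context, the P_l notation refers to the standard truncated Legendre expansion of the angular scattering cross section (the usual transport-theory convention, not a formula quoted from the report):

\[ \sigma_s(\mu_0) \;\approx\; \sum_{l=0}^{L} \frac{2l+1}{2}\,\sigma_{s,l}\,P_l(\mu_0), \qquad \sigma_{s,l} = \int_{-1}^{1} \sigma_s(\mu_0)\,P_l(\mu_0)\,\mathrm{d}\mu_0 , \]

so the sensitivity studies amount to varying the truncation order L and observing the effect on the calculated dose.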
Global sensitivity analysis in stochastic simulators of uncertain reaction networks.
Navarro Jimenez, M; Le Maître, O P; Knio, O M
2016-12-28
Stochastic models of chemical systems are often subjected to uncertainties in kinetic parameters in addition to the inherent random nature of their dynamics. Uncertainty quantification in such systems is generally achieved by means of sensitivity analyses in which one characterizes the variability of the first statistical moments of model predictions with respect to the uncertain kinetic parameters. In this work, we propose an original global sensitivity analysis method where the parametric and inherent variability sources are both treated through Sobol's decomposition of the variance into contributions from arbitrary subsets of uncertain parameters and stochastic reaction channels. The conceptual development only assumes that the inherent and parametric sources are independent, and considers the Poisson processes in the random-time-change representation of the state dynamics as the fundamental objects governing the inherent stochasticity. A sampling algorithm is proposed to perform the global sensitivity analysis, and to estimate the partial variances and sensitivity indices characterizing the importance of the various sources of variability and their interactions. The birth-death and Schlögl models are used to illustrate both the implementation of the algorithm and the richness of the proposed analysis method. The output of the proposed sensitivity analysis is also contrasted with a local derivative-based sensitivity analysis method classically used for this type of systems.
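A minimal pick-freeze sketch of a first-order Sobol index, using a toy deterministic model in place of the paper's stochastic-simulator estimator; the model, dimensionality, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy deterministic model standing in for the stochastic simulator
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.5 * x[:, 0] * x[:, 2]

n, d, i = 100_000, 3, 1          # samples, number of inputs, index of interest
a, b = rng.random((n, d)), rng.random((n, d))
ab = b.copy()
ab[:, i] = a[:, i]               # "freeze" input i at the values from sample A

ya, yb, yab = f(a), f(b), f(ab)
var = np.var(np.concatenate([ya, yb]))
s_i = np.mean(ya * (yab - yb)) / var     # first-order Sobol index for input i
print(f"S_{i} ~= {s_i:.3f}")
```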
Valverde, Estefanía J; Cano, Irene; Castro, Dolores; Paley, Richard K; Borrego, Juan J
2017-03-01
Lymphocystis disease virus (LCDV) infections have been described in gilthead seabream (Sparus aurata L.) and Senegalese sole (Solea senegalensis, Kaup), two of the most important marine fish species in the Mediterranean aquaculture. In this study, a rapid, specific, and sensitive detection method for LCDV genotype VII based on loop-mediated isothermal amplification (LAMP) was developed. The LAMP assay, performed using an apparatus with real-time amplification monitoring, was able to specifically detect LCDV genotype VII from clinically positive samples in less than 12 min. In addition, the assay allowed the detection of LCDV in all asymptomatic carrier fish analysed, identified by qPCR, showing an analytical sensitivity of ten copies of viral DNA per reaction. The LCDV LAMP assay has proven to be a promising diagnostic method that can be used easily in fish farms to detect the presence and spread of this iridovirus.
Use of multi-criteria decision analysis to identify potentially dangerous glacial lakes.
Kougkoulos, Ioannis; Cook, Simon J; Jomelli, Vincent; Clarke, Leon; Symeonakis, Elias; Dortch, Jason M; Edwards, Laura A; Merad, Myriam
2018-04-15
Glacial Lake Outburst Floods (GLOFs) represent a significant threat in deglaciating environments, necessitating the development of GLOF hazard and risk assessment procedures. Here, we outline a Multi-Criteria Decision Analysis (MCDA) approach that can be used to rapidly identify potentially dangerous lakes in regions without existing tailored GLOF risk assessments, where a range of glacial lake types exist, and where field data are sparse or non-existent. Our MCDA model (1) is desk-based and uses freely and widely available data inputs and software, and (2) allows the relative risk posed by a range of glacial lake types to be assessed simultaneously within any region. A review of the factors that influence GLOF risk, combined with the strict rules of criteria selection inherent to MCDA, has allowed us to identify 13 exhaustive, non-redundant, and consistent risk criteria. We use our MCDA model to assess the risk of 16 extant glacial lakes and 6 lakes that have already generated GLOFs, and found that our results agree well with previous studies. For the first time in GLOF risk assessment, we employed sensitivity analyses to test the strength of our model results and assumptions, and to identify lakes that are sensitive to the criteria and risk thresholds used. A key benefit of the MCDA method is that sensitivity analyses are readily undertaken. Overall, these sensitivity analyses lend support to our model, although we suggest that further work is required to determine the relative importance of assessment criteria, and the thresholds that determine the level of risk for each criterion. As a case study, the tested method was then applied to 25 potentially dangerous lakes in the Bolivian Andes, where GLOF risk is poorly understood; 3 lakes are found to pose 'medium' or 'high' risk, and require further detailed investigation. Copyright © 2017 Elsevier B.V. All rights reserved.
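A hedged sketch of the weighted-sum scoring that underlies many MCDA rankings; the criteria, weights, and risk thresholds below are invented placeholders, not the 13 criteria of the study.

```python
# Criteria are assumed normalized to [0, 1]; higher means more dangerous.
lakes = {
    "lake_a": {"area_growth": 0.8, "dam_type": 1.0, "slope": 0.4},
    "lake_b": {"area_growth": 0.2, "dam_type": 0.5, "slope": 0.9},
}
weights = {"area_growth": 0.5, "dam_type": 0.3, "slope": 0.2}

def risk_score(criteria: dict) -> float:
    """Weighted-sum aggregation of normalized criteria."""
    return sum(weights[k] * v for k, v in criteria.items())

for name, crit in sorted(lakes.items(), key=lambda kv: -risk_score(kv[1])):
    score = risk_score(crit)
    level = "high" if score > 0.7 else "medium" if score > 0.4 else "low"
    print(name, round(score, 2), level)
```

Sensitivity analysis then amounts to perturbing the weights and thresholds and checking whether any lake changes risk class, which is why the method lends itself to the checks described above.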
Zhou, Jing; Zhao, Rongce; Wen, Feng; Zhang, Pengfei; Tang, Ruilei; Du, Zedong; He, Xiaofeng; Zhang, Jian; Li, Qiu
2015-04-01
Gemcitabine (GEM) alone, S-1 alone and gemcitabine plus S-1 (GS) have shown a marginal clinical benefit for the treatment of advanced pancreatic cancer. However, there is no clearly defined optimal treatment in terms of cost-effectiveness. The objective of this study was to assess the cost-effectiveness of GEM alone, S-1 alone and GS for the treatment of advanced pancreatic cancer based on the GEST study, from the perspective of public payers. A decision model compared GEM alone, S-1 alone and GS. Primary base-case data were identified from the GEST study and the literature. Costs were estimated from West China Hospital, Sichuan University, China, and incremental cost-effectiveness ratios (ICERs) were calculated. Survival benefits were reported in quality-adjusted life-months (QALMs). Sensitivity analyses were performed by varying potentially modifiable parameters of the model. The base-case analysis showed that GEM cost $21,912 and yielded survival of 6.93 QALMs, S-1 cost $19,371 and yielded survival of 7.90 QALMs, and GS cost $22,943 and yielded survival of 7.46 QALMs over the entire treatment. The one-way sensitivity analyses showed that the ICER of S-1 compared with GEM was driven mostly by the S-1 group's utility score in the stable state, and the GEM group's utility score in the progressed state played a key role in the ICER of GS compared with GEM. S-1 represents an attractive cost-effective treatment for advanced pancreatic cancer, given the favorable cost per QALM and improvement in clinical efficacy, especially given the limited available treatment options.
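Using only the point estimates quoted above, the dominance and ICER logic can be reproduced in a few lines; this is an illustrative recomputation, not the authors' model.

```python
# Costs in US$, effects in quality-adjusted life-months (QALMs), as quoted.
strategies = {"GEM": (21912, 6.93), "S-1": (19371, 7.90), "GS": (22943, 7.46)}

ref_cost, ref_eff = strategies["GEM"]
for name, (cost, eff) in strategies.items():
    if name == "GEM":
        continue
    d_cost, d_eff = cost - ref_cost, eff - ref_eff
    if d_cost < 0 and d_eff > 0:
        print(f"{name} dominates GEM (saves ${-d_cost}, gains {d_eff:.2f} QALM)")
    else:
        print(f"{name} vs GEM: ICER = ${d_cost / d_eff:,.0f} per QALM")
```

On these numbers S-1 is cheaper and more effective than GEM, so no ICER is needed: it dominates, which is the study's headline result.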
Inhomogeneous Forcing and Transient Climate Sensitivity
NASA Technical Reports Server (NTRS)
Shindell, Drew T.
2014-01-01
Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range when taking into account the reduced warming rates during the past 10-15 years, during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes, where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (less than 1.3°C) is very unlikely.
Search for B_s^0 oscillations using inclusive lepton events
NASA Astrophysics Data System (ADS)
ALEPH Collaboration; Barate, R.; et al.
1999-03-01
A search for B_s^0 oscillations is performed using a sample of semileptonic b-hadron decays collected by the ALEPH experiment during 1991-95. Compared to previous inclusive lepton analyses, the proper time resolution and b-flavour mistag rate are significantly improved. Additional sensitivity to B_s^0 mixing is obtained by identifying subsamples of events having a B_s^0 purity which is higher than the average for the whole data sample. Unbinned maximum likelihood amplitude fits are performed to derive a lower limit of Δm_s > 9.5 ps⁻¹ at the 95% confidence level (95% CL). Combining with the ALEPH D_s-based analyses yields Δm_s > 9.6 ps⁻¹ at 95% CL.
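Schematically, the amplitude method behind such fits introduces an oscillation amplitude \(\mathcal{A}\) at each test value of \(\Delta m_s\) (a standard formulation of the method, not quoted from the paper):

\[ \mathcal{P}_{\text{mix}}(t) \;\propto\; \Gamma_s e^{-\Gamma_s t}\left[1 - \mathcal{A}\cos(\Delta m_s t)\right], \]

where \(\mathcal{A} = 1\) corresponds to true oscillations at that frequency; a value of \(\Delta m_s\) is excluded at 95% CL when \(\mathcal{A} + 1.645\,\sigma_{\mathcal{A}} < 1\).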
Ross, David E; Ochs, Alfred L; Seabaugh, Jan M; Shrader, Carole R
2013-01-01
NeuroQuant® is a recently developed, FDA-approved software program for measuring brain volumes on MRI in clinical settings. The purpose of this study was to compare NeuroQuant with the radiologist's traditional approach, based on visual inspection, in 20 outpatients with mild or moderate traumatic brain injury (TBI). Each MRI was analyzed with NeuroQuant, and the resulting volumetric analyses were compared with the attending radiologist's interpretation. The radiologist's traditional approach found atrophy in 10.0% of patients; NeuroQuant found atrophy in 50.0% of patients. NeuroQuant was more sensitive for detecting brain atrophy than the radiologist's traditional approach.
Analysis of radon and thoron progeny measurements based on air filtration.
Stajic, J M; Nikezic, D
2015-02-01
Measurement of radon and thoron progeny concentrations in air by air filtration was analysed in order to assess the reliability of the method. Changes of radon and thoron progeny activities on the filter during and after air sampling were investigated. Simulation experiments were performed involving realistic measuring parameters. The sensitivity of the results (radon and thoron concentrations in air) to variations in alpha counting over three and five intervals was studied. The concentration of ²¹⁸Po proved to be the most sensitive to these changes, as expected because of its short half-life. The well-known method for measuring progeny concentrations based on air filtration is rather unreliable, and obtaining unrealistic or incorrect results is quite possible. A simple method for quick estimation of the radon potential alpha energy concentration (PAEC), based on measurements of alpha activity in a saturation regime, was proposed. Thoron PAEC can be determined from the saturation activity on the filter through beta or alpha measurements. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
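The saturation-regime estimate rests on the standard build-up relation for a progeny collected on the filter at a constant rate r (textbook radioactive build-up, not a formula quoted from the paper):

\[ N(t) = \frac{r}{\lambda}\left(1 - e^{-\lambda t}\right), \qquad A(t) = \lambda N(t) \;\to\; A_{\text{sat}} = r \quad (t \gg 1/\lambda), \]

so once sampling is long compared with the progeny half-life, the filter activity directly measures the collection rate and hence the airborne concentration.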
Fu, W; Badri, P; Bow, DAJ; Fischer, V
2017-01-01
Dasabuvir, a nonnucleoside NS5B polymerase inhibitor, is a sensitive substrate of cytochrome P450 (CYP) 2C8 with a potential for drug–drug interaction (DDI) with clopidogrel. A physiologically based pharmacokinetic (PBPK) model was developed for dasabuvir to evaluate the DDI potential with clopidogrel, the acyl‐β‐D glucuronide metabolite of which has been reported as a strong mechanism‐based inhibitor of CYP2C8 based on an interaction with repaglinide. In addition, the PBPK model for clopidogrel and its metabolite were updated with additional in vitro data. Sensitivity analyses using these PBPK models suggested that CYP2C8 inhibition by clopidogrel acyl‐β‐D glucuronide may not be as potent as previously suggested. The dasabuvir and updated clopidogrel PBPK models predict a moderate increase of 1.5–1.9‐fold for Cmax and 1.9–2.8‐fold for AUC of dasabuvir when coadministered with clopidogrel. While the PBPK results suggest there is a potential for DDI between dasabuvir and clopidogrel, the magnitude is not expected to be clinically relevant. PMID:28411400
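For context, PBPK models commonly represent such mechanism-based (time-dependent) CYP inhibition with the standard inactivation term (a textbook form; the parameterization actually used in this model may differ):

\[ \lambda_{\text{inact}} \;=\; \frac{k_{\text{inact}}\,[I]}{K_I + [I]}, \]

where \(k_{\text{inact}}\) is the maximal inactivation rate and \(K_I\) the inhibitor concentration at half-maximal inactivation; the sensitivity analyses described above effectively vary these parameters for the acyl-β-D glucuronide.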
Morriss, Richard; Glazebrook, Cris
2014-01-01
Background Depression and anxiety are common mental health difficulties experienced by university students and can impair academic and social functioning. Students often face barriers to seeking help from professionals. As university students are highly connected to digital technologies, Web-based and computer-delivered interventions could be used to improve students’ mental health. The effectiveness of these intervention types requires investigation to identify whether these are viable prevention strategies for university students. Objective The intent of the study was to systematically review and analyze trials of Web-based and computer-delivered interventions to improve depression, anxiety, psychological distress, and stress in university students. Methods Several databases were searched using keywords relating to higher education students, mental health, and eHealth interventions. The eligibility criteria for studies included in the review were: (1) the study aimed to improve symptoms relating to depression, anxiety, psychological distress, and stress, (2) the study involved computer-delivered or Web-based interventions accessed via computer, laptop, or tablet, (3) the study was a randomized controlled trial, and (4) the study was trialed on higher education students. Trials were reviewed and outcome data analyzed through random effects meta-analyses for each outcome and each type of trial arm comparison. The Cochrane Collaboration risk of bias tool was used to assess study quality. Results A total of 17 trials were identified, seven of which evaluated the same three interventions on separate samples; 14 reported sufficient information for meta-analysis. The majority (n=13) were website-delivered and nine interventions were based on cognitive behavioral therapy (CBT). A total of 1795 participants were randomized and 1480 analyzed. Risk of bias was considered moderate, as many publications did not sufficiently report their methods and seven explicitly conducted completers’ analyses. In comparison to the inactive control, sensitivity meta-analyses supported the interventions in improving anxiety (pooled standardized mean difference [SMD] −0.56; 95% CI −0.77 to −0.35, P<.001), depression (pooled SMD −0.43; 95% CI −0.63 to −0.22, P<.001), and stress (pooled SMD −0.73; 95% CI −1.27 to −0.19, P=.008). In comparison to active controls, sensitivity analyses did not support either condition for anxiety (pooled SMD −0.18; 95% CI −0.98 to 0.62, P=.66) or depression (pooled SMD −0.28; 95% CI −0.75 to 0.20, P=.25). In comparison to an alternative intervention, neither condition was supported in sensitivity analyses for anxiety (pooled SMD −0.10; 95% CI −0.39 to 0.18, P=.48) or depression (pooled SMD 0.33; 95% CI −0.43 to 1.09, P=.40). Conclusions The findings suggest Web-based and computer-delivered interventions can be effective in improving students’ depression, anxiety, and stress outcomes when compared to inactive controls, but some caution is needed when compared to other trial arms, and methodological issues were noticeable. Interventions need to be trialed on more heterogeneous student samples and would benefit from user evaluation. Future trials should address methodological considerations to improve reporting of trial quality and address post-intervention skewed data. PMID:24836465
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
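A toy Monte Carlo sketch of how bounded energy errors propagate to a rate-like quantity: the two-barrier model, the ±0.2 eV bounds, and the temperature are invented, and the paper's maximum-entropy error model and adaptive sparse-grid quadrature are replaced here by plain sampling.

```python
import numpy as np

rng = np.random.default_rng(1)
kT = 0.05                                 # eV, roughly 600 K
e_barriers = np.array([0.9, 1.1])         # nominal DFT barriers (hypothetical)

def tof(errors: np.ndarray) -> float:
    """Crude TOF proxy: the slowest (rate-limiting) Arrhenius step."""
    rates = np.exp(-(e_barriers + errors) / kT)
    return rates.min()

samples = rng.uniform(-0.2, 0.2, size=(50_000, 2))   # bounded DFT errors
tofs = np.array([tof(e) for e in samples])
print("TOF spread (orders of magnitude):",
      np.log10(tofs.max() / tofs.min()).round(1))
```

Even this crude version reproduces the qualitative point above: errors of a few tenths of an eV in barriers translate into several orders of magnitude of uncertainty in the turnover frequency.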
Ringuet, Stephanie; Sassano, Lara; Johnson, Zackary I
2011-02-01
A sensitive, accurate and rapid analysis of major nutrients in aquatic systems is essential for monitoring and maintaining healthy aquatic environments. In particular, monitoring ammonium (NH(4)(+)) concentrations is necessary for maintenance of many fish stocks, while accurate monitoring and regulation of ammonium, orthophosphate (PO(4)(3-)), silicate (Si(OH)(4)) and nitrate (NO(3)(-)) concentrations are required for regulating algae production. Monitoring of wastewater streams is also required for many aquaculture, municipal and industrial wastewater facilities to comply with local, state or federal water quality effluent regulations. Traditional methods for quantifying these nutrient concentrations often require laborious techniques or expensive specialized equipment making these analyses difficult. Here we present four alternative microcolorimetric assays that are based on a standard 96-well microplate format and microplate reader that simplify the quantification of each of these nutrients. Each method uses small sample volumes (200 µL), has a detection limit ≤ 1 µM in freshwater and ≤ 2 µM in saltwater, precision of at least 8% and compares favorably with standard analytical procedures. Routine use of these techniques in the laboratory and at an aquaculture facility to monitor nutrient concentrations associated with microalgae growth demonstrates that they are rapid, accurate and highly reproducible among different users. These techniques offer an alternative to standard nutrient analyses and because they are based on the standard 96-well format, they significantly decrease the cost and time of processing while maintaining high precision and sensitivity.
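A minimal calibration sketch of the kind used with such microplate assays: a linear fit of absorbance against standards, with the detection limit taken as three times the blank standard deviation over the slope. All values are illustrative, not the paper's calibration data.

```python
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])          # uM standards
absorbance = np.array([0.012, 0.051, 0.093, 0.215, 0.428])

slope, intercept = np.polyfit(conc, absorbance, 1)    # linear calibration
blank_sd = 0.004                                      # SD of replicate blanks
lod = 3 * blank_sd / slope                            # 3-sigma detection limit
print(f"slope = {slope:.4f} AU/uM, LOD ~ {lod:.2f} uM")

unknown = (0.137 - intercept) / slope                 # back-calculate a sample
print(f"sample ~ {unknown:.2f} uM")
```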
NASA Astrophysics Data System (ADS)
Bertke, Maik; Hamdana, Gerry; Wu, Wenze; Wasisto, Hutomo Suryo; Peiner, Erwin
2017-06-01
The asymmetric resonance responses of a thermally actuated silicon microcantilever in a portable, cantilever-based nanoparticle detector (Cantor) are analysed. For airborne nanoparticle concentration measurements, the cantilever is excited in its first in-plane bending mode by an integrated p-type heating actuator. The mass-sensitive nanoparticle (NP) detection is based on the shift of the resonance frequency (f0) due to the deposition of NPs. A homemade phase-locked loop (PLL) circuit was developed for tracking f0. For deflection sensing, the cantilever contains an integrated piezo-resistive Wheatstone bridge (WB). A new fitting function based on the Fano resonance is proposed for analysing the asymmetric resonance curves, including a method for calculating the quality factor Q from the fitting parameters. To obtain a better understanding, we introduce an electrical equivalent circuit diagram (ECD) comprising a series resonant circuit (SRC) for the cantilever resonator and voltage sources for the parasitics, which enables us to simulate the asymmetric resonance response and discuss its possible causes. Furthermore, we compare the frequency response of the on-chip thermal excitation with an external excitation using an in-plane piezo actuator, revealing parasitic heating of the WB as the origin of the asymmetry. Moreover, we are able to model the phase component of the sensor output using the ECD. Knowing and understanding the phase response is crucial to the design of the PLL and thus the next generation of Cantor.
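A hedged sketch of fitting an asymmetric resonance with a Fano-type lineshape using SciPy's curve_fit and reading off Q; the functional form and the Q extraction below are common conventions and may differ from the paper's actual fitting function. All data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def fano(f, f0, gamma, q, a, c):
    """Fano lineshape: resonance f0, linewidth gamma, asymmetry parameter q."""
    eps = 2.0 * (f - f0) / gamma      # reduced detuning
    return a * (q + eps) ** 2 / (1.0 + eps ** 2) + c

# synthetic, noisy amplitude response around an assumed ~200 kHz mode
rng = np.random.default_rng(2)
f = np.linspace(195.0, 205.0, 400)                     # kHz
y = fano(f, 200.0, 0.3, 2.5, 1e-3, 0.01) + rng.normal(0.0, 2e-4, f.size)

popt, _ = curve_fit(fano, f, y, p0=[200.0, 0.5, 1.0, 1e-3, 0.0])
f0_fit, gamma_fit = popt[0], abs(popt[1])
print(f"f0 = {f0_fit:.3f} kHz, Q = f0/gamma = {f0_fit / gamma_fit:.0f}")
```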
Bretzel, Reinhard G; Dippel, Franz-Werner; Linn, Thomas; Neilson, Aileen Rae
2009-06-01
A cost analysis of once-daily insulin glargine versus three-times daily insulin lispro in combination with oral antidiabetic drugs (OADs) for insulin-naive type 2 diabetes patients in Germany based on the APOLLO trial (A Parallel design comparing an Oral antidiabetic drug combination therapy with either Lantus once daily or Lispro at mealtime in type 2 diabetes patients failing Oral treatment). Annual direct treatment costs were estimated from the perspective of the German statutory health insurance (SHI). Costs accounted for included insulin medication, disposable pens and consumable items (needles, blood glucose test strips and lancets). Sensitivity analyses (on resource use and unit costs) were performed to reflect current German practice. Average treatment costs per patient per year in the base case were 1,073 euro for glargine and 1,794 euro for lispro. Insulin costs represented 65% vs. 37% of total costs respectively. Acquisition costs of glargine were offset by the lower costs of consumable items (380 euro vs. 1,139 euro). Sensitivity analyses confirmed the robustness of the results in favour of glargine. All scenarios yielded cost savings in total treatment costs ranging from 84 euro to 727 euro. Combination therapy of once-daily insulin glargine versus three-times daily insulin lispro both with OADs, in the management of insulin-dependent type 2 diabetes offers the potential for substantial cost savings from the German SHI perspective.
Trotta, Francesco; Cascini, Silvia; Agabiti, Nera; Kohn, Anna; Gasbarrini, Antonio; Davoli, Marina; Addis, Antonio
2018-01-01
Background The comparison of effectiveness and safety of anti-tumor necrosis factor-alpha agents for the treatment of inflammatory bowel disease (IBD) is relevant for clinical practice and stakeholders. Objective The objective of this study was to compare the risk of abdominal surgery, steroid utilization, and hospitalization for infection in Crohn’s disease (CD) or ulcerative colitis (UC) patients newly treated with infliximab (IFX) or adalimumab (ADA). Methods A retrospective population-based cohort study was performed using health information systems data from the Lazio region, Italy. Patients with a CD or UC diagnosis were enrolled at first prescription of IFX or ADA during 2008–2014 (index date). Only new drug users were followed for 2 years from the index date. IFX versus ADA adjusted hazard ratios were calculated applying an “intention-to-treat” approach, controlling for several characteristics and stratifying the analysis on steroid use according to previous drug utilization. Sensitivity analyses were performed according to an “as-treated” approach, adjusting for propensity score, censoring at switching or discontinuation, and evaluating different lengths of follow-up periods. Results We enrolled 1,432 IBD patients (42% and 83% exposed to IFX for CD and UC, respectively). In both diseases, treatment effects did not differ for any outcome considered, and sensitivity analyses confirmed the results from the main analysis. Conclusion In our population-based cohort study, effectiveness and safety data in new users of ADA or IFX with CD or UC were comparable for the outcomes we tested. PMID:29440933
Rubio-Terrés, C; Domínguez-Gil Hurlé, A
To carry out a cost-utility analysis of the treatment of relapsing-remitting multiple sclerosis (RRMS) with azathioprine (Imurel) or beta interferon (all: Avonex, Rebif and Betaferon). Pharmacoeconomic Markov model comparing treatment options by simulating the life of a hypothetical cohort of women aged 30, from the societal perspective. The transition probabilities, utilities, resource utilisation and costs (direct and indirect) were obtained from Spanish sources and from the bibliography. Univariate sensitivity analyses of the base case were performed. In the base-case analysis, the average cost per patient (euros in 2003) of lifetime treatment, considering a life expectancy of 53 years, would be 620,205, 1,047,836, 1,006,014, 1,161,638 and 968,157 euros with Imurel, all interferons, Avonex, Rebif and Betaferon, respectively. Therefore, the saving with Imurel would range between approximately 327,000 and 520,000 euros. The quality-adjusted life-years (QALYs) obtained with Imurel or interferons would be 10.08 and 9.30, respectively, with an average gain of 0.78 QALY per patient treated with Imurel. The sensitivity analyses confirmed the robustness of the base case. The cost of one additional QALY with interferons would range between approximately 413,000 and 1,308,000 euros in the hypothetical worst scenario for Imurel. For a typical patient with RRMS, treatment with Imurel would be more efficient than interferons and would dominate beta interferon (more efficacious at lower cost).
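A minimal discounted Markov cohort sketch showing the mechanics of such cost-utility models; the states, transition probabilities, costs, and utilities below are invented placeholders, not the Spanish model inputs.

```python
import numpy as np

# states: mild, moderate, severe, dead (annual cycle); rows sum to 1
p = np.array([[0.90, 0.08, 0.01, 0.01],
              [0.00, 0.85, 0.12, 0.03],
              [0.00, 0.00, 0.93, 0.07],
              [0.00, 0.00, 0.00, 1.00]])
cost = np.array([12_000.0, 20_000.0, 35_000.0, 0.0])   # euro/year by state
utility = np.array([0.80, 0.60, 0.35, 0.0])            # QALY weight by state

cohort = np.array([1.0, 0.0, 0.0, 0.0])                # everyone starts mild
disc, total_cost, total_qaly = 0.03, 0.0, 0.0
for year in range(53):                                  # ~lifetime horizon
    df = 1.0 / (1.0 + disc) ** year                     # discount factor
    total_cost += df * cohort @ cost
    total_qaly += df * cohort @ utility
    cohort = cohort @ p                                 # advance one cycle
print(f"{total_cost:,.0f} euro, {total_qaly:.2f} QALYs per patient")
```

Running the same loop with treatment-specific transition probabilities and costs, then differencing the totals, yields the incremental cost per QALY that drives the comparison above.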
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, C. L.; Funk, L. L.; Riedel, R. A.
³He-gas-based neutron linear-position-sensitive detectors (LPSDs) have been applied in many neutron scattering instruments. Traditional Pulse-Height Analysis (PHA) for Neutron-Gamma Discrimination (NGD) resulted in neutron-gamma efficiency ratios on the order of 10⁵-10⁶. The NGD ratios of ³He detectors need to be improved for even better scientific results from neutron scattering. Digital Signal Processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10²-10³ times compared with the traditional PHA method. Finally, our results indicate the NGD capabilities of ³He tube detectors can be significantly improved with subspace-learning-based methods, which may result in a reduced data-collection time and better data quality for further data reduction.
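A small synthetic sketch of the Fisher linear discriminant step: two pulse features are projected onto the Fisher direction and thresholded. The features, class means, and spreads are invented, not the detector data.

```python
import numpy as np

rng = np.random.default_rng(3)
neutrons = rng.normal([2.0, 1.0], 0.3, size=(1000, 2))   # rise time, amplitude
gammas = rng.normal([1.0, 0.5], 0.3, size=(1000, 2))

mu_n, mu_g = neutrons.mean(axis=0), gammas.mean(axis=0)
sw = np.cov(neutrons.T) + np.cov(gammas.T)               # within-class scatter
w = np.linalg.solve(sw, mu_n - mu_g)                     # Fisher direction

# threshold halfway between the projected class means
threshold = 0.5 * (neutrons @ w).mean() + 0.5 * (gammas @ w).mean()
accepted = (neutrons @ w > threshold).mean()
leaked = (gammas @ w > threshold).mean()
print(f"neutron acceptance {accepted:.3f}, gamma leakage {leaked:.4f}")
```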
Jahn, I; Foraita, R
2008-01-01
In Germany, gender-sensitive approaches are part of guidelines for good epidemiological practice as well as health reporting, and they are increasingly demanded in order to implement the gender mainstreaming strategy in research funding by the federation and the federal states. This paper focuses on methodological aspects of data analysis, using the Bremen health report, a population-based cross-sectional study, as an empirical example. Health reporting requires methods of analysis and reporting that can uncover sex/gender aspects of a research question on the one hand, and communicate results adequately on the other. The core question is: how does the way the category sex is included in different statistical analyses for identifying potential target groups affect the results? Two evaluation methods were applied exploratively: logistic regression and a two-stage procedure that combines graphical models with CHAID decision trees and allows complex results to be visualised. Both methods were run stratified by sex/gender as well as adjusted for it, and the results were compared. In the absence of prior knowledge, only stratified analyses are able to detect differences between the sexes and within the sex/gender groups. Adjusted analyses can detect sex/gender differences only if interaction terms are included in the model. Results are discussed from a statistical-epidemiological perspective as well as in the context of health reporting. In conclusion, whether a statistical method is gender-sensitive can only be judged for a concrete research question under known conditions. Often an appropriate statistical procedure can be chosen after conducting separate analyses for women and men. Future gender studies require innovative study designs as well as conceptual clarity with regard to the biological and sociocultural elements of the category sex/gender.
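The masking effect described in the conclusions can be shown with a toy stratified contingency analysis; the 2×2 counts are hypothetical, not the Bremen data.

```python
import math

def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: exposed-case, exposed-noncase, unexposed-case, unexposed-noncase."""
    return (a * d) / (b * c)

women = (40, 60, 20, 80)           # strong exposure-outcome association
men   = (25, 75, 25, 75)           # no association

pooled = tuple(x + y for x, y in zip(women, men))
print("OR women:", round(odds_ratio(*women), 2))    # ~2.67
print("OR men:  ", round(odds_ratio(*men), 2))      # 1.0
print("OR pooled:", round(odds_ratio(*pooled), 2))  # ~1.66, masks the contrast
```

The pooled estimate sits between the sex-specific values and hides the fact that the association exists only in one group, which is exactly why the abstract argues for stratification or explicit interaction terms.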
NASA Astrophysics Data System (ADS)
Weng, Hanli; Li, Youping
2017-04-01
The working principle, process device and test procedure of runner static balancing test method by weighting with three-pivot pressure transducers are introduced in this paper. Based on an actual instance of a V hydraulic turbine runner, the error and sensitivity of the three-pivot pressure transducer static balancing method are analysed. Suggestions about improving the accuracy and the application of the method are also proposed.
Kerhoulas, Lucy P; Kane, Jeffrey M
2012-01-01
Most dendrochronological studies focus on cores sampled from standard positions (main stem, breast height), yet vertical gradients in hydraulic constraints and priorities for carbon allocation may contribute to different growth sensitivities with position. Using cores taken from five positions (coarse roots, breast height, base of live crown, mid-crown branch and treetop), we investigated how radial growth sensitivity to climate over the period of 1895-2008 varies by position within 36 large ponderosa pines (Pinus ponderosa Dougl.) in northern Arizona. The climate parameters investigated were Palmer Drought Severity Index, water year and monsoon precipitation, maximum annual temperature, minimum annual temperature and average annual temperature. For each study tree, we generated Pearson correlation coefficients between ring width indices from each position and six climate parameters. We also investigated whether the number of missing rings differed among positions and bole heights. We found that tree density did not significantly influence climatic sensitivity to any of the climate parameters investigated at any of the sample positions. Results from three types of analyses suggest that climatic sensitivity of tree growth varied with position height: (i) correlations of radial growth and climate variables consistently increased with height; (ii) model strength based on Akaike's information criterion increased with height, where treetop growth consistently had the highest sensitivity and coarse roots the lowest sensitivity to each climatic parameter; and (iii) the correlation between bole ring width indices decreased with distance between positions. We speculate that increased sensitivity to climate at higher positions is related to hydraulic limitation because higher positions experience greater xylem tensions due to gravitational effects that render these positions more sensitive to climatic stresses. The low sensitivity of root growth to all climatic variables measured suggests that tree carbon allocation to coarse roots is independent of annual climate variability. The greater number of missing rings in branches highlights the fact that canopy development is a low priority for carbon allocation during poor growing conditions.
Luttjeboer, Jos; Setiawan, Didik; Cao, Qi; Cahh Daemen, Toos; Postma, Maarten J
2016-12-07
In this study, the potential price of a therapeutic vaccine against Human Papillomavirus (HPV)-16/18-associated (pre)malignant cervical lesions is examined. A decision tree model was built in the context of the new Dutch cervical cancer-screening program and includes a primary test for the presence of HPV. Based on data from cervical cancer screening and HPV prevalence in the Netherlands, cohorts were created of HPV-16- or -18-positive women with cervical intraepithelial neoplasia (CIN) 2 or 3 or cervical cancer stage 1A (FIGO 1A). In the base case, the vaccine price was based on equal numbers of effective treatments in the vaccine branch and the current-treatment branch of the model, and on cost parity, i.e. equal total costs in both branches. The vaccine price is calculated by subtracting the cost of the vaccine branch from the cost of the standard-treatment branch and dividing by the total number of women in the cohort, thereby equalizing costs in both strategies. Scenario analyses were performed taking quality-adjusted life-years (QALYs) into account, with €20,000/QALY, €50,000/QALY and €80,000/QALY as corresponding thresholds. Sensitivity analyses were specifically targeted at the characteristics of the type-specific HPV test in the screening practice and vaccine efficacy. A probabilistic sensitivity analysis (PSA) was performed to quantify the level of uncertainty of the results found in the base case. In the base case, break-even vaccine prices of €381, €568 and €1697 were found for CIN 2, CIN 3 and FIGO 1A, respectively. The PSA showed vaccine pricing below €310, €490 and €1660 will be cost-saving with a likelihood of 95% for CIN 2, CIN 3 and FIGO 1A, respectively. The vaccine price proved very sensitive to the inclusion of QALY gains, to incorporating the type-specific HPV test into Dutch screening practice, and to vaccine efficacy. Copyright © 2016 Elsevier Ltd. All rights reserved.
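The break-even logic described above reduces to a short calculation; the sketch below uses invented placeholder totals, not the Dutch model values.

```python
# Break-even price: the vaccine price that equalizes total cost in the
# vaccine branch and the standard-treatment branch of the decision tree.
n_women = 1000                      # cohort size offered the vaccine
cost_standard = 3_200_000.0         # total cost, current-treatment branch (euro)
cost_vaccine_branch = 2_850_000.0   # vaccine branch cost excluding the
                                    # price of the vaccine itself (euro)

break_even = (cost_standard - cost_vaccine_branch) / n_women
print(f"break-even price per woman: {break_even:.0f} euro")

# scenario: additionally credit QALY gains at a willingness-to-pay threshold
qaly_gain, wtp = 0.05, 20_000       # per woman, euro/QALY (placeholders)
print(f"with QALYs valued: {break_even + qaly_gain * wtp:.0f} euro")
```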
Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.
Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V
2017-06-01
Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.
Martiniano, Rui; McLaughlin, Russell; Silva, Nuno M.; Manco, Licinio; Pereira, Tania; Coelho, Maria J.; Serra, Miguel; Burger, Joachim; Parreira, Rui; Moran, Elena; Valera, Antonio C.; Silva, Ana M.
2017-01-01
We analyse new genomic data (0.05–2.95x) from 14 ancient individuals from Portugal, distributed from the Middle Neolithic (4200–3500 BC) to the Middle Bronze Age (1740–1430 BC), and impute genome-wide diploid genotypes in these together with published ancient Eurasians. While discontinuity is evident in the transition to agriculture across the region, sensitive haplotype-based analyses suggest a significant degree of local hunter-gatherer contribution to later Iberian Neolithic populations. A more subtle genetic influx is also apparent in the Bronze Age, detectable from analyses including haplotype sharing with both ancient and modern genomes, D-statistics and Y-chromosome lineages. However, the limited nature of this introgression contrasts with the major Steppe-migration turnovers within third-millennium northern Europe and echoes the survival of a non-Indo-European language in Iberia. Changes in genomic estimates of individual height across Europe are also associated with these major cultural transitions, and ancestral components continue to correlate with modern differences in stature. PMID:28749934
Fletcher, Carl; Sleeman, Richard; Luke, John; Luke, Peter; Bradley, James W
2018-03-01
The detection of explosives is of great importance, as is the need for sensitive, reliable techniques that require little or no sample preparation and short run times for high-throughput analysis. In this work, a novel ionisation source based on a dielectric barrier discharge (DBD) is presented. The discharge not only drives desorption and ionisation but also forms an ionic wind that transports ions towards the mass spectrometer. Furthermore, the design incorporates 2 asymmetric alumina sheets, each containing 3 DBDs, so that a large surface area can be analysed. The DBD operates in ambient air, overcoming the limitation of other plasma-based techniques, which typically analyse smaller surface areas and require solvents or gases. A range of explosives across 4 different functional groups was analysed using the DBD, with low limits of detection for cyclotrimethylene trinitramine (RDX) (100 pg), pentaerythritol tetranitrate (PETN) (100 pg), hexamethylene triperoxide diamine (HMTD) (1 ng), and trinitrotoluene (TNT) (5 ng). Detection was achieved without any sample preparation or the addition of reagents to facilitate adduct formation. Copyright © 2017 John Wiley & Sons, Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Hayeon, E-mail: kimh2@upmc.edu; Rajagopalan, Malolan S.; Beriwal, Sushil
Purpose: Stereotactic body radiation therapy (SBRT) has been proposed for the palliation of painful vertebral bone metastases because higher radiation doses may result in superior and more durable pain control. A phase III clinical trial (Radiation Therapy Oncology Group 0631) comparing single-fraction SBRT with single-fraction external beam radiation therapy (EBRT) in the palliative treatment of painful vertebral bone metastases is now ongoing. We performed a cost-effectiveness analysis to compare these strategies. Methods and Materials: A Markov model, using a 1-month cycle over a lifetime horizon, was developed to compare the cost-effectiveness of SBRT (16 or 18 Gy in 1 fraction) with that of 8 Gy in 1 fraction of EBRT. Transition probabilities, quality-of-life utilities, and costs associated with SBRT and EBRT were captured in the model. Costs were based on Medicare reimbursement in 2014. Strategies were compared using the incremental cost-effectiveness ratio (ICER), and effectiveness was measured in quality-adjusted life years (QALYs). To account for uncertainty, 1-way, 2-way, and probabilistic sensitivity analyses were performed. Strategies were evaluated with a willingness-to-pay (WTP) threshold of $100,000 per QALY gained. Results: Base-case pain relief after treatment was assumed to be 20% higher with SBRT. Base-case treatment costs for SBRT and EBRT were $9000 and $1087, respectively. In the base-case analysis, SBRT resulted in an ICER of $124,552 per QALY gained. In 1-way sensitivity analyses, results were most sensitive to variation of the utility of unrelieved pain; the utility of relieved pain after initial treatment and median survival were also influential. If median survival is ≥11 months, SBRT costs <$100,000 per QALY gained. Conclusion: SBRT for palliation of vertebral bone metastases is not cost-effective compared with EBRT at a $100,000 per QALY gained WTP threshold. However, if median survival is ≥11 months, SBRT costs ≤$100,000 per QALY gained, suggesting that selective use of SBRT in patients with longer expected survival may be the most cost-effective approach.
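The decision rule at the heart of such an analysis reduces to a single ratio compared against the WTP threshold. Below is a minimal sketch using the treatment costs quoted above together with a hypothetical QALY difference (the study's utilities are not reported in the abstract):

```python
# ICER = (cost_new - cost_comparator) / (QALY_new - QALY_comparator),
# compared against a willingness-to-pay (WTP) threshold.

def icer(cost_new: float, cost_old: float,
         qaly_new: float, qaly_old: float) -> float:
    return (cost_new - cost_old) / (qaly_new - qaly_old)

WTP = 100_000.0                          # $/QALY, as in the study
cost_sbrt, cost_ebrt = 9000.0, 1087.0    # base-case costs from the abstract
qaly_sbrt, qaly_ebrt = 0.5635, 0.5000    # hypothetical lifetime QALYs

ratio = icer(cost_sbrt, cost_ebrt, qaly_sbrt, qaly_ebrt)
print(f"ICER = ${ratio:,.0f}/QALY; cost-effective at WTP: {ratio <= WTP}")
# With this assumed 0.0635-QALY gain, the ICER lands near the reported
# $124,552/QALY, i.e. above the $100,000 threshold.
```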
Use of the Analysis of the Volatile Faecal Metabolome in Screening for Colorectal Cancer
2015-01-01
Colorectal cancer is diagnosed by colonoscopy, an invasive and expensive procedure that is usually carried out after a positive screening test. Unfortunately, existing screening tests lack specificity and sensitivity; hence many unnecessary colonoscopies are performed. Here we report on a potential new screening test for colorectal cancer based on the analysis of volatile organic compounds (VOCs) in the headspace of faecal samples. Faecal samples were obtained from subjects who had a positive faecal occult blood test (FOBT). Subjects subsequently had colonoscopies performed to classify them into low-risk (non-cancer) and high-risk (colorectal cancer) groups. Volatile organic compounds were analysed by selected ion flow tube mass spectrometry (SIFT-MS), and the data were then analysed using both univariate and multivariate statistical methods. Ions most likely from hydrogen sulphide, dimethyl sulphide and dimethyl disulphide are statistically significantly higher in samples from high-risk than from low-risk subjects. Results using multivariate methods show that the test gives a correct classification of 75%, with 78% specificity and 72% sensitivity, on FOBT-positive samples, offering a potentially effective alternative to FOBT. PMID:26086914
Mafe, Oluwakemi A T; Davies, Scott M; Hancock, John; Du, Chenyu
2015-01-01
This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second-generation ethanol. A dilute acid pretreatment process reported by the National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change in internal energy of the substances, the reaction energy, the heat lost, and the work done to/by the system, based on a number of simplifying assumptions. Sensitivity analyses were performed on the solids loading rate, temperature, acid concentration and water evaporation rate. The sensitivity analyses established that the solids loading rate had the most significant impact on the energy demand. The model was then verified against data from the NREL benchmark process. Application of the model to other dilute acid pretreatment processes reported in the literature illustrated that, although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly.
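The energy bookkeeping the abstract describes can be written as a single balance. The sketch below is a minimal illustration under assumed numbers: the heat capacity, temperatures, and loss terms are ours, not NREL's, and it also shows why solids loading dominates:

```python
# Energy demand ~= sensible heating (change in internal energy)
# + reaction energy + heat lost - work done on the system.
# All parameter values below are illustrative assumptions.

def pretreatment_energy_kj(mass_slurry_kg: float,
                           cp_kj_per_kg_k: float,
                           t_in_c: float, t_out_c: float,
                           reaction_kj: float,
                           heat_loss_kj: float,
                           work_on_system_kj: float) -> float:
    delta_u = mass_slurry_kg * cp_kj_per_kg_k * (t_out_c - t_in_c)
    return delta_u + reaction_kj + heat_loss_kj - work_on_system_kj

# Why solids loading dominates: at higher solids fraction, less water
# must be heated per kg of dry biomass processed.
for solids_frac in (0.10, 0.20, 0.30):
    slurry_per_kg_biomass = 1.0 / solids_frac
    e = pretreatment_energy_kj(slurry_per_kg_biomass, 3.8, 25.0, 160.0,
                               5.0, 50.0, 0.0)
    print(f"solids {solids_frac:.0%}: ~{e:,.0f} kJ per kg dry biomass")
```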
Results of an integrated structure-control law design sensitivity analysis
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1988-01-01
Next-generation air and space vehicle designs are driven by increased performance requirements, demanding a high level of integration between traditionally separate design disciplines. Interdisciplinary analysis capabilities have been developed, for aeroservoelastic aircraft and large flexible spacecraft control for instance, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchical problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, which predicts the change in the optimal control law due to changes in fixed problem parameters using analytical sensitivity equations. Numerical results of a design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimally controlled aircraft's response to various problem-formulation and physical aircraft parameters is determined. These results are used to predict the aircraft's new optimally controlled response if a parameter were to take some other nominal value during the control law design process. The sensitivity results are validated by recomputing the optimal control law for discrete variations in parameters, computing the new actual aircraft response, and comparing with the predicted response. These results show an improvement in sensitivity accuracy for integrated design purposes over methods which do not include changes in the optimal control law. Use of the analytical LQG sensitivity expressions is also shown to be more efficient than finite-difference methods for the computation of the equivalent sensitivity information.
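The core idea, differentiating the optimal gain with respect to a problem parameter instead of recomputing it, can be shown on a scalar LQR problem. This is a toy stand-in for the paper's full LQG aircraft model; all symbols and values below are ours:

```python
# Compare an analytical control-law sensitivity with a finite-difference
# recomputation, for the scalar LQR problem xdot = a x + b u with cost
# integral(q x^2 + r u^2). The Riccati equation has a closed-form root.
import math

def lqr_gain(a, b, q, r):
    """Optimal state-feedback gain k = b p / r, where p is the positive
    root of the scalar algebraic Riccati equation."""
    s = math.sqrt(a * a + b * b * q / r)
    p = r * (a + s) / (b * b)
    return b * p / r

def dgain_da(a, b, q, r):
    """Analytical sensitivity dk/da, from differentiating k = (a + s)/b."""
    s = math.sqrt(a * a + b * b * q / r)
    return (1.0 + a / s) / b

a, b, q, r, da = -1.0, 2.0, 3.0, 1.0, 1e-6
fd = (lqr_gain(a + da, b, q, r) - lqr_gain(a - da, b, q, r)) / (2 * da)
print(dgain_da(a, b, q, r), fd)  # the two estimates should agree closely
```

As in the paper, the analytical expression gives the same sensitivity as re-solving the design problem for perturbed parameters, at a fraction of the cost once the nominal solution is known.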
Balsam, Joshua; Bruck, Hugh Alan; Kostov, Yordan; Rasooly, Avraham
2012-01-01
Optical technologies are important for biological analysis. Current biomedical optical analyses rely on high-cost, high-sensitivity optical detectors such as photomultipliers, avalanche photodiodes or cooled CCD cameras. In contrast, Webcams, mobile phones and other popular consumer electronics use lower-sensitivity, lower-cost optical components such as photodiodes or CMOS sensors. For consumer electronics devices, such as webcams, to be useful for biomedical analysis, their sensitivity must be increased. We combined two strategies to increase the sensitivity of a CMOS-based fluorescence detector. We captured hundreds of low-sensitivity images using a Webcam in video mode, instead of the single image typically used with cooled CCD devices. We then used a computational approach consisting of an image-stacking algorithm that removes noise by combining all of the images into a single image. While video mode is widely used for dynamic scene imaging (e.g. movies or time-lapse photography), it is not normally used to produce a single static image; doing so removes noise and increases sensitivity by more than thirtyfold. The portable, battery-operated Webcam-based fluorometer system developed here consists of five modules: (1) a low-cost CMOS Webcam to monitor light emission, (2) a plate to perform assays, (3) filters and a multi-wavelength LED illuminator for fluorophore excitation, (4) a portable computer to acquire and analyze images, and (5) image-stacking software for image enhancement. The samples consisted of various concentrations of fluorescein, ranging from 30 μM to 1000 μM, in a 36-well miniature plate. In single-frame mode, the fluorometer's limit of detection (LOD) for fluorescein is ∼1000 μM, which is relatively insensitive. However, when used in video mode combined with image-stacking enhancement, the LOD is dramatically reduced to 30 μM, a sensitivity similar to that of state-of-the-art ELISA plate photomultiplier-based readers. Numerous medical diagnostics assays rely on optical and fluorescence readers. Our combination of detection technologies, which is new to biodetection, may enable the development of new low-cost optical detectors based on an inexpensive Webcam (<$10). It has the potential to form the basis for high-sensitivity, low-cost medical diagnostics in resource-poor settings.
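The enhancement step is essentially frame averaging: uncorrelated sensor noise shrinks roughly as the square root of the number of frames stacked. A minimal sketch with synthetic frames standing in for the Webcam capture (all numbers are illustrative):

```python
# Stacking N noisy frames of a static scene: the mean image keeps the
# signal while uncorrelated noise drops by ~sqrt(N).
import numpy as np

rng = np.random.default_rng(0)
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 5.0          # a faint fluorescent "well"

n_frames = 400
frames = [scene + rng.normal(0.0, 20.0, scene.shape) for _ in range(n_frames)]
stacked = np.mean(frames, axis=0)  # the image-stacking step

def snr(img):
    # Mean over the well region divided by background noise level.
    return img[24:40, 24:40].mean() / img[:16, :16].std()

print(f"single frame SNR ~ {snr(frames[0]):.2f}")
print(f"stacked ({n_frames} frames) SNR ~ {snr(stacked):.2f}")  # ~20x better
```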
The impact of prison-based treatment on sex offender recidivism: evidence from Minnesota.
Duwe, Grant; Goldman, Robin A
2009-09-01
Using a retrospective quasi-experimental design, this study evaluates the effectiveness of prison-based treatment by examining recidivism outcomes among 2,040 sex offenders released from Minnesota prisons between 1990 and 2003 (average follow-up period of 9.3 years). To reduce observed selection bias, the authors used propensity score matching to create a comparison group of 1,020 untreated sex offenders who were not significantly different from the 1,020 treated offenders. In addition, intent-to-treat analyses and the Rosenbaum bounds method were used to test the sensitivity of the findings to treatment refuser and unobserved selection bias. Results from the Cox regression analyses revealed that participating in treatment significantly reduced the hazard ratio for rearrest by 27% for sexual recidivism, 18% for violent recidivism, and 12% for general recidivism. These findings are consistent with the growing body of research supporting the effectiveness of cognitive-behavioral treatment for sex offenders.
Evolution of microbiological analytical methods for dairy industry needs
Sohier, Danièle; Pavan, Sonia; Riou, Armelle; Combrisson, Jérôme; Postollec, Florence
2014-01-01
Traditionally, culture-based methods have been used to enumerate microbial populations in dairy products. Recent developments in molecular methods now enable faster and more sensitive analyses than classical microbiology procedures. These molecular tools allow a detailed characterization of cell physiological states and bacterial fitness and thus offer new perspectives on integrating the monitoring of microbial physiology into industrial process improvement. This review summarizes the methods described to enumerate and characterize physiological states of technological microbiota in dairy products, and discusses the current deficiencies in relation to the industry's needs. Recent studies show that polymerase chain reaction (PCR)-based methods can successfully be applied to quantify fermenting microbes and probiotics in dairy products. Flow cytometry and omics technologies also show interesting analytical potential. However, they still suffer from a lack of validation and standardization for quality control analyses, as reflected by the absence of performance studies and official international standards. PMID:24570675
NASA Astrophysics Data System (ADS)
Breyer, Christian; Afanasyeva, Svetlana; Brakemeier, Dietmar; Engelhard, Manfred; Giuliano, Stefano; Puppe, Michael; Schenk, Heiko; Hirsch, Tobias; Moser, Massimo
2017-06-01
The main objective of this research is to present a solid foundation of capex projections for the major solar energy technologies until the year 2030 for use in further analyses. The experience-curve approach was chosen for this capex assessment, which requires a good understanding of the projected total global installed capacities of the major solar energy technologies and the respective learning rates. A literature survey was conducted for CSP tower, CSP trough, PV and Li-ion battery technologies. Based on the survey, a base case was defined for each technology, along with low-growth and high-growth cases for further sensitivity analyses. All results are presented in detail in the paper, and a comparison with the expectations of a potential major investor in these technologies confirmed the derived capex projections.
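An experience curve maps cumulative installed capacity to unit cost: each doubling of cumulative capacity reduces capex by the learning rate. A minimal sketch with illustrative values (the surveyed capacities and learning rates used in the paper differ):

```python
# Experience-curve capex projection: capex scales as
# (capacity / capacity_0) ** log2(1 - learning_rate).
import math

def capex_projection(capex0: float, cap0_gw: float,
                     cap_gw: float, learning_rate: float) -> float:
    """Capex after cumulative capacity grows from cap0_gw to cap_gw;
    each doubling of capacity cuts capex by learning_rate."""
    b = math.log2(1.0 - learning_rate)
    return capex0 * (cap_gw / cap0_gw) ** b

# Illustrative only: PV at 1000 EUR/kW with 300 GW installed, a 20%
# learning rate, and hypothetical growth to 3000 GW by 2030.
print(f"{capex_projection(1000.0, 300.0, 3000.0, 0.20):.0f} EUR/kW")  # ~477
```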
ITOUGH2(UNIX). Inverse Modeling for TOUGH2 Family of Multiphase Flow Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, S.
1999-03-01
ITOUGH2 provides inverse modeling capabilities for the TOUGH2 family of numerical simulators for non-isothermal multiphase flows in fractured-porous media. ITOUGH2 can be used for estimating parameters by automatic model calibration, for sensitivity analyses, and for uncertainty propagation analyses (linear and Monte Carlo simulations). Any input parameter to the TOUGH2 simulator can be estimated based on any type of observation for which a corresponding TOUGH2 output is calculated. ITOUGH2 solves a non-linear least-squares problem using direct or gradient-based minimization algorithms. A detailed residual and error analysis is performed, which includes the evaluation of model identification criteria. ITOUGH2 can also be run in forward mode, solving subsurface flow problems related to nuclear waste isolation, oil, gas, and geothermal reservoir engineering, and vadose zone hydrology.
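At its core, this kind of inverse modeling iterates a forward simulation inside a least-squares loop. The sketch below is a toy Gauss-Newton calibration, with a two-parameter exponential decline standing in for a TOUGH2 run; it illustrates the general technique only, not ITOUGH2's actual algorithms:

```python
# Toy Gauss-Newton parameter estimation against "observed" data.
import numpy as np

def forward(theta, t):
    """Toy stand-in for a simulator run: exponential pressure decline."""
    p0, k = theta
    return p0 * np.exp(-k * t)

def gauss_newton(theta, t, obs, iters=20):
    for _ in range(iters):
        r = obs - forward(theta, t)                 # residual vector
        # Jacobian of the forward model by finite differences
        J = np.column_stack([
            (forward(theta + dth, t) - forward(theta, t)) / 1e-6
            for dth in np.eye(2) * 1e-6])
        theta = theta + np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

t = np.linspace(0.0, 10.0, 50)
obs = forward([2.0, 0.3], t) + np.random.default_rng(1).normal(0, 0.01, 50)
print(gauss_newton(np.array([1.0, 0.1]), t, obs))   # ~[2.0, 0.3]
```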
Multiplexed transcriptome analysis to detect ALK, ROS1 and RET rearrangements in lung cancer
Rogers, Toni-Maree; Arnau, Gisela Mir; Ryland, Georgina L.; Huang, Stephen; Lira, Maruja E.; Emmanuel, Yvette; Perez, Omar D.; Irwin, Darryl; Fellowes, Andrew P.; Wong, Stephen Q.; Fox, Stephen B.
2017-01-01
ALK, ROS1 and RET gene fusions are important predictive biomarkers for tyrosine kinase inhibitors in lung cancer. Currently, the gold-standard method for gene fusion detection is fluorescence in situ hybridization (FISH); while highly sensitive and specific, it is also labour intensive, subjective in analysis, and unable to screen large numbers of gene fusions. Recent developments in high-throughput transcriptome-based methods may provide a suitable alternative to FISH, as they are compatible with multiplexing and diagnostic workflows. However, the concordance between these different methods and FISH has not been evaluated. In this study we compared the results from three transcriptome-based platforms (Nanostring Elements, Agena LungFusion panel and ThermoFisher NGS fusion panel) to those obtained from ALK, ROS1 and RET FISH on 51 clinical specimens. Overall agreement of results ranged from 86% to 96% depending on the platform used. While all platforms were highly sensitive, both the Agena panel and the ThermoFisher NGS fusion panel reported minor fusions that were not detectable by FISH. Our proof-of-principle study illustrates that transcriptome-based analyses are sensitive and robust methods for detecting actionable gene fusions in lung cancer and could provide a robust alternative to FISH testing in the diagnostic setting. PMID:28181564
Moret, Sabrina; Scolaro, Marianna; Barp, Laura; Purcaro, Giorgia; Conte, Lanfranco S
2016-04-01
A high-throughput, high-sensitivity procedure, involving simultaneous microwave-assisted extraction (MAS) and unsaponifiable-fraction extraction followed by on-line liquid chromatography-gas chromatography (LC-GC), has been optimised for rapid and efficient extraction and determination of mineral oil saturated hydrocarbons (MOSH) and mineral oil aromatic hydrocarbons (MOAH) in cereal-based products of different composition. MAS has the advantage of eliminating fat before LC-GC analysis, allowing an increase in the amount of sample extract injected, and hence in sensitivity. The proposed method gave practically quantitative recoveries and good repeatability. Among the different cereal-based products analysed (dry semolina and egg pasta, bread, biscuits, and cakes), egg pasta packed in direct contact with recycled paperboard had on average the highest total MOSH level (15.9 mg kg⁻¹), followed by cakes (10.4 mg kg⁻¹) and bread (7.5 mg kg⁻¹). About 50% of the pasta and bread samples and 20% of the biscuit and cake samples had detectable MOAH amounts. The highest concentrations were found in an egg pasta in direct contact with recycled paperboard (3.6 mg kg⁻¹) and in a milk bread (3.6 mg kg⁻¹). Copyright © 2015 Elsevier Ltd. All rights reserved.
Schlauch, Robert C.; Crane, Cory A.; Houston, Rebecca J.; Molnar, Danielle S.; Schlienz, Nicolas J.; Lang, Alan R.
2015-01-01
The current project sought to examine the psychometric properties of a personality-based measure (Substance Use Risk Profile Scale; SURPS: introversion-hopelessness, anxiety sensitivity, impulsivity, and sensation seeking) designed to differentially predict substance use preferences and patterns by matching primary personality-based motives for use to the specific effects of various psychoactive substances. Specifically, we sought to validate the SURPS in a clinical sample of substance users, using cue-reactivity methodology to assess current inclinations to consume a wide range of psychoactive substances. In confirmatory factor analysis and correlational analyses, the SURPS demonstrated good psychometric properties and construct validity. Further, impulsivity and sensation seeking were associated with use of multiple substances but could be differentiated by motives for use and by susceptibility to the reinforcing effects of stimulants (i.e., impulsivity) and alcohol (i.e., sensation seeking). In contrast, introversion-hopelessness and anxiety sensitivity demonstrated a pattern of use more focused on reducing negative affect, but were not differentiated by specific patterns of use. Taken together, the results suggest that among those receiving inpatient treatment for substance use disorders, the SURPS is a valid instrument for measuring four distinct personality dimensions that may be sensitive to motivational susceptibilities to specific patterns of alcohol and drug use. PMID:26052180
Yang, Zhongyi; Pan, Lingling; Cheng, Jingyi; Hu, Silong; Xu, Junyan; Ye, Dingwei; Zhang, Yingjian
2012-07-01
To investigate the value of whole-body fluorine-18 2-fluoro-2-deoxy-D-glucose positron emission tomography/computed tomography (PET/CT) for the detection of metastatic bladder cancer. From December 2006 to August 2010, 60 bladder cancer patients (median age 60.5 years, range 32-96) underwent whole-body PET/CT. The diagnostic accuracy was assessed by performing both organ-based and patient-based analyses. Identified lesions were further studied by biopsy or followed clinically for at least 6 months. One hundred and thirty-four suspicious lesions were identified. Among them, 4 primary cancers (2 pancreatic cancers, 1 colonic and 1 nasopharyngeal cancer) were incidentally detected, and those patients could be treated promptly. For the remaining 130 lesions, PET/CT detected 118 true-positive lesions (sensitivity = 95.9%). On the patient-based analysis, the overall sensitivity and specificity were 87.1% and 89.7%, respectively. There was no difference in sensitivity or specificity between patients with and without adjuvant treatment in terms of detection of metastatic sites by PET/CT. Compared with conventional imaging modalities, PET/CT correctly changed the management in 15 patients (25.0%). PET/CT has excellent sensitivity and specificity in the detection of metastatic bladder cancer and provides additional diagnostic information compared with standard imaging techniques. © 2012 The Japanese Urological Association.
Zhang, Xiaojuan; Reeves, Daniel B; Perreard, Irina M; Kett, Warren C; Griswold, Karl E; Gimi, Barjor; Weaver, John B
2013-12-15
Functionalized magnetic nanoparticles (mNPs) have shown promise in biosensing and other biomedical applications. Here we use functionalized mNPs to develop a highly sensitive, versatile sensing strategy required in practical biological assays and potentially in in vivo analysis. We demonstrate a new sensing scheme based on magnetic spectroscopy of nanoparticle Brownian motion (MSB) to quantitatively detect molecular targets. MSB uses the harmonics of oscillating mNPs as a metric for the freedom of rotational motion, thus reflecting the bound state of the mNP. The harmonics can be detected in vivo from nanogram quantities of iron within 5 s. Using a streptavidin-biotin binding system, we show that the detection limit of the current MSB technique is lower than 150 pM (0.075 pmol), which is much more sensitive than previously reported techniques based on mNP detection. Using mNPs conjugated with two anti-thrombin DNA aptamers, we show that thrombin can be detected with high sensitivity (4 nM, or 2 pmol). A DNA-DNA interaction was also investigated; the results demonstrated that sequence-selective DNA detection can be achieved with 100 pM (0.05 pmol) sensitivity. The results of using MSB to sense these interactions show that the MSB-based sensing technique achieves rapid measurement (within 10 s) and is suitable for detecting and quantifying a wide range of biomarkers or analytes. It has the potential to be applied in a variety of biomedical applications or diagnostic analyses. © 2013 Elsevier B.V. All rights reserved.
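The harmonic readout can be illustrated with a toy simulation: drive a nonlinear (Langevin-like) magnetization with a sinusoidal field and compare the odd-harmonic content of free versus bound particles. In this crude sketch, binding is modeled simply as a weaker effective drive, and all parameters are ours:

```python
# Toy illustration of MSB's harmonic metric. A Langevin magnetization
# responds nonlinearly to a sinusoidal drive, producing odd harmonics;
# hindered (bound) rotation is modeled here, very crudely, as a weaker
# effective drive amplitude, which lowers the 5th/3rd harmonic ratio.
import numpy as np

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
drive = np.cos(2.0 * np.pi * 10.0 * t)          # 10 Hz excitation field

def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x           # classic Langevin function

def harmonic_amp(signal, k, f0=10):
    return np.abs(np.fft.rfft(signal))[k * f0]  # k-th harmonic amplitude

free  = langevin(8.0 * drive)    # unbound: large effective drive
bound = langevin(2.0 * drive)    # bound: rotation hindered

for name, sig in (("free", free), ("bound", bound)):
    r = harmonic_amp(sig, 5) / harmonic_amp(sig, 3)
    print(f"{name}: 5th/3rd harmonic ratio = {r:.3f}")
```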
ERIC Educational Resources Information Center
Anthony, Jason L.; Lonigan, Christopher J.; Burgess, Stephen R.; Driscoll, Kimberly; Phillips, Beth M.; Cantor, Brenlee G.
2002-01-01
This study examined relations among sensitivity to words, syllables, rhymes, and phonemes in older and younger preschoolers. Confirmatory factor analyses found that a one-factor model best explained the data from both groups of children. Only variance common to all phonological sensitivity skills was related to print knowledge and rudimentary…
Agent of whirling disease meets orphan worm: phylogenomic analyses firmly place Myxozoa in Cnidaria.
Nesnidal, Maximilian P; Helmkampf, Martin; Bruchhaus, Iris; El-Matbouli, Mansour; Hausdorf, Bernhard
2013-01-01
Myxozoa are microscopic obligate endoparasites with complex life cycles. Representatives include Myxobolus cerebralis, the causative agent of whirling disease in salmonids, and the enigmatic "orphan worm" Buddenbrockia plumatellae, a parasite of Bryozoa. Originally, Myxozoa were classified as protists, but later several metazoan characteristics were reported. However, their phylogenetic relationships remained doubtful. Some molecular phylogenetic analyses placed them as sister group to, or even within, Bilateria, whereas the possession of polar capsules similar to the nematocysts of Cnidaria, and of minicollagen genes, suggests a close relationship between Myxozoa and Cnidaria. EST data from Buddenbrockia also indicated a cnidarian origin of Myxozoa, but were not sufficient to reject a closer relationship to bilaterians. Phylogenomic analyses of new genomic sequences of Myxobolus cerebralis firmly place Myxozoa as sister group to Medusozoa within Cnidaria. Based on the new dataset, the alternative hypothesis that Myxozoa form a clade with Bilateria can be rejected using topology tests. Sensitivity analyses indicate that this result is not affected by long-branch attraction artifacts or compositional bias.
Sensitivity of water resources in the Delaware River basin to climate variability and change
Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.
1994-01-01
Because of the greenhouse effect, projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climate change; and presents the results of sensitivity analyses of how climate change might affect water resources in the Delaware River basin. Sensitivity analyses suggest that potentially serious shortfalls of certain water resources in the basin could result if some scenarios for climate change come true. The results of model simulations of the basin streamflow demonstrate the difficulty in distinguishing the effects that climate change versus natural climate variability have on streamflow and water supply. The future direction of basin changes in most water resources, furthermore, cannot be precisely determined because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans for evaluating and responding to changes, should they occur.
Mayer, M L; Rozier, R G
2000-08-01
This analysis questions the appropriateness of inflation adjustment in analyses of provider behavior by comparing results from estimations using adjusted financial variables with those from estimations using unadjusted financial variables. Using Medicaid claims from 1984-1991, we explored the effects of Medicaid reimbursement increases on dentists' participation. Using results from inflation adjusted analyses, we would conclude that a 23% nominal increase in Medicaid reimbursement rates yields no increase in the number of Medicaid children seen by dentists. In contrast, estimations based on unadjusted reimbursement rates suggest that this same 23% nominal increase in reimbursement leads to an expected 16-person (15.4%) increase in the number of Medicaid patients seen per provider per year. These analyses demonstrate that results are sensitive to adjustment for inflation. While adjusting for inflation is a generally accepted practice in health services research, doing so without evidence that providers respond to adjusted reimbursement may be unjustified. More research is needed to determine the appropriateness of inflation adjustment in analyses of provider behavior, and the circumstances under which it should or should not be done.
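The adjustment in question is a one-line deflation of nominal dollars by a price index. A minimal sketch follows; the index values and fee amounts are approximate and purely illustrative of how a 23% nominal raise can become a real-terms decline:

```python
# Deflating a nominal reimbursement to constant (base-year) dollars.
def to_real(nominal: float, cpi_then: float, cpi_base: float) -> float:
    return nominal * cpi_base / cpi_then

# Illustrative, approximate index values (1984 ~ 104, 1991 ~ 136) and
# a hypothetical fee per visit.
rate_1984, rate_1991 = 10.00, 12.30
nominal_change = rate_1991 / rate_1984 - 1.0                 # +23% nominal
real_change = to_real(rate_1991, 136.0, 104.0) / rate_1984 - 1.0
print(f"nominal {nominal_change:+.1%}, real {real_change:+.1%}")  # ~ -5.9% real
```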
Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak
2015-06-01
Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was performed in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37°C. The leukocytes of the two groups were isolated and pooled, and protein expression profiles were investigated using the isotope-coded protein labeling (ICPL) method. First, leukocytes from the in vitro irradiated whole blood of normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy. Copyright © 2015 Elsevier B.V. All rights reserved.
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
Population and High-Risk Group Screening for Glaucoma: The Los Angeles Latino Eye Study
Francis, Brian A.; Vigen, Cheryl; Lai, Mei-Ying; Winarko, Jonathan; Nguyen, Betsy; Azen, Stanley
2011-01-01
Purpose. To evaluate the ability of various screening tests, both individually and in combination, to detect glaucoma in the general Latino population and in high-risk subgroups. Methods. The Los Angeles Latino Eye Study is a population-based study of eye disease in Latinos 40 years of age and older. Participants (n = 6082) underwent Humphrey visual field testing (HVF), frequency doubling technology (FDT) perimetry, measurement of intraocular pressure (IOP) and central corneal thickness (CCT), and independent assessment of the optic nerve vertical cup-to-disc (C/D) ratio. Screening parameters were evaluated for three definitions of glaucoma, based on optic disc, visual field, and a combination of both. Analyses were also conducted for high-risk subgroups (family history of glaucoma, diabetes mellitus, and age ≥65 years). Sensitivity, specificity, and receiver operating characteristic curves were calculated for those continuous parameters independently associated with glaucoma. Classification and regression tree (CART) analysis was used to develop a multivariate algorithm for glaucoma screening. Results. Preset cutoffs for screening parameters yielded a generally poor balance of sensitivity and specificity (sensitivity/specificity for IOP ≥21 mm Hg and C/D ≥0.8 was 0.24/0.97 and 0.60/0.98, respectively). Assessment of high-risk subgroups did not improve the sensitivity/specificity of individual screening parameters. A CART analysis using multiple screening parameters (C/D, HVF, and IOP) substantially improved the balance of sensitivity and specificity (sensitivity/specificity 0.92/0.92). Conclusions. No single screening parameter is useful for glaucoma screening. However, a combination of vertical C/D ratio, HVF, and IOP provides the best balance of sensitivity/specificity and is likely to provide the highest yield in glaucoma screening programs. PMID:21245400
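CART builds exactly this kind of multi-parameter rule: a shallow decision tree over the screening measurements. A minimal sketch on simulated data follows; the prevalence, distributions, and any thresholds the tree learns are invented, not the study's:

```python
# A shallow CART combining C/D ratio, IOP, and an HVF result.
# Simulated data only; learned splits are NOT the study's cutoffs.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
glaucoma = rng.random(n) < 0.05                      # ~5% prevalence (assumed)
cd  = np.where(glaucoma, rng.normal(0.75, 0.10, n), rng.normal(0.40, 0.10, n))
iop = np.where(glaucoma, rng.normal(22.0, 4.0, n),  rng.normal(16.0, 3.0, n))
hvf = np.where(glaucoma, rng.random(n) < 0.8, rng.random(n) < 0.1)  # abnormal HVF
X = np.column_stack([cd, iop, hvf])

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, glaucoma)
pred = tree.predict(X).astype(bool)
sensitivity = (pred & glaucoma).sum() / glaucoma.sum()
specificity = (~pred & ~glaucoma).sum() / (~glaucoma).sum()
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
```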
Houssin, Timothée; Cramer, Jérémy; Grojsman, Rébecca; Bellahsene, Lyes; Colas, Guillaume; Moulet, Hélène; Minnella, Walter; Pannetier, Christophe; Leberre, Maël; Plecis, Adrien; Chen, Yong
2016-04-21
To control future infectious disease outbreaks, like the 2014 Ebola epidemic, it is necessary to develop ultrafast molecular assays enabling rapid and sensitive diagnoses. To that end, several ultrafast real-time PCR systems have previously been developed, but they present issues that hinder their wide adoption, notably regarding sensitivity and detection volume. An ultrafast, sensitive and large-volume real-time PCR system based on microfluidic thermalization is presented herein. The method is based on the circulation of pre-heated liquids in a microfluidic chip that thermalize the PCR chamber by diffusion, with ultrafast flow switches. The system can achieve up to 30 real-time PCR cycles in around 2 minutes, which makes it, to the best of our knowledge, the fastest PCR thermalization system for regular sample volumes. After biochemical optimization, anthrax- and Ebola-simulating agents could be detected by real-time PCR in 7 minutes and by reverse-transcription real-time PCR in 7.5 minutes, respectively. These detections are respectively 6.4 and 7.2 times faster than with an off-the-shelf apparatus, while conserving real-time PCR sample volume, efficiency, selectivity and sensitivity. The high-speed thermalization also enabled us to perform sharp melting-curve analyses in only 20 s and to discriminate amplicons of different lengths by rapid real-time PCR. This real-time PCR microfluidic thermalization system is cost-effective and versatile, and can be further developed for point-of-care, multiplexed, ultrafast and highly sensitive molecular diagnosis of bacterial and viral diseases.
Accuracy of dementia diagnosis: a direct comparison between radiologists and a computerized method.
Klöppel, Stefan; Stonnington, Cynthia M; Barnes, Josephine; Chen, Frederick; Chu, Carlton; Good, Catriona D; Mader, Irina; Mitchell, L Anne; Patel, Ameet C; Roberts, Catherine C; Fox, Nick C; Jack, Clifford R; Ashburner, John; Frackowiak, Richard S J
2008-11-01
There has been recent interest in the application of machine learning techniques to neuroimaging-based diagnosis. These methods promise fully automated, standard PC-based clinical decisions, unbiased by variable radiological expertise. We recently used support vector machines (SVMs) to separate sporadic Alzheimer's disease from normal ageing and from fronto-temporal lobar degeneration (FTLD). In this study, we compare those results to results obtained by radiologists. A binary diagnostic classification was made by six radiologists with different levels of experience on the same scans and information that had previously been analysed with SVM. SVMs correctly classified 95% (sensitivity/specificity: 95/95) of sporadic Alzheimer's disease and controls into their respective groups. Radiologists correctly classified 65-95% (median 89%; sensitivity/specificity: 88/90) of scans. SVM correctly classified 93% (sensitivity/specificity: 100/86) of another set of sporadic Alzheimer's disease cases, whereas radiologists ranged between 80% and 90% (median 83%; sensitivity/specificity: 80/85). SVMs were better at separating patients with sporadic Alzheimer's disease from those with FTLD (SVM 89%; sensitivity/specificity: 83/95; compared with a radiological range from 63% to 83%; median 71%; sensitivity/specificity: 64/76). Radiologists were always accurate when they reported a high degree of diagnostic confidence. The results show that well-trained neuroradiologists classify typical Alzheimer's disease-associated scans comparably to SVMs. However, SVMs require no expert knowledge, and trained SVMs can readily be exchanged between centres for use in diagnostic classification. These results are encouraging and indicate a role for computerized diagnostic methods in clinical practice.
Hypnotic Tactile Anesthesia: Psychophysical and Signal-Detection Analyses
Tataryn, Douglas J.; Kihlstrom, John F.
2017-01-01
Two experiments that studied the effects of hypnotic suggestions on tactile sensitivity are reported. Experiment 1 found that the effects of suggestions for anesthesia, as measured by both traditional psychophysical methods and signal-detection procedures, were linearly related to hypnotizability. Experiment 2 employed the same methodologies in an application of the real-simulator paradigm to examine the effects of suggestions for both anesthesia and hyperesthesia. Significant effects of hypnotic suggestion on both sensitivity and bias were found in the anesthesia condition but not in the hyperesthesia condition. A new bias parameter, C′, indicated that much of the bias found in the initial analyses was artifactual, a function of changes in sensitivity across conditions. There were no behavioral differences between reals and simulators in any of the conditions, though analyses of postexperimental interviews suggested the two groups had very different phenomenal experiences. PMID:28230465
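The signal-detection quantities involved are easy to compute from hit and false-alarm rates; one common definition of the relative criterion is C′ = c/d′, which removes the artifactual dependence of bias on sensitivity noted above. A sketch with made-up rates:

```python
# Standard signal-detection indices: sensitivity d', bias c, and the
# relative criterion C' = c / d'. Hit/false-alarm rates are invented.
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse of the standard normal CDF

def sdt(hit_rate: float, fa_rate: float):
    d_prime = z(hit_rate) - z(fa_rate)
    c = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, c, c / d_prime        # d', c, C'

print(sdt(0.80, 0.30))  # e.g. a baseline condition
print(sdt(0.55, 0.45))  # reduced sensitivity under "anesthesia"
```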
Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J
2014-12-01
This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost saving, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness, with training being cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures, relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that, as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient-years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained rather than an untrained provider. Results remained consistent in one-way sensitivity analysis, and in probabilistic sensitivity analyses training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness. It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.
Chatterjee, Abhishek; Macarios, David; Griffin, Leah; Kosowski, Tomasz; Pyfer, Bryan J; Offodile, Anaeze C; Driscoll, Daniel; Maddali, Sirish; Attwood, John
2015-11-01
Sartorius flap coverage and adjunctive negative pressure wound therapy (NPWT) have been described in managing infected vascular groin grafts, with varying cost and clinical success. We performed a cost-utility analysis comparing sartorius flap with NPWT in managing an infected vascular groin graft. A literature review compiling outcomes for sartorius flap and NPWT interventions was conducted from peer-reviewed journals in MEDLINE (PubMed) and EMBASE. Utility scores were derived from expert opinion and used to estimate quality-adjusted life years (QALYs). Medicare current procedural terminology and diagnosis-related group codes were used to assess the costs for successful graft salvage with the associated complications. Incremental cost-effectiveness was assessed at $50,000/QALY, and both univariate and probabilistic sensitivity analyses were conducted to assess the robustness of the conclusions. Thirty-two studies were used, pooling 384 patients (234 sartorius flaps and 150 NPWT). NPWT had better clinical outcomes (86.7% success rate, 0.9% minor complication rate, and 13.3% major complication rate) than sartorius flap (81.6% success rate, 8.0% minor complication rate, and 18.4% major complication rate). NPWT was less costly ($12,366 versus $23,516) and slightly more effective (12.06 QALYs versus 12.05 QALYs) compared with sartorius flap. Sensitivity analyses confirmed the robustness of the base-case findings; NPWT was either cost-effective at $50,000/QALY or dominated sartorius flap in 81.6% of all probabilistic sensitivity analyses. In our cost-utility analysis, use of adjunctive NPWT, along with debridement and antibiotic treatment, for managing infected vascular groin graft wounds was found to be a more cost-effective option than sartorius flaps.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with versions 9.60.300, 10.5 and 11.1.6, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses and a preliminary upgrade of the TSPA-LA from version 9.60.300 to the latest version, 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 based on GoldSim version 9.60.300, as documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA-type analysis.
Cost-effectiveness of pharmacist-participated warfarin therapy management in Thailand.
Saokaew, Surasak; Permsuwan, Unchalee; Chaiyakunapruk, Nathorn; Nathisuwan, Surakit; Sukonthasarn, Apichard; Jeanpeerapong, Napawan
2013-10-01
Although pharmacist-participated warfarin therapy management (PWTM) is well established, economic evaluation of PWTM is still lacking, particularly in the Asia-Pacific region. The objective of this study was to estimate the cost-effectiveness of PWTM in Thailand using local data where available. A Markov model was used to compare lifetime costs and quality-adjusted life years (QALYs) accrued to patients receiving warfarin therapy through PWTM or usual care (UC). The model was populated with relevant information from both health care system and societal perspectives. Input data were obtained from the literature and database analyses. Incremental cost-effectiveness ratios (ICERs) were presented in year 2012 values. A base-case analysis was performed for patients aged 45 years. Sensitivity analyses, including one-way and probabilistic sensitivity analyses, were performed to determine the robustness of the findings. From the societal perspective, PWTM and UC resulted in 39.5 and 38.7 QALYs, respectively. Thus, PWTM increased QALYs by 0.79 and increased costs by 92,491 THB (3,083 USD) compared with UC (ICER 116,468 THB [3,882.3 USD] per QALY gained). From the health care system perspective, PWTM likewise gained 0.79 QALYs and increased costs by 92,788 THB (3,093 USD) compared with UC (ICER 116,842 THB [3,894.7 USD] per QALY gained). Thus, PWTM was cost-effective compared with usual care, assuming a willingness-to-pay (WTP) threshold of 150,000 THB/QALY. Results were sensitive to the discount rate and the cost of clinic set-up. Our findings suggest that PWTM is a cost-effective intervention. Policy-makers may consider this finding as part of the information in their decision-making for implementing this strategy in the healthcare benefit package. Further updates are needed when additional data become available. © 2013.
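As a quick arithmetic check on abstracts of this kind, the ICER is simply the incremental cost divided by the incremental QALYs. A minimal sketch in Python using the rounded figures above (the small gap to the reported 116,468 THB/QALY comes from rounding of the 0.79 QALY gain):

```python
def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qalys

# Societal perspective, using the abstract's rounded figures
print(round(icer(92_491, 0.79)))  # ~117,077 THB/QALY; the paper reports 116,468,
                                  # implying an unrounded QALY gain of ~0.794
```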
Novel image encryption algorithm based on multiple-parameter discrete fractional random transform
NASA Astrophysics Data System (ADS)
Zhou, Nanrun; Dong, Taiji; Wu, Jianhua
2010-08-01
A new method of digital image encryption is presented by utilizing a new multiple-parameter discrete fractional random transform. Image encryption and decryption are performed based on the index additivity and multiple parameters of the multiple-parameter fractional random transform. The plaintext and ciphertext are respectively in the spatial domain and in the fractional domain determined by the encryption keys. The proposed algorithm can resist statistical analyses effectively. The computer simulation results show that the proposed encryption algorithm is sensitive to the multiple keys, and that it has considerable robustness, noise immunity and security.
Holocaust exposure and subsequent suicide risk: a population-based study.
Bursztein Lipsicas, Cendrine; Levav, Itzhak; Levine, Stephen Z
2017-03-01
To examine the association between the extent of genocide exposure and subsequent suicide risk among Holocaust survivors. Persons born in Holocaust-exposed European countries during the years 1922-1945 that immigrated to Israel by 1965 were identified in the Population Registry (N = 209,429), and followed up for suicide (1950-2014). They were divided into three groups based on likely exposure to Nazi persecution: those who immigrated before (indirect; n = 20,229; 10%), during (partial direct; n = 17,189; 8%), and after (full direct; n = 172,061; 82%) World War II. Groups were contrasted for suicide risk, accounting for the extent of genocide in their respective countries of origin, high (>70%) or lower levels (<50%). Cox model survival analyses were computed examining calendar year at suicide. Sensitivity analyses were recomputed for two additional suicide-associated variables (age and years since immigration) for each exposure group. All analyses were adjusted for confounders. Survival analysis showed that compared to the indirect exposure group, the partial direct exposure group from countries with high genocide level had a statistically significant (P < .05) increased suicide risk for the main outcome (calendar year: HR 1.78, 95% CI 1.09, 2.90). This effect significantly (P < .05) replicated in two sensitivity analyses for countries with higher relative levels of genocide (age: HR 1.77, 95% CI 1.09, 2.89; years since immigration: HR 1.85, 95% CI 1.14, 3.02). The full direct exposure group was not at significant suicide risk compared to the indirect exposure group. Suicide associations for groups from countries with relative lower level of genocide were not statistically significant. This study partly converges with findings identifying Holocaust survivors (full direct exposure) as a resilient group. A tentative mechanism for higher vulnerability to suicide risk of the partial direct exposure group from countries with higher genocide exposure includes protracted guilt feelings, having directly witnessed atrocities and escaped death.
Pingault, Jean-Baptiste; Côté, Sylvana M.; Lacourse, Eric; Galéra, Cédric; Vitaro, Frank; Tremblay, Richard E.
2013-01-01
Background: Research shows that children with Attention Deficit/Hyperactivity Disorder are at elevated risk of criminality. However, several issues still need to be addressed in order to verify whether hyperactivity in itself plays a role in the prediction of criminality. In particular, co-occurrence with other behaviors as well as the internal heterogeneity in ADHD symptoms (hyperactivity and inattention) should be taken into account. The aim of this study was to assess the unique and interactive contributions of hyperactivity to the development of criminality, whilst considering inattention, physical aggression and family adversity. Methodology/Principal Findings: We monitored the development of a population-based sample of kindergarten children (N = 2,741). Hyperactivity, inattention, and physical aggression were assessed annually between the ages of 6 and 12 years by mothers and teachers. Information on the presence, the age at first charge and the type of criminal charge was obtained from official records when the participants were aged 25 years. We used survival analysis models to predict the development of criminality in adolescence and adulthood: high childhood hyperactivity was highly predictive when bivariate analyses were used; however, with multivariate analyses, high hyperactivity was only marginally significant (Hazard Ratio: 1.38; 95% CI: 0.94–2.02). Sensitivity analyses revealed that hyperactivity was not a consistent predictor. High physical aggression was strongly predictive (Hazard Ratio: 3.44; 95% CI: 2.43–4.87) and its role was consistent in sensitivity analyses and for different types of crime. Inattention was not predictive of later criminality. Conclusions/Significance: Although the contribution of childhood hyperactivity to criminality may be detected in large samples using multi-informant longitudinal designs, our results show that it is not a strong predictor of later criminality. Crime prevention should instead target children with the highest levels of childhood physical aggression and family adversity. PMID:23658752
Alshreef, Abualbishr; Wailoo, Allan J; Brown, Steven R; Tiernan, James P; Watson, Angus J M; Biggs, Katie; Bradburn, Mike; Hind, Daniel
2017-09-01
Haemorrhoids are a common condition, with nearly 30,000 procedures carried out in England in 2014/15, and result in a significant quality-of-life burden to patients and a financial burden to the healthcare system. This study examined the cost effectiveness of haemorrhoidal artery ligation (HAL) compared with rubber band ligation (RBL) in the treatment of grade II-III haemorrhoids. The analysis used data from the HubBLe study, a multicentre, open-label, parallel group, randomised controlled trial conducted in 17 acute UK hospitals between September 2012 and August 2015. A full economic evaluation, including long-term cost effectiveness, was conducted from the UK National Health Service (NHS) perspective. Main outcomes included healthcare costs, quality-adjusted life-years (QALYs) and recurrence. Cost-effectiveness results were presented in terms of incremental cost per QALY gained and cost per recurrence avoided. Extrapolation analysis for 3 years beyond the trial follow-up, two subgroup analyses (by grade of haemorrhoids and recurrence following RBL at baseline), and various sensitivity analyses were undertaken. In the primary base-case within-trial analysis, the incremental total mean cost per patient for HAL compared with RBL was £1027 (95% confidence interval [CI] £782-£1272, p < 0.001). The incremental QALYs were 0.01 QALYs (95% CI -0.02 to 0.04, p = 0.49). This generated an incremental cost-effectiveness ratio (ICER) of £104,427 per QALY. In the extrapolation analysis, the estimated probabilistic ICER was £21,798 per QALY. Results from all subgroup and sensitivity analyses did not materially change the base-case result. Under all assessed scenarios, the HAL procedure was not cost effective compared with RBL for the treatment of grade II-III haemorrhoids at a cost-effectiveness threshold of £20,000 per QALY; therefore, economically, its use in the NHS should be questioned.
Ariza, R; Van Walsem, A; Canal, C; Roldán, C; Betegón, L; Oyagüez, I; Janssen, K
2014-07-01
To compare the cost of treating rheumatoid arthritis patients who have failed initial treatment with methotrexate, with subcutaneous abatacept versus other first-line biologic disease-modifying antirheumatic drugs. Subcutaneous abatacept was considered comparable to intravenous abatacept, adalimumab, certolizumab pegol, etanercept, golimumab, infliximab and tocilizumab, based on indirect comparison using mixed treatment analysis. A cost-minimization analysis was therefore considered appropriate. The Spanish Health System perspective and a 3-year time horizon were selected. Pharmaceutical and administration costs (Euros 2013) of all available first-line biological disease-modifying antirheumatic drugs were considered. Administration costs were obtained from a local costs database. Patients were assumed to weigh 70 kg. A 3% annual discount rate was applied. Deterministic and probabilistic sensitivity analyses were performed. In the base case, subcutaneous abatacept proved to be less costly than all other biologic antirheumatic drugs (savings ranging from Euros 831.42 versus infliximab to Euros 9,741.69 versus tocilizumab). Subcutaneous abatacept was associated with a cost of Euros 10,760.41 per patient during the first year of treatment and Euros 10,261.29 in subsequent years. The total 3-year cost of subcutaneous abatacept was Euros 29,953.89 per patient. Sensitivity analyses proved the model to be robust. Subcutaneous abatacept remained cost-saving in 100% of probabilistic sensitivity analysis simulations versus adalimumab, certolizumab, etanercept and golimumab, in more than 99.6% versus intravenous abatacept and tocilizumab and in 62.3% versus infliximab. Treatment with subcutaneous abatacept is cost-saving versus intravenous abatacept, adalimumab, certolizumab, etanercept, golimumab, infliximab and tocilizumab in the management of rheumatoid arthritis patients initiating treatment with biological antirheumatic drugs. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Pang, Y-K; Ip, M; You, J H S
2017-01-01
Early initiation of antifungal treatment for invasive candidiasis is associated with change in mortality. Beta-D-glucan (BDG) is a fungal cell wall component and a serum diagnostic biomarker of fungal infection. Clinical findings suggested an association between reduced invasive candidiasis incidence in intensive care units (ICUs) and BDG-guided preemptive antifungal therapy. We evaluated the potential cost-effectiveness of active BDG surveillance with preemptive antifungal therapy in patients admitted to adult ICUs from the perspective of Hong Kong healthcare providers. A Markov model was designed to simulate the outcomes of active BDG surveillance with preemptive therapy (surveillance group) and no surveillance (standard care group). Candidiasis-associated outcome measures included mortality rate, quality-adjusted life year (QALY) loss, and direct medical cost. Model inputs were derived from the literature. Sensitivity analyses were conducted to evaluate the robustness of model results. In base-case analysis, the surveillance group was more costly (1387 USD versus 664 USD) (1 USD = 7.8 HKD), with lower candidiasis-associated mortality rate (0.653 versus 1.426 per 100 ICU admissions) and QALY loss (0.116 versus 0.254) than the standard care group. The incremental cost per QALY saved by the surveillance group was 5239 USD/QALY. One-way sensitivity analyses found base-case results to be robust to variations of all model inputs. In probabilistic sensitivity analysis, the surveillance group was cost-effective in 50% and 100% of 10,000 Monte Carlo simulations at willingness-to-pay (WTP) thresholds of 7200 USD/QALY and ≥27,800 USD/QALY, respectively. Active BDG surveillance with preemptive therapy appears to be highly cost-effective to reduce the candidiasis-associated mortality rate and save QALYs in the ICU setting.
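The probabilistic result quoted here is a cost-effectiveness acceptability curve read at two willingness-to-pay points. A toy Monte Carlo sketch of that calculation, with hypothetical input distributions (the study's actual parameter distributions are not given in the abstract) centred on the base-case deltas, whose ratio 723/0.138 reproduces the reported 5239 USD/QALY:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical distributions around the base-case increments:
# cost 1387 - 664 = 723 USD, QALY loss averted 0.254 - 0.116 = 0.138
d_cost = rng.normal(723.0, 150.0, n)
d_qaly = rng.normal(0.138, 0.03, n)

for wtp in (7_200, 27_800):
    nmb = wtp * d_qaly - d_cost             # net monetary benefit per simulation
    print(wtp, round((nmb > 0).mean(), 3))  # share of runs that are cost-effective
```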
A 14-3-3 Family Protein from Wild Soybean (Glycine Soja) Regulates ABA Sensitivity in Arabidopsis
Sun, Xiaoli; Sun, Mingzhe; Jia, Bowei; Chen, Chao; Qin, Zhiwei; Yang, Kejun; Shen, Yang; Meiping, Zhang; Mingyang, Cong; Zhu, Yanming
2015-01-01
It is widely accepted that the 14-3-3 family proteins are key regulators of multiple stress signal transduction cascades. By conducting genome-wide analysis, researchers have identified the soybean 14-3-3 family proteins; however, until now, there is still no direct genetic evidence showing the involvement of soybean 14-3-3s in ABA responses. Hence, in this study, based on the latest Glycine max genome on Phytozome v10.3, we initially analyzed the evolutionary relationship, genome organization, gene structure and duplication, and three-dimensional structure of soybean 14-3-3 family proteins systematically. Our results suggested that soybean 14-3-3 family was highly evolutionary conserved and possessed segmental duplication in evolution. Then, based on our previous functional characterization of a Glycine soja 14-3-3 protein GsGF14o in drought stress responses, we further investigated the expression characteristics of GsGF14o in detail, and demonstrated its positive roles in ABA sensitivity. Quantitative real-time PCR analyses in Glycine soja seedlings and GUS activity assays in PGsGF14O:GUS transgenic Arabidopsis showed that GsGF14o expression was moderately and rapidly induced by ABA treatment. As expected, GsGF14o overexpression in Arabidopsis augmented the ABA inhibition of seed germination and seedling growth, promoted the ABA induced stomata closure, and up-regulated the expression levels of ABA induced genes. Moreover, through yeast two hybrid analyses, we further demonstrated that GsGF14o physically interacted with the AREB/ABF transcription factors in yeast cells. Taken together, results presented in this study strongly suggested that GsGF14o played an important role in regulation of ABA sensitivity in Arabidopsis. PMID:26717241
Pan, Tao; Guo, Jin-He; Ling, Long; Qian, Yue; Dong, Yong-Hua; Yin, Hua-Qing; Zhu, Hai-Dong; Teng, Gao-Jun
2018-05-01
To evaluate the effects of multi-electrode catheter-based renal denervation (RDN) on insulin sensitivity and glucose metabolism in a type 2 diabetes mellitus (T2DM) canine model. Thirty-three dogs were divided equally into 3 groups: bilateral renal denervation (BRDN) group, left renal denervation (LRDN) group, and sham operation (SHAM) group. Body weight and blood biochemistry were measured at baseline, 20 weeks, and 32 weeks, and renal angiography and computerized tomographic (CT) angiography were determined before the procedure and 1 month, 2 months, and 3 months after the procedure. Western blot was used to identify the activities of gluconeogenic enzymes and insulin-signaling proteins. Fasting plasma glucose (9.64 ± 1.57 mmol/L vs 5.12 ± 1.08 mmol/L; P < .0001), fasting insulin (16.19 ± 1.43 mIU/mL vs 5.07 ± 1.13 mIU/mL; P < .0001), and homeostasis-model assessment of insulin resistance (HOMA-IR; 6.95 ± 1.33 vs 1.15 ± 0.33; P < .0001) in the BRDN group had significantly decreased at the 3-month follow-up compared with the SHAM group. Western blot analyses showed that RDN suppressed the gluconeogenetic genes, modulated insulin action, and activated insulin receptors-AKT signaling cascade in the liver. CT angiography and histopathologic analyses did not show any dissection, aneurysm, thrombus, or rupture in any of the renal arteries. These findings identified that multi-electrode catheter-based RDN could effectively decrease gluconeogenesis and glycogenolysis, resulting in improvements in insulin sensitivity and glucose metabolism in a T2DM canine model. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.
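The HOMA-IR values quoted are consistent with the standard formula, fasting glucose (mmol/L) × fasting insulin (μIU/mL) / 22.5; the abstract's "mIU/mL" unit is presumably μIU/mL. A one-line check:

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_uiu_ml):
    """Homeostasis-model assessment of insulin resistance (standard formula)."""
    return fasting_glucose_mmol_l * fasting_insulin_uiu_ml / 22.5

print(round(homa_ir(9.64, 16.19), 2))  # ~6.94, vs the reported 6.95 (rounding)
print(round(homa_ir(5.12, 5.07), 2))   # ~1.15, matching the SHAM comparison value
```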
Anglin, Deidre M; Greenspoon, Michelle; Lighty, Quenesha; Ellman, Lauren M
2016-10-01
Self-reported experiences of racial discrimination have been associated with a continuum of psychotic experiences in racial and ethnic minority populations; however, the underlying mechanisms of this relationship are not yet clear. Race-based rejection sensitivity (RS-race) has been associated with thought intrusions about being the target of racial discrimination; therefore, the present study aimed to determine whether RS-race accounts for the relationship between racial discrimination and psychotic-like experiences in racial and ethnic minority populations. A sample of 644 young adults from a US urban, predominantly immigrant, and racial and ethnic minority population was administered a self-report inventory of psychosis risk (i.e. the Prodromal Questionnaire (PQ)), providing a dimensional assessment of the total number of attenuated positive psychotic symptoms experienced as distressing (APPS-distress). Participants also completed the Experiences of Discrimination Questionnaire and the Rejection Sensitivity Questionnaire-Race. Hierarchical linear regression analyses revealed that RS-race and racial discrimination were both significantly related to higher levels of APPS-distress. Bootstrapping analyses of indirect effects indicated that RS-race partially accounted for the relationship between racial discrimination and APPS-distress. Although the cross-sectional nature of the data limits conclusions about causal inference, our findings do suggest that racial discrimination and RS-race may both be important for understanding risk for distress in the psychotic spectrum among racial and ethnic minority young adults. Some individuals who report racial discrimination may be more vulnerable to APPS-distress in part because they are anxiously anticipating being racially slighted, and this should be explored further in prospective clinical high-risk studies. © 2014 Wiley Publishing Asia Pty Ltd.
Schulte-Braucks, Julia; Baethge, Anja; Dormann, Christian; Vahle-Hinz, Tim
2018-04-23
We proposed that effects of illegitimate tasks, which comprise unreasonable and unnecessary tasks, on self-esteem and counterproductive work behavior (CWB) are enhanced among employees who are highly sensitive to injustice. CWB was further proposed to be a moderating coping strategy, which restores justice and buffers the detrimental effects of illegitimate tasks on self-esteem. In this study, 241 employees participated in a diary study over five workdays and a follow-up questionnaire one week later. Daily effects were determined in multilevel analyses: Unreasonable tasks decreased self-esteem and increased CWB the same day, especially among employees high in trait justice sensitivity. Unnecessary tasks only related to more CWB the same day, regardless of one's justice sensitivity. Weekly effects were determined in cross-lagged panel analyses: Unreasonable and unnecessary tasks increased CWB, and justice sensitivity moderated the effect of unreasonable tasks on CWB and of unnecessary tasks on self-esteem. Moderating effects of CWB were split: In daily analyses, CWB buffered the negative effects of illegitimate tasks. In weekly analyses, CWB enhanced the negative effects of illegitimate tasks. Overall, illegitimate tasks rather affected CWB than self-esteem, with more consistent effects for unreasonable than for unnecessary tasks. Thus, we confirm illegitimate tasks as a relevant work stressor with issues of injustice being central to this concept and personality having an influence on what is perceived as (il)legitimate. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Pignata, Maud; Chouaid, Christos; Le Lay, Katell; Luciani, Laura; McConnachie, Ceilidh; Gordon, James; Roze, Stéphane
2017-01-01
Background and aims: Lung cancer has the highest mortality rate of all cancers worldwide. Non-small-cell lung cancer (NSCLC) accounts for 85% of all lung cancers and has an extremely poor prognosis. Afatinib is an irreversible ErbB family blocker designed to suppress cellular signaling and inhibit cellular growth and is approved in Europe after platinum-based therapy for squamous NSCLC. The objective of the present analysis was to evaluate the cost-effectiveness of afatinib after platinum-based therapy for squamous NSCLC in France. Methods: The study population was based on the LUX-Lung 8 trial that compared afatinib with erlotinib in patients with squamous NSCLC. The analysis was performed from the perspective of all health care funders and affected patients. A partitioned survival model was developed to evaluate cost-effectiveness based on progression-free survival and overall survival in the trial. Life expectancy, quality-adjusted life expectancy and direct costs were evaluated over a 10-year time horizon. Future costs and clinical benefits were discounted at 4% annually. Deterministic and probabilistic sensitivity analyses were performed. Results: Model projections indicated that afatinib was associated with greater life expectancy (0.16 years) and quality-adjusted life expectancy (0.094 quality-adjusted life years [QALYs]) than that projected for erlotinib. The total cost of treatment over a 10-year time horizon was higher for afatinib than erlotinib, EUR12,364 versus EUR9,510, leading to an incremental cost-effectiveness ratio of EUR30,277 per QALY gained for afatinib versus erlotinib. Sensitivity analyses showed that the base case findings were stable under variation of a range of model inputs. Conclusion: Based on data from the LUX-Lung 8 trial, afatinib was projected to improve clinical outcomes versus erlotinib, with a 97% probability of being cost-effective assuming a willingness to pay of EUR70,000 per QALY gained, after platinum-based therapy in patients with squamous NSCLC in France. PMID:29123418
Eberle, Jonas; Warnock, Rachel C M; Ahrens, Dirk
2016-05-05
Defining species units can be challenging, especially during the earliest stages of speciation, when phylogenetic inference and delimitation methods may be compromised by incomplete lineage sorting (ILS) or secondary gene flow. Integrative approaches to taxonomy, which combine molecular and morphological evidence, have the potential to be valuable in such cases. In this study we investigated the South African scarab beetle genus Pleophylla using data collected from 110 individuals of eight putative morphospecies. The dataset included four molecular markers (cox1, 16S, rrnL, ITS1) and morphometric data based on male genital morphology. We applied a suite of molecular and morphological approaches to species delimitation, and implemented a novel Bayesian approach in the software iBPP, which enables continuous morphological trait and molecular data to be combined. Traditional morphology-based species assignments were supported quantitatively by morphometric analyses of the male genitalia (eigenshape analysis, CVA, LDA). While the ITS1-based delineation was also broadly congruent with the morphospecies, the cox1 data resulted in over-splitting (GMYC modelling, haplotype networks, PTP, ABGD). In the most extreme case morphospecies shared identical haplotypes, which may be attributable to ILS based on statistical tests performed using the software JML. We found the strongest support for putative morphospecies based on phylogenetic evidence using the combined approach implemented in iBPP. However, support for putative species was sensitive to the use of alternative guide trees and alternative combinations of priors on the population size (θ) and root age (τ0) parameters, especially when the analysis was based on molecular or morphological data alone. We demonstrate that continuous morphological trait data can be extremely valuable in assessing competing hypotheses to species delimitation. In particular, we show that the inclusion of morphological data in an integrative Bayesian framework can improve the resolution of inferred species units. However, we also demonstrate that this approach is extremely sensitive to guide tree and prior parameter choice. These parameters should therefore be chosen with caution, if possible based on independent empirical evidence, or careful sensitivity analyses should be performed to assess the robustness of results. Young species provide exemplars for investigating the mechanisms of speciation and for assessing the performance of tools used to delimit species on the basis of molecular and/or morphological evidence.
Evangelista, Laura; Zattoni, Fabio; Karnes, Robert J; Novara, Giacomo; Lowe, Val
2016-12-01
To provide a systematic review of recently published reports and carry out a meta-analysis on the use of radiolabeled choline PET/computed tomography (CT) as a guide for salvage lymph node dissection (sLND) in prostate cancer patients with biochemical recurrence after primary treatments. Bibliographic database searches, from 2005 to May 2015, including Pubmed, Web of Science, and TripDatabase, were performed to find studies that included only patients who underwent sLND after radiolabeled choline PET/CT alone or in combination with other imaging modalities. For the qualitative assessment, all studies including the selected population were considered. Conversely, for the quantitative assessment, articles were included only if absolute numbers of true positive, true negative, false positive, and false negative test results were available or derivable from the text for lymph node metastases. Reviews, clinical reports, and editorial articles were excluded from analyses. Eighteen studies fulfilled the inclusion criteria and were assessed qualitatively. A total of 750 patients underwent radiolabeled choline (such as [11C]choline or [18F]choline) PET/CT before sLND. A quantitative evaluation was performed in nine studies. Patient-based, lesion-based, and site-based analyses were carried out in nine, four, and five studies, respectively. The pooled sensitivities were 85.3% [95% confidence interval (CI): 78.5-90.3%], 56.2% (95% CI: 41.6-69.7%), 75.3% (95% CI: 56.6-87.7%), and 63.7% (95% CI: 41-81.6%), respectively, for patient-based, lesion-based, pelvic site-based, and retroperitoneal site-based analyses. The pooled positive predictive values (PPVs) were 75% (95% CI: 68-80.9%), 85.8% (95% CI: 66.8-94.8%), 81.2% (95% CI: 70.1-88.9%), and 75.2% (95% CI: 58.7-86.7%), respectively, in the same analyses. High heterogeneity among the studies was found for sensitivities and PPVs, ranging between 61.7-93.3% and 60.6-94.5%, respectively. Radiolabeled choline PET/CT has only a moderate sensitivity for the detection of metastatic lymph nodes in patients who are candidates for sLND, although the pooled PPVs ranged between 75 and 85.8% for all types of subanalyses. The presence of high heterogeneity among the studies should be considered carefully.
Micro-Encapsulated Porphyrins and Phthalocyanines - New Formulations in Photodynamic Therapy
NASA Astrophysics Data System (ADS)
Ion, R. M.
2017-06-01
Photodynamic therapy (PDT), an innovative method for cancer treatment, is based on the concerted action of drugs called sensitizers, which generate reactive oxygen species via a photochemical mechanism, leading to cellular necrosis or apoptosis. The present work aims at loading sensitizers such as porphyrins (P) and phthalocyanines (Pc) into alginate particles. Particles were prepared by dropping alginate into an aqueous solution containing P or Pc and CaCl2, which allows the formation of particles through ionic crosslinking. P- or Pc-loaded alginate beads with an average diameter of about 100 μm were obtained. For these systems, this paper analyses the spectroscopic properties, encapsulation into microcapsules, controlled-release behaviour and photosensitizing capacity (singlet oxygen generation).
NASA Astrophysics Data System (ADS)
Rajamanickam, Govindaraj; Narendhiran, Santhosh; Muthu, Senthil Pandian; Mukhopadhyay, Sumita; Perumalsamy, Ramasamy
2017-12-01
Titanium dioxide is a promising wide band gap semiconducting material for dye-sensitized solar cells. Poor electron transport properties still remain a challenge with conventional nanoparticles. Here, we synthesized TiO2 nanorods/nanoparticles by a hydrothermal method to improve the charge transport properties. The structural and morphological characteristics of the prepared nanorods/nanoparticles were analysed with X-ray diffraction and electron microscopy, respectively. A high power conversion efficiency of 7.7% was achieved with the nanorod/nanoparticle-based device under 100 mW/cm2 illumination. Electrochemical impedance analysis showed superior electron transport properties for the device based on the synthesized TiO2 nanorods/nanoparticles compared with a device based on commercial P25 nanoparticles.
Optimum sensitivity derivatives of objective functions in nonlinear programming
NASA Technical Reports Server (NTRS)
Barthelemy, J.-F. M.; Sobieszczanski-Sobieski, J.
1983-01-01
The feasibility of eliminating second derivatives from the input of optimum sensitivity analyses of optimization problems is demonstrated. This elimination restricts the sensitivity analysis to the first-order sensitivity derivatives of the objective function. It is also shown that when a complete first-order sensitivity analysis is performed, second-order sensitivity derivatives of the objective function are available at little additional cost. An expression is derived whose application to linear programming is presented.
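For reference, the first-order optimum sensitivity result underlying this kind of analysis is the standard Lagrangian (envelope) identity; in generic notation (not necessarily the paper's):

```latex
% Generic nonlinear program and its Lagrangian
\min_{x}\ f(x,p) \quad \text{s.t.} \quad g(x,p) \le 0, \qquad
L(x,\lambda,p) = f(x,p) + \lambda^{\mathsf{T}} g(x,p)

% Envelope result: first-order optimum sensitivity of the objective
\frac{\mathrm{d}f^{*}}{\mathrm{d}p}
  = \left.\frac{\partial L}{\partial p}\right|_{(x^{*},\lambda^{*})}
  = \frac{\partial f}{\partial p}(x^{*},p)
  + {\lambda^{*}}^{\mathsf{T}} \frac{\partial g}{\partial p}(x^{*},p)
```

So once the optimal point and multipliers are known, the first derivative of the optimal objective with respect to a problem parameter requires no second-order information.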
Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W
2016-09-01
Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential of unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting sensitivity analyses to investigate the impact of unmeasured confounding in observational studies. In a real-world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.
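A minimal sketch of the twin-regression idea in PyMC (v5-style API), with hypothetical data and priors; the variable names, priors, and external-information choices here are illustrative, not the paper's actual model:

```python
import numpy as np
import pymc as pm

# Hypothetical data: treatment choice and 24-month fracture outcome
rng = np.random.default_rng(1)
n = 300
treat = rng.integers(0, 2, n)
fracture = rng.integers(0, 2, n)

with pm.Model():
    # Latent unmeasured confounder (e.g. baseline BMD); in practice its
    # prior would be informed by external data sources
    bmd = pm.Normal("bmd", mu=0.0, sigma=1.0, shape=n)
    a0 = pm.Normal("a0", 0, 2)
    a1 = pm.Normal("a1", 0, 2)
    b0 = pm.Normal("b0", 0, 2)
    b1 = pm.Normal("b1", 0, 2)  # treatment effect of interest
    b2 = pm.Normal("b2", 0, 2)
    # "Twin" regressions: the confounder enters both treatment selection
    # and the outcome model simultaneously
    pm.Bernoulli("treat_obs", logit_p=a0 + a1 * bmd, observed=treat)
    pm.Bernoulli("frac_obs", logit_p=b0 + b1 * treat + b2 * bmd, observed=fracture)
    idata = pm.sample(1000, tune=1000)
```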
Cost-Effectiveness of Peer Counselling for the Promotion of Exclusive Breastfeeding in Uganda.
Chola, Lumbwe; Fadnes, Lars T; Engebretsen, Ingunn M S; Nkonki, Lungiswa; Nankabirwa, Victoria; Sommerfelt, Halvor; Tumwine, James K; Tylleskar, Thorkild; Robberstad, Bjarne
2015-01-01
Community based breastfeeding promotion programmes have been shown to be effective in increasing breastfeeding prevalence. However, there is limited data on the cost-effectiveness of these programmes in sub-Saharan Africa. This paper evaluates the cost-effectiveness of a breastfeeding promotion intervention targeting mothers and their 0 to 6 month old children. Data were obtained from a community randomized trial conducted in Uganda between 2006 and 2008, and supplemented with evidence from several studies in sub-Saharan Africa. In the trial, peer counselling was offered to women in intervention clusters. In the control and intervention clusters, women could access standard health facility breastfeeding promotion services (HFP). Thus, two methods of breastfeeding promotion were compared: community based peer counselling (in addition to HFP) and standard HFP alone. A Markov model was used to calculate incremental cost-effectiveness ratios between the two strategies. The model estimated changes in breastfeeding prevalence and disability adjusted life years. Costs were estimated from a provider perspective. Uncertainty around the results was characterized using one-way sensitivity analyses and a probabilistic sensitivity analysis. Peer counselling more than doubled the breastfeeding prevalence as reported by mothers, but there was no observable impact on diarrhoea prevalence. Estimated incremental cost-effectiveness ratios were US$68 per month of exclusive or predominant breastfeeding and US$11,353 per disability adjusted life year (DALY) averted. The findings were robust to parameter variations in the sensitivity analyses. Our strategy to promote community based peer counselling is unlikely to be cost-effective in reducing diarrhoea prevalence and mortality in Uganda, because its cost per DALY averted far exceeds the commonly assumed willingness-to-pay threshold of three times Uganda's GDP per capita (US$1653). However, since the intervention significantly increases prevalence of exclusive or predominant breastfeeding, it could be adopted in Uganda if benefits other than reducing the occurrence of diarrhoea are believed to be important.
Diagnosis of Fanconi anemia in patients with bone marrow failure
Pinto, Fernando O.; Leblanc, Thierry; Chamousset, Delphine; Le Roux, Gwenaelle; Brethon, Benoit; Cassinat, Bruno; Larghero, Jérôme; de Villartay, Jean-Pierre; Stoppa-Lyonnet, Dominique; Baruchel, André; Socié, Gérard; Gluckman, Eliane; Soulier, Jean
2009-01-01
Background: Patients with bone marrow failure and undiagnosed underlying Fanconi anemia may experience major toxicity if given standard-dose conditioning regimens for hematopoietic stem cell transplant. Due to clinical variability and/or potential emergence of genetic reversion with hematopoietic somatic mosaicism, a straightforward Fanconi anemia diagnosis can be difficult to make, and diagnostic strategies combining different assays in addition to classical breakage tests in blood may be needed. Design and Methods: We evaluated Fanconi anemia diagnosis on blood lymphocytes and skin fibroblasts from a cohort of 87 bone marrow failure patients (55 children and 32 adults) with no obvious full clinical picture of Fanconi anemia, by performing a combination of chromosomal breakage tests, FANCD2-monoubiquitination assays, a new flow cytometry-based mitomycin C sensitivity test in fibroblasts, and, when Fanconi anemia was diagnosed, complementation group and mutation analyses. The mitomycin C sensitivity test in fibroblasts was validated on control Fanconi anemia and non-Fanconi anemia samples, including other chromosomal instability disorders. Results: When this diagnostic strategy was applied to the cohort of bone marrow failure patients, 7 Fanconi anemia patients were found (3 children and 4 adults). Classical chromosomal breakage tests in blood detected 4, but analyses on fibroblasts were necessary to diagnose 3 more patients with hematopoietic somatic mosaicism. Importantly, Fanconi anemia was excluded in all the other patients who were fully evaluated. Conclusions: In this large cohort of patients with bone marrow failure, our results confirmed that when any clinical/biological suspicion of Fanconi anemia remains after chromosome breakage tests in blood, based on physical examination, history or inconclusive results, then further evaluation including fibroblast analysis should be made. For that purpose, the flow cytometry-based mitomycin C sensitivity test described here proved to be a reliable alternative method to evaluate the Fanconi anemia phenotype in fibroblasts. This global strategy allowed early and accurate confirmation or rejection of Fanconi anemia diagnosis with immediate clinical impact for those who underwent hematopoietic stem cell transplant. PMID:19278965
Dilla, Tatiana; Alexiou, Dimitra; Chatzitheofilou, Ismini; Ayyub, Ruba; Lowin, Julia; Norrbacka, Kirsi
2017-05-01
Dulaglutide 1.5 mg once weekly is a novel glucagon-like peptide 1 (GLP-1) receptor agonist for the treatment of type 2 diabetes mellitus (T2DM). The objective was to estimate the cost-effectiveness of dulaglutide once weekly vs liraglutide 1.8 mg once daily for the treatment of T2DM in Spain in patients with a BMI ≥30 kg/m2. The IMS CORE Diabetes Model (CDM) was used to estimate costs and outcomes from the perspective of the Spanish National Health System, capturing relevant direct medical costs over a lifetime time horizon. Comparative safety and efficacy data were derived from direct comparison of dulaglutide 1.5 mg vs liraglutide 1.8 mg in the AWARD-6 trial in patients with a body mass index (BMI) ≥30 kg/m2. All patients were assumed to remain on treatment for 2 years before switching to basal insulin at a daily dose of 40 IU. One-way sensitivity analyses (OWSA) and probabilistic sensitivity analyses (PSA) were conducted to explore the sensitivity of the model to plausible variations in key parameters and uncertainty of model inputs. Under base case assumptions, dulaglutide 1.5 mg was less costly and more effective vs liraglutide 1.8 mg (total lifetime costs €108,489 vs €109,653; total QALYs 10.281 vs 10.259). OWSA demonstrated that dulaglutide 1.5 mg remained dominant given plausible variations in key input parameters. Results of the PSA were consistent with base case results. Primary limitations of the analysis are common to other cost-effectiveness analyses of chronic diseases like T2DM and include the extrapolation of short-term clinical data to the lifetime time horizon and uncertainty around optimum treatment durations. The model found that dulaglutide 1.5 mg was more effective and less costly than liraglutide 1.8 mg for the treatment of T2DM in Spain. Findings were robust to plausible variations in inputs. Based on these results, dulaglutide may result in cost savings to the Spanish National Health System.
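"Less costly and more effective" is simple dominance, the case where no ICER needs to be computed. A small helper making the decision rule explicit (the €30,000/QALY default threshold is an illustrative assumption, not taken from the paper):

```python
def assess(cost_new, qaly_new, cost_ref, qaly_ref, wtp=30_000):
    """Classify a pairwise comparison: dominant, dominated, or ICER vs a WTP threshold."""
    d_cost, d_qaly = cost_new - cost_ref, qaly_new - qaly_ref
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant (no more costly and at least as effective)"
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"
    icer = d_cost / d_qaly
    verdict = "cost-effective" if icer <= wtp else "not cost-effective"
    return f"ICER {icer:,.0f} per QALY -> {verdict}"

# Dulaglutide 1.5 mg vs liraglutide 1.8 mg, base-case figures from the abstract
print(assess(108_489, 10.281, 109_653, 10.259))  # dominant
```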
van Leent, Merlijn W J; Stevanović, Jelena; Jansman, Frank G; Beinema, Maarten J; Brouwers, Jacobus R B J; Postma, Maarten J
2015-01-01
Vitamin-K antagonists (VKAs) present an effective anticoagulant treatment in deep venous thrombosis (DVT). However, the use of VKAs is limited because of the risk of bleeding and the necessity of frequent and long-term laboratory monitoring. Therefore, new oral anticoagulant drugs (NOACs) such as dabigatran, with lower rates of (major) intracranial bleeding compared to VKAs and not requiring monitoring, may be considered. To estimate resource utilization and costs of patients treated with the VKAs acenocoumarol and phenprocoumon, for the indication DVT. Furthermore, a formal cost-effectiveness analysis of dabigatran compared to VKAs for DVT treatment was performed, using these estimates. A retrospective observational study design in the thrombotic service of a teaching hospital (Deventer, The Netherlands) was applied to estimate real-world resource utilization and costs of VKA monitoring. A pooled analysis of data from RE-COVER and RE-COVER II on DVT was used to reflect the probabilities for events in the cost-effectiveness model. Dutch costs, utilities and specific data on coagulation monitoring levels were incorporated in the model. In addition to the base-case analysis, univariate and probabilistic sensitivity analyses and scenario analyses were performed. Real-world resource utilization in the thrombotic service of patients treated with VKA for the indication of DVT consisted of 12.3 measurements of the international normalized ratio (INR), with corresponding INR monitoring costs of €138 for a standardized treatment period of 180 days. In the base case, dabigatran treatment compared to VKAs in a cohort of 1,000 DVT patients resulted in savings of €18,900 (95% uncertainty interval (UI) -95,832, 151,162) and 41 (95% UI -18, 97) quality-adjusted life-years (QALYs) gained, calculated from a societal perspective. The probability that dabigatran is cost-effective at a conservative willingness-to-pay threshold of €20,000 per QALY was 99%. Sensitivity and scenario analyses also indicated cost savings or cost-effectiveness below this same threshold. Total INR monitoring costs per patient were estimated at minimally €138. Inserting these real-world data into a cost-effectiveness analysis for patients diagnosed with DVT, dabigatran appeared to be a cost-saving alternative to VKAs in the Netherlands in the base case. Cost savings or favorable cost-effectiveness were robust in sensitivity and scenario analyses. Our results warrant confirmation in other settings and locations.
Preoperative identification of a suspicious adnexal mass: a systematic review and meta-analysis.
Dodge, Jason E; Covens, Allan L; Lacchetti, Christina; Elit, Laurie M; Le, Tien; Devries-Aboud, Michaela; Fung-Kee-Fung, Michael
2012-07-01
To systematically review the existing literature in order to determine the optimal strategy for preoperative identification of the adnexal mass suspicious for ovarian cancer. A review of all systematic reviews and guidelines published between 1999 and 2009 was conducted as a first step. After the identification of a 2004 AHRQ systematic review on the topic, searches of MEDLINE for studies published since 2004 were also conducted to update and supplement the evidentiary base. A bivariate, random-effects meta-regression model was used to produce summary estimates of sensitivity and specificity and to plot summary ROC curves with 95% confidence regions. Four meta-analyses and 53 primary studies were included in this review. The diagnostic performance of each technology was compared and contrasted based on the summary data on sensitivity and specificity obtained from the meta-analysis. Results suggest that 3D ultrasonography has both a higher sensitivity and specificity when compared to 2D ultrasound. Established morphological scoring systems also performed with respectable sensitivity and specificity, each with equivalent diagnostic competence. Explicit scoring systems did not perform as well as other diagnostic testing methods. Assessment of an adnexal mass by colour Doppler technology was neither as sensitive nor as specific as simple ultrasonography. Of the three imaging modalities considered, MRI appeared to perform the best, although results were not statistically different from CT. PET did not perform as well as either MRI or CT. The measurement of the CA-125 tumour marker appears to be less reliable than other available assessment methods. The best available evidence was collected and included in this rigorous systematic review and meta-analysis. The abundant evidentiary base provided the context and direction for the diagnosis of early-stage ovarian cancer. Copyright © 2012 Elsevier Inc. All rights reserved.
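The bivariate random-effects model pools sensitivity and specificity jointly and is involved to implement; as a rough illustration of the pooling step alone, here is a fixed-effect inverse-variance average on the logit scale, a deliberate simplification of (not a substitute for) the model the review used, with hypothetical study counts:

```python
import math

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling of proportions on the logit scale.
    A simplified univariate stand-in for a bivariate random-effects
    meta-analysis (illustrative only)."""
    num = den = 0.0
    for k, n in zip(events, totals):
        k = min(max(k, 0.5), n - 0.5)          # continuity correction at 0 or n
        logit = math.log(k / (n - k))
        var = 1.0 / k + 1.0 / (n - k)          # approximate variance of the logit
        num, den = num + logit / var, den + 1.0 / var
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Hypothetical per-study true-positive counts and numbers of diseased cases
print(round(pooled_proportion([28, 45, 17], [32, 50, 25]), 3))
```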
Gray, D T; Weinstein, M C
1998-01-01
Decision and cost-utility analyses considered the tradeoffs of treating patent ductus arteriosus (PDA) using conventional surgery versus transcatheter implantation of the Rashkind occluder. Physicians and informed lay parents assigned utility scores to procedure success/complications combinations seen in prognostically similar pediatric patients with isolated PDA treated from 1982 to 1987. Utility scores multiplied by outcome frequencies from a comparative study generated expected utility values for the two approaches. Cost-utility analyses combined these results with simulated provider cost estimates from 1989. On a 0-100 scale (worst to best observed outcome), the median expected utility for surgery was 99.96, versus 98.88 for the occluder. Results of most sensitivity analyses also slightly favored surgery. Expected utility differences based on 1987 data were minimal. With a mean overall simulated cost of $8,838 vs $12,466 for the occluder, surgery was favored in most cost-utility analyses. Use of the inherently less invasive but less successful, more risky, and more costly occluder approach conferred no apparent net advantage in this study. Analyses of comparable current data would be informative.
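The expected utility computation described here is a frequency-weighted average of the elicited utility scores. A minimal sketch with hypothetical outcome frequencies chosen only to illustrate the mechanics (they reproduce the reported 99.96 by construction, not from the study's actual outcome distribution):

```python
def expected_utility(freqs_and_utils):
    """Probability-weighted mean utility over success/complication outcomes."""
    assert abs(sum(p for p, _ in freqs_and_utils) - 1.0) < 1e-9
    return sum(p * u for p, u in freqs_and_utils)

# Hypothetical outcome mix on the paper's 0-100 utility scale
surgery = [(0.96, 100.0), (0.04, 99.0)]
print(expected_utility(surgery))  # 99.96
```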
Putsathit, Papanin; Morgan, Justin; Bradford, Damien; Engelhardt, Nelly; Riley, Thomas V
2015-02-01
The Becton Dickinson (BD) PCR-based GeneOhm Cdiff assay has demonstrated a high sensitivity and specificity for detecting Clostridium difficile. Recently, the BD Max platform, using the same principles as BD GeneOhm, has become available in Australia. This study aimed to investigate the sensitivity and specificity of the BD Max Cdiff assay for the detection of toxigenic C. difficile in an Australian setting. Between December 2013 and January 2014, 406 stool specimens from 349 patients were analysed with the BD Max Cdiff assay. Direct and enrichment toxigenic culture were performed on bioMérieux ChromID C. difficile agar as a reference method. Isolates from specimens with discrepant results were further analysed with an in-house PCR to detect the presence of toxin genes. The overall prevalence of toxigenic C. difficile was 7.2%. Concordance between the BD Max assay and enrichment culture was 98.5%. The sensitivity, specificity, positive predictive value and negative predictive value for the BD Max Cdiff assay were 95.5%, 99.0%, 87.5% and 99.7%, respectively, when compared to direct culture, and 91.7%, 99.0%, 88.0% and 99.4%, respectively, when compared to enrichment culture. The new BD Max Cdiff assay appeared to be an excellent platform for rapid and accurate detection of toxigenic C. difficile.
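Sensitivity, specificity, PPV and NPV all derive from one 2×2 table against the culture reference. A small helper, with hypothetical counts (the abstract does not report the raw table):

```python
def diagnostics(tp, fp, fn, tn):
    """Standard accuracy measures from a 2x2 table versus a reference method."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a 406-specimen study with ~7% prevalence
print(diagnostics(tp=22, fp=3, fn=2, tn=379))
```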
White, J M L; McFadden, J P; White, I R
2008-03-01
Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. To perform a retrospective analysis of the results of 241 subjects who were patch tested twice in a monocentre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended Baseline Series of contact allergens (European) was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significant, increased rate of positivity on the second reading compared with the first (P=0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances with a wide interval between both tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women in the age range 40-60 years. We suspect that active sensitization to fragrance mix I may occur. Similar published analysis from another large group using standard methodology supports our data.
The novel gene tank, a tumor suppressor homolog, regulates ethanol sensitivity in Drosophila.
Devineni, Anita V; Eddison, Mark; Heberlein, Ulrike
2013-05-08
In both mammalian and insect models of ethanol intoxication, high doses of ethanol induce motor impairment and eventually sedation. Sensitivity to the sedative effects of ethanol is inversely correlated with risk for alcoholism. However, the genes regulating ethanol sensitivity are largely unknown. Based on a previous genetic screen in Drosophila for ethanol sedation mutants, we identified a novel gene, tank (CG15626), the homolog of the mammalian tumor suppressor EI24/PIG8, which has a strong role in regulating ethanol sedation sensitivity. Genetic and behavioral analyses revealed that tank acts in the adult nervous system to promote ethanol sensitivity. We localized the function of tank in regulating ethanol sensitivity to neurons within the pars intercerebralis that have not been implicated previously in ethanol responses. We show that acutely manipulating the activity of all tank-expressing neurons, or of pars intercerebralis neurons in particular, alters ethanol sensitivity in a sexually dimorphic manner, since neuronal activation enhanced ethanol sedation in males, but not females. Finally, we provide anatomical evidence that tank-expressing neurons form likely synaptic connections with neurons expressing the neural sex determination factor fruitless (fru), which have been implicated recently in the regulation of ethanol sensitivity. We suggest that a functional interaction with fru neurons, many of which are sexually dimorphic, may account for the sex-specific effect induced by activating tank neurons. Overall, we have characterized a novel gene and corresponding set of neurons that regulate ethanol sensitivity in Drosophila.
Sunflower seeds as eliciting agents of Compositae dermatitis.
Paulsen, Evy; El-Houri, Rime B; Andersen, Klaus E; Christensen, Lars P
2015-03-01
Sunflowers may cause dermatitis because of allergenic sesquiterpene lactones (SLs). Contact sensitization to sunflower seeds has also been reported, but the allergens are unknown. To analyse sunflower seeds for the presence of SLs and to assess the prevalence of sunflower sensitization in Compositae-allergic individuals. Sunflower-sensitive patients were identified by aimed patch testing. A dichloromethane extract of whole sunflower seeds was analysed by liquid chromatography-mass spectrometry and high-performance liquid chromatography. The prevalence of sensitivity to sunflower in Compositae-allergic individuals was 56%. A solvent wash of whole sunflower seeds yielded an extract containing SLs, the principal component tentatively being identified as argophyllin A or B, other SLs being present in minute amounts. The concentration of SLs on the sunflower seeds is considered high enough to elicit dermatitis in sensitive persons, and it seems appropriate to warn Compositae-allergic subjects against handling sunflower seeds. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Magyar, Caroline I; Pandolfi, Vincent; Dill, Charles A
2012-02-01
This study investigated the psychometric properties of the Social Communication Questionnaire (SCQ) in a sample of children with Down syndrome (DS), many of whom had a co-occurring autism spectrum disorder (ASD). The SCQ is a widely used ASD screening measure; however, its measurement properties have not been comprehensively evaluated specifically in children with DS, a group that seems to be at higher risk for an ASD. Exploratory and confirmatory factor analyses, scale reliability, convergent and discriminant correlations, significance tests between groups of children with DS and DS + ASD, and diagnostic accuracy analyses were conducted. Factor analyses identified 2 reliable factors that we labeled Social-Communication and Stereotyped Behavior and Unusual Interests. Pearson correlations with Autism Diagnostic Interview-Revised subscales indicated support for the SCQ's convergent validity and some support for the discriminant validity of the factor-based scales. Significance tests and receiver operating characteristic analyses indicated that children with DS + ASD obtained significantly higher SCQ factor-based and total scores than children with DS alone, and that the SCQ Total Score evidenced good sensitivity and adequate specificity. Results indicated initial psychometric support for the SCQ as an ASD screening measure in children with DS. The SCQ should be considered as part of a multimethod evaluation when screening children with DS.
Le, Long Khanh-Dao; Barendregt, Jan J; Hay, Phillipa; Sawyer, Susan M; Hughes, Elizabeth K; Mihalopoulos, Cathrine
2017-12-01
Anorexia nervosa (AN) is a prevalent, serious mental disorder. We aimed to evaluate the cost-effectiveness of family-based treatment (FBT) compared to adolescent-focused individual therapy (AFT) or no intervention within the Australian healthcare system. A Markov model was developed to estimate the cost and disability-adjusted life-year (DALY) averted of FBT relative to comparators over 6 years from the health system perspective. The target population was 11-18 year olds with AN of relatively short duration. Uncertainty and sensitivity analyses were conducted to test model assumptions. Results are reported as incremental cost-effectiveness ratios (ICER) in 2013 Australian dollars per DALY averted. FBT was less costly than AFT. Relative to no intervention, the mean ICER of FBT and AFT was $5,089 (95% uncertainty interval (UI): dominant to $16,659) and $51,897 ($21,591 to $1,712,491) per DALY averted. FBT and AFT are 100% and 45% likely to be cost-effective, respectively, at a threshold of AUD$50,000 per DALY averted. Sensitivity analyses indicated that excluding hospital costs led to increases in the ICERs but the conclusion of the study did not change. FBT is the most cost-effective among treatment arms, whereas AFT was not cost-effective compared to no intervention. Further research is required to verify this result. © 2017 Wiley Periodicals, Inc.
Brecker, Stephen; Mealing, Stuart; Padhiar, Amie; Eaton, James; Sculpher, Mark; Busca, Rachele; Bosmans, Johan; Gerckens, Ulrich J; Wenaweser, Peter; Tamburino, Corrado; Bleiziffer, Sabine; Piazza, Nicolo; Moat, Neil; Linke, Axel
2014-01-01
Objective: To use patient-level data from the ADVANCE study to evaluate the cost-effectiveness of transcatheter aortic valve implantation (TAVI) compared to medical management (MM) in patients with severe aortic stenosis from the perspective of the UK NHS. Methods: A published decision-analytic model was adapted to include information on TAVI from the ADVANCE study. Patient-level data informed the choice as well as the form of mathematical functions that were used to model all-cause mortality, health-related quality of life and hospitalisations. TAVI-related resource use protocols were based on the ADVANCE study. MM was modelled on publicly available information from the PARTNER-B study. The outcome measures were incremental cost-effectiveness ratios (ICERs) estimated at a range of time horizons with benefits expressed as quality-adjusted life-years (QALYs). Extensive sensitivity/subgroup analyses were undertaken to explore the impact of uncertainty in key clinical areas. Results: Using a 5-year time horizon, the ICER for the comparison of all ADVANCE to all PARTNER-B patients was £13 943 per QALY gained. For the subset of ADVANCE patients classified as high risk (Logistic EuroSCORE >20%), the ICER was £17 718 per QALY gained. The ICER was below £30 000 per QALY gained in all sensitivity analyses relating to choice of MM data source and alternative modelling approaches for key parameters. When the time horizon was extended to 10 years, all ICERs generated in all analyses were below £20 000 per QALY gained. Conclusion: TAVI is highly likely to be a cost-effective treatment for patients with severe aortic stenosis. PMID:25349700
Mortality sensitivity in life-stage simulation analysis: A case study of southern sea otters
Gerber, L.R.; Tinker, M.T.; Doak, D.F.; Estes, J.A.; Jessup, David A.
2004-01-01
Currently, there are no generally recognized approaches for linking detailed mortality and pathology data to population-level analyses of extinction risk. We used a combination of analytical and simulation-based analyses to examine 20 years of age- and sex-specific mortality data for southern sea otters (Enhydra lutris), and we applied results to project the efficacy of alternative conservation strategies. Population recovery of the southern sea otter has been slow (rate of population increase λ = 1.05) compared to other recovering populations (λ = 1.17-1.20), and the population declined (λ = 0.975) between 1995 and 1999. Age-based Leslie matrices were developed to explore explanations for the slow recovery and recent decline in the southern sea otter population. An elasticity analysis was performed to predict effects of proportional changes in stage-specific reproductive or survival rates on the rate of population increase. A life-stage simulation analysis (LSA) was developed to evaluate the impact of changing age- and cause-specific mortality rates on λ. The information used to develop these models was derived from death assemblage, pathology, and live population census data to examine the sensitivity of sea otter population growth to different sources of mortality (e.g., disease and starvation, direct human take [fisheries, gun shot, boat strike, oil pollution], mating trauma and intraspecific aggression, shark bites, and unknown). We used resampling simulations to generate random combinations of vital rates for a large number of matrix replicates and drew on these to estimate potential effects of mortality sources on population growth (λ). Our analyses suggest management actions that are likely and unlikely to promote recovery of the southern sea otter and, more broadly, indicate a methodology to better utilize cause-of-death data in conservation decision-making.
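For readers unfamiliar with elasticity analysis, the minimal sketch below computes λ and the elasticity matrix for a toy stage-structured matrix; the entries are illustrative only, not the sea otter vital rates estimated in the study.

```python
import numpy as np

# Toy 3-stage projection matrix (fecundities in the top row, survival below);
# illustrative values only, not the study's sea otter estimates.
A = np.array([[0.0, 0.8, 1.1],
              [0.7, 0.0, 0.0],
              [0.0, 0.85, 0.9]])

vals, W = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                              # asymptotic growth rate lambda
w = np.abs(W[:, i].real)                        # stable stage distribution
vals_t, V = np.linalg.eig(A.T)
v = np.abs(V[:, np.argmax(vals_t.real)].real)   # reproductive values

S = np.outer(v, w) / (v @ w)                    # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                               # elasticities: proportional sensitivities
print(f"lambda = {lam:.3f}")
print(E.round(3))                               # large entries identify the rates driving lambda
```

Elasticities sum to 1 across the matrix, which makes them convenient for ranking vital rates; an LSA extends this idea by resampling vital rates and attributing variation in λ to cause-specific mortality.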
Moriwaki, K; Noto, S
2017-02-01
A model-based cost-effectiveness analysis was performed to evaluate the cost-effectiveness of secondary fracture prevention by an osteoporosis liaison service (OLS) relative to no therapy in patients with osteoporosis and a history of hip fracture. Secondary fracture prevention by OLS is cost-effective in Japanese women with osteoporosis who have suffered a hip fracture. The purpose of this study was to estimate, from the perspective of Japan's healthcare system, the cost-effectiveness of secondary fracture prevention by OLS relative to no therapy in patients with osteoporosis and a history of hip fracture. A patient-level state transition model was developed to predict lifetime costs and quality-adjusted life years (QALYs) in patients with or without secondary fracture prevention by OLS. The incremental cost-effectiveness ratio (ICER) of secondary fracture prevention compared with no therapy was estimated. Sensitivity analyses were performed to examine the influence of parameter uncertainty on the base-case results. Compared with no therapy, secondary fracture prevention in patients aged 65 years with a T-score of -2.5 resulted in an additional lifetime cost of $3396 per person and conferred an additional 0.118 QALYs, resulting in an ICER of $28,880 per QALY gained. Deterministic sensitivity analyses showed that treatment duration and offset time strongly affect the cost-effectiveness of OLS. According to the results of scenario analyses, secondary fracture prevention by OLS was cost-saving compared with no therapy in patients with a family history of hip fracture and high alcohol intake. In addition, secondary fracture prevention is less expensive than no therapy in high-risk patients with multiple risk factors.
NASA Astrophysics Data System (ADS)
Singh, Gyanender; Terrani, Kurt; Katoh, Yutai
2018-02-01
SiC/SiC composites are considered among the leading candidates for accident tolerant fuel cladding in light water reactors. However, when SiC-based materials are exposed to neutron irradiation, they experience significant changes in dimensions and physical properties. Under a large heat flux application (i.e. fuel cladding), the non-uniform changes in dimensions and physical properties lead to a build-up of stresses in the structure over time. To ensure reliable and safe operation of such a structure it is important to assess its thermo-mechanical performance under in-reactor conditions of irradiation and elevated temperature. In this work, the foundation for 3D thermo-mechanical analysis of SiC/SiC cladding is put in place and a set of analyses with simplified boundary conditions has been performed. The analyses were carried out with two different codes that were benchmarked against one another and against prior results in the literature. A constitutive model is constructed and solved numerically to predict the stress distribution and variation in the cladding under normal operating conditions. The dependence of dimensional and physical property changes on irradiation and temperature has been incorporated. These models may now be modified to take into account the axial and circumferential variation in neutron and heat flux to fully account for 3D effects. The results from the simple analyses show the development of high tensile stresses, especially in the circumferential and axial directions, at the inner region of the cladding. Based on the results obtained, design guidelines are recommended. Because the physical and mechanical properties of SiC/SiC composite material are uncertain and potentially tailorable, a sensitivity analysis was conducted. The results establish a ranking of the properties based on the extent to which they influence the temperature and the stresses.
Cost-effectiveness of CT screening in the National Lung Screening Trial.
Black, William C; Gareen, Ilana F; Soneji, Samir S; Sicks, JoRean D; Keeler, Emmett B; Aberle, Denise R; Naeim, Arash; Church, Timothy R; Silvestri, Gerard A; Gorelick, Jeremy; Gatsonis, Constantine
2014-11-06
The National Lung Screening Trial (NLST) showed that screening with low-dose computed tomography (CT) as compared with chest radiography reduced lung-cancer mortality. We examined the cost-effectiveness of screening with low-dose CT in the NLST. We estimated mean life-years, quality-adjusted life-years (QALYs), costs per person, and incremental cost-effectiveness ratios (ICERs) for three alternative strategies: screening with low-dose CT, screening with radiography, and no screening. Estimations of life-years were based on the number of observed deaths that occurred during the trial and the projected survival of persons who were alive at the end of the trial. Quality adjustments were derived from a subgroup of participants who were selected to complete quality-of-life surveys. Costs were based on utilization rates and Medicare reimbursements. We also performed analyses of subgroups defined according to age, sex, smoking history, and risk of lung cancer and performed sensitivity analyses based on several assumptions. As compared with no screening, screening with low-dose CT cost an additional $1,631 per person (95% confidence interval [CI], 1,557 to 1,709) and provided an additional 0.0316 life-years per person (95% CI, 0.0154 to 0.0478) and 0.0201 QALYs per person (95% CI, 0.0088 to 0.0314). The corresponding ICERs were $52,000 per life-year gained (95% CI, 34,000 to 106,000) and $81,000 per QALY gained (95% CI, 52,000 to 186,000). However, the ICERs varied widely in subgroup and sensitivity analyses. We estimated that screening for lung cancer with low-dose CT would cost $81,000 per QALY gained, but we also determined that modest changes in our assumptions would greatly alter this figure. The determination of whether screening outside the trial will be cost-effective will depend on how screening is implemented. (Funded by the National Cancer Institute; NLST ClinicalTrials.gov number, NCT00047385.).
Sensitivity to Uncertainty in Asteroid Impact Risk Assessment
NASA Astrophysics Data System (ADS)
Mathias, D.; Wheeler, L.; Prabhu, D. K.; Aftosmis, M.; Dotson, J.; Robertson, D. K.
2015-12-01
The Engineering Risk Assessment (ERA) team at NASA Ames Research Center is developing a physics-based impact risk model for probabilistically assessing threats from potential asteroid impacts on Earth. The model integrates probabilistic sampling of asteroid parameter ranges with physics-based analyses of entry, breakup, and impact to estimate damage areas and casualties from various impact scenarios. Assessing these threats is a highly coupled, dynamic problem involving significant uncertainties in the range of expected asteroid characteristics, how those characteristics may affect the level of damage, and the fidelity of various modeling approaches and assumptions. The presented model is used to explore the sensitivity of impact risk estimates to these uncertainties in order to gain insight into what additional data or modeling refinements are most important for producing effective, meaningful risk assessments. In the extreme cases of very small or very large impacts, the results are generally insensitive to many of the characterization and modeling assumptions. However, the nature of the sensitivity can change across moderate-sized impacts. Results will focus on the value of additional information in this critical, mid-size range, and how this additional data can support more robust mitigation decisions.
Resilience through adaptation.
Ten Broeke, Guus A; van Voorn, George A K; Ligtenberg, Arend; Molenaar, Jaap
2017-01-01
Adaptation of agents through learning or evolution is an important component of the resilience of Complex Adaptive Systems (CAS). Without adaptation, the flexibility of such systems to cope with outside pressures would be much lower. To study the capabilities of CAS to adapt, social simulations with agent-based models (ABMs) provide a helpful tool. However, the value of ABMs for studying adaptation depends on the availability of methodologies for sensitivity analysis that can quantify resilience and adaptation in ABMs. In this paper we propose a sensitivity analysis methodology that is based on comparing time-dependent probability density functions of output of ABMs with and without agent adaptation. The differences between the probability density functions are quantified by the so-called earth-mover's distance. We use this sensitivity analysis methodology to quantify the probability of occurrence of critical transitions and other long-term effects of agent adaptation. To test the potential of this new approach, it is used to analyse the resilience of an ABM of adaptive agents competing for a common-pool resource. Adaptation is shown to contribute positively to the resilience of this ABM. If adaptation proceeds sufficiently fast, it may delay or avert the collapse of this system.
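A minimal sketch of the distance underlying this methodology: scipy's wasserstein_distance (the one-dimensional earth-mover's distance) compares output samples from runs with and without adaptation. The synthetic draws below stand in for ABM output at a single time point; in the paper's setting the comparison is repeated across time points to obtain a time-dependent sensitivity measure.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)

# Stand-ins for ABM output (e.g. resource level) across Monte Carlo runs,
# with and without agent adaptation; synthetic data, not the paper's model.
runs_adaptive = rng.normal(0.8, 0.1, 500)
runs_static = np.concatenate([rng.normal(0.75, 0.1, 350),
                              rng.normal(0.1, 0.05, 150)])  # partial collapse mode

emd = wasserstein_distance(runs_adaptive, runs_static)
p_collapse_static = (runs_static < 0.3).mean()   # illustrative collapse criterion
print(f"earth-mover's distance: {emd:.3f}")
print(f"collapse probability without adaptation: {p_collapse_static:.2f}")
```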
Peretti, Leandro E; Gonzalez, Verónica D G; Marcipar, Iván S; Gugliotta, Luis M
2017-07-01
The aim of this work was to obtain a reagent based on latex particles for ruling out acute toxoplasmosis in pregnant women by immunoagglutination (IA). Latex-protein complexes (LPC) were previously synthesized by coupling the recombinant Toxoplasma gondii protein P22Ag and the homogenate of the parasite to latex particles of different size, chemical functionality and charge density. LPC were tested in IA assays against a panel of 72 pregnant women serum samples. Results were analysed through receiver operating characteristic curves, determining the area under the curve (AUC), sensitivity, specificity, and positive and negative predictive values (PPV and NPV, respectively). The antigenicity of the proteins was not affected during sensitization by either physical adsorption or covalent coupling. The best results, in terms of discriminating low-avidity sera from chronic ones, were observed for the IA test based on latex particles with carboxyl functionality and the recombinant P22Ag, which gave an AUC of 0.94, a sensitivity of 100% and an NPV of 100%. The proposed test could therefore be useful for toxoplasmosis diagnosis in pregnant women, with the advantages of being cheap, rapid and easy to implement.
Srivastava, Amrita; Singh, Anumeha; Singh, Satya S; Mishra, Arun K
2017-04-16
Comparative microbial survival is most easily appreciated by evaluating adaptive strategies during stress. In the present experiment, antioxidative and whole-cell proteome variations, based on spectrophotometric analysis, SDS-PAGE and 2-dimensional gel electrophoresis, were analysed in salt-tolerant and salt-sensitive Frankia strains. This is the first report of the proteomic basis underlying salt tolerance in these newly isolated Frankia strains from Hippophae salicifolia D. Don. The salt-tolerant strain HsIi10 shows a higher increment in the contents of superoxide dismutase, catalase and ascorbate peroxidase compared with the salt-sensitive strain HsIi8. 2-DGE revealed distinct protein profiles for the salt-tolerant and salt-sensitive strains. Proteomic confirmation of salt tolerance in strains with an inbuilt efficiency for thriving in nitrogen-deficient locales is a definite advantage for these microbes, and would be equally beneficial for improving soil nitrogen status. Efficient protein regulation in HsIi10 suggests further exploration of its potential use as a biofertilizer in saline soils.
Silva, William P P; Stramandinoli-Zanicotti, Roberta T; Schussel, Juliana L; Ramos, Gyl H A; Ioshi, Sergio O; Sassi, Laurindo M
2016-11-01
Objective: This article concerns evaluation of the sensitivity, specificity and accuracy of FNAB for pre-surgical diagnosis of benign and malignant lesions of major and minor salivary glands of patients treated in the Department of Head and Neck Surgery of Erasto Gartner Hospital. Methods: This retrospective study analyzed medical records from January 2006 to December 2011 from patients with salivary gland lesions who underwent preoperative FNAB and, after surgical excision of the lesion, histopathological examination. Results: The study had a cohort of 130 cases, but 34 cases (26.2%) were considered unsatisfactory regarding cytology analyses. Based on the data, sensitivity was 66.7% (6/9), specificity was 81.6% (71/87), accuracy was 80.2% (77/96), the positive predictive value was 66.7% (6/9) and the negative predictive value was 81.6% (71/87). Conclusion: Despite the high rate of inadequate samples obtained by FNAB in this study, the technique offers high specificity and accuracy and acceptable sensitivity.
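For reference, the standard 2x2 confusion-matrix formulas behind such figures are sketched below, using the counts implied by the reported sensitivity (6/9), specificity (71/87) and accuracy (77/96). Note that the standard PPV and NPV formulas, TP/(TP+FP) and TN/(TN+FN), give values different from the ones quoted above, which appear to repeat the sensitivity and specificity fractions.

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard test-performance metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts implied by the abstract: 9 malignant (6 detected), 87 benign (71 correctly negative)
print(diagnostic_metrics(tp=6, fp=16, tn=71, fn=3))
```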
Modelling of intermittent microwave convective drying: parameter sensitivity
NASA Astrophysics Data System (ADS)
Zhang, Zhijun; Qin, Wenchao; Shi, Bin; Gao, Jingxin; Zhang, Shiwei
2017-06-01
The reliability of the predictions of a mathematical model is a prerequisite to its utilization. A multiphase porous media model of intermittent microwave convective drying is developed based on the literature. The model considers the liquid water, gas and solid matrix inside the food, and is simulated using COMSOL software. Parameter sensitivity is analysed by changing parameter values by ±20%, with a few exceptions. The sensitivity analysis for the microwave power level process shows that ambient temperature, effective gas diffusivity and the evaporation rate constant each have significant effects on the process, whereas the surface mass and heat transfer coefficients, the relative and intrinsic permeability of the gas, and the capillary diffusivity of water do not have a considerable effect. The evaporation rate constant shows minimal sensitivity to a ±20% change in value, until it is changed 10-fold. In all results, the temperature and vapour pressure curves show the same trends as the moisture content curve; however, the water saturation at the medium surface and in the centre shows different behaviour. Vapour transfer is the major mass transfer phenomenon affecting the drying process.
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
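The first-order Sobol index that such meta-models approximate can also be estimated directly by Monte Carlo, at far higher computational cost; that expense is the motivation for SVR and PCE surrogates. The sketch below uses the pick-freeze estimator of Saltelli et al. on the Ishigami function, a standard GSA test case; it is the brute-force baseline, not the paper's SVR method.

```python
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    """Ishigami test function, a standard GSA benchmark."""
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(2)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent input samples
B = rng.uniform(-np.pi, np.pi, (n, d))
yA, yB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "pick-freeze": swap only column i
    yABi = ishigami(ABi)
    Si = np.mean(yB * (yABi - yA)) / var # first-order estimator (Saltelli et al. 2010)
    print(f"S{i+1} = {Si:.3f}")          # expected roughly 0.31, 0.44, 0.00
```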
Pilon, Dominic; Queener, Marykay; Lefebvre, Patrick; Ellis, Lorie A
2016-08-01
To calculate costs per median overall survival (OS) month in chemotherapy-naïve patients with metastatic castration-resistant prostate cancer (mCRPC) treated with abiraterone acetate plus prednisone (AA + P) or enzalutamide. Median treatment duration and median OS data from published Phase 3 clinical trials and prescribing information were used to calculate costs per median OS month based on wholesale acquisition costs (WACs) for patients with mCRPC treated with AA + P or enzalutamide. Sensitivity analyses were performed to understand how variations in treatment duration and treatment-related monitoring recommendations influenced cost per median OS month. Cost-effectiveness estimates of other Phase 3 trial outcomes were also explored: cost per month of chemotherapy avoided and per median radiographic progression-free survival (rPFS) month. The results demonstrated that AA + P has a lower cost per median OS month than enzalutamide ($3231 vs 4512; 28% reduction), based on the following assumptions: median treatment duration of 14 months for AA + P and 18 months for enzalutamide, median OS of 34.7 months for AA + P and 35.3 months for enzalutamide, and WAC per 30-day supply of $8007.17 for AA + P vs $8847.98 for enzalutamide. Sensitivity analyses showed that accounting for recommended treatment-related monitoring costs or assuming identical treatment durations for AA + P and enzalutamide (18 months) resulted in costs per median OS month 8-27% lower for AA + P than for enzalutamide. Costs per month of chemotherapy avoided were $4448 for AA + P and $5688 for enzalutamide, while costs per month to achieve median rPFS were $6794 for AA + P and $7963 for enzalutamide. This cost-effectiveness analysis demonstrated that costs per median OS month, along with costs of other Phase 3 trial outcomes, were lower for AA + P than for enzalutamide. The findings were robust to sensitivity analyses. These results have important implications for population health decision-makers evaluating the relative value of therapies for mCRPC patients.
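The core metric here is simple arithmetic: drug acquisition cost over the median treatment duration, divided by median OS months. A short sketch using the figures quoted above reproduces the reported values.

```python
def cost_per_median_os_month(wac_30day: float, tx_months: float, os_months: float) -> float:
    """Drug acquisition cost over median treatment duration, per median OS month."""
    return wac_30day * tx_months / os_months

aa_p = cost_per_median_os_month(8007.17, 14, 34.7)   # ~= $3,231 for AA + P
enza = cost_per_median_os_month(8847.98, 18, 35.3)   # ~= $4,512 for enzalutamide
print(f"AA+P: ${aa_p:,.0f}/OS month, enzalutamide: ${enza:,.0f}/OS month "
      f"({1 - aa_p / enza:.0%} lower)")              # ~28% reduction, as reported
```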
Gaziano, Thomas A; Fonarow, Gregg C; Claggett, Brian; Chan, Wing W; Deschaseaux-Voinet, Celine; Turner, Stuart J; Rouleau, Jean L; Zile, Michael R; McMurray, John J V; Solomon, Scott D
2016-09-01
The angiotensin receptor neprilysin inhibitor sacubitril/valsartan was associated with a reduction in cardiovascular mortality, all-cause mortality, and hospitalizations compared with enalapril. Sacubitril/valsartan has been approved for use in heart failure (HF) with reduced ejection fraction in the United States, and cost has been suggested as one factor that will influence the use of this agent. To estimate the cost-effectiveness of sacubitril/valsartan vs enalapril in the United States, data from US adults (mean [SD] age, 63.8 [11.5] years) with HF with reduced ejection fraction and characteristics similar to those in the PARADIGM-HF trial were used as inputs for a 2-state Markov model that simulated HF. Risks of all-cause mortality and hospitalization from HF or other reasons were estimated over a 30-year time horizon. Quality of life was based on trial EQ-5D scores. Hospital costs combined Medicare and private insurance reimbursement rates; medication costs included the wholesale acquisition cost for sacubitril/valsartan and enalapril. A discount rate of 3% was used. Sensitivity analyses were performed on key inputs including hospital costs, mortality benefit, hazard ratio for hospitalization reduction, drug costs, and quality-of-life estimates. Outcomes were hospitalizations, quality-adjusted life-years (QALYs), costs, and incremental costs per QALY gained. The model calculated that there would be 220 fewer hospital admissions per 1000 patients with HF treated with sacubitril/valsartan vs enalapril over 30 years. The incremental costs and QALYs gained with sacubitril/valsartan treatment were estimated at $35 512 and 0.78, respectively, compared with enalapril, equating to an incremental cost-effectiveness ratio (ICER) of $45 017 per QALY for the base case. Sensitivity analyses demonstrated ICERs ranging from $35 357 to $75 301 per QALY. For eligible patients with HF with reduced ejection fraction, the model calculated that sacubitril/valsartan would increase life expectancy at an ICER consistent with other high-value accepted cardiovascular interventions, and sensitivity analyses demonstrated it would remain cost-effective vs enalapril.
Park, Douglas L; Coates, Scott; Brewer, Vickery A; Garber, Eric A E; Abouzied, Mohamed; Johnson, Kurt; Ritter, Bruce; McKenzie, Deborah
2005-01-01
Performance Tested Method multiple laboratory validations for the detection of peanut protein in 4 different food matrixes were conducted under the auspices of the AOAC Research Institute. In this blind study, 3 commercially available ELISA test kits were validated: Neogen Veratox for Peanut, R-Biopharm RIDASCREEN FAST Peanut, and Tepnel BioKits for Peanut Assay. The food matrixes used were breakfast cereal, cookies, ice cream, and milk chocolate spiked at 0 and 5 ppm peanut. Analyses of the samples were conducted by laboratories representing industry and international and U.S. governmental agencies. All 3 commercial test kits successfully identified spiked and peanut-free samples. The validation study required 60 analyses on test samples at the target level of 5 microg peanut/g food and 60 analyses at a peanut-free level, which was designed to ensure that the lower 95% confidence limit for the sensitivity and specificity would not be <90%. The probability that a test sample contains an allergen, given a prevalence rate of 5% and a positive test result using a single test kit analysis with 95% sensitivity and 95% specificity (as demonstrated for these test kits), would be 50%. When 2 test kits are run simultaneously on all samples, the probability becomes 95%. It is therefore recommended that all field samples be analyzed with at least 2 of the validated kits.
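The quoted 50% and 95% probabilities follow directly from Bayes' theorem; the sketch below reproduces them, treating the two kits as independent and requiring both to be positive (an assumption consistent with the stated figures).

```python
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Posterior probability the sample truly contains the allergen, given a positive result."""
    p_pos = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_pos

print(f"one kit:  {ppv(0.05, 0.95, 0.95):.2f}")            # ~0.50
# Two independent kits, both required positive: joint sensitivity 0.95**2,
# joint false-positive rate 0.05**2.
print(f"two kits: {ppv(0.05, 0.95**2, 1 - 0.05**2):.2f}")  # ~0.95
```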
Mainou, Maria; Madenidou, Anastasia-Vasiliki; Liakos, Aris; Paschos, Paschalis; Karagiannis, Thomas; Bekiari, Eleni; Vlachaki, Efthymia; Wang, Zhen; Murad, Mohammad Hassan; Kumar, Shaji; Tsapas, Apostolos
2017-06-01
We performed a systematic review and meta-regression analysis of randomized controlled trials to investigate the association between response to initial treatment and survival outcomes in patients with newly diagnosed multiple myeloma (MM). Response outcomes included complete response (CR) and the combined outcome of CR or very good partial response (VGPR), while survival outcomes were overall survival (OS) and progression-free survival (PFS). We used random-effects meta-regression models and conducted sensitivity analyses based on definition of CR and study quality. Seventy-two trials were included in the systematic review, 63 of which contributed data to meta-regression analyses. There was no association between OS and CR in patients without autologous stem cell transplant (ASCT) (regression coefficient: .02, 95% confidence interval [CI] -0.06, 0.10), in patients undergoing ASCT (-.11, 95% CI -0.44, 0.22) or in trials comparing ASCT with non-ASCT patients (.04, 95% CI -0.29, 0.38). Similarly, OS did not correlate with the combined metric of CR or VGPR, and no association was evident between response outcomes and PFS. Sensitivity analyses yielded similar results. This meta-regression analysis suggests that there is no association between conventional response outcomes and survival in patients with newly diagnosed MM.
Cost-effectiveness of chronic hepatitis C treatment with thymosin alpha-1.
García-Contreras, Fernando; Nevárez-Sida, Armando; Constantino-Casas, Patricia; Abud-Bastida, Fernando; Garduño-Espinosa, Juan
2006-07-01
More than one million individuals in Mexico are infected with hepatitis C virus (HCV), and 80% are at risk of developing a chronic infection that could lead to hepatic cirrhosis and other complications that impact quality of life and institutional costs. The objective of the study was to determine the most cost-effective treatment against HCV among the following: peginterferon, peginterferon plus ribavirin, peginterferon plus ribavirin plus thymosin, and no treatment. We carried out a cost-effectiveness analysis from the institutional perspective, using a 45-year time frame and a 3% discount rate for costs and effectiveness. We employed a Bayesian-focused decision tree and a Markov model. One- and two-way sensitivity analyses were performed, as well as threshold-oriented and probabilistic analyses, and we obtained acceptability curves and net health benefits. Triple therapy (peginterferon plus ribavirin plus thymosin alpha-1) was dominant, with lower cost and higher utility relative to peginterferon plus ribavirin, peginterferon alone, and no treatment. With triple therapy the cost per unit of success was $US1,908 per quality-adjusted life-year (QALY), compared with $US2,277/QALY for peginterferon plus ribavirin, $US2,929/QALY for peginterferon alone, and $US4,204/QALY for no treatment. Sensitivity analyses confirmed the robustness of the base case. The peginterferon plus ribavirin plus thymosin alpha-1 option was dominant (lowest cost and highest effectiveness). Using no drug was the most expensive and least effective option.
Rönnberg, Jerker; Lunner, Thomas; Ng, Elaine Hoi Ning; Lidestam, Björn; Zekveld, Adriana Agatha; Sörqvist, Patrik; Lyxell, Björn; Träff, Ulf; Yumba, Wycliffe; Classon, Elisabet; Hällgren, Mathias; Larsby, Birgitta; Signoret, Carine; Pichora-Fuller, M Kathleen; Rudner, Mary; Danielsson, Henrik; Stenfelt, Stefan
2016-11-01
The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only, and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION and all three contributed significantly and independently to especially the NO CONTEXT outcome scores (R² = 0.40). All LEVEL 2 factors are important theoretically as well as for clinical assessment.
A methodological investigation of hominoid craniodental morphology and phylogenetics.
Bjarnason, Alexander; Chamberlain, Andrew T; Lockwood, Charles A
2011-01-01
The evolutionary relationships of extant great apes and humans have been largely resolved by molecular studies, yet morphology-based phylogenetic analyses continue to provide conflicting results. In order to further investigate this discrepancy we present bootstrap clade support of morphological data based on two quantitative datasets, one consisting of linear measurements of the whole skull from 5 hominoid genera and the second consisting of 3D landmark data from the temporal bone of 5 hominoid genera, including 11 sub-species. Using similar protocols for both datasets, we were able to 1) compare distance-based phylogenetic methods to cladistic parsimony of quantitative data converted into discrete character states, 2) vary outgroup choice to observe its effect on phylogenetic inference, and 3) analyse male and female data separately to observe the effect of sexual dimorphism on phylogenies. Phylogenetic analysis was sensitive to methodological decisions, particularly outgroup selection, where designation of Pongo as an outgroup and removal of Hylobates resulted in greater congruence with the proposed molecular phylogeny. The performance of distance-based methods also justifies their use in phylogenetic analysis of morphological data. It is clear from our analyses that hominoid phylogenetics ought not to be used as an example of conflict between morphological and molecular data, but as an example of how outgroup and methodological choices can affect the outcome of phylogenetic analysis.
Kimura, L; Angeli, C B; Auricchio, M T B M; Fernandes, G R; Pereira, A C; Vicente, J P; Pereira, T V; Mingroni-Netto, R C
2012-01-01
Background: It has been widely suggested that analyses considering multilocus effects would be crucial to characterize the relationship between gene variability and essential hypertension (EH). Objective: To test for the presence of multilocus effects between/among seven polymorphisms (six genes) on blood pressure-related traits in African-derived semi-isolated Brazilian populations (quilombos). Methods: Analyses were carried out using a family-based design in a sample of 652 participants (97 families). Seven variants were investigated: ACE (rs1799752), AGT (rs669), ADD2 (rs3755351), NOS3 (rs1799983), GNB3 (rs5441 and rs5443), and GRK4 (rs1801058). Sensitivity analyses were further performed under a case-control design with unrelated participants only. Results: None of the investigated variants were associated individually with both systolic and diastolic BP levels (SBP and DBP, respectively) or EH (as a binary outcome). Multifactor dimensionality reduction-based techniques revealed a marginal association of the combined effect of both GNB3 variants on DBP levels in a family-based design (P = 0.040), whereas a putative NOS3-GRK4 interaction also in relation to DBP levels was observed in the case-control design only (P = 0.004). Conclusion: Our results provide limited support for the hypothesis of multilocus effects between/among the studied variants on blood pressure in quilombos. Further larger studies are needed to validate our findings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ho, Clifford Kuofei
Chemical transport through human skin can play a significant role in human exposure to toxic chemicals in the workplace, as well as to chemical/biological warfare agents in the battlefield. The viability of transdermal drug delivery also relies on chemical transport processes through the skin. Models of percutaneous absorption are needed for risk-based exposure assessments and drug-delivery analyses, but previous mechanistic models have been largely deterministic. A probabilistic, transient, three-phase model of percutaneous absorption of chemicals has been developed to assess the relative importance of uncertain parameters and processes that may be important to risk-based assessments. Penetration routes through the skin that were modeled include the following: (1) intercellular diffusion through the multiphase stratum corneum; (2) aqueous-phase diffusion through sweat ducts; and (3) oil-phase diffusion through hair follicles. Uncertainty distributions were developed for the model parameters, and a Monte Carlo analysis was performed to simulate probability distributions of mass fluxes through each of the routes. Sensitivity analyses using stepwise linear regression were also performed to identify model parameters that were most important to the simulated mass fluxes at different times. This probabilistic analysis of percutaneous absorption (PAPA) method has been developed to improve risk-based exposure assessments and transdermal drug-delivery analyses, where parameters and processes can be highly uncertain.
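A minimal sketch of the general approach described here, Monte Carlo sampling followed by regression-based sensitivity ranking, is shown below on a toy steady-state flux model. The input distributions and the model are placeholders, not the PAPA model's actual parameters, and standardized regression coefficients stand in for the stepwise procedure.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000

# Hypothetical uncertain inputs for a toy membrane-flux model (placeholders).
D = rng.lognormal(-20, 0.5, n)       # diffusivity
K = rng.lognormal(0, 0.3, n)         # partition coefficient
L = rng.uniform(10e-6, 40e-6, n)     # diffusion path length (m)
flux = D * K / L                     # steady-state flux ~ D*K/L

X = np.column_stack([D, K, L])
Xs = (X - X.mean(0)) / X.std(0)      # standardize inputs
ys = (flux - flux.mean()) / flux.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardized regression coefficients
for name, c in zip(["D", "K", "L"], src):
    print(f"SRC({name}) = {c:+.2f}") # |SRC| ranks parameter importance
```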
Salvatore, Stefania; Bramness, Jørgen G; Røislien, Jo
2016-07-12
Wastewater-based epidemiology (WBE) is a novel approach in drug use epidemiology which aims to monitor the extent of use of various drugs in a community. In this study, we investigate functional principal component analysis (FPCA) as a tool for analysing WBE data and compare it to traditional principal component analysis (PCA) and to wavelet principal component analysis (WPCA), which is more flexible temporally. We analysed temporal wastewater data from 42 European cities collected daily over one week in March 2013. The main temporal features of ecstasy (MDMA) were extracted using FPCA with both Fourier and B-spline basis functions and three different smoothing parameters, along with PCA and WPCA with different mother wavelets and shrinkage rules. The stability of FPCA was explored through bootstrapping and analysis of sensitivity to missing data. The first three principal components (PCs), functional principal components (FPCs) and wavelet principal components (WPCs) explained 87.5-99.6% of the temporal variation between cities, depending on the choice of basis and smoothing. The extracted temporal features from PCA, FPCA and WPCA were consistent. FPCA using a Fourier basis and common-optimal smoothing was the most stable and least sensitive to missing data. FPCA is a flexible and analytically tractable method for analysing temporal changes in wastewater data, and is robust to missing data. WPCA did not reveal any rapid temporal changes in the data not captured by FPCA. Overall the results suggest FPCA with Fourier basis functions and a common-optimal smoothing parameter as the most accurate approach when analysing WBE data.
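A compact way to see what FPCA with a Fourier basis does: project each city's weekly curve onto a small Fourier basis by least squares, then run ordinary PCA on the basis coefficients. The sketch below uses synthetic data and omits the roughness penalties and smoothing-parameter selection used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(7)                                     # one measurement per day
cities = rng.normal(0, 1, (42, 7)).cumsum(axis=1)    # stand-in wastewater loads

# Fourier design matrix: constant plus one weekly sine/cosine pair (smooths curves)
Phi = np.column_stack([np.ones(7),
                       np.sin(2 * np.pi * t / 7),
                       np.cos(2 * np.pi * t / 7)])
C, *_ = np.linalg.lstsq(Phi, cities.T, rcond=None)   # basis coefficients per city
C = C.T

Cc = C - C.mean(0)                                   # centre coefficients
U, s, Vt = np.linalg.svd(Cc, full_matrices=False)    # PCA via SVD
explained = s**2 / (s**2).sum()
fpc_curves = Phi @ Vt.T                              # functional PCs over the week
print("variance explained:", np.round(explained, 3))
```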
Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...
Characterization of uncertainty and sensitivity of model parameters is an essential and often overlooked facet of hydrological modeling. This paper introduces an algorithm called MOESHA that combines input parameter sensitivity analyses with a genetic algorithm calibration routin...
Clinical applications of MS-based protein quantification.
Sabbagh, Bassel; Mindt, Sonani; Neumaier, Michael; Findeisen, Peter
2016-04-01
Mass spectrometry-based assays are increasingly important in clinical laboratory medicine and are already commonly used in several areas of routine diagnostics, including therapeutic drug monitoring, toxicology, endocrinology, pediatrics, and microbiology. Accordingly, some of the most common analyses are therapeutic drug monitoring of immunosuppressants, vitamin D, steroids, newborn screening, and bacterial identification. However, MS-based quantification of peptides and proteins for routine diagnostic use remains rare, despite excellent analytical specificity and good sensitivity. Here, we give an overview of current fit-for-purpose assays for MS-based protein quantification. Advantages as well as challenges of this approach are discussed, with a focus on feasibility for routine diagnostic use.
Meher, J K; Meher, P K; Dash, G N; Raval, M K
2012-01-01
The first step in the gene identification problem based on genomic signal processing is to convert character strings into numerical sequences. These numerical sequences are then analysed spectrally, or using digital filtering techniques, for period-3 peaks, which are present in exons (coding regions) and absent in introns (non-coding regions). In this paper, we show that single-indicator sequences can be generated by encoding schemes based on physico-chemical properties. Two new methods are proposed for generating single-indicator sequences, based on hydration energy and dipole moments. The proposed methods produce high peaks at exon locations and effectively suppress false exons (intron regions showing stronger period-3 peaks than exon regions), resulting in a high discriminating factor, sensitivity and specificity.
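The period-3 property can be checked with a sliding-window DFT: for a window of length N, the power at bin k = N/3 corresponds to a period of 3 bases. The sketch below uses an arbitrary single-number encoding as a stand-in for the physico-chemical (hydration energy, dipole moment) indicator sequences proposed in the paper.

```python
import numpy as np

def period3_power(seq: str, window: int = 351, step: int = 10) -> np.ndarray:
    """Sliding-window DFT power at period 3 for a numeric indicator sequence.

    The base-to-number map below is illustrative only, standing in for the
    hydration-energy or dipole-moment encodings described in the paper.
    """
    enc = {"A": 0.1, "T": 0.2, "C": 0.3, "G": 0.4}
    x = np.array([enc[b] for b in seq])
    powers = []
    for start in range(0, len(x) - window + 1, step):
        w = x[start:start + window]
        w = w - w.mean()                       # remove DC component
        X = np.fft.fft(w)
        powers.append(abs(X[window // 3])**2)  # bin k = N/3 <=> period 3
    return np.array(powers)

# usage: a pure codon-like repeat gives a strong period-3 signal
print(period3_power("ATG" * 200)[:3])
```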
Sadeghipour, F; Veuthey, J L
1997-11-07
A rapid, sensitive and selective liquid chromatographic method with fluorimetric detection was developed for the separation and quantification of four methylenedioxylated amphetamines without interference of other drugs of abuse and common substances found in illicit tablets. The method was validated by examining linearity, precision and accuracy as well as detection and quantification limits. Methylenedioxylated amphetamines were quantified in eight tablets from illicit drug seizures and results were quantitatively compared to HPLC-UV analyses. To demonstrate the better sensitivity of the fluorimetric detection, methylenedioxylated amphetamines were analyzed in serum after a liquid-liquid extraction procedure and results were also compared to HPLC-UV analyses.
Redfield, Robert R; Scalea, Joseph R; Zens, Tiffany J; Mandelbrot, Didier A; Leverson, Glen; Kaufman, Dixon B; Djamali, Arjang
2016-10-01
We sought to determine whether the mode of sensitization in highly sensitized patients contributed to kidney allograft survival. An analysis of the United Network for Organ Sharing dataset involving all kidney transplants between 1997 and 2014 was undertaken. Highly sensitized adult kidney transplant recipients [panel reactive antibody (PRA) ≥98%] were compared with adult, primary non-sensitized and re-transplant recipients. Kaplan-Meier survival analyses were used to determine allograft survival rates. Cox proportional hazards regression analyses were conducted to determine the association of graft loss with key predictors. Fifty-three percent of highly sensitized patients transplanted were re-transplants. Pregnancy and transfusion were the only sensitizing event in 20 and 5%, respectively. The 10-year actuarial graft survival for highly sensitized recipients was 43.9% compared with 52.4% for non-sensitized patients, P < 0.001. The combination of being highly sensitized by either pregnancy or blood transfusion increased the risk of graft loss by 23% [hazard ratio (HR) 1.230, confidence interval (CI) 1.150-1.315, P < 0.001], and the combination of being highly sensitized from a prior transplant increased the risk of graft loss by 58.1% (HR 1.581, CI 1.473-1.698, P < 0.001). The mode of sensitization predicts graft survival in highly sensitized kidney transplant recipients (PRA ≥98%). Patients who are highly sensitized from re-transplants have inferior graft survival compared with patients who are highly sensitized from other modes of sensitization.
Ng, Tze Pin; Feng, Lei; Lim, Wee Shiong; Chong, Mei Sian; Lee, Tih Shih; Yap, Keng Bee; Tsoi, Tung; Liew, Tau Ming; Gao, Qi; Collinson, Simon; Kandiah, Nagaendran; Yap, Philip
2015-01-01
The Montreal Cognitive Assessment (MoCA) was developed as a screening instrument for mild cognitive impairment (MCI). We evaluated the MoCA's test performance by educational groups among older Singaporean Chinese adults. The MoCA and Mini-Mental State Examination (MMSE) were evaluated in two independent studies (clinic-based sample and community-based sample) of MCI and normal cognition (NC) controls, using receiver operating characteristic curve analyses: area under the curve (AUC), sensitivity (Sn), and specificity (Sp). The MoCA modestly discriminated MCI from NC in both study samples (AUC = 0.63 and 0.65): Sn = 0.64 and Sp = 0.36 at a cut-off of 28/29 in the clinic-based sample, and Sn = 0.65 and Sp = 0.55 at a cut-off of 22/23 in the community-based sample. The MoCA's test performance was least satisfactory in the highest (>6 years) education group: AUC = 0.50 (p = 0.98), Sn = 0.54, and Sp = 0.51 at a cut-off of 27/28. Overall, the MoCA's test performance was not better than that of the MMSE. In multivariate analyses controlling for age and gender, MCI diagnosis was associated with a <1-point decrement in MoCA score (η² = 0.010), but lower (1-6 years) and no education was associated with a 3- to 5-point decrement (η² = 0.115 and η² = 0.162, respectively). The MoCA's ability to discriminate MCI from NC was modest in this Chinese population, because it was far more sensitive to the effect of education than to MCI diagnosis.
A Risk-Based Approach for Aerothermal/TPS Analysis and Testing
NASA Technical Reports Server (NTRS)
Wright, Michael J.; Grinstead, Jay H.; Bose, Deepak
2007-01-01
The current status of aerothermal and thermal protection system modeling for civilian entry missions is reviewed. For most such missions, the accuracy of our simulations is limited not by the tools and processes currently employed, but rather by reducible deficiencies in the underlying physical models. Improving the accuracy of and reducing the uncertainties in these models will enable a greater understanding of the system level impacts of a particular thermal protection system and of the system operation and risk over the operational life of the system. A strategic plan will be laid out by which key modeling deficiencies can be identified via mission-specific gap analysis. Once these gaps have been identified, the driving component uncertainties are determined via sensitivity analyses. A Monte-Carlo based methodology is presented for physics-based probabilistic uncertainty analysis of aerothermodynamics and thermal protection system material response modeling. These data are then used to advocate for and plan focused testing aimed at reducing key uncertainties. The results of these tests are used to validate or modify existing physical models. Concurrently, a testing methodology is outlined for thermal protection materials. The proposed approach is based on using the results of uncertainty/sensitivity analyses discussed above to tailor ground testing so as to best identify and quantify system performance and risk drivers. A key component of this testing is understanding the relationship between the test and flight environments. No existing ground test facility can simultaneously replicate all aspects of the flight environment, and therefore good models for traceability to flight are critical to ensure a low risk, high reliability thermal protection system design. Finally, the role of flight testing in the overall thermal protection system development strategy is discussed.
Granados-García, Víctor; Contreras, Ana M; García-Peña, Carmen; Salinas-Escudero, Guillermo; Thein, Hla-Hla; Flores, Yvonne N
2016-01-01
We conducted a cost-effectiveness analysis of seven hepatitis C virus (HCV) testing strategies in blood donors. Three of the seven strategies were based on HCV diagnosis and reporting guidelines in Mexico and four were from previous and current recommendations outlined by the CDC. The strategies that were evaluated determine antibody levels according to the signal-to-cut-off (S/CO) ratio and use reflex Immunoblot (IMB) or HCV RNA tests to confirm true positive (TP) cases of chronic HCV infection. Costs were calculated from the perspective of the Mexican Institute of Social Security (IMSS). A decision tree model was developed to estimate the expected number of true positive cases and costs for the base-case scenarios and for the sensitivity analyses. Base-case findings indicate extended dominance of the CDC-USA2 and CDC-USA4 options by the IMSS-Mexico3 and IMSS-Mexico1 alternatives. The probabilistic sensitivity analysis results suggest that for a willingness-to-pay (WTP) range of $0-9,000 USD the IMSS-Mexico1 strategy is the most cost-effective of all strategies ($5,000 USD per TP). The IMSS-Mexico3, IMSS-Mexico2, and CDC-USA3 strategies are also cost-effective, costing between $7,800 and $8,800 USD per TP case detected. The CDC-USA1 strategy was very expensive and not cost-effective. HCV antibody testing strategies based on the classification of two or three levels of the S/CO are cost-effective procedures to identify patients who require reflex IMB or HCV RNA testing to confirm chronic HCV infection.
Synek, Alexander; Pahr, Dieter H
2018-06-01
A micro-finite element-based method to estimate the bone loading history based on bone architecture was recently presented in the literature. However, a thorough investigation of the parameter sensitivity and plausibility of this method to predict joint loads is still missing. The goals of this study were (1) to analyse the parameter sensitivity of the joint load predictions at one proximal femur and (2) to assess the plausibility of the results by comparing load predictions of ten proximal femora to in vivo hip joint forces measured with instrumented prostheses (available from www.orthoload.com ). Joint loads were predicted by optimally scaling the magnitude of four unit loads (inclined [Formula: see text] to [Formula: see text] with respect to the vertical axis) applied to micro-finite element models created from high-resolution computed tomography scans ([Formula: see text]m voxel size). Parameter sensitivity analysis was performed by varying a total of nine parameters and showed that predictions of the peak load directions (range 10[Formula: see text]-[Formula: see text]) are more robust than the predicted peak load magnitudes (range 2344.8-4689.5 N). Comparing the results of all ten femora with the in vivo loading data of ten subjects showed that peak loads are plausible both in terms of the load direction (in vivo: [Formula: see text], predicted: [Formula: see text]) and magnitude (in vivo: [Formula: see text], predicted: [Formula: see text]). Overall, this study suggests that micro-finite element-based joint load predictions are both plausible and robust in terms of the predicted peak load direction, but predicted load magnitudes should be interpreted with caution.
Wade, James H; Bailey, Ryan C
2014-01-07
Refractive index-based sensors offer attractive characteristics as nondestructive and universal detectors for liquid chromatographic separations, but a small dynamic range and sensitivity to minor thermal perturbations limit the utility of commercial RI detectors for many potential applications, especially those requiring the use of gradient elutions. As such, RI detectors find use almost exclusively in sample-abundant, isocratic separations when interfaced with high-performance liquid chromatography. Silicon photonic microring resonators are refractive index-sensitive optical devices that feature good sensitivity and tremendous dynamic range. The large dynamic range of microring resonators allows the sensors to function across a wide spectrum of refractive indices, such as that encountered when moving from an aqueous to an organic mobile phase during a gradient elution, a key analytical advantage not supported by commercial RI detectors. Microrings are easily configured into sensor arrays, and chip-integrated control microrings enable real-time correction of thermal drift. Thermal controls allow for analyses at any temperature and, in the absence of rigorous temperature control, obviate extended detector equilibration wait times. Herein, proof-of-concept isocratic and gradient elution separations were performed using well-characterized model analytes (e.g., caffeine, ibuprofen) in both neat buffer and more complex sample matrices. These experiments demonstrate the ability of microring arrays to perform isocratic and gradient elutions under ambient conditions, avoiding two major limitations of commercial RI-based detectors while maintaining comparable bulk RI sensitivity. Further benefit may be realized in the future through selective surface functionalization to impart degrees of postcolumn (bio)molecular specificity at the detection phase of a separation. The chip-based, microscale nature of microring resonators also makes them an attractive detection technology for integration within lab-on-a-chip and microfluidic separation devices.
Sensitivity of planetary cruise navigation to earth orientation calibration errors
NASA Technical Reports Server (NTRS)
Estefan, J. A.; Folkner, W. M.
1995-01-01
A detailed analysis was conducted to determine the sensitivity of spacecraft navigation errors to the accuracy and timeliness of Earth orientation calibrations. Analyses based on simulated X-band (8.4-GHz) Doppler and ranging measurements acquired during the interplanetary cruise segment of the Mars Pathfinder heliocentric trajectory were completed for the nominal trajectory design and for an alternative trajectory with a longer transit time. Several error models were developed to characterize the effect of Earth orientation on navigational accuracy based on current and anticipated Deep Space Network calibration strategies. The navigational sensitivity of Mars Pathfinder to calibration errors in Earth orientation was computed for each candidate calibration strategy with the Earth orientation parameters included as estimated parameters in the navigation solution. In these cases, the calibration errors contributed 23 to 58% of the total navigation error budget, depending on the calibration strategy being assessed. Navigation sensitivity calculations were also performed for cases in which Earth orientation calibration errors were not adjusted in the navigation solution. In these cases, Earth orientation calibration errors contributed from 26 to as much as 227% of the total navigation error budget. The final analysis suggests that, not only is the method used to calibrate Earth orientation vitally important for precision navigation of Mars Pathfinder, but perhaps equally important is the method for inclusion of the calibration errors in the navigation solutions.
A methodology to estimate uncertainty for emission projections through sensitivity analysis.
Lumbreras, Julio; de Andrés, Juan Manuel; Pérez, Javier; Borge, Rafael; de la Paz, David; Rodríguez, María Encarnación
2015-04-01
Air pollution abatement policies must be based on quantitative information on current and future emissions of pollutants. As uncertainties in emission projections are inevitable and traditional statistical treatments of uncertainty are highly time- and resource-consuming, a simplified methodology for nonstatistical uncertainty estimation based on sensitivity analysis is presented in this work. The methodology was applied to the "with measures" scenario for Spain, specifically to the 12 highest-emitting sectors for greenhouse gases and air pollutants. Examples of the methodology's application to two important sectors (power plants, and agriculture and livestock) are shown and explained in depth. Uncertainty bands were obtained up to 2020 by modifying the driving factors of the 12 selected sectors, and the methodology was tested against a recomputed emission trend under a low economic-growth perspective and against official figures for 2010, showing very good performance. A solid understanding and quantification of uncertainties related to atmospheric emission inventories and projections provide useful information for policy negotiations. However, as many of those uncertainties are irreducible, there is interest in how they could be managed in order to derive robust policy conclusions. Taking this into account, a method developed to use sensitivity analysis as a source of information to derive nonstatistical uncertainty bands for emission projections is presented and applied to Spain. This method simplifies uncertainty assessment and allows other countries to take advantage of their sensitivity analyses.
The role of gas heat pumps in electric DSM
NASA Astrophysics Data System (ADS)
Fulmer, M.; Hughes, P. J.
1993-05-01
Natural gas-fired heat pumps (GHPs), an emerging technology, may offer environmental, economic, and energy benefits relative to standard and advanced space conditioning equipment now on the market. This paper describes an analysis of GHPs for residential space heating and cooling relative to major competing technologies under an Integrated Resource Planning (IRP) framework. Our study models a hypothetical GHP rebate program using conditions typical of the Great Lakes region. The analysis is performed for a base scenario with sensitivity cases. In the base scenario, the GHP program is cost-effective according to the societal test, the total resource cost (TRC) test, and the participant test, but is not cost-effective according to the non-participant test. The sensitivity analyses indicate that the results for the TRC test are most sensitive to the season in which electric demand peaks and to the technology against which the GHPs are competing, and are less sensitive to changes in the marginal administrative costs. The modeled GHP program would save 900 million kWh over the life of the program and reduce peak load by about 100 MW in winter and about 135 MW in summer. We estimate that all of the GHPs in service (those of both program participants and nonparticipants) in the case study region would save 1,900 million kWh and reduce summer peak load by over 350 MW.
Cost-effectiveness of 3-month paliperidone treatment for chronic schizophrenia in Spain.
Einarson, Thomas R; Bereza, Basil G; Garcia Llinares, Ignacio; González Martín Moro, Beatriz; Tedouri, Fadi; Van Impe, Kristel
2017-10-01
A 3-month formulation of paliperidone palmitate (PP3M) has been introduced as an option for treating schizophrenia. Its cost-effectiveness in Spain has not been established. To compare the costs and effects of PP3M with once-monthly paliperidone (PP1M) from the payer perspective in Spain. This study used the recently published trial by Savitz et al. as a core model over 1 year. Additional data were derived from the literature. Costs in 2016 euros were obtained from official lists and utilities from Osborne et al. The authors conducted both cost-utility and cost-effectiveness analyses. For the former, the incremental cost per quality-adjusted life-year (QALY) gained was calculated. For the latter, the outcomes were relapses and hospitalizations avoided. To assure the robustness of the analyses, a series of 1-way and probabilistic sensitivity analyses were conducted. The expected cost was lower with PP3M (4,780€) than with PP1M (5,244€). PP3M had the fewest relapses (0.080 vs 0.161), hospitalizations (0.034 vs 0.065), and emergency room visits (0.045 vs 0.096) and the most QALYs (0.677 vs 0.625). In both cost-effectiveness and cost-utility analyses, PP3M dominated PP1M. Sensitivity analyses confirmed the base-case findings. For the primary analysis (cost-utility), PP3M dominated PP1M in 46.9% of 10,000 simulations and was cost-effective at a threshold of 30,000€/QALY gained. PP3M dominated PP1M in all analyses and was, therefore, cost-effective for treating chronic relapsing schizophrenia in Spain. For patients who require long-acting therapy, PP3M appears to be a good alternative antipsychotic treatment.
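The dominance logic reported here is mechanical enough to sketch. The helper below reproduces the incremental comparison from the abstract's point estimates; the function and its threshold default are generic illustrations, not the authors' model code.

```python
# Minimal sketch of the cost-utility comparison above, using the point
# estimates from the abstract (costs in 2016 euros, QALYs over 1 year).

def compare(cost_new, qaly_new, cost_old, qaly_old, threshold=30000.0):
    d_cost, d_qaly = cost_new - cost_old, qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return "new strategy dominates (cheaper and more effective)"
    if d_cost >= 0 and d_qaly <= 0:
        return "new strategy is dominated"
    icer = d_cost / d_qaly
    verdict = "cost-effective" if icer <= threshold else "not cost-effective"
    return f"ICER = {icer:,.0f} euro/QALY ({verdict} at {threshold:,.0f})"

# PP3M vs PP1M, values from the abstract:
print(compare(4780, 0.677, 5244, 0.625))  # -> PP3M dominates
```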
NASA Astrophysics Data System (ADS)
Iskandar, Ismed; Satria Gondokaryono, Yudi
2016-02-01
In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. A weakness of most reliability theories is that systems are described simply as functioning or failed, whereas in many real situations failures may arise from several causes depending upon the age and environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems, and Bayesian analyses are more useful than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameters of interest. In this paper, we investigate the application of Bayesian estimation to competing-risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation was conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data were analyzed using both Bayesian and maximum likelihood analyses. The simulation results show that changing one true parameter value relative to another changes the standard deviation in the opposite direction. With perfect information on the prior distribution, the Bayesian estimates are better than the maximum likelihood ones. The sensitivity analyses show some sensitivity to shifts in the prior location, and they also show that the Bayesian analysis is robust within the range between the true value and the maximum likelihood estimate.
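As a concrete illustration of the competing-risk setup, the sketch below simulates a two-cause system with independent Weibull lifetimes and recovers the cause-specific parameters by maximum likelihood; the Bayesian variant described in the paper would add a prior over this same likelihood. All numbers are synthetic.

```python
# Sketch of maximum likelihood estimation for an independent two-cause
# competing-risks system with Weibull cause-specific lifetimes.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true = [(1.5, 10.0), (2.5, 12.0)]  # (shape k, scale lam) per cause

# Simulate: each unit fails at the minimum of its cause-specific lifetimes.
n = 500
times_by_cause = np.column_stack([lam * rng.weibull(k, n) for k, lam in true])
t_obs = times_by_cause.min(axis=1)
cause = times_by_cause.argmin(axis=1)

def neg_loglik(theta):
    ks, lams = np.exp(theta[:2]), np.exp(theta[2:])  # log-params stay > 0
    # log hazard of the observed cause + log survival of all causes
    k, lam = ks[cause], lams[cause]
    log_h = np.log(k / lam) + (k - 1) * np.log(t_obs / lam)
    log_S = -((t_obs[:, None] / lams) ** ks).sum(axis=1)
    return -(log_h + log_S).sum()

fit = minimize(neg_loglik, x0=np.zeros(4), method="Nelder-Mead")
k_hat, lam_hat = np.exp(fit.x[:2]), np.exp(fit.x[2:])
print("shape estimates:", k_hat, "scale estimates:", lam_hat)
```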
Huang, Min; Lou, Yanyan; Pellissier, James; Burke, Thomas; Liu, Frank Xiaoqing; Xu, Ruifeng; Velcheti, Vamsidhar
2017-02-01
This analysis aimed to evaluate the cost-effectiveness of pembrolizumab compared with docetaxel in patients with previously treated advanced non-squamous cell lung cancer (NSCLC) with PD-L1 positive tumors (tumor proportion score [TPS] ≥ 50%). The analysis was conducted from a US third-party payer perspective. A partitioned-survival model was developed using data from patients in the KEYNOTE 010 clinical trial. The model used Kaplan-Meier (KM) estimates of progression-free survival (PFS) and overall survival (OS) from the trial for patients treated with either pembrolizumab 2 mg/kg or docetaxel 75 mg/m², with extrapolation based on fitted parametric functions and long-term registry data. Quality-adjusted life years (QALYs) were derived based on EQ-5D data from KEYNOTE 010 using a time-to-death approach. Costs of drug acquisition/administration, adverse event management, and clinical management of advanced NSCLC were included in the model. The base-case analysis used a time horizon of 20 years. Costs and health outcomes were discounted at a rate of 3% per year. A series of one-way and probabilistic sensitivity analyses were performed to test the robustness of the results. Base-case results project a mean survival of 2.25 years for PD-L1 positive (TPS ≥50%) patients treated with pembrolizumab; for docetaxel, a mean survival time of 1.07 years was estimated. Expected QALYs were 1.71 and 0.76 for pembrolizumab and docetaxel, respectively. The incremental cost per QALY gained with pembrolizumab vs docetaxel is $168,619/QALY, which is cost-effective in the US using a threshold of 3 times GDP per capita. Sensitivity analyses showed the results to be robust over plausible values for the majority of inputs. Results were most sensitive to the extrapolation of overall survival. Pembrolizumab improves survival, increases QALYs, and can be considered a cost-effective option compared to docetaxel in PD-L1 positive (TPS ≥50%) pretreated advanced NSCLC patients in the US.
Wasslen, Karl V; Tan, Le Hoa; Manthorpe, Jeffrey M; Smith, Jeffrey C
2014-04-01
Defining cellular processes relies heavily on elucidating the temporal dynamics of proteins. To this end, mass spectrometry (MS) is an extremely valuable tool; different MS-based quantitative proteomics strategies have emerged to map protein dynamics over the course of stimuli. Herein, we disclose our novel MS-based quantitative proteomics strategy with unique analytical characteristics. By passing ethereal diazomethane over peptides on strong cation exchange resin within a microfluidic device, peptides react to contain fixed, permanent positive charges. Modified peptides display improved ionization characteristics and dissociate via tandem mass spectrometry (MS(2)) to form strong a2 fragment ion peaks. Process optimization and determination of reactive functional groups enabled a priori prediction of MS(2) fragmentation patterns for modified peptides. The strategy was tested on digested bovine serum albumin (BSA) and successfully quantified a peptide that was not observable prior to modification. Our method ionizes peptides regardless of proton affinity, thus decreasing ion suppression and permitting predictable multiple reaction monitoring (MRM)-based quantitation with improved sensitivity.
Dual-band plasmonic resonator based on Jerusalem cross-shaped nanoapertures
NASA Astrophysics Data System (ADS)
Cetin, Arif E.; Kaya, Sabri; Mertiri, Alket; Aslan, Ekin; Erramilli, Shyamsunder; Altug, Hatice; Turkmen, Mustafa
2015-06-01
In this paper, we both experimentally and numerically introduce a dual-resonant metamaterial based on subwavelength Jerusalem cross-shaped apertures. We numerically investigate the physical origin of the dual-resonant behavior, which originates from the constituent aperture elements, through finite difference time domain calculations. Our numerical calculations show that at the dual resonances, the aperture system supports large and easily accessible local electromagnetic fields. In order to experimentally realize the aperture system, we utilize a high-precision, lift-off-free fabrication method based on electron-beam lithography. We also introduce a fine-tuning mechanism for controlling the dual-resonant spectral response through geometrical device parameters. Finally, we show the aperture system's highly advantageous far- and near-field characteristics through numerical calculations of refractive index sensitivity. Quantitative analyses of the availability of the local fields supported by the aperture system are employed to explain the grounds for the sensitivity of each spectral feature within the dual-resonant behavior. Possessing dual resonances with large and accessible electromagnetic fields, Jerusalem cross-shaped apertures can be highly advantageous for a wide range of applications demanding multiple spectral features with strong near-field characteristics.
Casarin, Elisabetta; Lucchese, Laura; Grazioli, Santina; Facchin, Sonia; Realdon, Nicola; Brocchi, Emiliana; Morpurgo, Margherita; Nardelli, Stefano
2016-01-01
Diagnostic tests for veterinary surveillance programs should be efficient, easy to use and, possibly, economical. In this context, the classic enzyme-linked immunosorbent assay (ELISA) remains the most common analytical platform employed for serological analyses. The analysis of pooled samples instead of individual ones is a common procedure that makes it possible to certify entire herds as "disease-free" with a single test. However, diagnostic tests for pooled samples need to be particularly sensitive, especially when the levels of disease markers are low, as in the case of anti-BoHV1 antibodies in milk as markers of Infectious Bovine Rhinotracheitis (IBR). The avidin-nucleic-acid-nanoassembly (ANANAS) is a novel kind of signal amplification platform for immunodiagnostics based on colloidal poly-avidin nanoparticles that, using model analytes, was shown to strongly increase ELISA test performance compared to monomeric avidin. Here, for the first time, we applied the ANANAS reagent integration in a real diagnostic context. The monoclonal 1G10 anti-bovine IgG1 antibody was biotinylated and integrated with the ANANAS reagents for indirect IBR diagnosis from pooled milk mimicking tank samples from herds with IBR prevalence between 1 and 8%. The sensitivity and specificity of the ANANAS-integrated method were compared to those of a classic test based on the same 1G10 antibody directly linked to horseradish peroxidase, and a commercial IDEXX kit recently introduced in the market. ANANAS integration increased the sensitivity of the 1G10 mAb-based conventional ELISA 5-fold without losing specificity. When compared to the commercial kit, the 1G10-ANANAS integrated method was capable of detecting the presence of anti-BHV1 antibodies in bulk milk of gE antibody-positive animals with 2-fold higher sensitivity and similar specificity. The results demonstrate the potential of this new amplification technology, which permits improving on the sensitivity limits of the current classic ELISA without the need for new hardware investments.
DOT National Transportation Integrated Search
2013-08-01
The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...
Wang, Yuan; Bao, Shan; Du, Wenjun; Ye, Zhirui; Sayer, James R
2017-11-17
This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving. Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual-manual task) and distracted (poststarting of visual-manual task) driving periods. Average relative spectral power in a low frequency range (0-0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses. Results of the mixed model analyses from the time and frequency domain analyses all showed significant degradation in lateral control performance after engaging in visual-manual tasks while driving. Results of the sensitivity analyses suggested that the frequency domain analysis was less sensitive to the frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas average spectral power analysis on yaw rate in both low and high frequency bandwidths showed consistent results, that higher variation values were observed during distracted driving when compared to nondistracted driving. This study suggests that driver state detection needs to consider the behavior changes during the prestarting periods, instead of only focusing on periods with physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator of distraction detection than longitudinal controls. In addition, frequency domain analyses proved to be a more robust and consistent method in assessing driving performance compared to time domain analyses.
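A minimal version of the frequency-domain measure is sketched below: the share of spectral power below 0.5 Hz in a 10-s window of a vehicle-control signal. The sampling rate and the synthetic yaw-rate traces are stand-ins for the naturalistic driving data.

```python
# Sketch: average relative spectral power of a vehicle-control signal in
# the 0-0.5 Hz band, computed with an FFT over a 10-s window.

import numpy as np

fs = 10.0                      # Hz, hypothetical sampling rate
t = np.arange(0, 10, 1 / fs)   # one 10-s window

def low_freq_relative_power(signal, fs, band=(0.0, 0.5)):
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spec[in_band].sum() / spec.sum()

rng = np.random.default_rng(1)
yaw_baseline   = 0.05 * np.sin(2 * np.pi * 0.2 * t) + 0.02 * rng.standard_normal(t.size)
yaw_distracted = 0.15 * np.sin(2 * np.pi * 0.3 * t) + 0.02 * rng.standard_normal(t.size)

print("baseline  :", low_freq_relative_power(yaw_baseline, fs))
print("distracted:", low_freq_relative_power(yaw_distracted, fs))
```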
Johnson, Raymond H.
2007-01-01
In mountain watersheds, the increased demand for clean water resources has led to an increased need for an understanding of ground water flow in alpine settings. In Prospect Gulch, located in southwestern Colorado, understanding the ground water flow system is an important first step in addressing metal loads from acid-mine drainage and acid-rock drainage in an area with historical mining. Ground water flow modeling with sensitivity analyses is presented as a general tool to guide future field data collection, which is applicable to any ground water study, including mountain watersheds. For a series of conceptual models, the observation and sensitivity capabilities of MODFLOW-2000 are used to determine composite scaled sensitivities, dimensionless scaled sensitivities, and 1% scaled sensitivity maps of hydraulic head. These sensitivities determine the most important input parameter(s), along with the location of observation data that are most useful for future model calibration. The results are generally independent of the conceptual model and indicate recharge in a high-elevation recharge zone as the most important parameter, followed by the hydraulic conductivities in all layers and recharge in the next lower-elevation zone. The most important observation data for determining these parameters are hydraulic heads at high elevations, with a depth of less than 100 m being adequate. Evaluation of a possible geologic structure with a different hydraulic conductivity than the surrounding bedrock indicates that ground water discharge to individual stream reaches has the potential to identify some of these structures. Results of these sensitivity analyses can be used to prioritize data collection in an effort to reduce the time and money spent by collecting the most relevant model calibration data.
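For reference, the composite scaled sensitivity that MODFLOW-2000 reports can be written as css_j = sqrt(sum_i dss_ij^2 / n), with dss_ij the dimensionless scaled sensitivity of observation i to parameter j. The sketch below computes it for a made-up dss matrix; the parameter labels are illustrative.

```python
# Sketch of the composite scaled sensitivity (CSS) statistic:
# css_j = sqrt( sum_i dss_ij^2 / n ). The dss matrix is made up.

import numpy as np

# rows = head observations, columns = parameters
dss = np.array([[2.1, 0.4, 0.9],
                [1.8, 0.3, 1.1],
                [2.5, 0.6, 0.7]])

css = np.sqrt((dss ** 2).mean(axis=0))
for name, value in zip(["recharge_high", "recharge_low", "K_bedrock"], css):
    print(f"{name:14s} CSS = {value:.2f}")
```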
Rubio-Terrés, C; Arístegui Ruiz, I; Medina Redondo, F; Izquierdo Ayuso, G
2003-01-01
To carry out a cost-utility analysis of the treatment of relapsing-remitting multiple sclerosis (RRMS) with glatiramer acetate (copaxone) or interferon beta (all, avonex, rebif and betaferon). A pharmacoeconomic Markov model was used to compare treatment options by simulating the life of a hypothetical cohort of women aged 30, from the societal perspective. The transition probabilities, utilities, resource utilisation and costs (direct and indirect) were obtained from Spanish sources and from the bibliography. Univariate sensitivity analyses of the base case were performed. In the base case analysis, the average cost per patient (euros in 2001) for lifetime treatment, considering a life expectancy of 53 years, would be 1,243,906 euros, 1,818,149 euros, 1,763,263 euros, 1,987,153 euros and 1,704,031 euros with copaxone, all interferons, avonex, rebif and betaferon, respectively. Therefore, the saving with copaxone would range between approximately 460,000 and 737,000 euros. The quality-adjusted life years (QALYs) obtained with copaxone or interferons would be 10.977 and 6.917, respectively, with an average gain of 4.060 QALYs per patient with copaxone. The sensitivity analyses confirmed the robustness of the base case. The interferons would only be superior to copaxone in the unlikely hypothetical case that they delay the progression of the illness by 20% more than actually observed in clinical trials. For a typical patient with RRMS, treatment with copaxone would be more efficient than interferons and would dominate interferon beta (being more efficacious with lower costs).
Fluorescent chemosensor based on sensitive Schiff base for selective detection of Zn2+
NASA Astrophysics Data System (ADS)
Singh, T. Sanjoy; Paul, Pradip C.; Pramanik, Harun A. R.
2014-03-01
A Schiff-base fluorescent compound, N,N′-bis(salicylidene)-1,2-phenylenediamine (LH2), was synthesized and evaluated as a chemoselective Zn2+ sensor. Addition of Zn2+ to an ethanol solution of LH2 resulted in a red shift with a pronounced enhancement in the fluorescence intensity. Moreover, other common alkali, alkaline earth and transition metal ions induced no response or only minimal spectral changes. Notably, this chemosensor clearly distinguishes Zn2+ from Cd2+. Fluorescence studies on the free Schiff base ligand LH2 and the LH2-Zn2+ complex reveal that the quantum yield increases strongly upon coordination. The stoichiometric ratio and association constant were evaluated using the Benesi-Hildebrand relation, giving 1:1 stoichiometry; 1:1 complex formation was further corroborated by Job's plot analyses.
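The Benesi-Hildebrand treatment for a 1:1 complex amounts to a linear fit: 1/(F - F0) plotted against 1/[Zn2+] has slope 1/(K·ΔF) and intercept 1/ΔF, so the association constant K is the intercept/slope ratio. The sketch below demonstrates this on synthetic titration data, not the paper's measurements.

```python
# Sketch of a Benesi-Hildebrand fit for a 1:1 fluorophore-metal complex:
# 1/(F - F0) is linear in 1/[Zn2+]; K = intercept / slope.

import numpy as np

K_true, F0, F_inf = 5.0e4, 10.0, 110.0         # hypothetical values
conc = np.array([5, 10, 20, 40, 80]) * 1e-6    # [Zn2+], mol/L

# Simulated fluorescence response for a 1:1 binding isotherm
F = F0 + (F_inf - F0) * K_true * conc / (1 + K_true * conc)

x, y = 1.0 / conc, 1.0 / (F - F0)
slope, intercept = np.polyfit(x, y, 1)
print(f"K_assoc = {intercept / slope:.3g} M^-1 (true {K_true:.3g})")
```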
Jones, Roy W; McCrone, Paul; Guilhaume, Chantal
2004-01-01
Clinical trials with memantine, an uncompetitive moderate-affinity NMDA antagonist, have shown improved clinical outcomes, increased independence and a trend towards delayed institutionalisation in patients with moderately severe-to-severe Alzheimer's disease. In a randomised, double-blind, placebo-controlled, 28-week study conducted in the US, reductions in resource utilisation and total healthcare costs were noted with memantine relative to placebo. While these findings suggest that, compared with placebo, memantine provides cost savings, further analyses may help to quantify potential economic gains over a longer treatment period. To evaluate the cost effectiveness of memantine therapy compared with no pharmacological treatment in patients with moderately severe-to-severe Alzheimer's disease over a 2-year period. A Markov model was constructed to simulate patient progression through a series of health states related to severity, dependency (determined by patient scores on the Alzheimer's Disease Cooperative Study-Activities of Daily Living [ADCS-ADL] inventory) and residential status ('institutionalisation'), with a time horizon of 2 years (each 6-month Markov cycle was repeated four times). Transition probabilities from one health state to another 6 months later were mainly derived from a 28-week, randomised, double-blind, placebo-controlled clinical trial. Inputs related to epidemiological and cost data were derived from a UK longitudinal epidemiological study, while data on quality-adjusted life-years (QALYs) were derived from a Danish longitudinal study. To ensure conservative estimates from the model, the base case analysis assumed drug effectiveness was limited to 12 months. Monte Carlo simulations were performed for each state parameter following definition of a priori distributions for the main variables of the model. Sensitivity analyses included a worst-case scenario in which memantine was effective for 6 months and one-way sensitivity analyses on key parameters. Finally, a subgroup analysis was performed to determine which patients were most likely to benefit from memantine. Informal care was not included in this model, as costs were considered from the National Health Service and Personal Social Services perspective. The base case analysis found that, compared with no treatment, memantine was associated with lower costs and greater clinical effectiveness in terms of years of independence, years in the community and QALYs. Sensitivity analyses supported these findings. For each category of Alzheimer's disease patient examined, treatment with memantine was a cost-effective strategy. The greatest economic gain of memantine treatment was in independent patients with a Mini-Mental State Examination score of > or =10. This model suggests that memantine treatment is cost effective and provides cost savings compared with no pharmacological treatment. These benefits appear to result from prolonged patient independence and delayed institutionalisation for moderately severe and severe Alzheimer's disease patients on memantine compared with no pharmacological treatment.
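A cohort Markov trace of the kind described here is compact enough to sketch. The three aggregated states, 6-month transition probabilities, costs, and utilities below are hypothetical placeholders rather than the published model inputs; only the mechanics of the four-cycle trace match the description.

```python
# Minimal cohort Markov trace: three aggregated health states over four
# 6-month cycles (2-year horizon). All inputs are hypothetical.

import numpy as np

states = ["independent", "dependent", "institutionalised"]
# P[i, j] = 6-month probability of moving from state i to state j
P = np.array([[0.80, 0.15, 0.05],
              [0.00, 0.75, 0.25],
              [0.00, 0.00, 1.00]])
cost_per_cycle = np.array([2000.0, 6000.0, 12000.0])   # GBP, hypothetical
utility_per_cycle = np.array([0.35, 0.25, 0.15])       # QALYs per 6 months

cohort = np.array([1.0, 0.0, 0.0])  # everyone starts independent
total_cost = total_qaly = 0.0
for cycle in range(4):
    total_cost += cohort @ cost_per_cycle
    total_qaly += cohort @ utility_per_cycle
    cohort = cohort @ P

print(f"expected 2-year cost: {total_cost:,.0f} GBP, QALYs: {total_qaly:.2f}")
```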
Sensitivity analysis of FeCrAl cladding and U3Si2 fuel under accident conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamble, Kyle Allan Lawrence; Hales, Jason Dean
2016-08-01
The purpose of this milestone report is to highlight the results of sensitivity analyses performed on two accident-tolerant fuel concepts: U3Si2 fuel and FeCrAl cladding. The BISON fuel performance code under development at Idaho National Laboratory was coupled to Sandia National Laboratories' DAKOTA software to perform the sensitivity analyses. Both loss-of-coolant accident (LOCA) and station blackout (SBO) scenarios were analyzed using main effects studies. The results indicate that for FeCrAl cladding, the input parameters with the greatest influence on the output metrics of interest (fuel centerline temperature and cladding hoop strain) during the LOCA were the isotropic swelling and fuel enrichment. For U3Si2, the important inputs were found to be the intergranular diffusion coefficient, specific heat, and fuel thermal conductivity. For the SBO scenario, Young's modulus was found to be influential in FeCrAl in addition to the isotropic swelling and fuel enrichment. Contrary to the LOCA case, the specific heat of U3Si2 was found to have no effect during the SBO; the intergranular diffusion coefficient and fuel thermal conductivity were still found to be of importance. The results of the sensitivity analyses have identified areas where further research is required, including fission gas behavior in U3Si2 and irradiation swelling in FeCrAl. Moreover, the results highlight the need to perform the sensitivity analyses on full-length fuel rods for SBO scenarios.
Should cell-free DNA testing be used to target antenatal rhesus immune globulin administration?
Ma, Kimberly K; Rodriguez, Maria I; Cheng, Yvonne W; Norton, Mary E; Caughey, Aaron B
2016-01-01
To compare the rates of alloimmunization with the use of cell-free DNA (cfDNA) screening to target antenatal rhesus immune globulin (RhIG) prenatally, versus routine administration of RhIG, in rhesus D (RhD)-negative pregnant women in a theoretical cohort using a decision-analytic model. A decision-analytic model compared cfDNA testing to routine antenatal RhIG administration. The primary outcome was maternal sensitization to the RhD antigen. The sensitivity and specificity of cfDNA testing were assumed to be 99.8% and 95.3%, respectively. Univariate and bivariate sensitivity analyses, Monte Carlo simulation, and threshold analyses were performed. In a cohort of 10,000 RhD-negative women, 22.6 sensitizations would occur with utilization of cfDNA, while 20 sensitizations would occur with routine RhIG. Only when the sensitivity of the cfDNA test reached 100% was the rate of sensitization equal for cfDNA and routine RhIG. Otherwise, routine RhIG minimized the rate of sensitization, especially given that RhIG is readily available in the United States. Adoption of cfDNA testing would result in a 13.0% increase in sensitization among RhD-negative women in a theoretical cohort taking into account the ethnic diversity of the United States' population.
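The core arithmetic of the decision comparison can be sketched as follows. Test accuracy is taken from the abstract, but the fetal RhD-positive fraction and the sensitization risks with and without RhIG are hypothetical placeholders, so the totals will not exactly reproduce the published 22.6 versus 20 sensitizations.

```python
# Back-of-envelope sketch: expected sensitizations in 10,000 RhD-negative
# pregnancies under routine RhIG versus cfDNA-targeted RhIG.

cohort = 10_000
p_fetus_pos = 0.60         # hypothetical fraction carrying an RhD+ fetus
sens, spec = 0.998, 0.953  # cfDNA accuracy, from the abstract
risk_no_rhig = 0.13        # hypothetical sensitization risk without RhIG
risk_rhig = 0.002          # hypothetical residual risk with RhIG

at_risk = cohort * p_fetus_pos

# Routine strategy: every at-risk pregnancy is covered by RhIG.
routine = at_risk * risk_rhig

# Targeted strategy: false negatives (1 - sens) go unprotected.
targeted = at_risk * (sens * risk_rhig + (1 - sens) * risk_no_rhig)

print(f"routine RhIG  : {routine:.1f} sensitizations")
print(f"cfDNA-targeted: {targeted:.1f} sensitizations")
```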
NASA Technical Reports Server (NTRS)
Abbott, Mark R.
1996-01-01
The objectives of the last six months were: (1) Complete sensitivity analysis of fluorescence line height algorithms; (2) Deliver fluorescence algorithm code and test data to the University of Miami for integration; (3) Complete analysis of bio-optical data from the Southern Ocean cruise; (4) Conduct laboratory experiments based on analyses of field data; (5) Analyze data from the bio-optical mooring off Hawaii; (6) Develop a calibration/validation plan for MODIS fluorescence data; (7) Respond to the Japanese Research Announcement for GLI; and (8) Continue to review plans for EOSDIS and assist the ECS contractor.
msap: a tool for the statistical analysis of methylation-sensitive amplified polymorphism data.
Pérez-Figueroa, A
2013-05-01
In this study msap, an R package which analyses methylation-sensitive amplified polymorphism (MSAP or MS-AFLP) data, is presented. The program provides a deep analysis of epigenetic variation starting from a binary data matrix indicating the banding pattern between the isoschizomeric endonucleases HpaII and MspI, which have differential sensitivity to cytosine methylation. After comparing the restriction fragments, the program determines whether each fragment is susceptible to methylation (representative of epigenetic variation) or whether there is no evidence of methylation (representative of genetic variation). The package provides, in a user-friendly command line interface, a pipeline of different analyses of the variation (genetic and epigenetic) among user-defined groups of samples, as well as the classification of the methylation occurrences in those groups. Statistical testing provides support for the analyses. A comprehensive report of the analyses and several useful plots can help researchers to assess the epigenetic and genetic variation in their MSAP experiments. msap is downloadable from CRAN (http://cran.r-project.org/) and its own webpage (http://msap.r-forge.R-project.org/). The package is intended to be easy to use even for those unfamiliar with the R command line environment. Advanced users may take advantage of the available source code to adapt msap to more complex analyses.
Biodiesel Production using Heterogeneous Catalyst in CSTR: Sensitivity Analysis and Optimization
NASA Astrophysics Data System (ADS)
Keong, L. S.; Patle, D. S.; Shukor, S. R.; Ahmad, Z.
2016-03-01
Biodiesel as a renewable fuel has emerged as a potential replacement for petroleum-based diesel. Heterogeneous catalysts have become the focus of research in biodiesel production, with the intention of overcoming problems associated with homogeneously catalyzed processes. The simulation of heterogeneously catalyzed biodiesel production has not been thoroughly studied. Hence, a simulation of carbon-based solid acid-catalyzed biodiesel production from waste oil with high FFA content (50 wt%) was developed in the present work to study the feasibility and potential of the simulated process. The simulated process produces biodiesel through simultaneous transesterification and esterification with consideration of reaction kinetics. The developed simulation is feasible and capable of producing 2.81 kmol/hr of FAME, meeting the international standard (EN 14214). Yields of 68.61% and 97.19% are achieved for transesterification and esterification, respectively. Sensitivity analyses of FFA composition in the waste oil, methanol-to-oil ratio, and reactor pressure and temperature with respect to FAME yield from both reactions were carried out. Optimization of the reactor temperature was performed to maximize FAME production.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
NASA Astrophysics Data System (ADS)
Tan, C.; Fang, W.
2018-04-01
Forest disturbance induced by tropical cyclones often has significant and profound effects on the structure and function of forest ecosystems. Detection and analysis of post-disaster forest disturbance based on remote sensing technology has been widely applied. At present, further quantitative analysis of the relationship between the magnitude of forest disturbance and typhoon intensity is needed. In this study, taking the case of super typhoon Rammasun (201409), we analysed the sensitivity of four commonly used remote sensing indices and explored the relationship between each index and the corresponding wind speeds, based on pre- and post-event Landsat-8 OLI (Operational Land Imager) images and a parameterized wind field model. The results show that NBR (Normalized Burn Ratio) is the most sensitive index for the detection of forest disturbance induced by Typhoon Rammasun and that the variation of NBR has a significant linear relationship with the simulated 3-second gust wind speed.
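For reference, NBR is computed from the near-infrared and shortwave-infrared bands as (NIR - SWIR2)/(NIR + SWIR2), with the pre- minus post-event difference (dNBR) measuring disturbance. The sketch below applies this to tiny synthetic reflectance arrays standing in for Landsat-8 OLI bands 5 and 7.

```python
# Sketch: Normalized Burn Ratio from Landsat-8 OLI bands 5 (NIR) and
# 7 (SWIR2), and its pre- minus post-typhoon difference (dNBR).

import numpy as np

def nbr(nir, swir2):
    return (nir - swir2) / (nir + swir2 + 1e-9)  # epsilon avoids 0/0

pre_nir   = np.array([[0.40, 0.38], [0.42, 0.41]])
pre_swir  = np.array([[0.12, 0.13], [0.11, 0.12]])
post_nir  = np.array([[0.25, 0.37], [0.24, 0.40]])
post_swir = np.array([[0.20, 0.13], [0.21, 0.12]])

delta_nbr = nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)
print(delta_nbr)  # larger values = stronger canopy disturbance
```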
CADDIS Volume 4. Data Analysis: Advanced Analyses - Controlling for Natural Variability
Methods for controlling natural variability: predicting environmental conditions from biological observations, biological trait data, species sensitivity distributions, propensity scores; references for the Advanced Analyses portion of the Data Analysis volume.
NASA Astrophysics Data System (ADS)
Handayani, Dewi; Cahyaning Putri, Hera; Mahmudah, AMH
2017-12-01
The Solo-Ngawi toll road project is part of the mega-project of the Trans-Java toll road development initiated by the government and is still under construction. PT Solo Ngawi Jaya (SNJ), as the Solo-Ngawi toll management company, needs to determine a toll fare that is in accordance with the business plan. Setting an appropriate toll fare will affect progress towards regional economic sustainability and reduce traffic congestion; such policy instruments are crucial for achieving environmentally sustainable transport. Therefore, the objective of this research is to determine the sensitivity of the Solo-Ngawi toll fare based on willingness to pay (WTP). Primary data were obtained by distributing stated-preference questionnaires to four-wheeled vehicle users on the Kartasura-Palang Joglo arterial road segment. The data were then analysed with logit and probit models. Based on the analysis, it was found that the effect of fare changes on WTP in the binomial logit model is more sensitive than in the probit model under the same travel conditions: the range of fare changes against WTP values in the binomial logit model is 20% greater than the range in the probit model. On the other hand, the probability results of the binomial logit model and the binary probit model show no significant difference (less than 1%).
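The two choice models differ only in their link function, as the sketch below shows: the logit uses the logistic CDF and the probit the standard normal CDF. The intercept and fare coefficient are hypothetical; in the study they would be estimated from the stated-preference responses.

```python
# Sketch of binary logit and probit choice probabilities for WTP analysis.
# Coefficients are hypothetical placeholders.

import numpy as np
from scipy.stats import norm

a, b = 2.0, -0.5               # hypothetical intercept and fare coefficient
fares = np.linspace(0, 10, 6)  # fare levels (arbitrary monetary units)

p_logit = 1.0 / (1.0 + np.exp(-(a + b * fares)))
p_probit = norm.cdf(a + b * fares)

for f, pl, pp in zip(fares, p_logit, p_probit):
    print(f"fare {f:4.1f}: P(willing to pay) logit {pl:.2f}  probit {pp:.2f}")
```

Note that logit and probit coefficients are on different scales, so sensitivity comparisons such as the 20% range difference reported above are made on the predicted probabilities, not on the raw coefficients.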
Laboratory Diagnosis of Invasive Aspergillosis: From Diagnosis to Prediction of Outcome
Barton, Richard C.
2013-01-01
Invasive aspergillosis (IA), an infection caused by fungi in the genus Aspergillus, is seen in patients with immunological deficits, particularly acute leukaemia and stem cell transplantation, and has been associated with high rates of mortality in previous years. Diagnosing IA has long been problematic owing to the inability to culture the main causal agent A. fumigatus from blood. Microscopic examination and culture of respiratory tract specimens have lacked sensitivity, and biopsy tissue for histopathological examination is rarely obtainable. Thus, for many years there has been a great interest in nonculture-based techniques such as the detection of galactomannan, β-D-glucan, and DNA by PCR-based methods. Recent meta-analyses suggest that these approaches have broadly similar performance parameters in terms of sensitivity and specificity to diagnose IA. Improvements have been made in our understanding of the limitations of antigen assays and the standardisation of PCR-based DNA detection. Thus, in more recent years, the debate has focussed on how these assays can be incorporated into diagnostic strategies to maximise improvements in outcome whilst limiting unnecessary use of antifungal therapy. Furthermore, there is a current interest in applying these tests to monitor the effectiveness of therapy after diagnosis and predict clinical outcomes. The search for improved markers for the early and sensitive diagnosis of IA continues to be a challenge. PMID:24278780
Ghanegolmohammadi, Farzan; Yoshida, Mitsunori; Ohnuki, Shinsuke; Sukegawa, Yuko; Okada, Hiroki; Obara, Keisuke; Kihara, Akio; Suzuki, Kuninori; Kojima, Tetsuya; Yachie, Nozomu; Hirata, Dai; Ohya, Yoshikazu
2017-01-01
We investigated the global landscape of Ca2+ homeostasis in budding yeast based on high-dimensional chemical-genetic interaction profiles. The morphological responses of 62 Ca2+-sensitive (cls) mutants were quantitatively analyzed with the image processing program CalMorph after exposure to a high concentration of Ca2+. After a generalized linear model was applied, an analysis of covariance model was used to detect significant Ca2+–cls interactions. We found that high-dimensional, morphological Ca2+–cls interactions were mixed with positive (86%) and negative (14%) chemical-genetic interactions, whereas one-dimensional fitness Ca2+–cls interactions were all negative in principle. Clustering analysis with the interaction profiles revealed nine distinct gene groups, six of which were functionally associated. In addition, characterization of Ca2+–cls interactions revealed that morphology-based negative interactions are unique signatures of sensitized cellular processes and pathways. Principal component analysis was used to discriminate between suppression and enhancement of the Ca2+-sensitive phenotypes triggered by inactivation of calcineurin, a Ca2+-dependent phosphatase. Finally, similarity of the interaction profiles was used to reveal a connected network among the Ca2+ homeostasis units acting in different cellular compartments. Our analyses of high-dimensional chemical-genetic interaction profiles provide novel insights into the intracellular network of yeast Ca2+ homeostasis. PMID:28566553
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated around the extensions and implementations of methodologies either previously developed or concurrently being developed: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
NASA Astrophysics Data System (ADS)
Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean
2013-04-01
Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion, and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. The sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary effects and the one-at-a-time (OAT) approach; and (ii) we applied Sobol's global sensitivity analysis method, which is based on variance decomposition. Results illustrate that ψm (maximum sorption rate of mobile colloids), kdmc (solute desorption rate from mobile colloids), and Ks (saturated hydraulic conductivity) are the most sensitive parameters with respect to the contaminant travel time. The analyses indicate that this new module is able to simulate colloid-facilitated contaminant transport. However, validations under laboratory conditions are needed to confirm the occurrence of the colloid transport phenomenon and to understand model predictions under non-saturated soil conditions. Future work will involve monitoring of the colloidal transport phenomenon through soil column experiments. The anticipated outcome will provide valuable information on the dominant mechanisms responsible for colloid transport and colloid-facilitated contaminant transport, and on the impacts of colloid detachment/deposition processes on soil hydraulic properties. References: Šimůnek, J., C. He, L. Pang, & S. A. Bradford, Colloid-Facilitated Solute Transport in Variably Saturated Porous Media: Numerical Model and Experimental Verification, Vadose Zone Journal, 2006, 5, 1035-1047. Šimůnek, J., M. Šejna, & M. Th. van Genuchten, The C-Ride Module for HYDRUS (2D/3D) Simulating Two-Dimensional Colloid-Facilitated Solute Transport in Variably-Saturated Porous Media, Version 1.0, PC Progress, Prague, Czech Republic, 45 pp., 2012.
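Step (i), the Morris-style screening, can be illustrated with a hand-rolled one-at-a-time scheme: repeat random base points, perturb one parameter at a time, and summarize the absolute elementary effects by their mean (mu*) and standard deviation. The toy function below stands in for a HYDRUS run, and the parameter labels simply reuse the names from the abstract.

```python
# Hand-rolled sketch of Morris-style elementary-effects screening
# (simplified randomized OAT, not the full trajectory design).

import numpy as np

def model(x):
    # toy stand-in: "travel time" sensitive to x0 and x2, weakly to x1
    return 10 * x[0] + 0.1 * x[1] + 5 * x[2] ** 2

names = ["psi_m", "k_dmc", "Ks"]   # parameter labels from the abstract
rng = np.random.default_rng(42)
r, delta = 20, 0.1                 # repetitions, step size (unit cube)

effects = {n: [] for n in names}
for _ in range(r):
    x = rng.uniform(0, 1 - delta, size=3)
    y0 = model(x)
    for j, n in enumerate(names):
        x_step = x.copy()
        x_step[j] += delta
        effects[n].append((model(x_step) - y0) / delta)

for n in names:
    ee = np.abs(effects[n])
    print(f"{n:6s} mu* = {ee.mean():6.2f}  sigma = {np.std(effects[n]):6.2f}")
```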
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
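The reduced-chemistry idea amounts to replacing the full mechanism with a single first-order loss term, C(t) = C0·exp(-k_eff·t), where k_eff is an empirical function of the meteorological drivers. The functional form and coefficients in the sketch below are hypothetical placeholders for the statistically fitted k_eff functions described above.

```python
# Sketch: first-order TIC degradation with an empirical k_eff that
# depends on meteorology. Functional form and coefficients are made up.

import numpy as np

def k_eff(solar_flux, temp_K, cloud_frac):
    """Hypothetical empirical fit, 1/s."""
    return 1e-5 * (1 + 0.002 * (temp_K - 288.0)) \
         * (0.2 + 0.8 * (1 - cloud_frac) * solar_flux / 1000.0)

# Concentration decay of a TIC puff over one hour, in 60-s steps
C, dt = 1.0, 60.0
for step in range(60):
    C *= np.exp(-k_eff(solar_flux=800.0, temp_K=298.0, cloud_frac=0.3) * dt)
print(f"fraction remaining after 1 h: {C:.3f}")
```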
Goeree, Ron; Blackhouse, Gord; Bowen, James M; O'Reilly, Daria; Sutherland, Simone; Hopkins, Robert; Chow, Benjamin; Freeman, Michael; Provost, Yves; Dennie, Carole; Cohen, Eric; Marcuzzi, Dan; Iwanochko, Robert; Moody, Alan; Paul, Narinder; Parker, John D
2013-10-01
Conventional coronary angiography (CCA) is the standard diagnostic for coronary artery disease (CAD), but multi-detector computed tomography coronary angiography (CTCA) is a non-invasive alternative. A multi-center coverage-with-evidence-development study was undertaken and combined with an economic model to estimate the cost-effectiveness of CTCA followed by CCA vs CCA alone. Alternative assumptions were tested in patient scenario and sensitivity analyses. CCA was found to dominate CTCA; however, CTCA was relatively more cost-effective in females, with advancing age, in patients with lower pre-test probabilities of CAD, the higher the sensitivity of CTCA, and the lower the probability of undergoing a confirmatory CCA following a positive CTCA. Results were very sensitive to alternative patient populations and modeling assumptions. Careful consideration of patient characteristics, procedures to improve the diagnostic yield of CTCA, and selective use of CCA following CTCA will impact whether CTCA is cost-effective or dominates CCA.
NASA Astrophysics Data System (ADS)
Chen, K. S.; Ho, Y. T.; Lai, C. H.; Chou, Youn-Min
Episodes of high ozone concentrations and the associated meteorological conditions over the Kaohsiung metropolitan area were investigated based on data analysis and model simulation. A photochemical grid model was employed to analyze two ozone episodes in the autumn (2000) and winter (2001) seasons, each covering three consecutive days (72 h) in Kaohsiung City. The potential influence of the initial and boundary conditions on model performance was assessed. Model performance can be improved by separately considering the daytime and nighttime ozone concentrations on the lateral boundary conditions of the model domain. The sensitivity analyses of ozone concentrations to emission reductions in volatile organic compounds (VOC) and nitrogen oxides (NOx) show a VOC-sensitive regime for emission reductions smaller than 30-40% VOC and 30-50% NOx, and a NOx-sensitive regime for larger percentage reductions. Meteorological parameters show that warm temperatures, sufficient sunlight, low wind, and high surface pressure are distinct conditions that tend to trigger ozone episodes in polluted urban areas like Kaohsiung.
NASA Technical Reports Server (NTRS)
Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Righter, K.; Hanna, R. D.; Ketcham, R. A.
2014-01-01
Providing web-based data of complex and sensitive astromaterials (including meteorites and lunar samples) in novel formats enhances existing preliminary examination data on these samples and supports targeted sample requests and analyses. We have developed and tested a rigorous protocol for collecting highly detailed imagery of meteorites and complex lunar samples in non-contaminating environments. These data are reduced to create interactive 3D models of the samples. We intend to provide these data as they are acquired on NASA's Astromaterials Acquisition and Curation website at http://curator.jsc.nasa.gov/.
Efficient exploration of chemical space by fragment-based screening.
Hall, Richard J; Mortenson, Paul N; Murray, Christopher W
2014-01-01
Screening methods seek to sample a vast chemical space in order to identify starting points for further chemical optimisation. Fragment based drug discovery exploits the superior sampling of chemical space that can be achieved when the molecular weight is restricted. Here we show that commercially available fragment space is still relatively poorly sampled and argue for highly sensitive screening methods to allow the detection of smaller fragments. We analyse the properties of our fragment library versus the properties of X-ray hits derived from the library. We particularly consider properties related to the degree of planarity of the fragments.
Improved neutron-gamma discrimination for a 3He neutron detector using subspace learning methods
Wang, C. L.; Funk, L. L.; Riedel, R. A.; ...
2017-02-10
3He gas-based neutron linear-position-sensitive detectors (LPSDs) have been applied in many neutron scattering instruments. Traditional pulse-height analysis (PHA) for neutron-gamma discrimination (NGD) resulted in neutron-gamma efficiency ratios on the order of 10^5-10^6. The NGD ratios of 3He detectors need to be improved for even better scientific results from neutron scattering. Digital signal processing (DSP) analyses of waveforms were proposed for obtaining better NGD ratios, based on features extracted from rise-time, pulse amplitude, charge integration, a simplified Wiener filter, and the cross-correlation between individual and template waveforms of neutron and gamma events. Fisher linear discriminant analysis (FLDA) and three multivariate analyses (MVAs) of the features were performed. The NGD ratios are improved by about 10^2-10^3 times compared with the traditional PHA method. Our results indicate that the NGD capabilities of 3He tube detectors can be significantly improved with subspace-learning-based methods, which may result in reduced data-collection time and better data quality for further data reduction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xuanlu M.; Louie, Alexander V.; Ashman, Jonathan
Purpose: Surgery combined with radiation therapy (RT) is the cornerstone of multidisciplinary management of extremity soft tissue sarcoma (STS). Although RT can be given in either the preoperative or the postoperative setting with similar local recurrence and survival outcomes, the side effect profiles, costs, and long-term functional outcomes are different. The aim of this study was to use decision analysis to determine optimal sequencing of RT with surgery in patients with extremity STS. Methods and Materials: A cost-effectiveness analysis was conducted using a state transition Markov model, with quality-adjusted life years (QALYs) as the primary outcome. A time horizon of 5 years, a cycle length of 3 months, and a willingness-to-pay threshold of $50,000/QALY was used. One-way deterministic sensitivity analyses were performed to determine the thresholds at which each strategy would be preferred. The robustness of the model was assessed by probabilistic sensitivity analysis. Results: Preoperative RT is a more cost-effective strategy ($26,633/3.00 QALYs) than postoperative RT ($28,028/2.86 QALYs) in our base case scenario. Preoperative RT is the superior strategy with either 3-dimensional conformal RT or intensity-modulated RT. One-way sensitivity analyses identified the relative risk of chronic adverse events as having the greatest influence on the preferred timing of RT. The likelihood of preoperative RT being the preferred strategy was 82% on probabilistic sensitivity analysis. Conclusions: Preoperative RT is more cost effective than postoperative RT in the management of resectable extremity STS, primarily because of the higher incidence of chronic adverse events with RT in the postoperative setting.
Herrero Babiloni, Alberto; Nixdorf, Donald R; Law, Alan S; Moana-Filho, Estephan J; Shueb, Sarah S; Nguyen, Ruby H; Durham, Justin
2017-01-01
To evaluate the accuracy of a questionnaire modified for the identification of intraoral pain with neuropathic characteristics in a clinical orofacial pain sample population. 136 participants with at least one of four orofacial pain diagnoses (temporomandibular disorders [TMD, n = 41], acute dental pain [ADP, n = 41], trigeminal neuralgia [TN, n = 19], persistent dentoalveolar pain disorder [PDAP, n = 14]) and a group of pain-free controls (n = 21) completed the modified S-LANSS, a previously adapted version of the original questionnaire devised to detect patients suffering from intraoral pain with neuropathic characteristics. Psychometric properties (sensitivity, specificity, positive predictive value [PPV], negative predictive value [NPV]) were calculated in two analyses with two different thresholds: (1) detection of pain with neuropathic characteristics: PDAP + TN were considered positive, and TMD + ADP + controls were considered negative per the gold standard (expert opinion); (2) detection of PDAP: PDAP was considered positive and TMD + ADP were considered negative per the gold standard. For both analyses, target values for adequate sensitivity and specificity were defined as ≥ 80%. For detection of orofacial pain with neuropathic characteristics (PDAP + TN), the modified S-LANSS presented, at the most optimistic threshold, with a sensitivity of 52% (95% confidence interval [CI], 34-69), specificity of 70% (95% CI, 60-79), PPV of 35% (95% CI, 22-51), and NPV of 82% (95% CI, 72-89). For detection of PDAP only, sensitivity at the most optimistic threshold was 64% (95% CI, 35-87), specificity 63% (95% CI, 52-74), PPV 23% (95% CI, 11-39), and NPV 91% (95% CI, 81-97). Based on a priori defined criteria, the modified S-LANSS did not show adequate accuracy to detect intraoral pain with neuropathic characteristics in a clinical orofacial pain sample.
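The four psychometric properties follow directly from a 2x2 table of questionnaire outcome against the gold standard, as sketched below. The counts are back-calculated approximations from the group sizes and percentages in the abstract (33 positives, 103 negatives for the first analysis), not reported data.

```python
# Sketch of the accuracy metrics above, computed from a 2x2 table of
# questionnaire outcome vs gold standard. Counts are approximate.

def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# back-calculated counts: positive group of 33, negative group of 103
for name, value in diagnostics(tp=17, fp=31, fn=16, tn=72).items():
    print(f"{name:12s} {value:.0%}")
```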
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentrations resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes. If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
Wang, Cuiling; Chen, Yanhui; Ku, Lixia; Wang, Tiegu; Sun, Zhaohui; Cheng, Fangfang; Wu, Liancheng
2010-01-01
Background An understanding of the genetic determinism of the photoperiod response of flowering is a prerequisite for the successful exchange of germplasm across different latitudes. In order to help resolve the genetic basis of photoperiod sensitivity in maize, a set of 201 recombinant inbred lines (RIL), derived from a cross between a temperate and a tropical inbred line, were evaluated in 5 field trials spread across short- and long-day environments. Methodology/Principal Findings First, QTL analyses for flowering time and photoperiod sensitivity in maize were conducted in individual photoperiod environments separately; then, the total genetic effect was partitioned into an additive effect (A) and an additive-by-environment interaction effect (AE) by using a mixed-model-based composite interval mapping (MCIM) method. Conclusions/Significance Seven putative QTL were found to be associated with DPS thermal time based on the data estimated in individual environments. Nine putative QTL were found to be associated with DPS thermal time across environments, and six of them showed significant QTL×environment (QE) interactions. Three QTL for photoperiod sensitivity were identified on chromosomes 4, 9 and 10, which mapped to similar positions as the QTL for DPS thermal time in the two long-day environments. The major photoperiod-sensitive locus qDPS10 responded to both short- and long-day photoperiod environments and had opposite effects in the different photoperiod environments. The QTL qDPS3, which had the greatest additive effect exclusively in the short-day environment, is photoperiod independent and should be classified in the autonomous promotion pathway. PMID:21124912
Sinha, Richa; Redekop, William Ken
2018-02-01
Ibrutinib shows superiority over obinutuzumab with chlorambucil (G-Clb) in untreated patients with chronic lymphocytic leukemia with comorbidities who cannot tolerate fludarabine-based therapy. However, ibrutinib is more expensive than G-Clb. In this study we evaluated the cost-effectiveness of ibrutinib compared with G-Clb from the United Kingdom (UK) health care perspective. A 3-state semi-Markov model was parameterized to estimate the lifetime costs and benefits associated with ibrutinib compared with G-Clb as first-line treatment. Idelalisib with rituximab was considered as second-line treatment. Unit costs were derived from standard sources; (dis)utilities from UK elicitation studies; progression-free survival, progression, and death from clinical trials; and postprogression survival and background mortality from published sources. Additional analyses included threshold analyses with ibrutinib and idelalisib at various discount rates, and a scenario analysis with ibrutinib as second-line treatment after G-Clb. An average gain of 1.49 quality-adjusted life-years (QALYs) was estimated for ibrutinib compared with G-Clb, at an average additional cost of £112,835 per patient. To be cost-effective as per the UK thresholds, ibrutinib needs to be discounted by 30%, 40%, and 50% if idelalisib is discounted by 0%, 25%, and 50%, respectively. The incremental cost-effectiveness ratio was £75,648 per QALY gained for the base case and −£143,279 per QALY gained for the scenario analysis. Sensitivity analyses showed the robustness of the results. According to the base-case analysis, an adequate discount on ibrutinib is required to make it cost-effective as per the UK thresholds. The scenario analysis substantiates ibrutinib's cost savings for the UK National Health Service and supports patients' access to ibrutinib in the UK. Copyright © 2017 Elsevier Inc. All rights reserved.
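The headline numbers can be reproduced with simple arithmetic; the sketch below uses only figures quoted in the abstract, with the willingness-to-pay threshold as an illustrative assumption rather than a value from the study.

```python
# Incremental cost-effectiveness ratio (ICER) from the abstract's figures.
delta_cost = 112_835.0   # incremental lifetime cost, ibrutinib vs G-Clb (GBP)
delta_qaly = 1.49        # incremental quality-adjusted life-years

icer = delta_cost / delta_qaly
print(f"ICER ~ GBP {icer:,.0f}/QALY")  # ~75,700; the paper reports 75,648,
                                       # the small gap reflecting rounding

# Illustrative threshold check: how much the incremental cost must fall to
# meet an assumed willingness-to-pay threshold of GBP 30,000/QALY.
wtp = 30_000.0
excess = delta_cost - wtp * delta_qaly
print(f"Incremental cost must fall by ~GBP {excess:,.0f}")
```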
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
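The kind of one-at-a-time sensitivity analysis described here can be illustrated with a deliberately simplified static jaw-lever model. All parameter values below are invented, and the study's actual multi-body dynamics model is far richer; this is only a sketch of the perturbation logic.

```python
import numpy as np

def bite_force(pcsa=4.0, sigma=30.0, angle=10.0, in_lever=0.02, out_lever=0.05):
    """Bite force (N) from muscle PCSA (cm^2), intrinsic strength sigma
    (N/cm^2), force orientation angle (deg) and jaw lever arms (m).
    PCSA itself depends on fibre length (PCSA = volume / fibre length),
    which is one route by which fibre-length errors propagate."""
    f_muscle = pcsa * sigma * np.cos(np.radians(angle))
    return f_muscle * in_lever / out_lever

base = bite_force()
# Perturb each muscle attribute by +10% while holding the others fixed.
for p, v in dict(pcsa=4.0, sigma=30.0, angle=10.0).items():
    up = bite_force(**{p: v * 1.1})
    print(f"{p}: +10% -> bite force changes by {100 * (up / base - 1):+.1f}%")
```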
Hikone, Yuya; Hirai, Go; Mishima, Masaki; Inomata, Kohsuke; Ikeya, Teppei; Arai, Souichiro; Shirakawa, Masahiro; Sodeoka, Mikiko; Ito, Yutaka
2016-10-01
Structural analyses of proteins under the macromolecular crowding inside human cultured cells by in-cell NMR spectroscopy are crucial not only for an explicit understanding of their cellular functions but also for applications in medical and pharmaceutical sciences. In-cell NMR experiments using human cultured cells, however, suffer from low sensitivity; pseudocontact shifts (PCSs) from protein-tagged paramagnetic lanthanoid ions, analysed using sensitive heteronuclear two-dimensional correlation NMR spectra, therefore offer a huge potential advantage over conventional NOE-based approaches for obtaining structural information. We synthesised a new lanthanoid-chelating tag (M8-CAM-I), in which the eight-fold, stereospecifically methylated DOTA (M8) scaffold was retained, while a stable carbamidemethyl (CAM) group was introduced as the functional group connecting to proteins. M8-CAM-I successfully fulfilled the requirements for in-cell NMR: high affinity for lanthanoid ions, low cytotoxicity and stability under the reducing conditions inside cells. Large PCSs observed for backbone N-H resonances of M8-CAM-tagged human ubiquitin mutant proteins, which were introduced into HeLa cells by electroporation, demonstrated that this approach readily provides structural information enabling the determination of protein structures, relative orientations of domains and protein complexes within human cultured cells.
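For reference, the structural information in a PCS comes from the standard relation between the observed shift and the nucleus's position in the frame of the lanthanoid's anisotropic magnetic susceptibility tensor:

$$ \Delta\delta^{\mathrm{PCS}} = \frac{1}{12\pi r^{3}} \left[ \Delta\chi_{\mathrm{ax}} \left( 3\cos^{2}\theta - 1 \right) + \frac{3}{2}\,\Delta\chi_{\mathrm{rh}} \sin^{2}\theta \cos 2\varphi \right], $$

where \(r\), \(\theta\) and \(\varphi\) are the spherical coordinates of the nucleus relative to the metal centre and tensor axes, and \(\Delta\chi_{\mathrm{ax}}\), \(\Delta\chi_{\mathrm{rh}}\) are the axial and rhombic components of the susceptibility anisotropy. Because the shift falls off only as \(1/r^{3}\), PCSs report on nuclei tens of ångströms from the tag.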
Scale Matters: A Cost-Outcome Analysis of an m-Health Intervention in Malawi.
Larsen-Cooper, Erin; Bancroft, Emily; Rajagopal, Sharanya; O'Toole, Maggie; Levin, Ann
2016-04-01
The primary objectives of this study are to determine cost per user and cost per contact with users of a mobile health (m-health) intervention. The secondary objectives are to map costs to changes in maternal, newborn, and child health (MNCH) and to estimate costs of alternate implementation and usage scenarios. A base cost model, constructed from recurrent costs and selected capital costs, was used to estimate average cost per user and per contact of an m-health intervention. This model was mapped to statistically significant changes in MNCH intermediate outcomes to determine the cost of improvements in MNCH indicators. Sensitivity analyses were conducted to estimate costs in alternate scenarios. The m-health intervention cost $29.33 per user and $4.33 per successful contact. The average cost for each user experiencing a change in an MNCH indicator ranged from $67 to $355. The sensitivity analyses showed that cost per user could be reduced by 48% if the service were to operate at full capacity. We believe that the intervention, operating at scale, has potential to be a cost-effective method for improving maternal and child health indicators. PMID:26348994
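The reported 48% saving at full capacity follows from fixed costs being spread over more users. The sketch below shows the mechanism with an invented fixed/variable split chosen only so the current-utilisation figure matches the abstract's $29.33 per user; the true cost structure is not given in the abstract.

```python
def cost_per_user(fixed_cost, variable_cost_per_user, n_users):
    """Average cost per user: fixed costs dilute as utilisation rises."""
    return fixed_cost / n_users + variable_cost_per_user

# Hypothetical split consistent with the reported $29.33/user.
fixed, variable, users = 250_000.0, 10.0, 12_930
current = cost_per_user(fixed, variable, users)        # ~29.33
full = cost_per_user(fixed, variable, users * 3.7)     # ~15.2 at capacity
print(f"current ${current:.2f}, full capacity ${full:.2f} "
      f"({100 * (1 - full / current):.0f}% lower)")    # ~48% lower
```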
Optical imaging of RNAi-mediated silencing of cancer
NASA Astrophysics Data System (ADS)
Ochiya, Takahiro; Honma, Kimi; Takeshita, Fumitaka; Nagahara, Shunji
2008-02-01
RNAi has rapidly become a powerful tool for drug target discovery and validation in in vitro culture systems, and interest is consequently growing in extending its application to in vivo systems, such as animal disease models and human therapeutics. Cancer is one obvious application for RNAi therapeutics: abnormal gene expression is thought to contribute to the pathogenesis and maintenance of the malignant phenotype of cancer, so many oncogenes and cell-signaling molecules present enticing drug-target possibilities. RNAi, being potent and specific, could silence tumor-related genes and would appear to be a rational approach to inhibiting tumor growth. In subsequent in vivo studies, an appropriate cancer model must be developed to evaluate siRNA effects on tumors; how to evaluate the effect of siRNA in an in vivo therapeutic model is also important. Accelerating the analyses of these models and improving their predictive value through whole-animal imaging methods, which reveal cancer inhibition in real time and are sensitive to subtle changes, are crucial for rapid advancement of these approaches. Bioluminescent imaging is one such optically based imaging method, enabling rapid in vivo analyses of a variety of cellular and molecular events with extreme sensitivity.
Evaluation of an optoacoustic based gas analysing device
NASA Astrophysics Data System (ADS)
Markmann, Janine; Lange, Birgit; Theisen-Kunde, Dirk; Danicke, Veit; Mayorov, Fedor; Eckert, Sebastian; Kettmann, Pascal; Brinkmann, Ralf
2017-07-01
The relative occurrence of volatile organic compounds in human respiratory gas is disease-specific (ppb range). A prototype gas-analysing device using two tuneable laser systems, an OPO laser (2.5 to 10 μm) and a CO2 laser (9 to 11 μm), together with an optoacoustic measurement cell, was developed to detect concentrations in the ppb range. The sensitivity and resolution of the system were determined by test-gas measurements, measuring ethylene and sulfur hexafluoride with the CO2 laser and butane with the OPO laser. The system sensitivity was found to be 13 ppb for sulfur hexafluoride, 17 ppb for ethylene and <10 ppb for butane, with a resolution of 50 ppb at minimum for sulfur hexafluoride. Respiratory gas samples of 8 healthy volunteers were investigated by irradiation with 17 laser lines of the CO2 laser, several of which overlap with strong absorption bands of ammonia. As ammonia concentration is known to increase with age, we aimed to separate subjects aged under 35 from those over 35. To evaluate the data, the first seven gas samples were used to train a discriminant analysis algorithm; the eighth subject, aged 49 years, was then correctly assigned to the >35 group.
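A minimal sketch of the classification step, using scikit-learn's linear discriminant analysis as a stand-in for the (unspecified) algorithm in the study; the spectra here are random placeholders, with 17 features corresponding to the 17 CO2-laser lines.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X_train = rng.random((7, 17))               # placeholder optoacoustic spectra
y_train = np.array([0, 0, 0, 1, 1, 1, 1])   # 0: <35 years, 1: >35 years
x_new = rng.random((1, 17))                 # the eighth subject's spectrum

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
print(lda.predict(x_new))                   # assigns the held-out subject a group
```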
A systematic analysis of commonly used antibodies in cancer diagnostics.
Gremel, Gabriela; Bergman, Julia; Djureinovic, Dijana; Edqvist, Per-Henrik; Maindad, Vikas; Bharambe, Bhavana M; Khan, Wasif Ali Z A; Navani, Sanjay; Elebro, Jacob; Jirström, Karin; Hellberg, Dan; Uhlén, Mathias; Micke, Patrick; Pontén, Fredrik
2014-01-01
Immunohistochemistry plays a pivotal role in cancer differential diagnostics. Identifying the primary tumour from a metastasis specimen remains a significant challenge, despite the availability of an increasing number of antibodies. The aim of the present study was to provide evidence-based data on the diagnostic power of antibodies used frequently for clinical differential diagnostics. A tissue microarray cohort comprising 940 tumour samples, of which 502 were metastatic lesions, representing tumours from 18 different organs and four non-localized cancer types, was analysed using immunohistochemistry with 27 well-established antibodies used in clinical differential diagnostics. Few antibodies, e.g. prostate-specific antigen and thyroglobulin, showed a cancer type-related sensitivity and specificity of more than 95%. The majority of the antibodies showed a low degree of sensitivity and specificity for defined cancer types. Combinations of antibodies provided limited added value for differential diagnostics of cancer types. The results from analysing 27 diagnostic antibodies on consecutive sections of 940 defined tumours provide a unique repository of data that can empower a more optimal use of clinical immunohistochemistry. Our results highlight the benefit of immunohistochemistry and the unmet need for novel markers to improve differential diagnostics of cancer. © 2013 John Wiley & Sons Ltd.
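The per-antibody figures reported here are standard sensitivity/specificity computations over the tissue microarray; a minimal sketch (our variable names) is:

```python
import numpy as np

def marker_performance(stain_positive, is_target_type):
    """Sensitivity and specificity of one antibody for one cancer type.

    stain_positive : bool array, positive immunostaining per tumour sample
    is_target_type : bool array, sample belongs to the target cancer type
    """
    tp = np.sum(stain_positive & is_target_type)    # true positives
    fn = np.sum(~stain_positive & is_target_type)   # false negatives
    fp = np.sum(stain_positive & ~is_target_type)   # false positives
    tn = np.sum(~stain_positive & ~is_target_type)  # true negatives
    return tp / (tp + fn), tn / (tn + fp)           # sensitivity, specificity
```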
Performance comparison of Islamic and commercial banks in Malaysia
NASA Astrophysics Data System (ADS)
Azizud-din, Azimah; Hussin, Siti Aida Sheikh; Zahid, Zalina
2016-10-01
The steady growth in the size and the increasing number of Islamic banks show that the Islamic banking system is considered an alternative to the conventional banking system. Comparisons of performance measurements and evaluations of financial health for both types of banks are therefore essential. The main purpose of this study is to analyse the differences between Islamic and commercial bank performance. Five years of secondary data were collected from the annual reports of each bank. The return on assets (ROA) ratio was chosen as the dependent variable, while capital adequacy, asset quality, management quality, earnings, liquidity and sensitivity to market risk (CAMELS) were the independent variables. Descriptive analyses were performed to understand the data. Independent t-tests and Mann-Whitney tests were used to examine differences between Islamic and commercial banks on the financial variables. Stepwise and hierarchical multiple regressions were used to determine the factors that affect the profitability performance of banks. Results show that Islamic banks perform better in terms of profitability, earning power, liquidity and sensitivity to market risk. The factors that affect profitability performance are capital adequacy, earning power and liquidity.
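As a sketch of the profitability regression, the snippet below fits ROA on the CAMELS ratios with ordinary least squares. The file name and column names are assumptions, and statsmodels has no built-in stepwise selection, so the full model is fitted directly rather than reproducing the study's stepwise/hierarchical procedure.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel of bank-year financial ratios.
df = pd.read_csv("bank_ratios.csv")
X = sm.add_constant(df[["capital_adequacy", "asset_quality", "management",
                        "earnings", "liquidity", "market_risk_sensitivity"]])
model = sm.OLS(df["roa"], X).fit()
print(model.summary())  # coefficient signs/sizes show which ratios drive ROA
```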
Gama-Arachchige, N. S.; Baskin, J. M.; Geneve, R. L.; Baskin, C. C.
2013-01-01
Background and Aims Physical dormancy (PY)-break in some annual plant species is a two-step process controlled by two different temperature and/or moisture regimes. The thermal time model has been used to quantify PY-break in several species of Fabaceae, but not to describe stepwise PY-break. The primary aims of this study were to quantify the thermal requirement for sensitivity induction by developing a thermal time model and to propose a mechanism for stepwise PY-breaking in the winter annual Geranium carolinianum. Methods Seeds of G. carolinianum were stored under dry conditions at different constant and alternating temperatures to induce sensitivity (step I). Sensitivity induction was analysed based on the thermal time approach using the Gompertz function. The effect of temperature on step II was studied by incubating sensitive seeds at low temperatures. Scanning electron microscopy, penetrometer techniques, and different humidity levels and temperatures were used to explain the mechanism of stepwise PY-break. Key Results The base temperature (Tb) for sensitivity induction was 17·2 °C and constant for all seed fractions of the population. Thermal time for sensitivity induction during step I in the PY-breaking process agreed with the three-parameter Gompertz model. Step II (PY-break) did not agree with the thermal time concept. Q10 values for the rate of sensitivity induction and PY-break were between 2·0 and 3·5 and between 0·02 and 0·1, respectively. The force required to separate the water gap palisade layer from the sub-palisade layer was significantly reduced after sensitivity induction. Conclusions Step I and step II in PY-breaking of G. carolinianum are controlled by chemical and physical processes, respectively. This study indicates the feasibility of applying the developed thermal time model to predict or manipulate sensitivity induction in seeds with two-step PY-breaking processes. The model is the first and most detailed one yet developed for sensitivity induction in PY-break. PMID:23456728
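A minimal sketch of the thermal time/Gompertz machinery described here; the base temperature Tb is taken from the abstract, while the calibration data and starting values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

T_B = 17.2  # base temperature for sensitivity induction (deg C, from the study)

def thermal_time(temp_c, days):
    """Thermal time accumulated above the base temperature (degree-days)."""
    return max(temp_c - T_B, 0.0) * days

def gompertz(theta, a, b, c):
    """Three-parameter Gompertz: fraction of seeds made sensitive after
    accumulating thermal time theta."""
    return a * np.exp(-b * np.exp(-c * theta))

# Invented calibration data: thermal time vs observed sensitive fraction.
theta = np.array([0.0, 50.0, 100.0, 200.0, 400.0, 800.0])
frac = np.array([0.02, 0.10, 0.30, 0.62, 0.85, 0.93])
(a, b, c), _ = curve_fit(gompertz, theta, frac, p0=[1.0, 3.0, 0.01])
print(f"predicted sensitive fraction after 30 d at 25 deg C: "
      f"{gompertz(thermal_time(25.0, 30.0), a, b, c):.2f}")
```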