Sensitivity analysis of static resistance of slender beam under bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Valeš, Jan
2016-06-08
The paper deals with statistical and sensitivity analyses of the resistance of simply supported I-beams under bending. The resistance was solved by the geometrically nonlinear finite element method in the program ANSYS. The beams are modelled with initial geometrical imperfections following the first buckling eigenmode. The imperfections, together with the geometrical characteristics of the cross section and the material characteristics of the steel, were considered as random quantities. The Latin Hypercube Sampling method was applied to evaluate the statistical and sensitivity analyses of resistance.
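The Latin Hypercube Sampling step can be sketched as follows; the bounds for imperfection amplitude and yield strength are illustrative assumptions, not values from the paper:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Draw a Latin Hypercube sample: one point in each of n_samples
    equal-probability strata per dimension, with strata randomly
    paired across dimensions."""
    rng = np.random.default_rng(rng)
    bounds = np.asarray(bounds, dtype=float)   # shape (dim, 2)
    dim = bounds.shape[0]
    # one uniform draw inside each stratum, per dimension
    u = (rng.random((n_samples, dim)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(dim):                       # decouple the strata across dims
        rng.shuffle(u[:, j])
    lo, hi = bounds[:, 0], bounds[:, 1]
    return lo + u * (hi - lo)

# hypothetical random inputs: imperfection amplitude (mm), yield strength (MPa)
samples = latin_hypercube(100, [(-5.0, 5.0), (235.0, 355.0)], rng=0)
```

Unlike plain Monte Carlo, every one of the 100 equal-probability strata in each dimension receives exactly one sample, which is what makes the method efficient for statistical resistance studies.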
Evaluation of microarray data normalization procedures using spike-in experiments
Rydén, Patrik; Andersson, Henrik; Landfors, Mattias; Näslund, Linda; Hartmanová, Blanka; Noppa, Laila; Sjöstedt, Anders
2006-01-01
Background Recently, a large number of methods for the analysis of microarray data have been proposed, but there are few comparisons of their relative performances. By using so-called spike-in experiments, it is possible to characterize the analyzed data and thereby enable comparisons of different analysis methods. Results A spike-in experiment using eight in-house produced arrays was used to evaluate established and novel methods for filtration, background adjustment, scanning, channel adjustment, and censoring. The S-plus package EDMA, a stand-alone tool providing characterization of analyzed cDNA-microarray data obtained from spike-in experiments, was developed and used to evaluate 252 normalization methods. For all analyses, the sensitivities at low false positive rates were observed together with estimates of the overall bias and the standard deviation. In general, there was a trade-off between the ability of the analyses to identify differentially expressed genes (i.e. the analyses' sensitivities) and their ability to provide unbiased estimators of the desired ratios. Virtually all analyses underestimated the magnitude of the regulations; often less than 50% of the true regulations were observed. Moreover, the bias depended on the underlying mRNA concentration; low concentrations resulted in high bias. Many of the analyses had relatively low sensitivities, but analyses that used either the constrained model (i.e. a procedure that combines data from several scans) or partial filtration (a novel method for treating data from so-called not-found spots) had, with few exceptions, high sensitivities. These methods gave considerably higher sensitivities than some commonly used analysis methods. Conclusion The use of spike-in experiments is a powerful approach for evaluating microarray preprocessing procedures. Analyzed data are characterized by properties of the observed log-ratios and the analysis' ability to detect differentially expressed genes.
If bias is not a major problem, we recommend the use of either the CM-procedure or partial filtration. PMID:16774679
Andronis, L; Barton, P; Bryan, S
2009-06-01
To determine how we define good practice in sensitivity analysis in general and probabilistic sensitivity analysis (PSA) in particular, and to what extent it has been adhered to in the independent economic evaluations undertaken for the National Institute for Health and Clinical Excellence (NICE) over recent years; to establish what policy impact sensitivity analysis has in the context of NICE, and policy-makers' views on sensitivity analysis and uncertainty, and what use is made of sensitivity analysis in policy decision-making. Three major electronic databases, MEDLINE, EMBASE and the NHS Economic Evaluation Database, were searched from inception to February 2008. The meaning of 'good practice' in the broad area of sensitivity analysis was explored through a review of the literature. An audit was undertaken of the 15 most recent NICE multiple technology appraisal judgements and their related reports to assess how sensitivity analysis has been undertaken by independent academic teams for NICE. A review of the policy and guidance documents issued by NICE aimed to assess the policy impact of the sensitivity analysis and the PSA in particular. Qualitative interview data from NICE Technology Appraisal Committee members, collected as part of an earlier study, were also analysed to assess the value attached to the sensitivity analysis components of the economic analyses conducted for NICE. All forms of sensitivity analysis, notably both deterministic and probabilistic approaches, have their supporters and their detractors. Practice in relation to univariate sensitivity analysis is highly variable, with considerable lack of clarity in relation to the methods used and the basis of the ranges employed. In relation to PSA, there is a high level of variability in the form of distribution used for similar parameters, and the justification for such choices is rarely given. Virtually all analyses failed to consider correlations within the PSA, and this is an area of concern. 
Uncertainty is considered explicitly in the process of arriving at a decision by the NICE Technology Appraisal Committee, and a correlation between high levels of uncertainty and negative decisions was indicated. The findings suggest considerable value in deterministic sensitivity analysis. Such analyses serve to highlight which model parameters are critical to driving a decision. Strong support was expressed for PSA, principally because it provides an indication of the parameter uncertainty around the incremental cost-effectiveness ratio. The review and the policy impact assessment focused exclusively on documentary evidence, excluding other sources that might have revealed further insights on this issue. In seeking to address parameter uncertainty, both deterministic and probabilistic sensitivity analyses should be used. It is evident that some cost-effectiveness work, especially around the sensitivity analysis components, represents a challenge in making it accessible to those making decisions. This speaks to the training agenda for those sitting on such decision-making bodies, and to the importance of clear presentation of analyses by the academic community.
The Intercultural Sensitivity of Chilean Teachers Serving an Immigrant Population in Schools
ERIC Educational Resources Information Center
Morales Mendoza, Karla; Sanhueza Henríquez, Susan; Friz Carrillo, Miguel; Riquelme Bravo, Paula
2017-01-01
The objective of this article is to evaluate the intercultural sensitivity of teachers working in culturally diverse classrooms, and to analyse differences in intercultural sensitivity based on the gender, age, training (advanced training courses), and intercultural experience of the teachers. A quantitative approach with a comparative descriptive…
DOT National Transportation Integrated Search
2013-08-01
The overall goal of Global Sensitivity Analysis (GSA) is to determine sensitivity of pavement performance prediction models to the variation in the design input values. The main difference between GSA and detailed sensitivity analyses is the way the ...
NASA Technical Reports Server (NTRS)
Greene, William H.
1989-01-01
A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semianalytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
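The overall finite difference technique described above can be illustrated on a toy reduced-order model; the 2-DOF spring chain and the parameter values here are hypothetical stand-ins for a modal-reduced finite element model, not the study's own problems:

```python
import numpy as np

def displacement(k):
    """Static displacements of a toy 2-DOF spring chain with stiffness k,
    standing in for a (reduced) finite element model."""
    K = np.array([[2.0 * k, -k], [-k, k]])   # stiffness matrix
    f = np.array([0.0, 1.0])                 # load vector
    return np.linalg.solve(K, f)

def fd_sensitivity(response, p, h=1e-6):
    """Overall finite difference: repeat the full analysis for perturbed
    designs and difference the responses."""
    return (response(p + h) - response(p - h)) / (2.0 * h)

du_dk = fd_sensitivity(displacement, 100.0)   # d(displacement)/d(stiffness)
```

In the semianalytical variant, only the coefficient matrices (here K) would be differenced, with the governing equations differentiated analytically.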
Pasta, D J; Taylor, J L; Henning, J M
1999-01-01
Decision-analytic models are frequently used to evaluate the relative costs and benefits of alternative therapeutic strategies for health care. Various types of sensitivity analysis are used to evaluate the uncertainty inherent in the models. Although probabilistic sensitivity analysis is more difficult theoretically and computationally, the results can be much more powerful and useful than deterministic sensitivity analysis. The authors show how a Monte Carlo simulation can be implemented using standard software to perform a probabilistic sensitivity analysis incorporating the bootstrap. The method is applied to a decision-analytic model evaluating the cost-effectiveness of Helicobacter pylori eradication. The necessary steps are straightforward and are described in detail. The use of the bootstrap avoids certain difficulties encountered with theoretical distributions. The probabilistic sensitivity analysis provided insights into the decision-analytic model beyond the traditional base-case and deterministic sensitivity analyses and should become the standard method for assessing sensitivity.
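A minimal sketch of the bootstrap-based probabilistic sensitivity analysis the authors describe, assuming hypothetical per-patient cost and effect data (the H. pylori model itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical per-patient data: costs ($) and effects (QALYs) in two arms
cost_treat = rng.normal(1200.0, 300.0, 200)
cost_ctrl = rng.normal(800.0, 250.0, 200)
eff_treat = rng.normal(0.70, 0.20, 200)
eff_ctrl = rng.normal(0.60, 0.20, 200)

def bootstrap_icer(n_boot=2000):
    """Probabilistic sensitivity analysis via the bootstrap: resample
    patients with replacement and recompute the incremental
    cost-effectiveness ratio (ICER) on each replicate."""
    icers = np.empty(n_boot)
    n = len(cost_treat)
    for b in range(n_boot):
        it = rng.integers(0, n, n)     # bootstrap indices, treatment arm
        ic = rng.integers(0, n, n)     # bootstrap indices, control arm
        d_cost = cost_treat[it].mean() - cost_ctrl[ic].mean()
        d_eff = eff_treat[it].mean() - eff_ctrl[ic].mean()
        icers[b] = d_cost / d_eff
    return icers

icers = bootstrap_icer()
lo, hi = np.percentile(icers, [2.5, 97.5])   # 95% uncertainty interval
```

Resampling observed patients avoids assuming a theoretical distribution for each parameter, which is the advantage the abstract attributes to the bootstrap.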
Application of design sensitivity analysis for greater improvement on machine structural dynamics
NASA Technical Reports Server (NTRS)
Yoshimura, Masataka
1987-01-01
Methodologies are presented for greatly improving machine structural dynamics by using design sensitivity analyses and evaluative parameters. First, design sensitivity coefficients and evaluative parameters of structural dynamics are described. Next, the relations between the design sensitivity coefficients and the evaluative parameters are clarified. Then, design improvement procedures for structural dynamics are proposed for the following three cases: (1) addition of elastic structural members, (2) addition of mass elements, and (3) substantial changes of joint design variables. Cases (1) and (2) correspond to changes of the initial framework or configuration, and (3) corresponds to the alteration of poor initial design variables. Finally, numerical examples are given demonstrating the applicability of the proposed methods.
ERIC Educational Resources Information Center
Zimmer, Ron; Engberg, John
2016-01-01
School choice programs continue to be controversial, spurring a number of researchers into evaluating them. When possible, researchers evaluate the effect of attending a school of choice using randomized designs to eliminate possible selection bias. Randomized designs are often thought of as the gold standard for research, but many circumstances…
Performance of Stratified and Subgrouped Disproportionality Analyses in Spontaneous Databases.
Seabroke, Suzie; Candore, Gianmario; Juhlin, Kristina; Quarcoo, Naashika; Wisniewski, Antoni; Arani, Ramin; Painter, Jeffery; Tregunno, Philip; Norén, G Niklas; Slattery, Jim
2016-04-01
Disproportionality analyses are used in many organisations to identify adverse drug reactions (ADRs) from spontaneous report data. Reporting patterns vary over time, with patient demographics, and between different geographical regions, and therefore subgroup analyses or adjustment by stratification may be beneficial. The objective of this study was to evaluate the performance of subgroup and stratified disproportionality analyses for a number of key covariates within spontaneous report databases of differing sizes and characteristics. Using a reference set of established ADRs, signal detection performance (sensitivity and precision) was compared for stratified, subgroup and crude (unadjusted) analyses within five spontaneous report databases (two company, one national and two international databases). Analyses were repeated for a range of covariates: age, sex, country/region of origin, calendar time period, event seriousness, vaccine/non-vaccine, reporter qualification and report source. Subgroup analyses consistently performed better than stratified analyses in all databases. Subgroup analyses also showed benefits in both sensitivity and precision over crude analyses for the larger international databases, whilst for the smaller databases a gain in precision tended to result in some loss of sensitivity. Additionally, stratified analyses did not increase sensitivity or precision beyond that associated with analytical artefacts of the analysis. The most promising subgroup covariates were age and region/country of origin, although this varied between databases. Subgroup analyses perform better than stratified analyses and should be considered over the latter in routine first-pass signal detection. Subgroup analyses are also clearly beneficial over crude analyses for larger databases, but further validation is required for smaller databases.
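The contrast between subgroup and crude disproportionality analyses can be sketched with the proportional reporting ratio (PRR), one common disproportionality statistic; the drug-event counts below are hypothetical:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio for a 2x2 table: a = drug with event,
    b = drug without event, c = other drugs with event, d = other
    drugs without event."""
    return (a / (a + b)) / (c / (c + d))

def subgroup_prr(tables):
    """Subgroup analysis: a separate PRR within each covariate stratum,
    rather than one crude PRR over pooled counts."""
    return {name: prr(*t) for name, t in tables.items()}

# hypothetical counts stratified by age band
tables = {"<65": (10, 990, 50, 8950), "65+": (30, 470, 40, 4460)}
signals = subgroup_prr(tables)

# the crude analysis pools the strata and dilutes the older-age signal
crude = prr(40, 1460, 90, 13410)
```

Here the within-stratum PRR for the 65+ subgroup exceeds the crude PRR, illustrating how pooling across a covariate such as age can mask a subgroup-specific signal.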
Review of nutritional screening and assessment tools and clinical outcomes in heart failure.
Lin, Hong; Zhang, Haifeng; Lin, Zheng; Li, Xinli; Kong, Xiangqin; Sun, Gouzhen
2016-09-01
Recent studies have suggested that undernutrition as defined using multidimensional nutritional evaluation tools may affect clinical outcomes in heart failure (HF). The evidence supporting this correlation is unclear. Therefore, we conducted this systematic review to critically appraise the use of multidimensional evaluation tools in the prediction of clinical outcomes in HF. We performed descriptive analyses of all identified articles involving qualitative analyses. We used STATA to conduct meta-analyses when at least three studies that tested the same type of nutritional assessment or screening tool and used the same outcome were identified. Sensitivity analyses were conducted to validate our positive results. We identified 17 articles with qualitative analyses and 11 with quantitative analyses after comprehensive literature searching and screening. We determined that the prevalence of malnutrition is high in HF (range 16-90%), particularly in advanced and acute decompensated HF (approximately 75-90%). Undernutrition as identified by multidimensional evaluation tools may be significantly associated with hospitalization, length of stay and complications and is particularly strongly associated with high mortality. The meta-analysis revealed that compared with other tools, Mini Nutritional Assessment (MNA) scores were the strongest predictors of mortality in HF (HR 4.32, 95% CI 2.30-8.11). Our results remained reliable after conducting sensitivity analyses. The prevalence of malnutrition is high in HF, particularly in advanced and acute decompensated HF. Moreover, undernutrition as identified by multidimensional evaluation tools is significantly associated with unfavourable prognoses and high mortality in HF.
NASA Astrophysics Data System (ADS)
Lin, Y.; Li, W. J.; Yu, J.; Wu, C. Z.
2018-04-01
Remote sensing technology offers significant advantages for monitoring and analysing the ecological environment. Using automatic extraction algorithms, various environmental resource information for a tourist region can be obtained from remote sensing imagery; combined with GIS spatial analysis and landscape pattern analysis, this information can be quantitatively analysed and interpreted. In this study, taking the Chaohu Lake Basin as an example, a Landsat-8 multi-spectral satellite image from October 2015 was used. Integrating the automatic ELM (Extreme Learning Machine) classification results with digital elevation model and slope data, the factors of human disturbance degree, land use degree, primary productivity, landscape evenness, vegetation coverage, DEM, slope and normalized water body index were used to construct an eco-sensitivity evaluation index based on AHP and overlay analysis. According to the value of the eco-sensitivity evaluation index, and using equal-interval reclassification in GIS, the Chaohu Lake area was divided into four grades: very sensitive, sensitive, sub-sensitive and insensitive areas. The results of the eco-sensitivity analysis show that the very sensitive area covered 4577.4378 km2 (about 33.12%); the sensitive area 5130.0522 km2 (about 37.12%); the sub-sensitive area 3729.9312 km2 (about 26.99%); and the insensitive area 382.4399 km2 (about 2.77%). At the same time, spatial differences in the ecological sensitivity of the Chaohu Lake basin were found. The most sensitive areas were mainly located in areas with high elevation and steep terrain; insensitive areas were mainly distributed in gently sloping platform areas; and the sensitive and sub-sensitive areas were mainly agricultural land and woodland. Through the eco-sensitivity analysis of the study area, automatic recognition and analysis techniques for remote sensing imagery are integrated into ecological analysis and ecological regional planning, which can provide a reliable scientific basis for rational planning and sustainable development of the Chaohu Lake tourist area.
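The equal-interval reclassification step can be sketched as follows; the index values are illustrative, not taken from the study:

```python
import numpy as np

def reclassify_equal_interval(index, n_classes=4):
    """Equal-interval reclassification of a continuous eco-sensitivity
    index into ordinal grades (1 = least to n_classes = most sensitive)."""
    edges = np.linspace(index.min(), index.max(), n_classes + 1)
    # interior edges only; right=True puts boundary values in the lower class
    return np.digitize(index, edges[1:-1], right=True) + 1

idx = np.array([0.05, 0.30, 0.55, 0.80, 1.00])   # illustrative index values
grades = reclassify_equal_interval(idx)
```

A GIS raster calculator performs the same operation cell by cell; the grades then map directly to the insensitive through very sensitive zones.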
Skin sensitizer identification by IL-8 secretion and CD86 expression on THP-1 cells.
Parise, Carolina Bellini; Sá-Rocha, Vanessa Moura; Moraes, Jane Zveiter
2015-12-25
Substantial progress has been made in the development of alternative methods for skin sensitization testing in the last decade in several countries around the world. Brazil has experienced increasing concern about the use of animals in product development since the publication of Law 9605/1998, which prohibits the use of animals when an alternative method is available. An in vitro test to evaluate allergenic potential is therefore a pressing need. This preliminary study began by establishing the use of the myelomonocytic THP-1 cell line, according to the human cell line activation test (h-CLAT), which is already under validation. We found that 48-h chemical exposure was necessary to identify 22 out of 23 sensitizers by analysis of CD86 expression. In addition, CD54 expression analyses showed poor efficiency in discriminating sensitizers from non-sensitizers under our conditions. In view of these results, we examined changes in the pro-inflammatory interleukin profile. IL-8 secretion analysis after 24-h chemical incubation appeared to be an alternative to CD54 expression assessment. Altogether, our findings showed that combining the analyses of CD86 expression and IL-8 secretion allowed the prediction of allergenicity.
Effectiveness of Light Sources on In-Office Dental Bleaching: A Systematic Review and Meta-Analyses.
Souto Maior, J R; de Moraes, S L D; Lemos, C A A; Vasconcelos, B C do E; Montes, M A J R; Pellizzer, E P
2018-06-12
A systematic review and meta-analyses were performed to evaluate the efficacy of tooth color change and the sensitivity of teeth following in-office bleaching with and without light activation of the gel in adult patients. This review was registered at PROSPERO (CRD 42017060574) and is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Electronic systematic searches of PubMed/MEDLINE, Web of Science, and the Cochrane Library were conducted for published articles. Only randomized clinical trials among adults that compared in-office bleaching with and without light activation at the same bleaching gel concentrations were selected. The outcomes were tooth color change and tooth sensitivity prevalence and intensity. The searches identified 1054 records; after title and abstract screening, 39 studies remained, and 16 of these were further excluded. Twenty-three articles thus met the eligibility criteria for qualitative analyses, of which 20 were included in the meta-analyses of primary and secondary outcomes. No significant differences in tooth color change or tooth sensitivity incidence were found between the compared groups; however, tooth sensitivity intensity decreased when light sources were applied. The use of light sources for in-office bleaching is not imperative to achieve esthetic clinical results.
Wang, Yong; Fujii, Takeshi
2011-01-01
It is important in molecular biological analyses to evaluate contamination by co-extracted humic acids in DNA/RNA extracted from soil. We compared the sensitivity of various methods for measuring humic acids, and the influence of DNA/RNA and proteins on the measurements. Based on the results, we offer suggestions on the choice of methods for measuring humic acids in molecular biological analyses.
The Negative Affect Hypothesis of Noise Sensitivity
Shepherd, Daniel; Heinonen-Guzejev, Marja; Heikkilä, Kauko; Dirks, Kim N.; Hautus, Michael J.; Welch, David; McBride, David
2015-01-01
Some studies indicate that noise sensitivity is explained by negative affect, a dispositional tendency to negatively evaluate situations and the self. Individuals high in such traits may report a greater sensitivity to other sensory stimuli, such as smell, bright light and pain. However, research investigating the relationship between noise sensitivity and sensitivity to stimuli associated with other sensory modalities has not always supported the notion of a common underlying trait, such as negative affect, driving them. Additionally, other explanations of noise sensitivity based on cognitive processes have existed in the clinical literature for over 50 years. Here, we report on secondary analyses of pre-existing laboratory (n = 74) and epidemiological (n = 1005) data focusing on the relationship between noise sensitivity and sensitivity to, and annoyance with, a variety of olfactory-related stimuli. In the first study a correlational design examined the relationships between noise sensitivity, noise annoyance, and perceptual ratings of 16 odors. The second study sought differences between mean noise and air pollution annoyance scores across noise sensitivity categories. Results from both analyses failed to support the notion that, by itself, negative affectivity explains sensitivity to noise. PMID:25993104
Neutron Physics Division progress report for period ending February 28, 1977
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maienschein, F.C.
1977-05-01
Summaries are given of research progress in the following areas: (1) measurements of cross sections and related quantities, (2) cross section evaluations and theory, (3) cross section processing, testing, and sensitivity analysis, (4) integral experiments and their analyses, (5) development of methods for shield and reactor analyses, (6) analyses for specific systems or applications, and (7) information analysis and distribution. (SDF)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1993-08-01
Before disposing of transuranic radioactive waste in the Waste Isolation Pilot Plant (WIPP), the United States Department of Energy (DOE) must evaluate compliance with applicable long-term regulations of the United States Environmental Protection Agency (EPA). Sandia National Laboratories is conducting iterative performance assessments (PAs) of the WIPP for the DOE to provide interim guidance while preparing for a final compliance evaluation. This volume of the 1992 PA contains results of uncertainty and sensitivity analyses with respect to migration of gas and brine from the undisturbed repository. Additional information about the 1992 PA is provided in other volumes. Volume 1 contains an overview of WIPP PA and results of a preliminary comparison with 40 CFR 191, Subpart B. Volume 2 describes the technical basis for the performance assessment, including descriptions of the linked computational models used in the Monte Carlo analyses. Volume 3 contains the reference data base and values for input parameters used in consequence and probability modeling. Volume 4 contains uncertainty and sensitivity analyses with respect to the EPA's Environmental Standards for the Management and Disposal of Spent Nuclear Fuel, High-Level and Transuranic Radioactive Wastes (40 CFR 191, Subpart B). Finally, guidance derived from the entire 1992 PA is presented in Volume 6. Results of the 1992 uncertainty and sensitivity analyses indicate that, conditional on the modeling assumptions and the assigned parameter-value distributions, the most important parameters for which uncertainty has the potential to affect gas and brine migration from the undisturbed repository are: initial liquid saturation in the waste, anhydrite permeability, biodegradation-reaction stoichiometry, gas-generation rates for both corrosion and biodegradation under inundated conditions, and the permeability of the long-term shaft seal.
ERIC Educational Resources Information Center
Mortensen, Jennifer A.; Barnett, Melissa A.
2018-01-01
Research Findings: This study examined the transactional nature of harsh parenting and emotion regulation across toddlerhood, including the moderating role of teacher sensitivity in child care. Secondary data analyses were conducted with a subsample of families from the Early Head Start Research and Evaluation Project who participated in…
Between- and within-lake responses of macrophyte richness metrics to shoreline development
Beck, Marcus W.; Vondracek, Bruce C.; Hatch, Lorin K.
2013-01-01
Aquatic habitat in littoral environments can be affected by residential development of shoreline areas. We evaluated the relationship between macrophyte richness metrics and shoreline development to quantify indicator response at 2 spatial scales for Minnesota lakes. First, the response of total, submersed, and sensitive species to shoreline development was evaluated within lakes to quantify macrophyte response as a function of distance to the nearest dock. Within-lake analyses using generalized linear mixed models focused on 3 lakes of comparable size with a minimal influence of watershed land use. Survey points farther from docks had higher total species richness and presence of species sensitive to disturbance. Second, between-lake effects of shoreline development on total, submersed, emergent-floating, and sensitive species were evaluated for 1444 lakes. Generalized linear models were developed for all lakes and stratified subsets to control for lake depth and watershed land use. Between-lake analyses indicated a clear response of macrophyte richness metrics to increasing shoreline development, such that fewer emergent-floating and sensitive species were correlated with increasing density of docks. These trends were particularly evident for deeper lakes with lower watershed development. Our results provide further evidence that shoreline development is associated with degraded aquatic habitat, particularly by illustrating the response of macrophyte richness metrics across multiple lake types and different spatial scales.
Cohen, Jérémie F; Korevaar, Daniël A; Wang, Junfeng; Leeflang, Mariska M; Bossuyt, Patrick M
2016-09-01
To evaluate changes over time in summary estimates from meta-analyses of diagnostic accuracy studies. We included 48 meta-analyses from 35 MEDLINE-indexed systematic reviews published between September 2011 and January 2012 (743 diagnostic accuracy studies; 344,015 participants). Within each meta-analysis, we ranked studies by publication date. We applied random-effects cumulative meta-analysis to follow how summary estimates of sensitivity and specificity evolved over time. Time trends were assessed by fitting a weighted linear regression model of the summary accuracy estimate against rank of publication. The median of the 48 slopes was -0.02 (-0.08 to 0.03) for sensitivity and -0.01 (-0.03 to 0.03) for specificity. Twelve of 96 (12.5%) time trends in sensitivity or specificity were statistically significant. We found a significant time trend in at least one accuracy measure for 11 of the 48 (23%) meta-analyses. Time trends in summary estimates are relatively frequent in meta-analyses of diagnostic accuracy studies. Results from early meta-analyses of diagnostic accuracy studies should be considered with caution. Copyright © 2016 Elsevier Inc. All rights reserved.
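The cumulative meta-analysis and time-trend regression can be sketched as below; this uses fixed-effect inverse-variance pooling on the logit scale as a simplification of the random-effects approach in the study, with hypothetical per-study data:

```python
import numpy as np

# hypothetical per-study sensitivities and sample sizes, ranked by
# publication date (earliest first)
sens = np.array([0.95, 0.90, 0.88, 0.85, 0.84, 0.83])
n = np.array([40, 60, 80, 100, 120, 150])

def cumulative_summary(p, n):
    """Cumulative meta-analysis: inverse-variance pooling on the logit
    scale, updated one study at a time (fixed-effect simplification)."""
    logit = np.log(p / (1 - p))
    w = n * p * (1 - p)               # approximate inverse variance of a logit
    cum = np.cumsum(w * logit) / np.cumsum(w)
    return 1 / (1 + np.exp(-cum))     # back-transform to sensitivity

summary = cumulative_summary(sens, n)
rank = np.arange(1, len(sens) + 1)
# weighted linear regression of the summary estimate against publication rank
slope, intercept = np.polyfit(rank, summary, 1, w=np.cumsum(n))
```

A negative fitted slope corresponds to the declining summary sensitivity over publication rank that the study flags as a reason for caution with early meta-analyses.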
Evaluating trade-offs in bull trout reintroduction strategies using structured decision making
Brignon, William R.; Peterson, James T.; Dunham, Jason B.; Schaller, Howard A.; Schreck, Carl B.
2018-01-01
Structured decision making allows reintroduction decisions to be made despite uncertainty by linking reintroduction goals with alternative management actions through predictive models of ecological processes. We developed a decision model to evaluate the trade-offs between six bull trout (Salvelinus confluentus) reintroduction decisions with the goal of maximizing the number of adults in the recipient population without reducing the donor population to an unacceptable level. Sensitivity analyses suggested that the decision identity and outcome were most influenced by survival parameters that result in increased adult abundance in the recipient population, increased juvenile survival in the donor and recipient populations, adult fecundity rates, and sex ratio. The decision was least sensitive to survival parameters associated with the captive-reared population, the effect of naivety on released individuals, and juvenile carrying capacity of the reintroduced population. The model and sensitivity analyses can serve as the foundation for formal adaptive management and improved effectiveness, efficiency, and transparency of bull trout reintroduction decisions.
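A one-way (tornado-style) sensitivity analysis of the kind used to rank influential parameters can be sketched on a toy abundance model; the model structure and all parameter values below are invented for illustration and are not the authors' bull trout model:

```python
def adult_abundance(params):
    """Toy stage-structured model of adults produced by a founding cohort;
    invented for illustration, not the authors' bull trout model."""
    n_released, s_juv, s_adult, fecundity, sex_ratio = params
    first_gen = n_released * s_juv * s_adult          # founders surviving to adult
    offspring = first_gen * sex_ratio * fecundity * s_juv * s_adult
    return first_gen + offspring

names = ("n_released", "s_juv", "s_adult", "fecundity", "sex_ratio")
base = (500, 0.1, 0.5, 100, 0.5)

def one_way_sensitivity(model, base, names, frac=0.1):
    """Perturb each parameter +/-10% in turn, others at baseline, and
    record the outcome spread (a simple tornado-style analysis)."""
    spread = {}
    for i, name in enumerate(names):
        lo, hi = list(base), list(base)
        lo[i] *= 1 - frac
        hi[i] *= 1 + frac
        spread[name] = abs(model(hi) - model(lo))
    return spread

spread = one_way_sensitivity(adult_abundance, base, names)
```

Because juvenile survival enters the toy model in both generations, it produces the widest outcome spread, mirroring the study's finding that survival parameters dominated the decision outcome.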
This report provides detailed comparisons and sensitivity analyses of three candidate models, MESOPLUME, MESOPUFF, and MESOGRID. This was not a validation study; there was no suitable regional air quality data base for the Four Corners area. Rather, the models have been evaluated...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Douglas W. Akers; Edwin A. Harvego
2012-08-01
This paper presents the results of a study to evaluate the feasibility of remotely detecting and quantifying fuel relocation from the core to the lower head, and to regions outside the reactor vessel primary containment, of the Fukushima 1-3 reactors. The goals of this study were to determine measurement conditions and requirements, and to perform initial radiation transport sensitivity analyses for several potential measurement locations inside the reactor building. The radiation transport sensitivity analyses were performed based on reactor design information for boiling water reactors (BWRs) similar to the Fukushima reactors, ORIGEN2 analyses of 3-cycle BWR fuel inventories, and data on previously molten fuel characteristics from TMI-2. A 100 kg mass of previously molten fuel material located on the lower head of the reactor vessel was chosen as a fuel interrogation sensitivity target. Two measurement locations were chosen for the transport analyses, one inside the drywell and one outside the concrete biological shield surrounding the drywell. Results of these initial radiation transport analyses indicate that the 100 kg of previously molten fuel material may be detectable at the measurement location inside the drywell, but that it is highly unlikely that any amount of fuel material inside the RPV will be detectable from a location outside the concrete biological shield surrounding the drywell. Three additional fuel relocation scenarios were also analyzed to assess detection sensitivity for varying amounts of relocated material in the lower head of the reactor vessel, in the control rods perpendicular to the detector system, and on the lower head of the drywell. Results of these analyses, along with an assessment of background radiation effects and a discussion of measurement issues such as the detector/collimator design, are included in the paper.
Sensitivity of water resources in the Delaware River basin to climate variability and change
Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.
1994-01-01
Because of the greenhouse effect, projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climate change; and presents the results of sensitivity analyses of how climate change might affect water resources in the Delaware River basin. Sensitivity analyses suggest that potentially serious shortfalls of certain water resources in the basin could result if some scenarios for climate change come true. The results of model simulations of the basin streamflow demonstrate the difficulty in distinguishing the effects that climate change versus natural climate variability have on streamflow and water supply. The future direction of basin changes in most water resources, furthermore, cannot be precisely determined because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans for evaluating and responding to changes, should they occur.
[Screening for cancer - economic consideration and cost-effectiveness].
Kjellberg, Jakob
2014-06-09
Cost-effectiveness analysis has become an accepted method to evaluate medical technology and allocate scarce health-care resources. Published decision analyses show that screening for cancer in general is cost-effective. However, cost-effectiveness analyses are only as good as the clinical data and the results are sensitive to the chosen methods and perspective of the analysis.
EVALUATION AND SENSITIVITY ANALYSES RESULTS OF THE MESOPUFF II MODEL WITH CAPTEX MEASUREMENTS
The MESOPUFF II regional Lagrangian puff model has been evaluated and tested against measurements from the Cross-Appalachian Tracer Experiment (CAPTEX) data base in an effort to assess its ability to simulate the transport and dispersion of a nonreactive, nondepositing tracer plume…
Wilson, Koo; Hettle, Robert; Marbaix, Sophie; Diaz Cerezo, Silvia; Ines, Monica; Santoni, Laura; Annemans, Lieven; Prignot, Jacques; Lopez de Sa, Esteban
2012-10-01
An estimated 17.2% of patients continue to smoke following a diagnosis of cardiovascular disease (CVD). In cardiovascular patients, smoking cessation has been shown to reduce the risk of mortality by 36% and of myocardial infarction by 32%. The objective of this study was to evaluate the long-term health and economic consequences of smoking cessation in patients with CVD. Results of a randomized clinical trial comparing varenicline plus counselling vs. placebo plus counselling were extrapolated using a Markov model to simulate the lifetime costs and health consequences of smoking cessation in patients with stable CVD. For the base case, we considered a payer's perspective including direct costs attributed to the healthcare provider, measuring cumulative life years (LY) and quality-adjusted life years (QALYs) as outcome measures. Secondary analyses were conducted from a societal perspective, evaluating lost productivity due to premature mortality. Sensitivity and subgroup analyses were also undertaken. Results were analysed for Belgium, Spain, Portugal, and Italy. Varenicline plus counselling was associated with a gain in LYs and QALYs across all countries relative to placebo plus counselling. From a payer's perspective, incremental cost effectiveness ratios were € 6120 (Belgium), € 5151 (Spain), € 5357 (Portugal), and € 5433 (Italy) per QALY gained. From a societal perspective, varenicline in addition to counselling was less costly than placebo and counselling in all cases. Sensitivity analyses showed little sensitivity of outcomes to model assumptions or uncertainty in model parameters. Varenicline in addition to counselling is cost-effective compared to placebo and counselling in smokers with CVD.
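The headline figure in evaluations like this is the incremental cost-effectiveness ratio (ICER): the extra cost of the new strategy divided by the extra QALYs it delivers relative to the comparator. A minimal sketch, using made-up totals rather than the study's figures:

```python
def icer(cost_new, cost_ref, effect_new, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    d_cost = cost_new - cost_ref
    d_effect = effect_new - effect_ref
    if d_effect <= 0:
        raise ValueError("new strategy not more effective; ICER undefined")
    return d_cost / d_effect

# Hypothetical totals: the new strategy costs 1200 more and adds 0.25 QALYs
print(icer(5200.0, 4000.0, 10.25, 10.0))  # -> 4800.0 per QALY gained
```

A probabilistic sensitivity analysis would recompute this ratio over sampled model parameters rather than point estimates.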
ERIC Educational Resources Information Center
Filby, Nikola N.
The development and refinement of the measures of student achievement in reading and mathematics for the Beginning Teacher Evaluation Study are described. The concept of reactivity to instruction is introduced: the tests used to evaluate instructional processes must be sensitive indicators of classroom learning over time. Data collection activities…
Edwards, D. L.; Saleh, A. A.; Greenspan, S. L.
2015-01-01
Summary We performed a systematic review and meta-analysis of the performance of clinical risk assessment instruments for screening for DXA-determined osteoporosis or low bone density. Commonly evaluated risk instruments showed high sensitivity approaching or exceeding 90 % at particular thresholds within various populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. Introduction The purpose of the study is to systematically review the performance of clinical risk assessment instruments for screening for dual-energy X-ray absorptiometry (DXA)-determined osteoporosis or low bone density. Methods Systematic review and meta-analysis were performed. Multiple literature sources were searched, and data extracted and analyzed from included references. Results One hundred eight references met inclusion criteria. Studies assessed many instruments in 34 countries, most commonly the Osteoporosis Self-Assessment Tool (OST), the Simple Calculated Osteoporosis Risk Estimation (SCORE) instrument, the Osteoporosis Self-Assessment Tool for Asians (OSTA), the Osteoporosis Risk Assessment Instrument (ORAI), and body weight criteria. Meta-analyses of studies evaluating OST using a cutoff threshold of <1 to identify US postmenopausal women with osteoporosis at the femoral neck provided summary sensitivity and specificity estimates of 89 % (95%CI 82–96 %) and 41 % (95%CI 23–59 %), respectively. Meta-analyses of studies evaluating OST using a cutoff threshold of 3 to identify US men with osteoporosis at the femoral neck, total hip, or lumbar spine provided summary sensitivity and specificity estimates of 88 % (95%CI 79–97 %) and 55 % (95%CI 42–68 %), respectively. 
Frequently evaluated instruments each had thresholds and populations for which sensitivity for osteoporosis or low bone mass detection approached or exceeded 90 % but always with a trade-off of relatively low specificity. Conclusions Commonly evaluated clinical risk assessment instruments each showed high sensitivity approaching or exceeding 90 % for identifying individuals with DXA-determined osteoporosis or low BMD at certain thresholds in different populations but low specificity at thresholds required for high sensitivity. Simpler instruments, such as OST, generally performed as well as or better than more complex instruments. PMID:25644147
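The summary sensitivity and specificity estimates above are pooled from per-study 2x2 tables. As a minimal sketch of how the two quantities fall out of one such table (the counts here are hypothetical, not data from the review):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from a single 2x2 screening table."""
    sensitivity = tp / (tp + fn)   # proportion of true cases detected
    specificity = tn / (tn + fp)   # proportion of non-cases correctly ruled out
    return sensitivity, specificity

# Hypothetical counts at one instrument threshold (not from the review)
se, sp = sens_spec(tp=45, fn=5, tn=40, fp=60)
print(se, sp)  # -> 0.9 0.4
```

The trade-off the review reports is visible even in this toy table: a threshold tuned for 90% sensitivity leaves specificity low.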
Wysokińska, A.; Kondracki, S.; Iwanina, M.
2015-01-01
The present work describes experiments undertaken to evaluate the usefulness of selected physicochemical indices of semen, cell membrane integrity and sperm chromatin structure for the assessment of boar semen sensitivity to processes connected with pre-insemination procedures. The experiments were carried out on 30 boars: 15 regarded as providers of sensitive semen and 15 regarded as providers of semen only slightly sensitive to laboratory processing. The selection of boars for both groups was based on sperm morphology analyses, with the incidence of secondary morphological changes in spermatozoa as the criterion. Two ejaculates were manually collected from each boar at an interval of 3 to 4 months. The following analyses were carried out for each ejaculate: sperm motility assessment, sperm pH measurement, sperm morphology assessment, sperm chromatin structure evaluation and cell membrane integrity assessment. The analyses were performed three times. Semen storage did not cause an increase in the incidence of secondary morphological changes in the group of boars considered to provide sperm of low sensitivity. In contrast, with continued storage there was a marked increase in the incidence of spermatozoa with secondary morphological changes in the group of boars regarded as producing more sensitive semen. Ejaculates of group I boars evaluated directly after collection had an approximately 6% smaller share of spermatozoa with undamaged cell membranes than the ejaculates of boars in group II (p≤0.05). Over time, the percentage of spermatozoa with undamaged cell membranes decreased. The sperm of group I boars was characterised by lower sperm motility than the semen of group II boars. After 1 hour of storing diluted semen, the sperm motility of boars producing highly sensitive semen was already 4% lower (p≤0.05), and after 24 hours of storage it was 6.33% lower than that of the boars that produced semen with low sensitivity.
Factors confirming the accuracy of male selection for insemination include a low rate of sperm motility decrease during storage of diluted semen, a low incidence of secondary morphological changes in spermatozoa during semen storage, and a high frequency of spermatozoa with undamaged cell membranes. PMID:26580438
Techno-Economic Evaluation of Biodiesel Production from Waste Cooking Oil—A Case Study of Hong Kong
Karmee, Sanjib Kumar; Patria, Raffel Dharma; Lin, Carol Sze Ki
2015-01-01
Fossil fuel shortage is a major challenge worldwide. Therefore, research is currently underway to investigate potential renewable energy sources. Biodiesel is one of the major renewable energy sources and can be obtained from oils and fats by transesterification. However, biodiesel produced from vegetable oil feedstocks is expensive; an alternative, inexpensive feedstock such as waste cooking oil (WCO) can therefore be used for biodiesel production. In this project, techno-economic analyses were performed on biodiesel production in Hong Kong using WCO as a feedstock. Three different catalysts (acid, base, and lipase) were evaluated for biodiesel production from WCO. These economic analyses were then compared to determine the most cost-effective method for biodiesel production. Internal rate of return (IRR) sensitivity analyses on WCO price and biodiesel price variation were performed. Acid was found to be the most cost-effective catalyst for biodiesel production, whereas lipase was the most expensive. In the IRR sensitivity analyses, the acid-catalysed process also achieved an acceptable IRR despite the variation of the WCO and biodiesel prices. PMID:25809602
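The IRR at the centre of these sensitivity analyses is the discount rate at which a project's net present value is zero; a price sensitivity analysis recomputes it while sweeping the WCO or biodiesel price. A minimal sketch with made-up cashflows (not the study's plant economics):

```python
def npv(rate, cashflows):
    """Net present value; cashflows[t] occurs at the end of period t (t=0 now)."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Internal rate of return by bisection on npv(rate) = 0."""
    f_lo = npv(lo, cashflows)
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if f_lo * npv(mid, cashflows) <= 0.0:
            hi = mid                      # root lies in [lo, mid]
        else:
            lo = mid                      # root lies in [mid, hi]
            f_lo = npv(lo, cashflows)
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Made-up cashflows: up-front investment, then three years of net revenue;
# a price sensitivity analysis would recompute irr() over a grid of prices
flows = [-1000.0, 500.0, 500.0, 500.0]
print(irr(flows))  # roughly 0.23, i.e. a 23% IRR
```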
Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc
2018-05-01
Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three meta-analysis sizes (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma et al., shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
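The overall finite difference approach described above can be illustrated on a trivially small "structural" model, a single-degree-of-freedom spring, where the analytic derivative is available for comparison. This is a sketch of the idea only, not the paper's finite element setup:

```python
def displacement(force, stiffness):
    """Static response of a single-DOF spring: u = F / k."""
    return force / stiffness

def central_difference(f, x, h=1e-6):
    """Overall finite-difference sensitivity: rerun the analysis for
    perturbed designs and difference the responses."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

F, k = 100.0, 50.0
du_dk = central_difference(lambda k_: displacement(F, k_), k)
exact = -F / k ** 2                      # analytic derivative for comparison
print(du_dk, exact)
```

The paper's point is that when each evaluation is a reduced-basis transient analysis, reusing the baseline modes for the perturbed designs can make such differences badly inaccurate, which is what motivates the semi-analytical alternatives.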
Comprehensive Evaluation of the Contribution of X Chromosome Genes to Platinum Sensitivity
Gamazon, Eric R.; Im, Hae Kyung; O’Donnell, Peter H.; Ziliak, Dana; Stark, Amy L.; Cox, Nancy J.; Dolan, M. Eileen; Huang, Rong Stephanie
2011-01-01
Utilizing a genome-wide gene expression dataset generated from the Affymetrix GeneChip® Human Exon 1.0ST array, we comprehensively surveyed the role of 322 X chromosome gene expression traits on cellular sensitivity to cisplatin and carboplatin. We identified 31 and 17 X chromosome genes whose expression levels are significantly correlated (after multiple testing correction) with sensitivity to carboplatin and cisplatin, respectively, in the combined HapMap CEU and YRI populations (false discovery rate, FDR<0.05). Of those, 14 overlap for both cisplatin and carboplatin. Employing an independent gene expression quantification method, the Illumina Sentrix Human-6 Expression BeadChip, measured on the same HapMap cell lines, we found that 4 and 2 of these genes are significantly associated with carboplatin and cisplatin sensitivity, respectively, in both analyses. Two genes, CTPS2 and DLG3, were identified by both genome-wide gene expression analyses as correlated with cellular sensitivity to both platinating agents. The expression of the DLG3 gene was also found to correlate with cellular sensitivity to platinating agents in NCI60 cancer cell lines. In addition, we evaluated the contribution of X chromosome gene expression to the observed differences in sensitivity to the platinums between CEU and YRI derived cell lines. Of the 34 distinct genes significantly correlated with either carboplatin or cisplatin sensitivity, 14 are differentially expressed (defined as p<0.05) between CEU and YRI. Thus, sex chromosome genes play a role in cellular sensitivity to platinating agents and differences in the expression level of these genes are an important source of variation that should be included in comprehensive pharmacogenomic studies. PMID:21252287
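Testing 322 expression traits against drug response requires multiple-testing correction; an FDR<0.05 criterion of this kind is typically enforced with a Benjamini-Hochberg-style step-up procedure (the paper does not state its exact implementation). A sketch with hypothetical p-values:

```python
def benjamini_hochberg(pvals, fdr=0.05):
    """Indices of hypotheses rejected by the Benjamini-Hochberg step-up
    procedure at the given false discovery rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k_max = 0
    # find the largest rank k whose p-value sits under the line k/m * fdr
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * fdr:
            k_max = rank
    return sorted(order[:k_max])           # reject the k_max smallest p-values

# Hypothetical p-values from expression/sensitivity correlation tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(pvals))  # -> [0, 1]
```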
Comparison between two methodologies for urban drainage decision aid.
Moura, P M; Baptista, M B; Barraud, S
2006-01-01
The objective of the present work is to compare two methodologies based on multicriteria analysis for the evaluation of stormwater systems. The first methodology was developed in Brazil and is based on performance-cost analysis, the second one is ELECTRE III. Both methodologies were applied to a case study. Sensitivity and robustness analyses were then carried out. These analyses demonstrate that both methodologies have equivalent results, and present low sensitivity and high robustness. These results prove that the Brazilian methodology is consistent and can be used safely in order to select a good solution or a small set of good solutions that could be compared with more detailed methods afterwards.
Chomsky-Higgins, Kathryn; Seib, Carolyn; Rochefort, Holly; Gosnell, Jessica; Shen, Wen T; Kahn, James G; Duh, Quan-Yang; Suh, Insoo
2018-01-01
Guidelines for management of small adrenal incidentalomas are mutually inconsistent. No cost-effectiveness analysis has been performed to evaluate rigorously the relative merits of these strategies. We constructed a decision-analytic model to evaluate surveillance strategies for <4 cm, nonfunctional, benign-appearing adrenal incidentalomas. We evaluated 4 surveillance strategies: none, one-time, annual for 2 years, and annual for 5 years. Threshold and sensitivity analyses assessed robustness of the model. Costs were represented in 2016 US dollars and health outcomes in quality-adjusted life-years. No surveillance has an expected net cost of $262 and 26.22 quality-adjusted life-years. One-time surveillance costs $158 more and adds 0.2 quality-adjusted life-years for an incremental cost-effectiveness ratio of $778/quality-adjusted life-year. The strategies involving more surveillance were dominated, that is, less effective and more expensive than the no surveillance and one-time surveillance strategies. Above a 0.7% prevalence of adrenocortical carcinoma, one-time surveillance was the most effective strategy. The results were robust to all sensitivity analyses of disease prevalence, sensitivity and specificity of diagnostic assays and imaging, and health state utility. For patients with a <4 cm, nonfunctional, benign-appearing mass, one-time follow-up evaluation involving a noncontrast computed tomography and biochemical evaluation is cost-effective. Strategies requiring more surveillance accrue more cost without incremental benefit. Copyright © 2017 Elsevier Inc. All rights reserved.
The Child Adolescent Bullying Scale (CABS): Psychometric evaluation of a new measure.
Strout, Tania D; Vessey, Judith A; DiFazio, Rachel L; Ludlow, Larry H
2018-06-01
While youth bullying is a significant public health problem, healthcare providers have been limited in their ability to identify bullied youths due to the lack of a reliable and valid instrument appropriate for use in clinical settings. We conducted a multisite study to evaluate the psychometric properties of a new 22-item instrument for assessing youths' experiences of being bullied, the Child Adolescent Bullying Scale (CABS). The 20 items summed to produce the measure's score were evaluated here. Diagnostic performance was assessed through evaluation of sensitivity, specificity, predictive values, and area under the receiver operating characteristic (AUROC) curve. A sample of 352 youths from diverse racial, ethnic, and geographic backgrounds (188 female, 159 male, 5 transgender, sample mean age 13.5 years) was recruited from two clinical sites. Participants completed the CABS and existing youth bullying measures. Analyses grounded in classical test theory, including assessments of reliability and validity, item analyses, and principal components analysis, were conducted. The diagnostic performance and test characteristics of the CABS were also evaluated. The CABS comprises a single component, accounting for 67% of observed variance. Analyses established evidence of internal consistency reliability (Cronbach's α = 0.97) and of construct and convergent validity. Sensitivity was 84%, specificity was 65%, and the AUROC curve was 0.74 (95% CI: 0.69-0.80). Findings suggest that the CABS holds promise as a reliable, valid tool for healthcare provider use in screening for bullying exposure in the clinical setting. © 2018 Wiley Periodicals, Inc.
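An AUROC figure like the one reported can be computed without tracing the curve, using the rank-sum (Mann-Whitney) identity: the area equals the probability that a randomly chosen positive case outscores a randomly chosen negative one. A sketch with hypothetical scores, not CABS data:

```python
def auroc(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney identity: the chance
    that a random positive case outscores a random negative case."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5                # ties count half
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical instrument scores (not CABS data)
pos = [8, 9, 7, 10]    # youths who report being bullied
neg = [3, 6, 8, 2]     # youths who do not
print(auroc(pos, neg))  # -> 0.90625
```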
NASA Astrophysics Data System (ADS)
Cuntz, Matthias; Mai, Juliane; Zink, Matthias; Thober, Stephan; Kumar, Rohini; Schäfer, David; Schrön, Martin; Craven, John; Rakovec, Oldrich; Spieler, Diana; Prykhodko, Vladyslav; Dalmasso, Giovanni; Musuuza, Jude; Langenberg, Ben; Attinger, Sabine; Samaniego, Luis
2015-08-01
Environmental models tend to require increasing computational time and resources as physical process descriptions are improved or new descriptions are incorporated. Many-query applications such as sensitivity analysis or model calibration usually require a large number of model evaluations leading to high computational demand. This often limits the feasibility of rigorous analyses. Here we present a fully automated sequential screening method that selects only informative parameters for a given model output. The method requires a number of model evaluations that is approximately 10 times the number of model parameters. It was tested using the mesoscale hydrologic model mHM in three hydrologically unique European river catchments. It identified around 20 informative parameters out of 52, with different informative parameters in each catchment. The screening method was evaluated with subsequent analyses using all 52 as well as only the informative parameters. Subsequent Sobol's global sensitivity analysis led to almost identical results yet required 40% fewer model evaluations after screening. mHM was calibrated with all and with only informative parameters in the three catchments. Model performances for daily discharge were equally high in both cases with Nash-Sutcliffe efficiencies above 0.82. Calibration using only the informative parameters needed just one third of the number of model evaluations. The universality of the sequential screening method was demonstrated using several general test functions from the literature. We therefore recommend the use of the computationally inexpensive sequential screening method prior to rigorous analyses on complex environmental models.
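For contrast with the inexpensive screening step, the first-order Sobol index that the subsequent global sensitivity analysis estimates can be approximated by a brute-force double loop, shown here on a toy linear model with known indices. This is purely illustrative: mHM and other environmental models are far too expensive for this estimator, which is exactly why screening matters.

```python
import random

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def first_order_index(model, dim, i, n_outer=400, n_inner=400, seed=7):
    """Brute-force first-order Sobol index S_i = Var(E[Y|X_i]) / Var(Y)
    for a model with independent U(0,1) inputs."""
    rng = random.Random(seed)
    cond_means, all_y = [], []
    for _ in range(n_outer):
        xi = rng.random()                  # fix the i-th input
        ys = []
        for _ in range(n_inner):
            x = [rng.random() for _ in range(dim)]
            x[i] = xi
            ys.append(model(x))
        cond_means.append(mean(ys))        # estimate of E[Y | X_i = xi]
        all_y.extend(ys)
    return var(cond_means) / var(all_y)

# Toy linear model y = 4*x0 + 2*x1: exact first-order indices are 0.8 and 0.2
model = lambda x: 4.0 * x[0] + 2.0 * x[1]
print(first_order_index(model, dim=2, i=0))  # close to 0.8
```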
Goodman, Julie E; Loftus, Christine T; Zu, Ke
2015-08-01
Despite evidence from experimental studies indicating that the herbicide 2,4-dichlorophenoxyacetic acid (2,4-D) is not carcinogenic, several epidemiology studies have evaluated links between 2,4-D and cancer. Some suggest that 2,4-D is associated with non-Hodgkin's lymphoma (NHL), gastric cancer, and prostate cancer, but results have been inconsistent. We conducted meta-analyses to evaluate the weight of epidemiology evidence for these cancers. We identified articles from PubMed, Scopus, and TOXLINE databases and reference lists of review articles. We evaluated study quality and calculated summary risk estimates using random effects models. We conducted subgroup and sensitivity analyses when possible. We identified nine NHL, three gastric cancer, and two prostate cancer studies for inclusion in our meta-analyses. We found that 2,4-D was not associated with NHL (relative risk [RR] = 0.97, 95% confidence interval [CI] = 0.77-1.22, I(2) = 28.8%, Pheterogeneity = .19), and this result was generally robust to subgroup and sensitivity analyses. 2,4-D was not associated with gastric (RR = 1.14, 95% CI = 0.62-2.10, I(2) = 54.9%, Pheterogeneity = .11) or prostate cancer (RR = 1.32, 95% CI = 0.37-4.69, I(2) = 87.0%, Pheterogeneity = .01). The epidemiology evidence does not support an association between 2,4-D and NHL, gastric cancer, or prostate cancer risk. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
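Summary risk estimates under a random-effects model are commonly obtained with the DerSimonian-Laird estimator; that is one standard choice, and the paper does not name its exact estimator. A sketch on hypothetical study-level log relative risks:

```python
import math

def random_effects_rr(log_rrs, ses, z=1.96):
    """DerSimonian-Laird random-effects pooling of study-level log relative
    risks; returns the pooled RR and its 95% confidence interval."""
    w = [1.0 / se ** 2 for se in ses]                 # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rrs))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rrs) - 1)) / c)     # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]     # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rrs)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            (math.exp(pooled - z * se_pooled), math.exp(pooled + z * se_pooled)))

# Hypothetical study-level log RRs and standard errors (not the paper's data)
rr, (lo, hi) = random_effects_rr([0.05, -0.10, 0.20, -0.05],
                                 [0.12, 0.15, 0.20, 0.10])
print(rr, lo, hi)
```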
Local influence for generalized linear models with missing covariates.
Shi, Xiaoyan; Zhu, Hongtu; Ibrahim, Joseph G
2009-12-01
In the analysis of missing data, sensitivity analyses are commonly used to check the sensitivity of the parameters of interest with respect to the missing data mechanism and other distributional and modeling assumptions. In this article, we formally develop a general local influence method to carry out sensitivity analyses of minor perturbations to generalized linear models in the presence of missing covariate data. We examine two types of perturbation schemes (the single-case and global perturbation schemes) for perturbing various assumptions in this setting. We show that the metric tensor of a perturbation manifold provides useful information for selecting an appropriate perturbation. We also develop several local influence measures to identify influential points and test model misspecification. Simulation studies are conducted to evaluate our methods, and real datasets are analyzed to illustrate the use of our local influence measures.
MEDICAL DEVICE PRICES IN ECONOMIC EVALUATIONS.
Akpinar, Ilke; Jacobs, Philip; Husereau, Don
2015-01-01
Economic evaluations, although not formally used in purchasing decisions for medical devices in Canada, are still being conducted and published. The aim of this study was to examine the way that prices have been included in Canadian economic evaluations of medical devices. We conducted a review of the economic concepts and implications of methods used for economic evaluations of the eleven most implanted medical devices from the Canadian perspective. We found Canadian economic studies for five of the eleven medical devices and identified nineteen Canadian studies. Overall, the device costs were important components of total procedure cost, with an average ratio of 44.1%. Observational estimates of the device costs were obtained from buyers or sellers in 13 of the 19 studies. Although most of the devices last more than 1 year, standard costing methods for capital equipment were never used. In addition, only eight studies included a sensitivity analysis for the device cost. None of the sensitivity analyses were based on actual price distributions. Economic evaluations are potentially important for policy making, but although they are being conducted, there is no standardized approach for incorporating medical device prices in economic analyses. Our review provides suggestions for improvements in how the prices are incorporated in economic evaluations of medical devices.
McClendon, Debra T; Warren, Jared S; Green, Katherine M; Burlingame, Gary M; Eggett, Dennis L; McClendon, Richard J
2011-01-01
This study evaluated the relative sensitivity to change of the Child Behavior Checklist/6-18 (CBCL), the Behavior Assessment System for Children-2 (BASC-2), and the Youth Outcome Questionnaire 2.01 (Y-OQ). Participants were 134 parents and 44 adolescents receiving routine outpatient services in a community mental health system. Hierarchical linear modeling analyses were used to examine change trajectories for the 3 measures across 3 groups: parent informants, parent and adolescent dyads, and adolescent informants. Results indicated that for parent-report measures, the Y-OQ was most change sensitive; the BASC-2 and CBCL were not statistically different from each other. Significant differences in change sensitivity were not observed for youth self-report of symptoms. Results suggest that the Y-OQ may be particularly useful for evaluating change in overall psychosocial functioning in children and adolescents. © 2010 Wiley Periodicals, Inc.
Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu
2006-11-01
Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
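Of the sampling-based techniques listed, the Spearman correlation copes with the monotone non-linearity that weakens the Pearson correlation, because it correlates ranks rather than raw values. A pure-Python sketch on a toy monotone relationship, not the SHEDS testbed:

```python
def rank(xs):
    """Ranks starting at 1; tied values share the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2.0 + 1.0
        i = j + 1
    return ranks

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def spearman(xs, ys):
    """Pearson correlation applied to ranks."""
    return pearson(rank(xs), rank(ys))

# y = x**3 is monotone but non-linear: Spearman sees it perfectly, Pearson does not
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [v ** 3 for v in x]
print(pearson(x, y), spearman(x, y))  # Spearman is exactly 1.0
```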
Methods for comparative evaluation of propulsion system designs for supersonic aircraft
NASA Technical Reports Server (NTRS)
Tyson, R. M.; Mairs, R. Y.; Halferty, F. D., Jr.; Moore, B. E.; Chaloff, D.; Knudsen, A. W.
1976-01-01
The propulsion system comparative evaluation study was conducted to define a rapid, approximate method for evaluating the effects of propulsion system changes for an advanced supersonic cruise airplane, and to verify the approximate method by comparing its mission performance results with those from a more detailed analysis. A table look-up computer program was developed to determine nacelle drag increments for a range of parametric nacelle shapes and sizes. Aircraft sensitivities to propulsion parameters were defined. Nacelle shapes, installed weights, and installed performance were determined for four study engines selected from the NASA supersonic cruise aircraft research (SCAR) engine studies program. Both the rapid evaluation method (using sensitivities) and traditional preliminary design methods were then used to assess the four engines. The rapid method was found to compare well with the more detailed analyses.
Mafe, Oluwakemi A T; Davies, Scott M; Hancock, John; Du, Chenyu
2015-01-01
This study aims to develop a mathematical model to evaluate the energy required by pretreatment processes used in the production of second-generation ethanol. A dilute acid pretreatment process reported by the National Renewable Energy Laboratory (NREL) was selected as an example for the model's development. The energy demand of the pretreatment process was evaluated by considering the change in internal energy of the substances, the reaction energy, the heat loss and the work done to/by the system, based on a number of simplifying assumptions. Sensitivity analyses were performed on the solid loading rate, temperature, acid concentration and water evaporation rate. The results from the sensitivity analyses established that the solid loading rate had the most significant impact on the energy demand. The model was then verified with data from the NREL benchmark process. Application of this model to other dilute acid pretreatment processes reported in the literature illustrated that although similar sugar yields were reported by several studies, the energy required by the different pretreatments varied significantly.
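The dominance of solid loading can be seen in a stripped-down energy balance: at a lower solid loading, more water must be heated per kilogram of biomass. The sketch below ignores reaction energy, heat losses and work terms, and the heat capacities and temperatures are illustrative assumptions, not NREL values.

```python
# Minimal energy balance for heating a biomass slurry to pretreatment
# temperature. Constants are illustrative assumptions.
CP_WATER = 4.18   # kJ/(kg*K)
CP_SOLID = 1.50   # kJ/(kg*K), assumed for lignocellulosic biomass

def heating_energy_per_kg_solid(solid_loading, t_in=25.0, t_out=160.0):
    """Energy (kJ) to heat 1 kg dry solid plus its associated water.

    solid_loading: dry-solid mass fraction of the slurry (0 < w < 1).
    """
    water_per_kg_solid = (1.0 - solid_loading) / solid_loading
    dT = t_out - t_in
    return (CP_SOLID + water_per_kg_solid * CP_WATER) * dT

for w in (0.10, 0.20, 0.30):
    print(w, round(heating_energy_per_kg_solid(w)))
```

Tripling the solid loading roughly cuts the heating demand per kilogram of solid by a factor of three, which is consistent with the sensitivity result reported above.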
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, Huiying; Hou, Zhangshuan; Huang, Maoyi
The Community Land Model (CLM) represents physical, chemical, and biological processes of the terrestrial ecosystems that interact with climate across a range of spatial and temporal scales. As CLM includes numerous sub-models and associated parameters, the high-dimensional parameter space presents a formidable challenge for quantifying uncertainty and improving the Earth system predictions needed to assess environmental changes and risks. This study aims to evaluate the potential of transferring hydrologic model parameters in CLM through sensitivity analyses and classification across watersheds from the Model Parameter Estimation Experiment (MOPEX) in the United States. The sensitivity of CLM-simulated water and energy fluxes to hydrological parameters across 431 MOPEX basins is first examined using an efficient stochastic sampling-based sensitivity analysis approach. Linear, interaction, and high-order nonlinear impacts are all identified via statistical tests and stepwise backward-removal parameter screening. The basins are then classified according to their parameter sensitivity patterns (internal attributes) and, separately, their hydrologic indices/attributes (external hydrologic factors), using a principal component analysis (PCA) and expectation-maximization (EM)-based clustering approach. Similarities and differences among the parameter sensitivity-based classification system (S-Class), the hydrologic indices-based classification (H-Class), and the Köppen climate classification system (K-Class) are discussed. Within each S-Class with similar parameter sensitivity characteristics, similar inversion modeling setups can be used for parameter calibration, and the parameters and their contribution or significance to water and energy cycling may also be more transferable; inverting parameters at representative sites belonging to the same class can significantly reduce parameter calibration efforts. This classification study provides guidance on identifiable parameters, and on parameterization and inverse model design for CLM, but the methodology is applicable to other models.
Does McNemar's test compare the sensitivities and specificities of two diagnostic tests?
Kim, Soeun; Lee, Woojoo
2017-02-01
McNemar's test is often used in practice to compare the sensitivities and specificities of two diagnostic tests. For a correct evaluation of accuracy, an intuitive recommendation is to test the diseased and the non-diseased groups separately, so that sensitivities are compared among the diseased and specificities among the healthy group of people. This paper provides a rigorous theoretical framework for this argument and studies the validity of McNemar's test regardless of the conditional independence assumption. We derive McNemar's test statistic under the null hypothesis considering both the assumption of conditional independence and that of conditional dependence. We then perform power analyses to show how the result is affected by the amount of conditional dependence under the alternative hypothesis.
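The recommendation above, applying McNemar's test within the diseased group only, can be sketched in a few lines: the statistic depends only on the discordant pairs. An analogous table over the healthy group would compare specificities. The counts below are illustrative.

```python
import math

# McNemar's test on paired results from two diagnostic tests, applied
# within the diseased group so that it compares sensitivities.
def mcnemar(b, c):
    """b: test 1 positive / test 2 negative; c: the reverse (discordant pairs)."""
    chi2 = (b - c) ** 2 / (b + c)            # without continuity correction
    # two-sided p-value from the chi-square(1) survival function
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

chi2, p = mcnemar(b=25, c=10)
print(round(chi2, 3), round(p, 4))
```

The chi-square(1) survival function is expressed via `math.erfc` to keep the sketch dependency-free.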
Variation of a test's sensitivity and specificity with disease prevalence.
Leeflang, Mariska M G; Rutjes, Anne W S; Reitsma, Johannes B; Hooft, Lotty; Bossuyt, Patrick M M
2013-08-06
Anecdotal evidence suggests that the sensitivity and specificity of a diagnostic test may vary with disease prevalence. Our objective was to investigate the associations between disease prevalence and test sensitivity and specificity using studies of diagnostic accuracy. We used data from 23 meta-analyses, each of which included 10-39 studies (416 total). The median prevalence per review ranged from 1% to 77%. We evaluated the effects of prevalence on sensitivity and specificity using a bivariate random-effects model for each meta-analysis, with prevalence as a covariate. We estimated the overall effect of prevalence by pooling the effects using the inverse variance method. Within a given review, a change in prevalence from the lowest to highest value resulted in a corresponding change in sensitivity or specificity from 0 to 40 percentage points. This effect was statistically significant (p < 0.05) for either sensitivity or specificity in 8 meta-analyses (35%). Overall, specificity tended to be lower with higher disease prevalence; there was no such systematic effect for sensitivity. The sensitivity and specificity of a test often vary with disease prevalence; this effect is likely to be the result of mechanisms, such as patient spectrum, that affect prevalence, sensitivity and specificity. Because it may be difficult to identify such mechanisms, clinicians should use prevalence as a guide when selecting studies that most closely match their situation.
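The pooling step named above, combining per-review effects of prevalence with the inverse variance method, reduces to a weighted average. This is a minimal fixed-effect sketch; the effects and standard errors below are invented for illustration, not data from the 23 meta-analyses.

```python
# Fixed-effect inverse-variance pooling of per-review effect estimates.
def inverse_variance_pool(effects, ses):
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical per-review effects of prevalence (e.g. change in
# logit-specificity per unit prevalence) with their standard errors.
effects = [-0.12, -0.05, -0.20, 0.02]
ses = [0.04, 0.06, 0.08, 0.05]
est, se = inverse_variance_pool(effects, ses)
print(round(est, 4), round(se, 4))
```

More precise reviews (smaller standard errors) dominate the pooled estimate, which is why the result lies closest to the effects with the smallest `ses`.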
Bardach, Ariel Esteban; Garay, Osvaldo Ulises; Calderón, María; Pichón-Riviére, Andrés; Augustovski, Federico; Martí, Sebastián García; Cortiñas, Paula; Gonzalez, Marino; Naranjo, Laura T; Gomez, Jorge Alberto; Caporale, Joaquín Enzo
2017-02-02
Cervical cancer (CC) and genital warts (GW) are a significant public health issue in Venezuela. Our objective was to assess the cost-effectiveness of the two available vaccines against Human Papillomavirus (HPV), bivalent and quadrivalent, in Venezuelan girls in order to inform decision-makers. A previously published Markov cohort model, informed by the best available evidence, was adapted to the Venezuelan context to evaluate the effects of vaccination on health and healthcare costs from the perspective of the healthcare payer in a cohort of 264,489 11-year-old girls. Costs and quality-adjusted life years (QALYs) were discounted at 5%. Eight scenarios were analyzed to depict the cost-effectiveness under alternative vaccine prices, exchange rates and dosing schemes. Deterministic and probabilistic sensitivity analyses were performed. Compared to screening only, the bivalent and quadrivalent vaccines were cost-saving in all scenarios, avoiding 2,310 and 2,143 deaths and 4,781 and 4,431 CC cases, respectively, plus up to 18,459 GW cases for the quadrivalent vaccine, and gaining 4,486 and 4,395 discounted QALYs, respectively. For both vaccines, the main determinants of variations in the incremental cost-effectiveness ratio after running deterministic and probabilistic sensitivity analyses were the transition probabilities, the vaccine and cancer-treatment costs, and the distribution of HPV 16 and 18 in CC cases. When comparing vaccines, neither was consistently more cost-effective than the other; in sensitivity analyses, the main determinants for these comparisons were GW incidence, the level of cross-protection and, for some scenarios, vaccine costs. Immunization with the bivalent or quadrivalent HPV vaccine proved to be cost-saving or cost-effective in Venezuela, falling below the threshold of one Gross Domestic Product (GDP) per capita (104,404 VEF) per QALY gained. Deterministic and probabilistic sensitivity analyses confirmed the robustness of these results.
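The mechanics of a discounted Markov cohort model can be shown with a deliberately tiny three-state sketch (well, cancer, dead) and 5% annual discounting, in the spirit of the cohort model described above. All transition probabilities, utilities and costs below are invented for illustration, not values from the Venezuelan model.

```python
import numpy as np

# Tiny three-state Markov cohort with 5% annual discounting.
P = np.array([[0.97, 0.02, 0.01],    # from "well"
              [0.00, 0.80, 0.20],    # from "cancer"
              [0.00, 0.00, 1.00]])   # "dead" is absorbing
utility = np.array([1.0, 0.6, 0.0])  # QALY weight per state-year (assumed)
cost = np.array([10.0, 2000.0, 0.0]) # cost per state-year (assumed)

state = np.array([1.0, 0.0, 0.0])    # whole cohort starts in "well"
disc = 0.05
qalys = costs = 0.0
for year in range(50):
    df = 1.0 / (1.0 + disc) ** year  # discount factor for this cycle
    qalys += df * state @ utility
    costs += df * state @ cost
    state = state @ P                # advance the cohort one cycle

print(round(qalys, 2), round(costs, 2))
```

Comparing two such runs (with and without an intervention that alters `P` or `cost`) yields the incremental costs and QALYs that feed a cost-effectiveness ratio.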
Colyar, Jessica M; Eggett, Dennis L; Steele, Frost M; Dunn, Michael L; Ogden, Lynn V
2009-09-01
The relative sensitivity of side-by-side and sequential monadic consumer liking protocols was compared. In the side-by-side evaluation, all samples were presented at once and evaluated together 1 characteristic at a time. In the sequential monadic evaluation, 1 sample was presented and evaluated on all characteristics, then returned before panelists received and evaluated another sample. Evaluations were conducted on orange juice, frankfurters, canned chili, potato chips, and applesauce. Five commercial brands, having a broad quality range, were selected as samples for each product category to assure a wide array of consumer liking scores. Without their knowledge, panelists rated the same 5 retail brands by 1 protocol and then 3 wk later by the other protocol. For 3 of the products, both protocols yielded the same order of overall liking. Slight differences in order of overall liking for the other 2 products were not significant. Of the 50 pairwise overall liking comparisons, 44 were in agreement. The different results obtained by the 2 protocols in order of liking and significance of paired comparisons were due to the experimental variation and differences in sensitivity. Hedonic liking scores were subjected to statistical power analyses and used to calculate minimum number of panelists required to achieve varying degrees of sensitivity when using side-by-side and sequential monadic protocols. In most cases, the side-by-side protocol was more sensitive, thus providing the same information with fewer panelists. Side-by-side protocol was less sensitive in cases where sensory fatigue was a factor.
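The "minimum number of panelists" calculation above can be approximated with a standard normal-approximation sample-size formula for detecting a mean difference between two independent groups. This is a generic sketch, not the study's own power analysis, and the effect size and standard deviation below are illustrative.

```python
from math import ceil
from statistics import NormalDist

# Approximate per-group sample size for a two-sided two-sample z-test.
def n_per_group(delta, sd, alpha=0.05, power=0.80):
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_b = z.inv_cdf(power)           # quantile for the desired power
    return ceil(2 * ((z_a + z_b) * sd / delta) ** 2)

# e.g. detect a 0.5-point difference in hedonic score with sd = 1.0
print(n_per_group(delta=0.5, sd=1.0))  # → 63
```

A more sensitive protocol effectively enlarges the detectable `delta` per panelist (or shrinks the residual `sd`), which is why the side-by-side protocol generally needed fewer panelists for the same conclusion.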
Johnson, Raymond H.
2007-01-01
In mountain watersheds, the increased demand for clean water resources has led to an increased need for an understanding of ground water flow in alpine settings. In Prospect Gulch, located in southwestern Colorado, understanding the ground water flow system is an important first step in addressing metal loads from acid-mine drainage and acid-rock drainage in an area with historical mining. Ground water flow modeling with sensitivity analyses is presented as a general tool to guide future field data collection, which is applicable to any ground water study, including mountain watersheds. For a series of conceptual models, the observation and sensitivity capabilities of MODFLOW-2000 are used to determine composite scaled sensitivities, dimensionless scaled sensitivities, and 1% scaled sensitivity maps of hydraulic head. These sensitivities determine the most important input parameter(s) along with the location of observation data that are most useful for future model calibration. The results are generally independent of the conceptual model and indicate recharge in a high-elevation recharge zone as the most important parameter, followed by the hydraulic conductivities in all layers and recharge in the next lower-elevation zone. The most important observation data in determining these parameters are hydraulic heads at high elevations, with a depth of less than 100 m being adequate. Evaluation of a possible geologic structure with a different hydraulic conductivity than the surrounding bedrock indicates that ground water discharge to individual stream reaches has the potential to identify some of these structures. Results of these sensitivity analyses can be used to prioritize data collection in an effort to reduce the time and money spent collecting the most relevant model calibration data.
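The composite scaled sensitivity (CSS) used above aggregates dimensionless scaled sensitivities over all observations: css_j = sqrt((1/ND) * sum_i dss_ij^2), with dss_ij = (dy_i/db_j) * b_j * w_i^0.5. The Jacobian, parameter values and weights below are invented numbers, not Prospect Gulch data.

```python
import numpy as np

# Composite scaled sensitivity from a Jacobian of simulated heads
# with respect to parameters, as in MODFLOW-2000's sensitivity output.
def composite_scaled_sensitivity(jacobian, params, weights):
    dss = jacobian * params[np.newaxis, :] * np.sqrt(weights)[:, np.newaxis]
    return np.sqrt(np.mean(dss ** 2, axis=0))

J = np.array([[0.8, 0.10],   # d(head_i)/d(param_j) at 3 observation points
              [0.6, 0.20],
              [0.9, 0.05]])
b = np.array([1.0e-4, 2.0])  # e.g. a recharge rate and a hydraulic conductivity
w = np.array([1.0, 1.0, 4.0])
print(composite_scaled_sensitivity(J, b, w))
```

Scaling by the parameter value makes sensitivities comparable across parameters with very different magnitudes, which is exactly what ranking "most important parameters" requires.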
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peace, Gerald; Goering, Timothy James; Miller, Mark Laverne
2007-01-01
A probabilistic performance assessment has been conducted to evaluate the fate and transport of radionuclides (americium-241, cesium-137, cobalt-60, plutonium-238, plutonium-239, radium-226, radon-222, strontium-90, thorium-232, tritium, uranium-238), heavy metals (lead and cadmium), and volatile organic compounds (VOCs) at the Mixed Waste Landfill (MWL). Probabilistic analyses were performed to quantify uncertainties inherent in the system and models for a 1,000-year period, and sensitivity analyses were performed to identify parameters and processes that were most important to the simulated performance metrics. Comparisons between simulated results and measured values at the MWL were made to gain confidence in the models and perform calibrations when data were available. In addition, long-term monitoring requirements and triggers were recommended based on the results of the quantified uncertainty and sensitivity analyses.
Patel, Anik R; Kessler, Jason; Braithwaite, R Scott; Nucifora, Kimberly A; Thirumurthy, Harsha; Zhou, Qinlian; Lester, Richard T; Marra, Carlo A
2017-02-01
A surge in mobile phone availability has fueled low cost short messaging service (SMS) adherence interventions. Multiple systematic reviews have concluded that some SMS-based interventions are effective at improving antiretroviral therapy (ART) adherence, and they are hypothesized to improve retention in care. The objective of this study was to evaluate the cost-effectiveness of SMS-based adherence interventions and explore the added value of retention benefits. We evaluated the cost-effectiveness of weekly SMS interventions compared to standard care among HIV+ individuals initiating ART for the first time in Kenya. We used an individual level micro-simulation model populated with data from two SMS-intervention trials, an East-African HIV+ cohort and published literature. We estimated average quality adjusted life years (QALY) and lifetime HIV-related costs from a healthcare perspective. We explored a wide range of scenarios and assumptions in one-way and multivariate sensitivity analyses. We found that SMS-based adherence interventions were cost-effective by WHO standards, with an incremental cost-effectiveness ratio (ICER) of $1,037/QALY. In the secondary analysis, potential retention benefits improved the cost-effectiveness of SMS intervention (ICER = $864/QALY). In multivariate sensitivity analyses, the interventions remained cost-effective in most analyses, but the ICER was highly sensitive to intervention costs, effectiveness and average cohort CD4 count at ART initiation. SMS interventions remained cost-effective in a test and treat scenario where individuals were assumed to initiate ART upon HIV detection. Effective SMS interventions would likely increase the efficiency of ART programs by improving HIV treatment outcomes at relatively low costs, and they could facilitate achievement of the UNAIDS goal of 90% viral suppression among those on ART by 2020.
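The headline numbers above are incremental cost-effectiveness ratios; the arithmetic, plus a one-way sensitivity sweep of the kind reported, is easy to make explicit. All costs and QALYs below are placeholders, not values from the trial-based micro-simulation.

```python
# Incremental cost-effectiveness ratio of an intervention versus standard care.
def icer(cost_new, qaly_new, cost_old, qaly_old):
    return (cost_new - cost_old) / (qaly_new - qaly_old)

base = icer(cost_new=9_500.0, qaly_new=8.2, cost_old=9_000.0, qaly_old=7.7)
print(round(base))  # → 1000 (cost per QALY gained)

# One-way sensitivity: vary the added intervention cost, holding QALYs fixed.
for added_cost in (200.0, 500.0, 1000.0):
    print(added_cost, round(icer(9_000.0 + added_cost, 8.2, 9_000.0, 7.7)))
```

Sweeping one input at a time while holding the others at base-case values is precisely the "one-way" analysis that identified intervention cost and effectiveness as the key drivers above.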
Wu, Bin; Yao, Yuan; Zhang, Ke; Ma, Xuezhen
2017-09-19
To test the cost-effectiveness of cetuximab plus irinotecan, fluorouracil, and leucovorin (FOLFIRI) as first-line treatment in patients with metastatic colorectal cancer (mCRC) from a Chinese medical insurance perspective, a Markov model incorporating clinical, utility and cost data was developed to evaluate the economic outcome of cetuximab in mCRC. A lifetime horizon was used, and sensitivity analyses were carried out to test the robustness of the model results. The impact of the patient assistance program (PAP) was also evaluated in scenario analyses. Baseline analysis showed that the addition of cetuximab increased quality-adjusted life-years (QALYs) by 0.63 at an added cost of $17,086 relative to FOLFIRI chemotherapy, resulting in an incremental cost-effectiveness ratio (ICER) of $27,145/QALY. When PAP was available, the ICER decreased to $14,049/QALY, indicating that cetuximab is cost-effective at China's willingness-to-pay threshold ($22,200/QALY). One-way sensitivity analyses showed that the median overall survival time with cetuximab was the most influential parameter. RAS testing with cetuximab treatment is likely to be cost-effective for patients with mCRC when PAP is available in China.
The local lymph node assay and skin sensitization: a cut-down screen to reduce animal requirements?
Kimber, Ian; Dearman, Rebecca J; Betts, Catherine J; Gerberick, G Frank; Ryan, Cindy A; Kern, Petra S; Patlewicz, Grace Y; Basketter, David A
2006-04-01
The local lymph node assay (LLNA), an alternative approach to skin-sensitization testing, has made a significant contribution to animal welfare by permitting a reduction and refinement of animal use. Although there is clearly an aspiration to eliminate the use of animals in such tests, it is appropriate also to consider other opportunities for refinement and reduction of animal use. We have therefore explored the use of a modified version of the LLNA for screening purposes when there is a need to evaluate the sensitizing activity of a large number of chemicals, as will be the case under the Registration, Evaluation and Authorisation of Chemicals (REACH) regulation. Using an existing LLNA database of 211 chemicals, we have examined whether a cut-down assay comprising a single high-dose group and a concurrent vehicle control would provide a realistic approach for screening chemicals for sensitizing potential. The analyses reported here suggest this is the case. We speculate that the animal welfare benefits may be enhanced further by reducing the number of animals per experimental group. However, a detailed evaluation will be necessary to provide reassurance that a reduction in group size would provide adequate sensitivity across a range of skin sensitization potencies.
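The LLNA read-out behind such a cut-down screen is the stimulation index (SI): the ratio of lymph-node cell proliferation in a treated group to the concurrent vehicle control, with SI >= 3 the conventional positive threshold. The counts below (disintegrations per minute) are illustrative, not assay data from the 211-chemical database.

```python
# Stimulation index from one high-dose group and a concurrent vehicle
# control, as in the cut-down screening configuration discussed above.
def stimulation_index(treated_dpm, control_dpm):
    treated_mean = sum(treated_dpm) / len(treated_dpm)
    control_mean = sum(control_dpm) / len(control_dpm)
    return treated_mean / control_mean

si = stimulation_index(treated_dpm=[5200, 4800, 6100],
                       control_dpm=[1500, 1700, 1600])
print(round(si, 2), "sensitizer" if si >= 3 else "non-sensitizer")
```

With only one dose group, a chemical whose SI crosses 3 is flagged for follow-up; the full assay's dose-response (and the EC3 potency estimate) is what the cut-down screen gives up.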
Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis
2017-01-01
Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled-Ocean-Atmosphere-Wave-Sediment Transport (COAWST) Modeling System. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant density, height, and, to a certain degree, diameter. Wave dissipation is mostly dependent on the variation in plant density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance for future observational and modeling work to optimize efforts and reduce exploration of parameter space.
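The polynomial least-squares route to Sobol' indices can be sketched as: fit an orthogonal (Legendre) expansion to model samples, then read each input's first-order variance share from the squared coefficients, using E[P_k^2] = 1/(2k+1) for inputs uniform on [-1, 1]. This mirrors the spirit of the Effective Quadratures approach, not its actual implementation, and the toy "model" is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5000, 2))
y = 3.0 * X[:, 0] + X[:, 1] ** 2          # toy stand-in for a simulator

# Additive Legendre basis up to degree 2 in each input.
P1 = lambda x: x
P2 = lambda x: 0.5 * (3.0 * x ** 2 - 1.0)
basis = np.column_stack([np.ones(len(y)),
                         P1(X[:, 0]), P2(X[:, 0]),
                         P1(X[:, 1]), P2(X[:, 1])])
coef, *_ = np.linalg.lstsq(basis, y, rcond=None)

# Variance contribution of each non-constant term: c_k^2 * E[P_k^2].
norms = np.array([1.0, 1/3, 1/5, 1/3, 1/5])
var_parts = coef[1:] ** 2 * norms[1:]
total = var_parts.sum()
S1 = var_parts[:2].sum() / total          # input 1's first-order share
S2 = var_parts[2:].sum() / total
print(round(S1, 3), round(S2, 3))
```

Because the Sobol' indices fall out of one least-squares fit, far fewer model runs are needed than with direct Monte Carlo estimators, which is the efficiency argument made above.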
How often do sensitivity analyses for economic parameters change cost-utility analysis conclusions?
Schackman, Bruce R; Gold, Heather Taffet; Stone, Patricia W; Neumann, Peter J
2004-01-01
There is limited evidence about the extent to which sensitivity analysis has been used in the cost-effectiveness literature. Sensitivity analyses for health-related QOL (HR-QOL), cost and discount rate economic parameters are of particular interest because they measure the effects of methodological and estimation uncertainties. To investigate the use of sensitivity analyses in the pharmaceutical cost-utility literature in order to test whether a change in economic parameters could result in a different conclusion regarding the cost effectiveness of the intervention analysed. Cost-utility analyses of pharmaceuticals identified in a prior comprehensive audit (70 articles) were reviewed and further audited. For each base case for which sensitivity analyses were reported (n = 122), up to two sensitivity analyses for HR-QOL (n = 133), cost (n = 99), and discount rate (n = 128) were examined. Article mentions of thresholds for acceptable cost-utility ratios were recorded (total 36). Cost-utility ratios were denominated in US dollars for the year reported in each of the original articles in order to determine whether a different conclusion would have been indicated at the time the article was published. Quality ratings from the original audit for articles where sensitivity analysis results crossed the cost-utility ratio threshold above the base-case result were compared with those that did not. The most frequently mentioned cost-utility thresholds were $US20,000/QALY, $US50,000/QALY, and $US100,000/QALY. The proportions of sensitivity analyses reporting quantitative results that crossed the threshold above the base-case results (or where the sensitivity analysis result was dominated) were 31% for HR-QOL sensitivity analyses, 20% for cost-sensitivity analyses, and 15% for discount-rate sensitivity analyses. Almost half of the discount-rate sensitivity analyses did not report quantitative results. 
Articles that reported sensitivity analyses where results crossed the cost-utility threshold above the base-case results (n = 25) were of somewhat higher quality, and were more likely to justify their sensitivity analysis parameters, than those that did not (n = 45), but the overall quality rating was only moderate. Sensitivity analyses for economic parameters are widely reported and often identify whether choosing different assumptions leads to a different conclusion regarding cost effectiveness. Changes in HR-QOL and cost parameters should be used to test alternative guideline recommendations when there is uncertainty regarding these parameters. Changes in discount rates less frequently produce results that would change the conclusion about cost effectiveness. Improving the overall quality of published studies and describing the justifications for parameter ranges would allow more meaningful conclusions to be drawn from sensitivity analyses.
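The audit's central count, whether a sensitivity-analysis result crosses an acceptability threshold that the base case satisfied, reduces to a simple predicate. The threshold and ratios below are illustrative, echoing the commonly cited $US50,000/QALY figure.

```python
# Does a sensitivity analysis overturn a base-case cost-effectiveness
# conclusion at a given threshold? Values are illustrative.
THRESHOLD = 50_000.0  # $US per QALY

def crosses_threshold(base_case_ratio, sensitivity_ratio, threshold=THRESHOLD):
    """True if the base case was acceptable but the sensitivity result is not."""
    return base_case_ratio <= threshold < sensitivity_ratio

print(crosses_threshold(32_000.0, 61_000.0))  # → True
print(crosses_threshold(32_000.0, 47_000.0))  # → False
```

Applying this predicate over every reported sensitivity analysis gives the kind of crossing proportions (31%, 20%, 15%) tabulated above.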
NASA Astrophysics Data System (ADS)
Clementi, Cristhian; Clementi, Francesco; Lenci, Stefano
2017-11-01
The paper discusses the behavior of typical masonry school buildings in central Italy built at the end of the 1950s without any seismic guidelines. These structures faced the recent 2016 Italian earthquakes without widespread damage. Global numerical models of the buildings were built, and the masonry was modeled as a nonlinear material. Sensitivity analyses were performed to evaluate the reliability of the structural models.
Mens, Petra F; Matelon, Raphael J; Nour, Bakri Y M; Newman, Dave M; Schallig, Henk D F H
2010-07-19
This study describes the laboratory evaluation of a novel diagnostic platform for malaria. The Magneto Optical Test (MOT) is based on the bio-physical detection of haemozoin in clinical samples. Having an assay time of around one minute, it offers the potential of high-throughput screening. Blood samples of confirmed malaria patients from different regions of Africa, patients with other diseases and healthy non-endemic controls were used in the present study. The samples were analysed with two reference tests, i.e. a histidine-rich protein-2 based rapid diagnostic test (RDT) and a conventional Pan-Plasmodium PCR, and with the MOT as index test. Data were entered into 2 x 2 tables and analysed for sensitivity and specificity. The agreement between microscopy, RDT and PCR and the MOT assay was determined by calculating Kappa values with a 95% confidence interval. The observed sensitivity/specificity of the MOT test in comparison with clinical description, RDT or PCR ranged from 77.2 to 78.8% (sensitivity) and from 72.5 to 74.6% (specificity). In general, the agreement between MOT and the other assays is around 0.5, indicating moderate agreement between the reference and the index tests. However, when RDT and PCR are compared to each other, an almost perfect agreement can be observed (k = 0.97), with a sensitivity and specificity of >95%. Although MOT sensitivity and specificity are not yet at a competitive level compared to other diagnostic tests, such as PCR and RDTs, the test has the potential to rapidly screen patients for malaria in endemic as well as non-endemic countries.
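The kappa statistic used above to grade agreement between the MOT index test and a reference test comes straight from a 2 x 2 table of paired results. The counts below are illustrative and happen to land near the "moderate agreement around 0.5" reported.

```python
# Cohen's kappa from a 2 x 2 table of paired test results.
def cohens_kappa(a, b, c, d):
    """a: both positive, b: index+/ref-, c: index-/ref+, d: both negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                                        # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2   # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

k = cohens_kappa(a=78, b=22, c=26, d=74)
print(round(k, 3))  # → 0.52
```

Kappa corrects raw percent agreement for the agreement expected by chance alone, which is why two tests can agree on 76% of samples yet score only about 0.5.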
Bounthavong, Mark; Pruitt, Larry D; Smolenski, Derek J; Gahm, Gregory A; Bansal, Aasthaa; Hansen, Ryan N
2018-02-01
Introduction: Home-based telebehavioural health care improves access to mental health care for patients restricted by travel burden. However, there is limited evidence assessing the economic value of home-based telebehavioural health care compared to in-person care. We sought to compare the economic impact of home-based telebehavioural health care and in-person care for depression among current and former US service members. Methods: We performed trial-based cost-minimisation and cost-utility analyses to assess the economic impact of home-based telebehavioural health care versus in-person behavioural care for depression. Our analyses focused on the payer perspective (Department of Defense and Department of Veterans Affairs) at three months. We also performed a scenario analysis in which all patients possessed video-conferencing technology approved by these agencies. The cost-utility analysis evaluated the impact of different depression categories on the incremental cost-effectiveness ratio. One-way and probabilistic sensitivity analyses were performed to test the robustness of the model assumptions. Results: In the base-case analysis the total direct cost of home-based telebehavioural health care was higher than that of in-person care (US$71,974 versus US$20,322). Assuming that patients possessed government-approved video-conferencing technology, home-based telebehavioural health care was less costly than in-person care (US$19,177 versus US$20,322). In one-way sensitivity analyses, the proportion of patients possessing personal computers was a major driver of direct costs. In the cost-utility analysis, home-based telebehavioural health care was dominant when patients possessed video-conferencing technology. Results from probabilistic sensitivity analyses did not differ substantially from base-case results.
Discussion: The economic case for home-based telebehavioural health care depends on the cost of supplying video-conferencing technology to patients, but the approach offers the opportunity to increase access to care. Health-care policies centred on implementation of home-based telebehavioural health care should ensure that these services can be successfully deployed on patients' existing technology.
Lau, Brian C; Collins, Michael W; Lovell, Mark R
2011-06-01
Concussions affect an estimated 136,000 high school athletes yearly. Computerized neurocognitive testing has been shown to be appropriately sensitive and specific in diagnosing concussions, but no studies have assessed its utility for predicting length of recovery. Determining prognosis during subacute recovery after sports concussion will help clinicians more confidently address return-to-play and academic decisions. To quantify the prognostic ability of computerized neurocognitive testing in combination with symptoms during the subacute recovery phase from sports-related concussion. Cohort study (prognosis); Level of evidence, 2. In total, 108 male high school football athletes completed a computer-based neurocognitive test battery within 2.23 days of injury and were followed until they returned to play as set by international guidelines. Athletes were grouped into protracted recovery (>14 days; n = 50) or short recovery (≤14 days; n = 58). Separate discriminant function analyses were performed using the total symptom score on the Post-Concussion Symptom Scale, symptom clusters (migraine, cognitive, sleep, neuropsychiatric), and Immediate Postconcussion Assessment and Cognitive Testing neurocognitive scores (verbal memory, visual memory, reaction time, processing speed). Multiple discriminant function analyses revealed that the combination of 4 symptom clusters and 4 neurocognitive composite scores had the highest sensitivity (65.22%), specificity (80.36%), positive predictive value (73.17%), and negative predictive value (73.80%) in predicting protracted recovery. Discriminant function analyses of total symptoms on the Post-Concussion Symptom Scale alone had a sensitivity of 40.81%; specificity, 79.31%; positive predictive value, 62.50%; and negative predictive value, 61.33%. The 4 symptom clusters alone had a sensitivity of 46.94%; specificity, 77.20%; positive predictive value, 63.90%; and negative predictive value, 62.86%.
Discriminant function analyses of the 4 computerized neurocognitive scores alone had a sensitivity of 53.20%; specificity, 75.44%; positive predictive value, 64.10%; and negative predictive value, 66.15%. Using computerized neurocognitive testing in conjunction with symptom clusters improves the sensitivity, specificity, positive predictive value, and negative predictive value of predicting protracted recovery compared with either used alone. There is also a net increase in sensitivity of 24.41% when using neurocognitive testing and symptom clusters together compared with using total symptoms on the Post-Concussion Symptom Scale alone.
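The four classifier metrics reported throughout this record all come from one 2 x 2 table of predicted versus actual recovery group. The counts below are illustrative, not the study's data.

```python
# Sensitivity, specificity, PPV and NPV from a confusion matrix, where
# "positive" means predicted/actual protracted recovery.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # protracted cases correctly flagged
        "specificity": tn / (tn + fp),   # short-recovery cases correctly cleared
        "ppv": tp / (tp + fp),           # flagged cases that were truly protracted
        "npv": tn / (tn + fn),           # cleared cases that truly recovered quickly
    }

m = diagnostic_metrics(tp=30, fp=11, fn=20, tn=45)
print({key: round(v, 3) for key, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, shift with the proportion of protracted recoveries in the sample, so they transfer less directly to other cohorts.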
Automating the evaluation of flood damages: methodology and potential gains
NASA Astrophysics Data System (ADS)
Eleutério, Julian; Martinez, Edgar Daniel
2010-05-01
The evaluation of flood damage potential consists of three main steps: assessing and processing data, combining data, and calculating potential damages. The first step consists of modelling hazard and assessing vulnerability. In general, this step of the evaluation demands more time and investment than the others. The second step consists of combining spatial data on hazard with spatial data on vulnerability. A Geographic Information System (GIS) is a fundamental tool in the realisation of this step. GIS software allows the simultaneous analysis of spatial and matrix data. The third step consists of calculating potential damages by means of damage functions or contingent analysis. All steps demand time and expertise. However, the last two steps must be realised several times when comparing different management scenarios. In addition, uncertainty analyses and sensitivity tests are performed during the second and third steps of the evaluation. The feasibility of these steps could be relevant to the choice of the extent of the evaluation. Low feasibility could lead to a decision not to evaluate uncertainty or to limit the number of scenario comparisons. Several computer models have been developed over time in order to evaluate flood risk. GIS software is largely used to realise flood risk analyses. The software is used to combine and process different types of data, and to visualise the risk and the evaluation results. The main advantages of using a GIS in these analyses are: the possibility of "easily" realising the analyses several times, in order to compare different scenarios and study uncertainty; the generation of datasets which could be used at any time in the future to support territorial decision making; and the possibility of adding information over time to update the dataset and make other analyses. However, these analyses require personnel specialisation and time.
The use of GIS software to evaluate flood risk requires personnel with a double professional specialisation: the professional should be proficient in GIS software and in flood damage analysis (which is already a multidisciplinary field). Great effort is necessary to evaluate flood damages correctly, and updating and improving the evaluation over time becomes a difficult task. The automation of this process should bring great advances in flood management studies over time, especially for public utilities. This study has two specific objectives: (1) show the entire process of automation of the second and third steps of flood damage evaluations; and (2) analyse the induced potential gains in terms of the time and expertise needed in the analysis. A programming language is used within GIS software to automate the combination of hazard and vulnerability data and the calculation of potential damages. We discuss the overall process of flood damage evaluation. The main result of this study is a computational tool which allows significant operational gains in flood loss analyses. We quantify these gains by means of a hypothetical example. The tool significantly reduces the time of analysis and the need for expertise. An indirect gain is that sensitivity and cost-benefit analyses can be more easily realised.
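Steps two and three of such an evaluation reduce, in essence, to a per-cell overlay of a hazard layer (water depth) and a vulnerability layer (asset value) fed through a damage function. The sketch below is illustrative only; the piecewise-linear depth-damage curve, cell values, and scenario depths are all hypothetical assumptions, not the authors' tool:

```python
def depth_damage_fraction(depth_m):
    """Hypothetical piecewise-linear depth-damage function:
    0% damage at 0 m of water, rising linearly to 100% at 3 m."""
    return min(max(depth_m / 3.0, 0.0), 1.0)

def potential_damage(hazard, vulnerability):
    """Overlay per-cell water depths (hazard) with per-cell asset values
    (vulnerability), as a GIS would, and sum monetary damage over the area."""
    return sum(depth_damage_fraction(d) * value
               for d, value in zip(hazard, vulnerability))

# Two flood scenarios over the same 4-cell vulnerability layer
values = [100_000, 250_000, 80_000, 40_000]  # asset value per cell
scenario_a = [0.0, 1.5, 3.0, 0.5]            # water depth per cell (m)
scenario_b = [0.5, 3.0, 4.5, 1.0]            # a more severe flood
damage_a = potential_damage(scenario_a, values)
damage_b = potential_damage(scenario_b, values)
```

Automating exactly this overlay-and-sum is what makes repeated scenario comparisons and sensitivity tests cheap: each new hazard layer is one more function call rather than a manual GIS session.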
Maia, A M A; Karlsson, L; Margulis, W; Gomes, A S L
2011-10-01
The aim of this paper was to evaluate a transillumination (TI) system using near-infrared (NIR) light and bitewing radiographs for the detection of early approximal enamel caries lesions. Mesiodistal sections of teeth (n = 14) were cut with various thicknesses from 1.5 mm to 4.75 mm. Both sides of each section were included, 17 approximal surfaces with natural enamel caries and 11 surfaces considered intact. The approximal surfaces were illuminated by NIR light and X-ray. Captured images were analysed by two calibrated specialists in radiology, and re-analysed after 6 months using stereomicroscope images as a gold standard. The interexaminer reliability (Kappa test statistic) for the NIR TI technique showed moderate agreement on first (0.55) and second (0.48) evaluation, and low agreement for bitewing radiographs on first (0.26) and second (0.32) evaluation. In terms of accuracy, the sensitivity for the NIR TI system was 0.88 and the specificity was 0.72. For the bitewing radiographs the sensitivity ranged from 0.35 to 0.53 and the specificity ranged from 0.50 to 0.72. In the same samples and conditions tested, NIR TI images showed reliability and the enamel caries surfaces were better identified than on dental radiographs.
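The interexaminer reliability reported above is Cohen's kappa: observed agreement between two raters corrected for the agreement expected by chance. A minimal sketch, using hypothetical examiner calls rather than the study's data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement from each rater's marginal label frequencies
    p_expected = sum(c1[label] * c2[label]
                     for label in set(rater1) | set(rater2)) / n ** 2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical caries/sound calls from two examiners on 10 surfaces
r1 = ["caries", "caries", "sound", "sound", "caries",
      "sound", "sound", "caries", "sound", "sound"]
r2 = ["caries", "sound", "sound", "sound", "caries",
      "sound", "caries", "caries", "sound", "sound"]
kappa = cohens_kappa(r1, r2)  # falls in the "moderate agreement" band
```

On the conventional interpretation scale, values around 0.4-0.6 are read as moderate agreement, which matches the 0.55 and 0.48 reported for the NIR TI technique.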
Maia, A M A; Karlsson, L; Margulis, W; Gomes, A S L
2011-01-01
Objectives The aim of this paper was to evaluate a transillumination (TI) system using near-infrared (NIR) light and bitewing radiographs for the detection of early approximal enamel caries lesions. Methods Mesiodistal sections of teeth (n = 14) were cut with various thicknesses from 1.5 mm to 4.75 mm. Both sides of each section were included, 17 approximal surfaces with natural enamel caries and 11 surfaces considered intact. The approximal surfaces were illuminated by NIR light and X-ray. Captured images were analysed by two calibrated specialists in radiology, and re-analysed after 6 months using stereomicroscope images as a gold standard. Results The interexaminer reliability (Kappa test statistic) for the NIR TI technique showed moderate agreement on first (0.55) and second (0.48) evaluation, and low agreement for bitewing radiographs on first (0.26) and second (0.32) evaluation. In terms of accuracy, the sensitivity for the NIR TI system was 0.88 and the specificity was 0.72. For the bitewing radiographs the sensitivity ranged from 0.35 to 0.53 and the specificity ranged from 0.50 to 0.72. Conclusion In the same samples and conditions tested, NIR TI images showed reliability and the enamel caries surfaces were better identified than on dental radiographs. PMID:21960400
A Primer on Health Economic Evaluations in Thoracic Oncology.
Whittington, Melanie D; Atherly, Adam J; Bocsi, Gregary T; Camidge, D Ross
2016-08-01
There is growing interest in economic evaluation in oncology to illustrate the value of the many new diagnostic and therapeutic interventions. As these analyses have started to move from specialist publications into the mainstream medical literature, the wider medical audience consuming this information may need additional education to evaluate it appropriately. Here we review standard practices in economic evaluation, illustrating the different methods with thoracic oncology examples where possible. When interpreting and conducting health economic studies, it is important to appraise the method, perspective, time horizon, modeling technique, discount rate, and sensitivity analysis. Guidance on how to do this is provided. To provide a method to evaluate this literature, a literature search was conducted in spring 2015 to identify economic evaluations published in the Journal of Thoracic Oncology. Articles were reviewed for their study design, and areas for improvement were noted. Suggested improvements include using more rigorous sensitivity analyses, adopting a standard approach to reporting results, and conducting complete economic evaluations. Researchers should design high-quality studies to ensure the validity of the results, and consumers of this research should interpret these studies critically on the basis of a full understanding of the methodologies used before considering any of the conclusions. As advancements occur on both the research and consumer sides, this literature can be further developed to promote the best use of resources for this field. Copyright © 2016 International Association for the Study of Lung Cancer. Published by Elsevier Inc. All rights reserved.
Digital imaging biomarkers feed machine learning for melanoma screening.
Gareau, Daniel S; Correa da Rosa, Joel; Yagerman, Sarah; Carucci, John A; Gulati, Nicholas; Hueto, Ferran; DeFazio, Jennifer L; Suárez-Fariñas, Mayte; Marghoob, Ashfaq; Krueger, James G
2017-07-01
We developed an automated approach for generating quantitative image analysis metrics (imaging biomarkers) that are then analysed with a set of 13 machine learning algorithms to generate an overall risk score that is called a Q-score. These methods were applied to a set of 120 "difficult" dermoscopy images of dysplastic nevi and melanomas that were subsequently excised/classified. This approach yielded 98% sensitivity and 36% specificity for melanoma detection, approaching sensitivity/specificity of expert lesion evaluation. Importantly, we found strong spectral dependence of many imaging biomarkers in blue or red colour channels, suggesting the need to optimize spectral evaluation of pigmented lesions. © 2016 The Authors. Experimental Dermatology Published by John Wiley & Sons Ltd.
Ellis, Alicia M.; Garcia, Andres J.; Focks, Dana A.; Morrison, Amy C.; Scott, Thomas W.
2011-01-01
Models can be useful tools for understanding the dynamics and control of mosquito-borne disease. More detailed models may be more realistic and better suited for understanding local disease dynamics; however, evaluating model suitability, accuracy, and performance becomes increasingly difficult with greater model complexity. Sensitivity analysis is a technique that permits exploration of complex models by evaluating the sensitivity of the model to changes in parameters. Here, we present results of sensitivity analyses of two interrelated complex simulation models of mosquito population dynamics and dengue transmission. We found that dengue transmission may be influenced most by survival in each life stage of the mosquito, mosquito biting behavior, and duration of the infectious period in humans. The importance of these biological processes for vector-borne disease models and the overwhelming lack of knowledge about them make acquisition of relevant field data on these biological processes a top research priority. PMID:21813844
Numerical, mathematical models of water and chemical movement in soils are used as decision aids for determining soil screening levels (SSLs) of radionuclides in the unsaturated zone. Many models require extensive input parameters which include uncertainty due to soil variabil...
NASA Astrophysics Data System (ADS)
Hameed, M.; Demirel, M. C.; Moradkhani, H.
2015-12-01
The Global Sensitivity Analysis (GSA) approach helps identify the effectiveness of model parameters or inputs and thus provides essential information about model performance. In this study, the effects of the Sacramento Soil Moisture Accounting (SAC-SMA) model parameters, forcing data, and initial conditions are analysed by using two GSA methods: Sobol' and the Fourier Amplitude Sensitivity Test (FAST). The simulations are carried out over five sub-basins within the Columbia River Basin (CRB) for three different periods: one year, four years, and seven years. Four factors are considered and evaluated by using the two sensitivity analysis methods: the simulation length, parameter range, model initial conditions, and the reliability of the global sensitivity analysis methods. The reliability of the sensitivity analysis results is compared based on 1) the agreement between the two sensitivity analysis methods (Sobol' and FAST) in highlighting the same parameters or inputs as the most influential and 2) how consistently the methods rank these sensitive parameters under the same conditions (sub-basins and simulation length). The results show coherence between the Sobol' and FAST sensitivity analysis methods. Additionally, the FAST method is found to be sufficient to evaluate the main effects of the model parameters and inputs. Another conclusion of this study is that the smaller the parameter or initial-condition ranges, the more consistent and coherent the results of the two sensitivity analysis methods.
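Sobol' first-order indices of the kind compared here can be estimated by Monte Carlo with the pick-freeze scheme: two independent sample matrices A and B, plus, for each input i, a matrix C_i equal to B except that column i is taken from A. The sketch below applies this to a toy additive model, not to SAC-SMA; the model, sample size, and uniform inputs are illustrative assumptions:

```python
import numpy as np

def sobol_first_order(model, n_vars, n=100_000, seed=0):
    """Monte Carlo pick-freeze estimate of Sobol' first-order indices
    for a model with independent uniform(0, 1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_vars))
    B = rng.uniform(size=(n, n_vars))
    yA, yB = model(A), model(B)
    total_var = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(n_vars):
        C = B.copy()
        C[:, i] = A[:, i]  # "freeze" input i to the values used in A
        yC = model(C)
        # Covariance of yA and yC isolates the variance due to input i
        indices.append((np.mean(yA * yC) - np.mean(yA) * np.mean(yB)) / total_var)
    return indices

# Toy additive model y = 2*x1 + x2: analytic indices are 0.8 and 0.2
model = lambda X: 2 * X[:, 0] + X[:, 1]
S1, S2 = sobol_first_order(model, 2)
```

For an additive model the first-order indices sum to one; interactions in a real hydrological model would leave a gap between that sum and one, which is what total-order indices (or FAST's extensions) capture.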
Herpes zoster vaccine: A health economic evaluation for Switzerland.
Blank, Patricia R; Ademi, Zanfina; Lu, Xiaoyan; Szucs, Thomas D; Schwenkglenks, Matthias
2017-07-03
Herpes zoster (HZ) or "shingles" results from a reactivation of the varicella zoster virus (VZV) acquired during primary infection (chickenpox) and surviving in the dorsal root ganglia. In about 20% of cases, a complication occurs, known as post-herpetic neuralgia (PHN). A live attenuated vaccine against VZV is available for the prevention of HZ and subsequent PHN. The present study aims to update an earlier evaluation estimating the cost-effectiveness of the HZ vaccine from a Swiss third party payer perspective. It takes into account updated vaccine prices, a different age cohort, latest clinical data and burden of illness data. A Markov model was developed to simulate the lifetime consequences of vaccinating 15% of the Swiss population aged 65-79 y. Information from sentinel data, official statistics and published literature were used. Endpoints assessed were number of HZ and PHN cases, quality-adjusted life years (QALYs), costs of hospitalizations, consultations and prescriptions. Based on a vaccine price of CHF 162, the vaccination strategy accrued additional costs of CHF 17,720,087 and gained 594 QALYs. The incremental cost-effectiveness ratio (ICER) was CHF 29,814 per QALY gained. Sensitivity analyses showed that the results were most sensitive to epidemiological inputs, utility values, discount rates, duration of vaccine efficacy, and vaccine price. Probabilistic sensitivity analyses indicated a more than 99% chance that the ICER was below 40,000 CHF per QALY. Findings were in line with existing cost-effectiveness analyses of HZ vaccination. This updated study supports the value of an HZ vaccination strategy targeting the Swiss population aged 65-79 y.
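The ICER itself is just the ratio of incremental cost to incremental QALYs gained. Using the totals reported above (simple division, which ignores the Markov model's internal discounting and rounding and therefore only approximates the published CHF 29,814 per QALY):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained
    when moving from the comparator (no vaccination) to the intervention."""
    return delta_cost / delta_qaly

# Totals reported in the abstract: CHF 17,720,087 additional cost, 594 QALYs gained
ratio = icer(17_720_087, 594)

# Compare against a willingness-to-pay threshold of CHF 40,000 per QALY
cost_effective = ratio < 40_000
```

The probabilistic sensitivity analysis reported above asks the same threshold question across many sampled parameter sets, and found the ICER below CHF 40,000 in more than 99% of them.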
Herpes zoster vaccine: A health economic evaluation for Switzerland
Blank, Patricia R.; Ademi, Zanfina; Lu, Xiaoyan; Szucs, Thomas D.; Schwenkglenks, Matthias
2017-01-01
ABSTRACT Herpes zoster (HZ) or “shingles” results from a reactivation of the varicella zoster virus (VZV) acquired during primary infection (chickenpox) and surviving in the dorsal root ganglia. In about 20% of cases, a complication occurs, known as post-herpetic neuralgia (PHN). A live attenuated vaccine against VZV is available for the prevention of HZ and subsequent PHN. The present study aims to update an earlier evaluation estimating the cost-effectiveness of the HZ vaccine from a Swiss third party payer perspective. It takes into account updated vaccine prices, a different age cohort, latest clinical data and burden of illness data. A Markov model was developed to simulate the lifetime consequences of vaccinating 15% of the Swiss population aged 65–79 y. Information from sentinel data, official statistics and published literature were used. Endpoints assessed were number of HZ and PHN cases, quality-adjusted life years (QALYs), costs of hospitalizations, consultations and prescriptions. Based on a vaccine price of CHF 162, the vaccination strategy accrued additional costs of CHF 17,720,087 and gained 594 QALYs. The incremental cost-effectiveness ratio (ICER) was CHF 29,814 per QALY gained. Sensitivity analyses showed that the results were most sensitive to epidemiological inputs, utility values, discount rates, duration of vaccine efficacy, and vaccine price. Probabilistic sensitivity analyses indicated a more than 99% chance that the ICER was below 40,000 CHF per QALY. Findings were in line with existing cost-effectiveness analyses of HZ vaccination. This updated study supports the value of an HZ vaccination strategy targeting the Swiss population aged 65–79 y. PMID:28481678
pH sensitive thiolated cationic hydrogel for oral insulin delivery.
Sonia, T A; Sharma, Chandra P
2014-04-01
The objective of this work was to study the efficacy of pH-sensitive thiolated polydimethylaminoethylmethacrylate for oral delivery of insulin. Synthesis of pH-sensitive thiolated polydimethylaminoethylmethacrylate (PDCPA) was carried out by crosslinking polymethacrylic acid with thiolated polydimethylaminoethylmethacrylate (PDCys) via carbodiimide chemistry. Prior to the in vivo experiment, various physicochemical and biological characterisations were carried out to evaluate the efficacy of PDCPA. The modification was confirmed by IR and NMR spectroscopy. The particle size was found to be 284 nm with a zeta potential of 37.3 ± 1.58 mV. Texture analyser measurements showed that PDCPA is more mucoadhesive than the parent polymer. Transepithelial electrical measurements showed a reduction of more than 50% upon incubation with PDCPA particles. Permeation studies showed that PDCPA is more permeable than the parent polymer. In an in vivo evaluation in male diabetic rats, insulin-loaded PDCPA exhibited a blood glucose reduction of 19%.
NASA Astrophysics Data System (ADS)
Kalra, Tarandeep S.; Aretxabaleta, Alfredo; Seshadri, Pranay; Ganju, Neil K.; Beudin, Alexis
2017-12-01
Coastal hydrodynamics can be greatly affected by the presence of submerged aquatic vegetation. The effect of vegetation has been incorporated into the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system. The vegetation implementation includes the plant-induced three-dimensional drag, in-canopy wave-induced streaming, and the production of turbulent kinetic energy by the presence of vegetation. In this study, we evaluate the sensitivity of the flow and wave dynamics to vegetation parameters using Sobol' indices and a least squares polynomial approach referred to as the Effective Quadratures method. This method reduces the number of simulations needed for evaluating Sobol' indices and provides a robust, practical, and efficient approach for the parameter sensitivity analysis. The evaluation of Sobol' indices shows that kinetic energy, turbulent kinetic energy, and water level changes are affected by plant stem density, height, and, to a lesser degree, diameter. Wave dissipation is mostly dependent on the variation in plant stem density. Performing sensitivity analyses for the vegetation module in COAWST provides guidance to optimize efforts and reduce exploration of parameter space for future observational and modeling work.
Combustor liner durability analysis
NASA Technical Reports Server (NTRS)
Moreno, V.
1981-01-01
An 18-month combustor liner durability analysis program was conducted to evaluate the use of advanced three-dimensional transient heat transfer and nonlinear stress-strain analyses for modeling the cyclic thermomechanical response of a simulated combustor liner specimen. Cyclic life prediction technology for creep/fatigue interaction is evaluated using a variety of state-of-the-art tools for crack initiation and propagation. The sensitivity of the initiation models to a change in the operating conditions is also assessed.
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2015-02-25
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index on 2 October 2014. We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. 
The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. Based on the combined Cochrane Hepato-Biliary Group score, we classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). 
Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
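The fixed-effect pooling described above combines per-trial effect estimates weighted by their precision on the log risk ratio scale. A minimal sketch with hypothetical per-trial inputs, not the review's data (a random-effects analysis would additionally inflate the standard errors by a between-trial variance term):

```python
import math

def fixed_effect_pool(log_rrs, ses):
    """Inverse-variance fixed-effect meta-analysis of log risk ratios:
    each trial is weighted by 1/SE^2; the pooled estimate is returned
    as a risk ratio with a 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), (math.exp(lo), math.exp(hi))

# Hypothetical log risk ratios and standard errors for three trials
rr, (lo, hi) = fixed_effect_pool([-0.35, -0.22, -0.40], [0.15, 0.20, 0.25])
```

Pooling on the log scale keeps the ratio symmetric (a halving and a doubling of risk are equidistant from no effect), and exponentiating at the end recovers the RR and CI in the form the review reports.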
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2017-05-18
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, Embase, Science Citation Index Expanded and Conference Proceedings Citation Index - Science, and LILACS (May 2017). We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). 
Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). 
Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
Branched-chain amino acids for people with hepatic encephalopathy.
Gluud, Lise Lotte; Dam, Gitte; Les, Iñigo; Córdoba, Juan; Marchesini, Giulio; Borre, Mette; Aagaard, Niels Kristian; Vilstrup, Hendrik
2015-09-17
Hepatic encephalopathy is a brain dysfunction with neurological and psychiatric changes associated with liver insufficiency or portal-systemic shunting. The severity ranges from minor symptoms to coma. A Cochrane systematic review including 11 randomised clinical trials on branched-chain amino acids (BCAA) versus control interventions has evaluated if BCAA may benefit people with hepatic encephalopathy. To evaluate the beneficial and harmful effects of BCAA versus any control intervention for people with hepatic encephalopathy. We identified trials through manual and electronic searches in The Cochrane Hepato-Biliary Group Controlled Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, EMBASE, and Science Citation Index (August 2015). We included randomised clinical trials, irrespective of the bias control, language, or publication status. The authors independently extracted data based on published reports and collected data from the primary investigators. We changed our primary outcomes in this update of the review to include mortality (all cause), hepatic encephalopathy (number of people without improved manifestations of hepatic encephalopathy), and adverse events. The analyses included random-effects and fixed-effect meta-analyses. We performed subgroup, sensitivity, regression, and trial sequential analyses to evaluate sources of heterogeneity (including intervention, and participant and trial characteristics), bias (using The Cochrane Hepato-Biliary Group method), small-study effects, and the robustness of the results after adjusting for sparse data and multiplicity. We graded the quality of the evidence using the GRADE approach. We found 16 randomised clinical trials including 827 participants with hepatic encephalopathy classed as overt (12 trials) or minimal (four trials). Eight trials assessed oral BCAA supplements and seven trials assessed intravenous BCAA. 
The control groups received placebo/no intervention (two trials), diets (10 trials), lactulose (two trials), or neomycin (two trials). In 15 trials, all participants had cirrhosis. We classed seven trials as low risk of bias and nine trials as high risk of bias (mainly due to lack of blinding or for-profit funding). In a random-effects meta-analysis of mortality, we found no difference between BCAA and controls (risk ratio (RR) 0.88, 95% confidence interval (CI) 0.69 to 1.11; 760 participants; 15 trials; moderate quality of evidence). We found no evidence of small-study effects. Sensitivity analyses of trials with a low risk of bias found no beneficial or detrimental effect of BCAA on mortality. Trial sequential analysis showed that the required information size was not reached, suggesting that additional evidence was needed. BCAA had a beneficial effect on hepatic encephalopathy (RR 0.73, 95% CI 0.61 to 0.88; 827 participants; 16 trials; high quality of evidence). We found no small-study effects and confirmed the beneficial effect of BCAA in a sensitivity analysis that only included trials with a low risk of bias (RR 0.71, 95% CI 0.52 to 0.96). The trial sequential analysis showed that firm evidence was reached. In a fixed-effect meta-analysis, we found that BCAA increased the risk of nausea and vomiting (RR 5.56; 2.93 to 10.55; moderate quality of evidence). We found no beneficial or detrimental effects of BCAA on nausea or vomiting in a random-effects meta-analysis or on quality of life or nutritional parameters. We did not identify predictors of the intervention effect in the subgroup, sensitivity, or meta-regression analyses. In sensitivity analyses that excluded trials with a lactulose or neomycin control, BCAA had a beneficial effect on hepatic encephalopathy (RR 0.76, 95% CI 0.63 to 0.92). Additional sensitivity analyses found no difference between BCAA and lactulose or neomycin (RR 0.66, 95% CI 0.34 to 1.30). 
In this updated review, we included five additional trials. The analyses showed that BCAA had a beneficial effect on hepatic encephalopathy. We found no effect on mortality, quality of life, or nutritional parameters, but we need additional trials to evaluate these outcomes. Likewise, we need additional randomised clinical trials to determine the effect of BCAA compared with interventions such as non-absorbable disaccharides, rifaximin, or other antibiotics.
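As a rough illustration of the random-effects pooling used in reviews like this one, the sketch below implements a DerSimonian-Laird meta-analysis of risk ratios; the trial counts are invented for illustration and are not the review's data.

```python
import math

def pooled_rr(trials):
    """DerSimonian-Laird random-effects pooling of risk ratios.

    trials: list of (events_treated, n_treated, events_control, n_control).
    Returns (pooled RR, lower 95% CI, upper 95% CI).
    """
    y, v = [], []
    for a, n1, c, n2 in trials:
        y.append(math.log((a / n1) / (c / n2)))     # log risk ratio
        v.append(1 / a - 1 / n1 + 1 / c - 1 / n2)   # its approximate variance
    w = [1 / vi for vi in v]                        # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c_dl = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c_dl)      # between-trial variance
    w_re = [1 / (vi + tau2) for vi in v]            # random-effects weights
    mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return math.exp(mu), math.exp(mu - 1.96 * se), math.exp(mu + 1.96 * se)

# Invented counts for three trials: (events/total on treatment, events/total on control)
rr, lo, hi = pooled_rr([(10, 50, 14, 50), (8, 40, 12, 42), (15, 60, 20, 58)])
```

When between-trial heterogeneity is negligible (Q below its degrees of freedom), tau2 collapses to zero and the random-effects estimate coincides with the fixed-effect one.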
Williams, Javonda; Nelson-Gardell, Debra; Coulborn Faller, Kathleen; Tishelman, Amy; Cordisco-Steele, Linda
2014-01-01
Using data from a survey of perceptions of 932 child welfare professionals about the utility of extended assessments, the researchers constructed a scale to measure respondents' views about sensitivity (ensuring sexually abused children are correctly identified) and specificity (ensuring nonabused children are correctly identified) in child sexual abuse evaluations. On average, respondents scored high (valuing sensitivity) on the sensitivity versus specificity scale. Next, the researchers undertook bivariate analyses to identify independent variables significantly associated with the sensitivity versus specificity scale. Then those variables were entered into a multiple regression. Four independent variables were significantly related to higher sensitivity scores: encountering cases requiring extended assessments, valuing extended assessments among scarce resources, less concern about proving cases in court, and viewing the goal of extended assessments as understanding needs of child and family (adjusted R2 = .34).
NASA Astrophysics Data System (ADS)
Luce, Charles H.; Lopez-Burgos, Viviana; Holden, Zachary
2014-12-01
Empirical sensitivity analyses are important for evaluation of the effects of a changing climate on water resources and ecosystems. Although mechanistic models are commonly applied for evaluation of climate effects for snowmelt, empirical relationships provide a first-order validation of the various postulates required for their implementation. Previous studies of empirical sensitivity for April 1 snow water equivalent (SWE) in the western United States were developed by regressing interannual variations in SWE to winter precipitation and temperature. This offers a temporal analog for climate change, positing that a warmer future looks like warmer years. Spatial analogs are used to hypothesize that a warmer future may look like warmer places, and are frequently applied alternatives for complex processes, or states/metrics that show little interannual variability (e.g., forest cover). We contrast spatial and temporal analogs for sensitivity of April 1 SWE and the mean residence time of snow (SRT) using data from 524 Snowpack Telemetry (SNOTEL) stations across the western U.S. We built relatively strong models using spatial analogs to relate temperature and precipitation climatology to snowpack climatology (April 1 SWE, R2=0.87, and SRT, R2=0.81). Although the poorest temporal analog relationships were in areas showing the highest sensitivity to warming, spatial analog models showed consistent performance throughout the range of temperature and precipitation. Generally, slopes from the spatial relationships showed greater thermal sensitivity than the temporal analogs, and high elevation stations showed greater vulnerability using a spatial analog than shown in previous modeling and sensitivity studies. The spatial analog models provide a simple perspective to evaluate potential futures and may be useful in further evaluation of snowpack with warming.
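A spatial-analog sensitivity model of this kind amounts to regressing snowpack climatology on temperature and precipitation climatology across stations. The sketch below shows the idea on synthetic station data; the coefficients, noise level, and station count are illustrative assumptions, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # hypothetical stations (the study used 524 SNOTEL sites)
temp = rng.uniform(-8.0, 4.0, n)         # mean winter temperature, deg C
precip = rng.uniform(200.0, 1500.0, n)   # winter precipitation, mm
# Synthetic "climatology": SWE rises with precipitation and falls with warmth.
swe = 0.45 * precip - 60.0 * temp + rng.normal(0.0, 50.0, n)

X = np.column_stack([np.ones(n), temp, precip])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, swe, rcond=None)

pred = X @ beta
r2 = 1.0 - np.sum((swe - pred) ** 2) / np.sum((swe - swe.mean()) ** 2)
# beta[1] estimates the spatial-analog thermal sensitivity (mm of SWE per deg C)
```

The fitted slope on temperature plays the role of the "warmer places" sensitivity; in the temporal-analog version, the same regression would instead be fit to interannual anomalies at a single station.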
Garcia-Menendez, Fernando; Hu, Yongtao; Odman, Mehmet T
2014-09-15
Air quality forecasts generated with chemical transport models can provide valuable information about the potential impacts of fires on pollutant levels. However, significant uncertainties are associated with fire-related emission estimates as well as their distribution on gridded modeling domains. In this study, we explore the sensitivity of fine particulate matter concentrations predicted by a regional-scale air quality model to the spatial and temporal allocation of fire emissions. The assessment was completed by simulating a fire-related smoke episode in which air quality throughout the Atlanta metropolitan area was affected on February 28, 2007. Sensitivity analyses were carried out to evaluate the significance of emission distribution among the model's vertical layers, along the horizontal plane, and into hourly inputs. Predicted PM2.5 concentrations were highly sensitive to emission injection altitude relative to planetary boundary layer height. Simulations were also responsive to the horizontal allocation of fire emissions and their distribution into single or multiple grid cells. Additionally, modeled concentrations were highly sensitive to the temporal distribution of fire-related emissions. The analyses demonstrate that, in addition to adequate estimates of emitted mass, successfully modeling the impacts of fires on air quality depends on an accurate spatiotemporal allocation of emissions.
Sensitivity of water resources in the Delaware River basin to climate variability and change
Ayers, Mark A.; Wolock, David M.; McCabe, Gregory J.; Hay, Lauren E.; Tasker, Gary D.
1993-01-01
Because of the "greenhouse effect," projected increases in atmospheric carbon dioxide levels might cause global warming, which in turn could result in changes in precipitation patterns and evapotranspiration and in increases in sea level. This report describes the greenhouse effect; discusses the problems and uncertainties associated with the detection, prediction, and effects of climatic change; and presents the results of sensitivity-analysis studies of the potential effects of climate change on water resources in the Delaware River basin. On the basis of sensitivity analyses, potentially serious shortfalls of certain water resources in the basin could result if some climatic-change scenarios prove true. The results of basin streamflow-model simulations in this study demonstrate the difficulty in distinguishing effects of climatic change on streamflow and water supply from effects of natural variability in the current climate. Furthermore, the future direction of basin changes in most water resources cannot be determined precisely because of uncertainty in current projections of regional temperature and precipitation. This large uncertainty indicates that, for resource planning, information defining the sensitivities of water resources to a range of climate change is most relevant. The sensitivity analyses could be useful in developing contingency plans on how to evaluate and respond to changes, should they occur.
Oliveira, Maria Regina Fernandes; Leandro, Roseli; Decimoni, Tassia Cristina; Rozman, Luciana Martins; Novaes, Hillegonda Maria Dutilh; De Soárez, Patrícia Coelho
2017-08-01
The aim of this study is to identify and characterize the health economic evaluations (HEEs) of diagnostic tests conducted in Brazil, in terms of their adherence to international guidelines for reporting economic studies and specific questions in test accuracy reports. We systematically searched multiple databases, selecting partial and full HEEs of diagnostic tests, published between 1980 and 2013. Two independent reviewers screened articles for relevance and extracted the data. We performed a qualitative narrative synthesis. Forty-three articles were reviewed. The most frequently studied diagnostic tests were laboratory tests (37.2%) and imaging tests (32.6%). Most were non-invasive tests (51.2%) and were performed in the adult population (48.8%). The intended purposes of the technologies evaluated were mostly diagnostic (69.8%), but diagnosis and treatment and screening, diagnosis, and treatment accounted for 25.6% and 4.7%, respectively. Of the reviewed studies, 12.5% described the methods used to estimate the quantities of resources, 33.3% reported the discount rate applied, and 29.2% listed the type of sensitivity analysis performed. Among the 12 cost-effectiveness analyses, only two studies (17%) referred to the application of formal methods to check the quality of the accuracy studies that provided support for the economic model. The existing Brazilian literature on the HEEs of diagnostic tests exhibited reasonably good performance. However, the following points still require improvement: 1) the methods used to estimate resource quantities and unit costs, 2) the discount rate, 3) descriptions of sensitivity analysis methods, 4) reporting of conflicts of interest, 5) evaluations of the quality of the accuracy studies considered in the cost-effectiveness models, and 6) the incorporation of accuracy measures into sensitivity analyses.
Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M
2015-01-01
Objective Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. Setting All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Primary and secondary outcome measures Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, with the laboratory reference standard, was the primary outcome measure, and analysed by calculating sensitivity, specificity, positive and negative predictive value, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. Results The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes, specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied between good and very good, and for the erythrocyte test between fair and good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice.
Conclusions The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice. PMID:25986635
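The agreement statistics reported here (sensitivity, specificity, predictive values, and Cohen's κ) all derive from a 2×2 cross-tabulation of analyser results against the laboratory reference. A minimal sketch, with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Agreement of a point-of-care result with a laboratory reference."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                      # observed agreement
    # chance agreement expected from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "kappa": (po - pe) / (1 - pe),      # Cohen's kappa
    }

# Hypothetical nitrite counts: analyser positive/negative vs. laboratory standard
m = diagnostic_metrics(tp=40, fp=5, fn=10, tn=145)
```

Note that predictive values, unlike sensitivity and specificity, depend on the prevalence of infection in the tested sample, which is why the study reports both.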
Validation of a patient-centered culturally sensitive health care office staff inventory.
Tucker, Carolyn M; Wall, Whitney; Marsiske, Michael; Nghiem, Khanh; Roncoroni, Julia
2015-09-01
Research suggests that patient-perceived culturally sensitive health care encompasses multiple components of the health care delivery system, including the cultural sensitivity of front desk office staff. Despite this, research on culturally sensitive health care focuses almost exclusively on provider behaviors, attitudes, and knowledge. This is due in part to the paucity of instruments available to assess the cultural sensitivity of front desk office staff. Thus, the objective of the present study is to determine the psychometric properties of the pilot Tucker-Culturally Sensitive Health Care Office Staff Inventory-Patient Form (T-CSHCOSI-PF), an instrument designed to enable patients to evaluate the patient-defined cultural sensitivity of their front desk office staff. A sample of 1648 adult patients was recruited by staff at 67 health care sites across the United States. These patients anonymously completed the T-CSHCOSI-PF, a demographic data questionnaire, and a patient satisfaction questionnaire. Findings Confirmatory factor analyses of the T-CSHCOSI-PF revealed that this inventory has two factors with high internal consistency reliability and validity (Cronbach's αs = 0.97 and 0.95). It is concluded that the T-CSHCOSI-PF is a psychometrically strong and useful inventory for assessing the cultural sensitivity of front desk office staff. This inventory can be used to support culturally sensitive health care research, evaluate the job performance of front desk office staff, and aid in the development of trainings designed to improve the cultural sensitivity of these office staff.
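Internal consistency of the kind reported for the T-CSHCOSI-PF is typically quantified with Cronbach's α. The sketch below computes it from an item-score matrix using synthetic responses; the item count, sample size, and factor loadings are illustrative assumptions, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return k / (k - 1) * (1.0 - item_var / total_var)

# Synthetic responses: 8 items all loading on one latent factor
rng = np.random.default_rng(1)
latent = rng.normal(0.0, 1.0, (500, 1))
items = latent + rng.normal(0.0, 0.5, (500, 8))
alpha = cronbach_alpha(items)
```

With strongly correlated items, as in this synthetic example, α approaches the high values (0.95-0.97) the study reports for its two factors.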
Noben, Cindy; Vilsteren, Myrthe van; Boot, Cécile; Steenbeek, Romy; Schaardenburg, Dirkjan van; Anema, Johannes R; Evers, Silvia; Nijhuis, Frans; Rijk, Angelique de
2017-05-25
Evaluating the cost effectiveness and cost utility of an integrated care intervention and participatory workplace intervention for workers with rheumatoid arthritis (RA) to improve their work productivity. Twelve month follow-up economic evaluation alongside a randomized controlled trial (RCT) within specialized rheumatology treatment centers. Adults aged 18 to 64 years diagnosed with RA, in a paid job for at least eight hours per week, and experiencing minor difficulties in work functioning were randomized to the intervention (n = 75) or the care-as-usual (CAU) group (n = 75). Effect outcomes were productivity and quality of life (QALYs). Costs associated with healthcare, patient and family, productivity, and intervention were calculated from a societal perspective. Cost effectiveness and cost utility were assessed to indicate the incremental costs and benefits per additional unit of effect. Subgroup and sensitivity analyses evaluated the robustness of the findings. At-work productivity loss was about 4.6 hours in the intervention group and 3.5 hours in the CAU group per two weeks. Differences in QALYs were negligible: 0.77 for the CAU group and 0.74 for the intervention group. In total, average costs after twelve months of follow-up were highest in the intervention group (€7,437.76) compared to the CAU group (€5,758.23). The cost-effectiveness and cost-utility analyses show that the intervention was less effective and (often) more expensive when compared to CAU. Sensitivity analyses supported these findings. The integrated care intervention and participatory workplace intervention for workers with RA provides gains neither in productivity at the workplace nor in quality of life. These results do not justify the additional costs.
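A cost-utility comparison like this one reduces to incremental costs and incremental QALYs, with a dominance check before computing an incremental cost-effectiveness ratio (ICER). The sketch below applies that logic to the point estimates quoted above; the helper function is illustrative, not the study's analysis code.

```python
def compare_strategies(cost_new, qaly_new, cost_ref, qaly_ref):
    """Dominance check before reporting an ICER (cost per QALY gained)."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"        # costs more, yields no more health
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant"         # costs less, yields at least as much health
    return f"ICER = {d_cost / d_qaly:.0f} per QALY"

# Point estimates quoted above: intervention vs. care as usual
result = compare_strategies(7437.76, 0.74, 5758.23, 0.77)
```

Because the intervention was both more costly and less effective, it is dominated by care as usual, and no ICER needs to be reported, which is consistent with the authors' conclusion.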
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
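The sensitivities and elasticities of λ discussed here have standard matrix expressions: S_ij = v_i w_j / ⟨v, w⟩ and E_ij = (a_ij / λ) S_ij, where w and v are the right and left eigenvectors of the projection matrix for λ. A minimal sketch with a hypothetical three-stage matrix (not the killer whale data):

```python
import numpy as np

# Hypothetical 3-stage projection matrix: fecundities in the top row,
# survival/transition rates below. Values are illustrative only.
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.8, 0.9]])

vals, W = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                       # finite rate of increase, lambda
w = np.abs(W[:, i].real)                 # stable stage distribution (right eigenvector)

vals_t, V = np.linalg.eig(A.T)
j = np.argmax(vals_t.real)
v = np.abs(V[:, j].real)                 # reproductive values (left eigenvector)

S = np.outer(v, w) / (v @ w)             # sensitivities d(lambda)/d(a_ij)
E = S * A / lam                          # elasticities: proportional sensitivities
```

The elasticities sum to one, which is why they are often read as relative contributions; the scaling problem the paper raises is that this convenience hides the fact that a "proportional change" means different things for survival rates (bounded by 1) and fecundities (unbounded).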
Floor vibration evaluations for medical facilities
NASA Astrophysics Data System (ADS)
Himmel, Chad N.
2003-10-01
The structural floor design for new medical facilities is often selected early in the design phase; in renovation projects, the floor structure already exists. Because the floor structure can influence the location of vibration-sensitive medical equipment and facilities, it is becoming necessary to identify the best locations for equipment and facilities early in the design process. Even though specific criteria for vibration-sensitive uses and equipment may not always be available early in the design phase, it should be possible to determine compatible floor structures for planned vibration-sensitive uses by comparing conceptual layouts with generic floor vibration criteria. Relatively simple evaluations of planned uses and generic criteria, combined with on-site vibration and noise measurements early in the design phase, can significantly reduce future design problems and expense. Concepts of evaluation procedures and analyses will be presented in this paper. Generic floor vibration criteria and appropriate parameters to control resonant floor vibration and noise will be discussed for typical medical facilities and medical research facilities. Physical, economic, and logistical limitations that affect implementation will be discussed through case studies.
Automatic Single Event Effects Sensitivity Analysis of a 13-Bit Successive Approximation ADC
NASA Astrophysics Data System (ADS)
Márquez, F.; Muñoz, F.; Palomo, F. R.; Sanz, L.; López-Morillo, E.; Aguirre, M. A.; Jiménez, A.
2015-08-01
This paper presents the Analog Fault Tolerant University of Seville Debugging System (AFTU), a tool to evaluate the Single-Event Effect (SEE) sensitivity of analog/mixed-signal microelectronic circuits at transistor level. As analog cells can behave in an unpredictable way when a particle strike interacts with critical areas, designers need a software tool that allows an automatic and exhaustive analysis of Single-Event Effect influence. AFTU takes the test-bench SPECTRE design, emulates radiation conditions and automatically evaluates vulnerabilities using user-defined heuristics. To illustrate the utility of the tool, the SEE sensitivity of a 13-bit Successive Approximation Analog-to-Digital Converter (ADC) has been analysed. This circuit was selected not only because it was designed for space applications, but also because a manual SEE sensitivity analysis would be too time-consuming. After a user-defined test campaign, it was detected that some voltage transients were propagated to a node where a parasitic diode was activated, affecting the offset cancellation and therefore the whole resolution of the ADC. A simple modification of the scheme solved the problem, as was verified with another automatic SEE sensitivity analysis.
Satellite Power Systems (SPS) concept definition study. Volume 4: Transportation analysis
NASA Technical Reports Server (NTRS)
Hanley, G. M.
1980-01-01
Transportation system elements were synthesized and evaluated on the basis of their potential to satisfy overall satellite (SPS) transportation requirements and of their sensitivities, interfaces, and impact on the SPS. Additional analyses and investigations were conducted to further define transportation system concepts that will be needed for the developmental and operational phases of an SPS program. To accomplish these objectives, transportation systems such as shuttle and its derivatives have been identified; new heavy lift launch vehicle concepts, cargo and personnel orbital transfer vehicles and intra-orbit transfer vehicle concepts have been evaluated. To a limited degree, the program implications of their operations and costs were assessed. The results of these analyses have been integrated into other elements of the overall SPS concept definition studies.
Sensitivity and Specificity of Polysomnographic Criteria for Defining Insomnia
Edinger, Jack D.; Ulmer, Christi S.; Means, Melanie K.
2013-01-01
Study Objectives: In recent years, polysomnography (PSG)-based eligibility criteria have been increasingly used to identify candidates for insomnia research, and this has been particularly true of studies evaluating pharmacologic therapy for primary insomnia. However, the sensitivity and specificity of PSG for identifying individuals with insomnia are unknown, and there is no consensus on the criteria sets that should be used for participant selection. In the current study, an archival data set was used to test the sensitivity and specificity of PSG measures for identifying individuals with primary insomnia (PI) in both home and lab settings. We then evaluated the sensitivity and specificity of the eligibility criteria employed in a number of recent insomnia trials for identifying primary insomnia sufferers in our sample. Design: Archival data analysis. Settings: Study participants' homes and a clinical sleep laboratory. Participants: Adults: 76 with primary insomnia and 78 non-complaining normal sleepers. Measurements and Results: ROC and cross-tabs analyses were used to evaluate the sensitivity and specificity of PSG-derived total sleep time, latency to persistent sleep, wake after sleep onset, and sleep efficiency for discriminating adults with primary insomnia from normal sleepers. None of the individual criteria accurately discriminated PI from normal sleepers, and none of the criteria sets used in recent trials demonstrated acceptable sensitivity and specificity for identifying primary insomnia. Conclusions: The use of quantitative PSG-based selection criteria in insomnia research may exclude many who meet current diagnostic criteria for an insomnia disorder. Citation: Edinger JD; Ulmer CS; Means MK. Sensitivity and specificity of polysomnographic criteria for defining insomnia. J Clin Sleep Med 2013;9(5):481-491. PMID:23674940
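The ROC-style evaluation described here sweeps a PSG cutoff and records sensitivity and specificity at each value; a common summary is the Youden index (sensitivity + specificity − 1). A sketch on synthetic sleep-efficiency data follows; the group means and spreads are invented, and the heavy overlap mimics the poor discrimination the study reports.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic sleep-efficiency (%) values; the overlap between groups is deliberate.
insomnia = rng.normal(82, 8, 76)   # 76 primary insomnia sufferers
normal = rng.normal(88, 6, 78)     # 78 normal sleepers

def sens_spec(cutoff):
    sens = np.mean(insomnia < cutoff)    # flag insomnia when efficiency is low
    spec = np.mean(normal >= cutoff)
    return sens, spec

# Sweep candidate cutoffs and pick the one maximizing the Youden index J
youden = []
for c in range(70, 96):
    sens, spec = sens_spec(c)
    youden.append((sens + spec - 1, c))
best_j, best_cutoff = max(youden)
```

Even the best cutoff leaves the Youden index well below 1 when the distributions overlap this much, which is the quantitative face of the paper's conclusion that single PSG criteria discriminate poorly.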
Bajer, P.G.; Wildhaber, M.L.
2007-01-01
Demographic models for the shovelnose (Scaphirhynchus platorynchus) and pallid (S. albus) sturgeons in the Lower Missouri River were developed to conduct sensitivity analyses for both populations. Potential effects of increased fishing mortality on the shovelnose sturgeon were also evaluated. Populations of shovelnose and pallid sturgeon were most sensitive to age-0 mortality rates as well as mortality rates of juveniles and young adults. Overall, fecundity was a less sensitive parameter. However, increased fecundity effectively balanced higher mortality among sensitive age classes in both populations. Management that increases population-level fecundity and improves survival of age-0 fish, juveniles, and young adults should most effectively benefit both populations. Evaluation of reproductive values indicated that populations of pallid sturgeon dominated by ages ≥35 could rapidly lose their potential for growth, particularly if recruitment remains low. Under the initial parameter values portraying current conditions, the population of shovelnose sturgeon was predicted to decline by 1.65% annually, causing the commercial yield to also decline. Modeling indicated that the commercial yield could increase substantially if exploitation of females in ages ≤12 was highly restricted.
Martín-Navarro, Carmen M; Lorenzo-Morales, Jacob; Cabrera-Serra, M Gabriela; Rancel, Fernando; Coronado-Alvarez, Nieves M; Piñero, José E; Valladares, Basilio
2008-11-01
Pathogenic strains of the genus Acanthamoeba are causative agents of a serious sight-threatening infection of the eye known as Acanthamoeba keratitis. The prevalence of this infection has risen in the past 20 years, mainly due to the increase in the number of contact lens wearers. In this study, the prevalence of Acanthamoeba in a risk group constituted by asymptomatic contact lens wearers from Tenerife, Canary Islands, Spain, was evaluated. Contact lenses and contact lens cases were analysed for the presence of Acanthamoeba isolates. The isolates' genotypes were also determined after rDNA sequencing. The pathogenic potential of the isolated strains was subsequently established using previously described molecular and biochemical assays, which allowed the selection of three strains with high pathogenic potential. Furthermore, the sensitivity of these isolates against two standard drugs, ciprofloxacin and chlorhexidine, was analysed. As the three selected strains were sensitive to chlorhexidine, its activity and IC50 were evaluated. Chlorhexidine was found to be active against these strains, and the obtained IC50 values were compared to the concentrations of this drug present in contact lens maintenance solutions. It was observed that the measured IC50 was higher than the concentration found in these maintenance solutions. Therefore, the ineffectiveness of chlorhexidine-containing contact lens maintenance solutions against potentially pathogenic strains of Acanthamoeba is demonstrated in this study.
Jensen, Cathrine Elgaard; Riis, Allan; Petersen, Karin Dam; Jensen, Martin Bach; Pedersen, Kjeld Møller
2017-05-01
In connection with the publication of a clinical practice guideline on the management of low back pain (LBP) in general practice in Denmark, a cluster randomised controlled trial was conducted. In this trial, a multifaceted guideline implementation strategy to improve general practitioners' treatment of patients with LBP was compared with a usual implementation strategy. The aim was to determine whether the multifaceted strategy was cost effective compared with the usual implementation strategy. The economic evaluation was conducted as a cost-utility analysis in which costs, collected from a societal perspective, and quality-adjusted life years were used as outcome measures. The analysis was conducted as a within-trial analysis with a 12-month time horizon consistent with the follow-up period of the clinical trial. To adjust for a priori selected covariates, generalised linear models with a gamma family were used to estimate incremental costs and quality-adjusted life years. Furthermore, both deterministic and probabilistic sensitivity analyses were conducted. Results showed that costs associated with primary health care were higher, whereas secondary health care costs were lower, for the intervention group when compared with the control group. When adjusting for covariates, the intervention was less costly, and there was no significant difference in effect between the two groups. Sensitivity analyses showed that the results were sensitive to uncertainty. In conclusion, the multifaceted implementation strategy was cost saving when compared with the usual strategy for implementing LBP clinical practice guidelines in general practice. Furthermore, there was no significant difference in effect, and the estimate was sensitive to uncertainty.
SVDS plume impingement modeling development. Sensitivity analysis supporting level B requirements
NASA Technical Reports Server (NTRS)
Chiu, P. B.; Pearson, D. J.; Muhm, P. M.; Schoonmaker, P. B.; Radar, R. J.
1977-01-01
A series of sensitivity analyses (trade studies) performed to select features and capabilities to be implemented in the plume impingement model is described. Sensitivity analyses were performed in study areas pertaining to geometry, flowfield, impingement, and dynamical effects. Recommendations based on these analyses are summarized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadgu, Teklu; Appel, Gordon John
Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.
NASA Technical Reports Server (NTRS)
Kirshen, N.; Mill, T.
1973-01-01
The effect of formulation components and the addition of fire retardants on the impact sensitivity of Viton B fluoroelastomer in liquid oxygen was studied with the objective of developing a procedure for reliably reducing this sensitivity. Component evaluation, carried out on more than 40 combinations of components and cure cycles, showed that almost all the standard formulation agents, including carbon, MgO, Diak-3, and PbO2, will sensitize the Viton stock either singly or in combinations, some combinations being much more sensitive than others. Cure and postcure treatments usually reduced the sensitivity of a given formulation, often dramatically, but no formulated Viton was as insensitive as the pure Viton B stock. Coating formulated Viton with a thin layer of pure Viton gave some indication of reduced sensitivity, but additional tests are needed. It is concluded that sensitivity in formulated Viton arises from a variety of sources, some physical and some chemical in origin. Elemental analyses for all the formulated Vitons are reported as are the results of a literature search on the subject of LOX impact sensitivity.
Old Wine in New Skins: The Sensitivity of Established Findings to New Methods
ERIC Educational Resources Information Center
Foster, E. Michael; Wiley-Exley, Elizabeth; Bickman, Leonard
2009-01-01
Findings from an evaluation of a model system for delivering mental health services to youth were reassessed to determine the robustness of key findings to the use of methodologies unavailable to the original analysts. These analyses address a key concern about earlier findings--that the quasi-experimental design involved the comparison of two…
A high-resolution dissolved oxygen mass balance model was developed for the Louisiana coastal shelf in the northern Gulf of Mexico. GoMDOM (Gulf of Mexico Dissolved Oxygen Model) was developed to assist in evaluating the impacts of nutrient loading on hypoxia development and exte...
Spelling in Written Stories by School-Age Children with Cochlear Implants
ERIC Educational Resources Information Center
Straley, Sara G.; Werfel, Krystal L.; Hendricks, Alison Eisel
2016-01-01
This study evaluated the spelling of 3rd to 6th grade children with cochlear implants in written stories. Spelling was analysed using traditional correct/incorrect scoring as well as the Spelling Sensitivity Score, which provides linguistic information about spelling attempts. Children with cochlear implants spelled 86 per cent of words in stories…
Nelson, S D; Nelson, R E; Cannon, G W; Lawrence, P; Battistone, M J; Grotzke, M; Rosenblum, Y; LaFleur, J
2014-12-01
This is a cost-effectiveness analysis of training rural providers to identify and treat osteoporosis. Results showed a slight cost savings, an increase in life years, an increase in treatment rates, and a decrease in fracture incidence. However, the results were sensitive to small differences in effectiveness: training was cost-effective in 70% of simulations during probabilistic sensitivity analysis. We evaluated the cost-effectiveness of training rural providers to identify and treat veterans at risk for fragility fractures relative to referring these patients to an urban medical center for specialist care. The model evaluated the impact of training on patient life years, quality-adjusted life years (QALYs), treatment rates, fracture incidence, and costs from the perspective of the Department of Veterans Affairs. We constructed a Markov microsimulation model to compare costs and outcomes of a hypothetical cohort of veterans seen by rural providers. Parameter estimates were derived from previously published studies, and we conducted one-way and probabilistic sensitivity analyses on the parameter inputs. Base-case analysis showed that training resulted in no additional costs and an extra 0.083 life years (0.054 QALYs). Our model projected that as a result of training, more patients with osteoporosis would receive treatment (81.3 vs. 12.2%), and all patients would have a lower incidence of fractures per 1,000 patient-years (hip, 1.628 vs. 1.913; clinical vertebral, 0.566 vs. 1.037) when seen by a trained provider compared to an untrained provider. Results remained consistent in one-way sensitivity analysis, and in probabilistic sensitivity analyses training rural providers was cost-effective (less than $50,000/QALY) in 70% of the simulations. Training rural providers to identify and treat veterans at risk for fragility fractures has the potential to be cost-effective, but the results are sensitive to small differences in effectiveness.
It appears that provider education alone is not enough to make a significant difference in fragility fracture rates among veterans.
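The Markov cohort approach behind such cost-effectiveness comparisons can be sketched in miniature. The sketch below tracks a cohort through well/fractured/dead states and compares a "trained" scenario (lower fracture probability) against an "untrained" one; all transition probabilities, costs, and utilities are invented placeholders, not the study's parameters.

```python
# Minimal Markov cohort sketch of a cost-effectiveness comparison.
# All probabilities, costs, and utilities below are illustrative
# placeholders, NOT the parameters of the study described above.

def run_cohort(p_fracture, cycles=20, cohort=1000):
    """Track a cohort through well -> fractured -> dead states (annual cycles)."""
    p_death_well, p_death_fracture = 0.02, 0.10
    cost_fracture, u_well, u_fracture = 15000.0, 0.85, 0.60
    well, fractured = float(cohort), 0.0
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        new_fractures = well * p_fracture
        well -= new_fractures + well * p_death_well
        fractured += new_fractures - fractured * p_death_fracture
        total_cost += new_fractures * cost_fracture
        total_qaly += well * u_well + fractured * u_fracture
    return total_cost / cohort, total_qaly / cohort

# Training is modeled only as a lower annual fracture probability.
cost_untrained, qaly_untrained = run_cohort(p_fracture=0.020)
cost_trained, qaly_trained = run_cohort(p_fracture=0.015)
icer = (cost_trained - cost_untrained) / (qaly_trained - qaly_untrained)
print(f"Incremental cost per QALY: {icer:,.0f}")
```

With these toy inputs the trained scenario both saves money and gains QALYs, so the incremental cost-effectiveness ratio is negative, i.e. training "dominates", mirroring the qualitative base-case result reported above.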
Satellite Power System: Concept development and evaluation program. Volume 7: Space transportation
NASA Technical Reports Server (NTRS)
1981-01-01
During the several phases of the satellite power system (SPS) concept definition study, various transportation system elements were synthesized and evaluated on the basis of their potential to satisfy overall SPS transportation requirements and their sensitivities, interfaces, and impact on the SPS. Additional analyses and investigations were conducted to further define transportation system concepts that will be needed for the developmental and operational phases of an SPS program. To accomplish these objectives, transportation systems such as the shuttle and its derivatives were identified; new heavy-lift launch vehicle (HLLV) concepts, cargo and personnel orbital transfer vehicles (COTV and POTV), and intra-orbit transfer vehicle (IOTV) concepts were evaluated; and, to a limited degree, the program implications of their operations and costs were assessed. The results of these analyses were integrated into other elements of the overall SPS concept definition studies.
Matsunaga, Hiroko; Goto, Mari; Arikawa, Koji; Shirai, Masataka; Tsunoda, Hiroyuki; Huang, Huan; Kambara, Hideki
2015-02-15
Analyses of gene expression in single cells are important for understanding detailed biological phenomena. Here, a highly sensitive and accurate sequencing-based method (called "bead-seq") for obtaining the whole gene expression profile of a single cell is proposed. A key feature of the method is the use of a complementary DNA (cDNA) library on magnetic beads, which enables adding washing steps to remove residual reagents during sample preparation. By adding the washing steps, the subsequent steps can be carried out under optimal conditions without losing cDNAs. Error sources were carefully evaluated, leading to the conclusion that the first several steps of sample preparation are the critical ones. It is demonstrated that bead-seq is superior to conventional methods for single-cell gene expression analyses in terms of reproducibility, quantitative accuracy, and biases caused during the sample preparation and sequencing processes. Copyright © 2014 Elsevier Inc. All rights reserved.
Compliance and stress sensitivity of spur gear teeth
NASA Technical Reports Server (NTRS)
Cornell, R. W.
1983-01-01
The magnitude and variation of tooth-pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time-history, interactive, closed-form solution for the dynamic tooth loads of both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity of three involute tooth forms as a function of load position. The compliance analysis incorporates an improved fillet/foundation model. The stress sensitivity analysis is a modified version of the Heywood method, with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation against test, finite element, and analytic transformation results, which showed good agreement.
Uhlig, Annemarie; Strauss, Arne; Seif Amir Hosseini, Ali; Lotz, Joachim; Trojan, Lutz; Schmid, Marianne; Uhlig, Johannes
2017-09-06
The incidence of urothelial carcinoma of the bladder (UCB) is lower in women; however, women tend to present with more advanced disease. To date, there is no quantitative synthesis of studies reporting gender-specific outcomes in non-muscle-invasive UCB. To conduct a meta-analysis evaluating gender-specific differences in recurrence of non-muscle-invasive urinary bladder cancer (NMIBC). An unrestricted systematic literature search of the MEDLINE, EMBASE, and Cochrane libraries was conducted. Studies evaluating the impact of gender on disease recurrence after local treatment of NMIBC using multivariable Cox proportional hazard models were included. Random effect meta-analysis, subgroup analyses, meta-influence, and cumulative meta-analyses were conducted. Publication bias was assessed via a funnel plot and Egger's test. Of 609 studies screened, 27 comprising 23 754 patients were included. Random effect meta-analyses indicated women at increased risk for UCB recurrence compared with men (hazard ratio [HR]=1.11, 95% confidence interval [CI]: 1.01-1.23, p=0.03). Subgroup analyses yielded estimates between HR=0.99 and HR=1.68. Gender-specific differences in UCB recurrence were most pronounced in studies administering exclusively bacillus Calmette-Guerin (BCG; HR=1.64, 95% CI: 1.13-2.39, p=0.01), especially in a long-term treatment regimen (HR=1.68, 95% CI: 1.32-2.15, p<0.001). Sensitivity analyses confirmed female patients at increased risk for UCB recurrence. Women are at increased risk for disease recurrence after local treatment of NMIBC compared with male patients. Reduced effectiveness of BCG treatment might underlie this observation. Gender-specific differences were evident across various subgroups and proved robust upon sensitivity analyses. In this report, we combined several studies on gender-specific differences in relapse of superficial bladder cancer. Women were more likely to experience cancer relapse than men.
Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.
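The random-effects pooling used in meta-analyses like the one above is commonly done with the DerSimonian-Laird estimator on log hazard ratios. A minimal sketch, assuming invented (log-HR, standard-error) pairs rather than the 27 included studies:

```python
# DerSimonian-Laird random-effects pooling of study log hazard ratios.
# The (log-HR, standard-error) pairs below are made-up illustrations,
# not the studies included in the meta-analysis described above.
import math

studies = [(math.log(1.05), 0.10), (math.log(1.30), 0.15),
           (math.log(0.95), 0.20), (math.log(1.20), 0.12)]

w_fixed = [1 / se**2 for _, se in studies]
pooled_fixed = sum(w * y for w, (y, _) in zip(w_fixed, studies)) / sum(w_fixed)
q = sum(w * (y - pooled_fixed)**2 for w, (y, _) in zip(w_fixed, studies))
df = len(studies) - 1
c = sum(w_fixed) - sum(w**2 for w in w_fixed) / sum(w_fixed)
tau2 = max(0.0, (q - df) / c)              # between-study variance estimate

w_rand = [1 / (se**2 + tau2) for _, se in studies]
pooled = sum(w * y for w, (y, _) in zip(w_rand, studies)) / sum(w_rand)
se_pooled = math.sqrt(1 / sum(w_rand))
lo = math.exp(pooled - 1.96 * se_pooled)
hi = math.exp(pooled + 1.96 * se_pooled)
print(f"HR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The heterogeneity statistic Q feeds the between-study variance tau2; when tau2 is zero the random-effects result collapses to the fixed-effect one.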
Coal gasification systems engineering and analysis, volume 2
NASA Technical Reports Server (NTRS)
1980-01-01
The major design-related features of each generic plant system were characterized in a catalog. Based on the catalog and requirements data, approximately 17 designs and cost estimates were developed for MBG and alternate products. A series of generic trade studies was conducted to support all of the design studies. A set of cost and programmatic analyses was conducted to supplement the designs. The cost methodology employed for the design and sensitivity studies was documented and implemented in a computer program. Plant design and construction schedules were developed for the K-T, Texaco, and B&W MBG plant designs. A generic work breakdown structure was prepared, based on the K-T design, to coincide with TVA's planned management approach. An extensive set of cost sensitivity analyses was completed for the K-T, Texaco, and B&W designs. Product price competitiveness was evaluated for MBG and the alternate products. A draft management policy and procedures manual was evaluated. A supporting technology development plan was developed to address high-technology-risk issues. The issues were identified and ranked in terms of importance and tractability, and a plan was developed for obtaining data or developing the technology required to mitigate the risk.
Optical imaging of RNAi-mediated silencing of cancer
NASA Astrophysics Data System (ADS)
Ochiya, Takahiro; Honma, Kimi; Takeshita, Fumitaka; Nagahara, Shunji
2008-02-01
RNAi has rapidly become a powerful tool for drug target discovery and validation in an in vitro culture system and, consequently, interest is rapidly growing for extension of its application to in vivo systems, such as animal disease models and human therapeutics. Cancer is one obvious application for RNAi therapeutics, because abnormal gene expression is thought to contribute to the pathogenesis and maintenance of the malignant phenotype of cancer and thereby many oncogenes and cell-signaling molecules present enticing drug target possibilities. RNAi, potent and specific, could silence tumor-related genes and would appear to be a rational approach to inhibit tumor growth. In subsequent in vivo studies, the appropriate cancer model must be developed for an evaluation of siRNA effects on tumors. How to evaluate the effect of siRNA in an in vivo therapeutic model is also important. Accelerating the analyses of these models and improving their predictive value through whole animal imaging methods, which provide cancer inhibition in real time and are sensitive to subtle changes, are crucial for rapid advancement of these approaches. Bioluminescent imaging is one of these optically based imaging methods that enable rapid in vivo analyses of a variety of cellular and molecular events with extreme sensitivity.
Evaluation of an optoacoustic based gas analysing device
NASA Astrophysics Data System (ADS)
Markmann, Janine; Lange, Birgit; Theisen-Kunde, Dirk; Danicke, Veit; Mayorov, Fedor; Eckert, Sebastian; Kettmann, Pascal; Brinkmann, Ralf
2017-07-01
The relative occurrence of volatile organic compounds in human respiratory gas is disease-specific (ppb range). A prototype of a gas analysing device using two tuneable laser systems, an OPO laser (2.5 to 10 μm) and a CO2 laser (9 to 11 μm), and an optoacoustic measurement cell was developed to detect concentrations in the ppb range. The sensitivity and resolution of the system were determined by test gas measurements, measuring ethylene and sulfur hexafluoride with the CO2 laser and butane with the OPO laser. The system sensitivity was found to be 13 ppb for sulfur hexafluoride, 17 ppb for ethylene, and <10 ppb for butane, with a resolution of at least 50 ppb for sulfur hexafluoride. Respiratory gas samples of 8 healthy volunteers were investigated by irradiation with 17 laser lines of the CO2 laser. Several of those lines overlap with strong absorption bands of ammonia. As ammonia concentration is known to increase with age, a separation of subjects younger and older than 35 years was attempted. To evaluate the data, the first seven gas samples were used to train a discriminant analysis algorithm. The eighth subject, aged 49 years, was then correctly assigned to the >35 group.
A Circular Microstrip Antenna Sensor for Direction Sensitive Strain Evaluation.
Lopato, Przemyslaw; Herbko, Michal
2018-01-20
In this paper, a circular microstrip antenna for stress evaluation is studied. This kind of microstrip sensor can be utilized in structural health monitoring systems. The reflection coefficient S11 is measured to determine the deformation/strain value. The proposed sensor is adhesively bonded to the studied sample. Applied strain changes the patch geometry and influences the current distribution in both the patch and the ground plane. Changing the current flow in the patch shifts the resonant frequency. In this paper, two different resonant frequencies were analysed because each exhibits a different current distribution in the patch. The sensor was designed for an operating frequency of 2.5 GHz (at the fundamental mode), which results in a diameter of less than 55 mm. The obtained sensitivity was up to 1 MHz/100 MPa; the resolution depends on the vector network analyser used. Moreover, the directional characteristics for both resonant frequencies were defined, studied using a numerical model, and verified by measurements. Thus far, microstrip antennas have been used in deformation measurement only when the direction of the external force was well known. The obtained directional characteristics of the sensor allow the determination of both the direction and the value of stress with a single sensor. This method of measurement can be an alternative to the rosette strain gauge.
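The stated size is consistent with the standard cavity-model approximation for the dominant TM11 mode of a circular patch (a textbook formula, not necessarily the design equation the authors used). A sketch, assuming an example substrate permittivity of 4.4 (FR4) and neglecting fringing:

```python
# Cavity-model estimate of the radius of a circular microstrip patch
# resonant at a given frequency (dominant TM11 mode). The substrate
# permittivity (FR4, eps_r = 4.4) is an assumed example value, and
# fringing-field corrections are ignored.
import math

C = 299_792_458.0          # speed of light, m/s
K11 = 1.8412               # first zero of J1'(x), TM11 mode

def patch_radius_for(f_res_hz, eps_r):
    """Physical radius (m) giving resonance at f_res_hz."""
    return K11 * C / (2 * math.pi * f_res_hz * math.sqrt(eps_r))

a = patch_radius_for(2.5e9, eps_r=4.4)
print(f"radius = {a*1000:.1f} mm, diameter = {2*a*1000:.1f} mm")
```

For 2.5 GHz this gives a diameter of roughly 34 mm, comfortably below the 55 mm bound quoted above; a lower-permittivity substrate would give a larger patch.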
Macera, Annalisa; Lario, Chiara; Petracchini, Massimo; Gallo, Teresa; Regge, Daniele; Floriani, Irene; Ribero, Dario; Capussotti, Lorenzo; Cirillo, Stefano
2013-03-01
To compare the diagnostic accuracy and sensitivity of Gd-EOB-DTPA MRI and diffusion-weighted imaging (DWI), alone and in combination, for detecting colorectal liver metastases in patients who had undergone preoperative chemotherapy. Thirty-two consecutive patients with a total of 166 liver lesions were retrospectively enrolled. Of the lesions, 144 (86.8%) were metastatic at pathology. Three image sets (1, Gd-EOB-DTPA; 2, DWI; 3, combined Gd-EOB-DTPA and DWI) were independently reviewed by two observers. Statistical analysis was performed on a per-lesion basis. Evaluation of image set 1 correctly identified 127/166 lesions (accuracy 76.5%; 95% CI 69.3-82.7) and 106/144 metastases (sensitivity 73.6%, 95% CI 65.6-80.6). Evaluation of image set 2 correctly identified 108/166 lesions (accuracy 65.1%, 95% CI 57.3-72.3) and 87/144 metastases (sensitivity 60.4%, 95% CI 51.9-68.5). Evaluation of image set 3 correctly identified 148/166 lesions (accuracy 89.2%, 95% CI 83.4-93.4) and 131/144 metastases (sensitivity 91%, 95% CI 85.1-95.1). Differences were statistically significant (P < 0.001). Notably, similar results were obtained when analysing only small lesions (<1 cm). The combination of DWI with Gd-EOB-DTPA-enhanced MR imaging significantly increases diagnostic accuracy and sensitivity in patients with colorectal liver metastases treated with preoperative chemotherapy, and it is particularly effective in the detection of small lesions.
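Per-lesion sensitivities with confidence intervals like those above come directly from the detection counts. A sketch using the Wilson score interval (an approximation; the paper's own intervals may have been computed with a different method, e.g. exact binomial):

```python
# Wilson score 95% interval for a binomial proportion, applied to the
# combined image set's per-lesion sensitivity (131 of 144 metastases).
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

sens = 131 / 144
ci_lo, ci_hi = wilson_ci(131, 144)
print(f"sensitivity {sens:.1%}, 95% CI {ci_lo:.1%}-{ci_hi:.1%}")
```

The result (about 91%, CI roughly 85-95%) agrees closely with the figures reported for image set 3.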
Accurate clinical detection of exon copy number variants in a targeted NGS panel using DECoN.
Fowler, Anna; Mahamdallie, Shazia; Ruark, Elise; Seal, Sheila; Ramsay, Emma; Clarke, Matthew; Uddin, Imran; Wylie, Harriet; Strydom, Ann; Lunter, Gerton; Rahman, Nazneen
2016-11-25
Background: Targeted next generation sequencing (NGS) panels are increasingly being used in clinical genomics to increase capacity, throughput and affordability of gene testing. Identifying whole exon deletions or duplications (termed exon copy number variants, 'exon CNVs') in exon-targeted NGS panels has proved challenging, particularly for single exon CNVs. Methods: We developed a tool for the Detection of Exon Copy Number variants (DECoN), which is optimised for analysis of exon-targeted NGS panels in the clinical setting. We evaluated DECoN performance using 96 samples with independently validated exon CNV data. We performed simulations to evaluate DECoN detection performance of single exon CNVs and to evaluate performance using different coverage levels and sample numbers. Finally, we implemented DECoN in a clinical laboratory that tests BRCA1 and BRCA2 with the TruSight Cancer Panel (TSCP). We used DECoN to analyse 1,919 samples, validating exon CNV detections by multiplex ligation-dependent probe amplification (MLPA). Results: In the evaluation set, DECoN achieved 100% sensitivity and 99% specificity for BRCA exon CNVs, including identification of 8 single exon CNVs. DECoN also identified 14/15 exon CNVs in 8 other genes. Simulations of all possible BRCA single exon CNVs gave a mean sensitivity of 98% for deletions and 95% for duplications. DECoN performance remained excellent with different levels of coverage and sample numbers; sensitivity and specificity was >98% with the typical NGS run parameters. In the clinical pipeline, DECoN automatically analyses pools of 48 samples at a time, taking 24 minutes per pool, on average. DECoN detected 24 BRCA exon CNVs, of which 23 were confirmed by MLPA, giving a false discovery rate of 4%. Specificity was 99.7%. Conclusions: DECoN is a fast, accurate, exon CNV detection tool readily implementable in research and clinical NGS pipelines. It has high sensitivity and specificity and acceptable false discovery rate. 
DECoN is freely available at www.icr.ac.uk/decon.
SUNBURN, SUN EXPOSURE, AND SUN SENSITIVITY IN THE STUDY OF NEVI IN CHILDREN
Satagopan, Jaya M; Oliveria, Susan A; Arora, Arshi; Marchetti, Michael A; Orlow, Irene; Dusza, Stephen W; Weinstock, Martin A; Scope, Alon; Geller, Alan C; Marghoob, Ashfaq A; Halpern, Allan C
2015-01-01
Purpose To examine the joint effect of sun exposure and sunburn on nevus counts (on the natural logarithm scale; log nevi) and the role of sun sensitivity. Methods We describe an analysis of cross-sectional data from 443 children enrolled in the prospective study of nevi in children. To evaluate the joint effect, we partitioned the sum of squares due to interaction between sunburn and sun exposure into orthogonal components representing: (i) monotonic increase in log nevi with increasing sun exposure (rate of increase of log nevi depends upon sunburn), and (ii) non-monotonic pattern. Results In unadjusted analyses, there was a marginally significant monotonic pattern of interaction (p-value = 0.08). In adjusted analyses, sun exposure was associated with higher log nevi among those without sunburn (p < 0.001), but not among those with sunburn (p = 0.14). Sunburn was independently associated with log nevi (p = 0.02), even though sun sensitivity explained 29% (95% CI: 2%-56%, p = 0.04) of its effect. Children with high sun sensitivity and sunburn had more nevi, regardless of sun exposure. Conclusions A program of increasing sun protection in early childhood as a strategy for reducing nevi, when applied to the general population, may not equally benefit everyone. PMID:26096189
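The partition of the interaction sum of squares into a monotonic and a non-monotonic component can be illustrated with orthogonal polynomial contrasts on a toy balanced two-way layout. The cell means, cell size, and contrasts below are invented for illustration; the study's actual data are not reproduced.

```python
# Partitioning a sunburn x sun-exposure interaction SS into orthogonal
# 1-df components: a linear (monotonic trend) term and a quadratic
# (non-monotonic) term, using polynomial contrasts on a balanced design.
# cell_means[burn][exposure], exposure levels low/medium/high.
cell_means = [[2.0, 2.4, 2.8],    # no sunburn: roughly linear rise
              [2.6, 3.4, 3.6]]    # sunburn: steeper, slightly curved
n = 10                            # observations per cell (balanced)

linear, quad = [-1, 0, 1], [1, -2, 1]   # orthogonal polynomial contrasts

def interaction_component(contrast):
    # Difference of the exposure contrast between sunburn groups...
    est = (sum(c * m for c, m in zip(contrast, cell_means[1]))
           - sum(c * m for c, m in zip(contrast, cell_means[0])))
    # ...converted to a 1-df sum of squares for this interaction contrast.
    return n * est**2 / (2 * sum(c * c for c in contrast))

ss_lin = interaction_component(linear)   # "rate of increase depends on sunburn"
ss_quad = interaction_component(quad)    # non-monotonic pattern
print(f"SS(linear x sunburn) = {ss_lin:.3f}, SS(quad x sunburn) = {ss_quad:.3f}")
```

Because the two contrasts are orthogonal, the two 1-df components sum to the full 2-df interaction sum of squares, which is what lets the monotonic pattern be tested separately.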
Hypoglycemia alarm enhancement using data fusion.
Skladnev, Victor N; Tarnavskii, Stanislav; McGregor, Thomas; Ghevondian, Nejhdeh; Gourlay, Steve; Jones, Timothy W
2010-01-01
The acceptance of closed-loop blood glucose (BG) control using continuous glucose monitoring systems (CGMS) is likely to improve with enhanced performance of their integral hypoglycemia alarms. This article presents an in silico analysis (based on clinical data) of a modeled CGMS alarm system with trained thresholds on type 1 diabetes mellitus (T1DM) patients that is augmented by sensor fusion from a prototype hypoglycemia alarm system (HypoMon). This prototype alarm system is based on largely independent autonomic nervous system (ANS) response features. Alarm performance was modeled using overnight BG profiles recorded previously on 98 T1DM volunteers. These data included the corresponding ANS response features detected by HypoMon (AiMedics Pty. Ltd.) systems. CGMS data and alarms were simulated by applying a probabilistic model to these overnight BG profiles. The probabilistic model developed used a mean response delay of 7.1 minutes, measurement error offsets on each sample of +/- standard deviation (SD) = 4.5 mg/dl (0.25 mmol/liter), and vertical shifts (calibration offsets) of +/- SD = 19.8 mg/dl (1.1 mmol/liter). Modeling produced 90 to 100 simulated measurements per patient. Alarm systems for all analyses were optimized on a training set of 46 patients and evaluated on the test set of 56 patients. The split between the sets was based on enrollment dates. Optimization was based on detection accuracy but not time to detection for these analyses. The contribution of this form of data fusion to hypoglycemia alarm performance was evaluated by comparing the performance of the trained CGMS and fused data algorithms on the test set under the same evaluation conditions. The simulated addition of HypoMon data produced an improvement in CGMS hypoglycemia alarm performance of 10% at equal specificity. Sensitivity improved from 87% (CGMS as stand-alone measurement) to 97% for the enhanced alarm system. Specificity was maintained constant at 85%. 
Positive predictive values on the test set improved from 61 to 66% with negative predictive values improving from 96 to 99%. These enhancements were stable within sensitivity analyses. Sensitivity analyses also suggested larger performance increases at lower CGMS alarm performance levels. Autonomic nervous system response features provide complementary information suitable for fusion with CGMS data to enhance nocturnal hypoglycemia alarms. 2010 Diabetes Technology Society.
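The probabilistic CGMS error model described above (per-night calibration offset plus per-sample measurement noise) can be sketched on synthetic data. The BG traces, threshold, and cohort below are invented; only the two noise SDs (19.8 and 4.5 mg/dl) are taken from the text.

```python
# Toy version of the simulated-CGMS error model: each overnight BG
# trace gets one random calibration offset (SD 19.8 mg/dl) plus
# independent per-sample noise (SD 4.5 mg/dl); an alarm fires when any
# simulated reading drops below a threshold. Traces are synthetic.
import random

random.seed(1)
THRESHOLD = 80.0                        # mg/dl; illustrative alarm threshold

def simulate_alarm(trace):
    offset = random.gauss(0.0, 19.8)    # per-night calibration error
    readings = [bg + offset + random.gauss(0.0, 4.5) for bg in trace]
    return min(readings) < THRESHOLD

# Synthetic cohort: half dip into hypoglycemia overnight, half do not.
hypo_traces = [[90 - i for i in range(30)] for _ in range(50)]     # min 61
safe_traces = [[110 + (i % 5) for i in range(30)] for _ in range(50)]

tp = sum(simulate_alarm(t) for t in hypo_traces)
fp = sum(simulate_alarm(t) for t in safe_traces)
print(f"sensitivity {tp/50:.0%}, specificity {(50-fp)/50:.0%}")
```

The large calibration-offset SD dominates the error budget here, which is why sensor fusion with an independent signal (as in the study) can recover sensitivity without sacrificing specificity.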
Lin, Yuning; Chen, Ziqian; Yang, Xizhang; Zhong, Qun; Zhang, Hongwen; Yang, Li; Xu, Shangwen; Li, Hui
2013-12-01
The aim of this study is to evaluate the diagnostic performance of multidetector CT angiography (CTA) in depicting bronchial and non-bronchial systemic arteries in patients with haemoptysis and to assess whether this modality helps determine the feasibility of angiographic embolisation. Fifty-two patients with haemoptysis between January 2010 and July 2011 underwent both preoperative multidetector CTA and digital subtraction angiography (DSA) imaging. Diagnostic performance of CTA in depicting arteries causing haemoptysis was assessed on a per-patient and a per-artery basis. The feasibility of the endovascular treatment evaluated by CTA was analysed. Sensitivity, specificity, and positive and negative predictive values for those analyses were determined. Fifty patients were included in the artery-presence-number analysis. In the per-patient analysis, neither CTA (P = 0.25) nor DSA (P = 1.00) showed statistical difference in the detection of arteries causing haemoptysis. The sensitivity, specificity, and positive and negative predictive values were 94%, 100%, 100%, and 40%, respectively, for the presence of pathologic arteries evaluated by CTA, and 98%, 100%, 100%, and 67%, respectively, for DSA. On the per-artery basis, CTA correctly identified 97% (107/110). Fifty-two patients were included in the feasibility analysis. The performance of CTA in predicting the feasibility of angiographic embolisation was not statistically different from the treatment performed (P = 1.00). The sensitivity, specificity, and positive and negative predictive values were 96%, 80%, 98% and 67%, respectively, for CTA. Multidetector CTA is an accurate imaging method in depicting the presence and number of arteries causing haemoptysis. This modality is also useful for determining the feasibility of angiographic embolisation for haemoptysis. © 2013 The Authors. Journal of Medical Imaging and Radiation Oncology © 2013 The Royal Australian and New Zealand College of Radiologists.
Ballesteros Peña, Sendoa
2013-04-01
To estimate the frequency of therapeutic errors and to evaluate the diagnostic accuracy of automated external defibrillators in recognizing shockable rhythms. A retrospective descriptive study. Nine basic life support units from Biscay (Spain). Included 201 patients with cardiac arrest, from 2006 to 2011. The suitability of the treatment delivered (shock or no shock) after each analysis was assessed and errors were identified. The sensitivity, specificity, and predictive values with 95% confidence intervals were then calculated. A total of 811 electrocardiographic rhythm analyses were obtained, of which 120 (14.1%), from 30 patients, corresponded to shockable rhythms. Sensitivity and specificity for appropriate automated external defibrillator management of a shockable rhythm were 85% (95% CI, 77.5% to 90.3%) and 100% (95% CI, 99.4% to 100%), respectively. Positive and negative predictive values were 100% (95% CI, 96.4% to 100%) and 97.5% (95% CI, 96% to 98.4%), respectively. There were 18 (2.2%; 95% CI, 1.3% to 3.5%) errors associated with defibrillator management, all relating to cases of shockable rhythms that were not shocked. One error was operator-dependent, 6 were defibrillator-dependent (caused by interaction with pacemakers), and 11 were unclassified. Automated external defibrillators have a very high specificity and moderately high sensitivity. There are few operator-dependent errors. Implanted pacemakers interfere with defibrillator analyses. Copyright © 2012 Elsevier España, S.L. All rights reserved.
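The headline figures above follow directly from a 2x2 table built from the counts in the abstract: 120 shockable analyses with 18 missed shocks, and no inappropriate shocks among the remaining 691 analyses.

```python
# Recomputing the AED performance metrics from the counts reported in
# the abstract (811 analyses, 120 shockable, 18 missed, 0 false shocks).
tp, fn = 120 - 18, 18        # shockable rhythms correctly / incorrectly handled
tn, fp = 811 - 120, 0        # non-shockable analyses; no false shocks reported

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens {sensitivity:.1%}  spec {specificity:.1%}  "
      f"PPV {ppv:.1%}  NPV {npv:.1%}")
```

This reproduces the reported point estimates: sensitivity 85%, specificity 100%, PPV 100%, NPV 97.5%.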
Diagnostic accuracy of physical examination for anterior knee instability: a systematic review.
Leblanc, Marie-Claude; Kowalczuk, Marcin; Andruszkiewicz, Nicole; Simunovic, Nicole; Farrokhyar, Forough; Turnbull, Travis Lee; Debski, Richard E; Ayeni, Olufemi R
2015-10-01
Determining the diagnostic accuracy of the Lachman, pivot shift, and anterior drawer tests versus gold-standard diagnosis (magnetic resonance imaging or arthroscopy) for anterior cruciate ligament (ACL) insufficiency, and secondarily evaluating the effects of chronicity, partial rupture, and awake versus anaesthetized evaluation. Searching MEDLINE, EMBASE, and PubMed identified studies on diagnostic accuracy for ACL insufficiency. Study identification and data extraction were performed in duplicate. Quality assessment used the QUADAS tool, and statistical analyses were completed for pooled sensitivity and specificity. Eight studies were included. Given insufficient data, pooled analysis was only possible for the sensitivity of the Lachman and pivot shift tests. During awake evaluation, sensitivity for the Lachman test was 89% (95% CI 0.76, 0.98) for all rupture types, 96% (95% CI 0.90, 1.00) for complete ruptures and 68% (95% CI 0.25, 0.98) for partial ruptures. For the pivot shift in awake evaluation, results were 79% (95% CI 0.63, 0.91) for all rupture types, 86% (95% CI 0.68, 0.99) for complete ruptures and 67% (95% CI 0.47, 0.83) for partial ruptures. The decreased sensitivity of the Lachman and pivot shift tests for partial ruptures and for awake patients raises concern about the accuracy of these tests for diagnosis of ACL insufficiency. This may lead to further research aiming to improve understanding of the true accuracy of these physical diagnostic tests and to increase the reliability of clinical investigation for this pathology. Level of evidence: IV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruan, D; Shao, W; Low, D
Purpose: To evaluate and test the hypothesis that plan quality may be systematically affected by treatment delivery technique and by the target-to-critical-structure geometric relationship in radiotherapy for brain tumor. Methods: Thirty-four consecutive brain tumor patients treated between 2011-2014 were analyzed. Among this cohort, 10 were planned with 3DCRT, 11 with RapidArc, and 13 with helical IMRT on TomoTherapy. The selected dosimetric endpoints (i.e., PTV V100 and maximum brainstem/chiasm/optic nerve doses) were considered as a vector in a high-dimensional space. A Pareto analysis was performed to identify the subset of Pareto-efficient plans. The geometric relationships, specifically the overlapping volume and the centroid-of-mass distance between each critical structure and the PTV, were extracted as potential geometric features. The classification-tree analyses were repeated using these geometric features with and without the treatment modality as an additional categorical predictor. In both scenarios, the dominant features prognosticating Pareto membership were identified and the tree structures providing optimal inference were recorded. The classification performance was further analyzed to determine the role of treatment modality in affecting plan quality. Results: Seven Pareto-efficient plans were identified based on the dosimetric endpoints (3 from 3DCRT, 3 from RapidArc, 1 from TomoTherapy), which implies that the evaluated treatment modality may have only a minor influence on plan quality. Classification trees with and without the treatment modality as a predictor both achieved an accuracy of 88.2%: 100% sensitivity and 87.1% specificity for the former, and 66.7% sensitivity and 96.0% specificity for the latter. The coincidence of accuracy from both analyses further indicates no-to-weak dependence of plan quality on treatment modality. Both analyses identified the brainstem-to-PTV distance as the primary predictive feature for Pareto efficiency.
Conclusion: Pareto evaluation and classification-tree analyses indicate that plan quality for brain tumors depends strongly on geometry, specifically the PTV-to-brainstem distance, but minimally on treatment modality.
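The Pareto screening step described in this abstract amounts to a dominance check over the plans' dosimetric endpoint vectors. A minimal sketch follows; the endpoint vectors are hypothetical illustrations (PTV V100 is negated so that lower is uniformly better), not the study's data.

```python
# Sketch of Pareto screening: each plan is a vector of dosimetric endpoints,
# all oriented so that lower is better (e.g. negate PTV V100). A plan is
# Pareto-efficient if no other plan is at least as good on every endpoint
# and strictly better on at least one.

def pareto_efficient(plans):
    """Return indices of non-dominated plans (all endpoints minimized)."""
    efficient = []
    for i, p in enumerate(plans):
        dominated = any(
            all(q[k] <= p[k] for k in range(len(p))) and
            any(q[k] < p[k] for k in range(len(p)))
            for j, q in enumerate(plans) if j != i
        )
        if not dominated:
            efficient.append(i)
    return efficient

# Hypothetical endpoint vectors: (-PTV V100 %, max brainstem Gy, max chiasm Gy)
plans = [(-98.0, 52.0, 48.0), (-97.0, 45.0, 50.0), (-95.0, 55.0, 52.0)]
print(pareto_efficient(plans))  # → [0, 1]; plan 2 is dominated by plan 0
```

The resulting Pareto membership is then what the classification trees are trained to predict from the geometric features.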
Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C
2017-12-01
To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2 values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared with the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD plus 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80%, and showed a diagnostic performance similar to LLC in receiver-operating-characteristic analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in one model resulted in a diagnostic performance superior to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters offer additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to the traditional Lake Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to the Lake Louise criteria. • Adding maxT2 and madSD to LGE further increases diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
Magyar, Caroline I; Pandolfi, Vincent; Dill, Charles A
2012-02-01
This study investigated the psychometric properties of the Social Communication Questionnaire (SCQ) in a sample of children with Down syndrome (DS), many of whom had a co-occurring autism spectrum disorder (ASD). The SCQ is a widely used ASD screening measure; however, its measurement properties have not been comprehensively evaluated specifically in children with DS, a group that seems to be at higher risk for an ASD. Exploratory and confirmatory factor analyses, scale reliability, convergent and discriminant correlations, significance tests between groups of children with DS and DS + ASD, and diagnostic accuracy analyses were conducted. Factor analyses identified 2 reliable factors that we labeled Social-Communication and Stereotyped Behavior and Unusual Interests. Pearson correlations with Autism Diagnostic Interview-Revised subscales indicated support for the SCQ's convergent validity and some support for the discriminant validity of the factor-based scales. Significance tests and receiver operating characteristic analyses indicated that children with DS + ASD obtained significantly higher SCQ factor-based and total scores than children with DS alone, and that the SCQ Total Score evidenced good sensitivity and adequate specificity. Results indicated initial psychometric support for the SCQ as an ASD screening measure in children with DS. The SCQ should be considered as part of a multimethod evaluation when screening children with DS.
Race, Ancestry, and Development of Food-Allergen Sensitization in Early Childhood
Tsai, Hui-Ju; Hong, Xiumei; Liu, Xin; Wang, Guoying; Pearson, Colleen; Ortiz, Katherin; Fu, Melanie; Pongracic, Jacqueline A.; Bauchner, Howard; Wang, Xiaobin
2011-01-01
OBJECTIVE: We examined whether the risk of food-allergen sensitization varied according to self-identified race or genetic ancestry. METHODS: We studied 1104 children (mean age: 2.7 years) from an urban multiethnic birth cohort. Food sensitization was defined as specific immunoglobulin E (sIgE) levels of ≥0.35 kilounits of allergen per liter (kUA/L) for any of 8 common food allergens. Multivariate logistic regression analyses were used to evaluate the associations of self-identified race and genetic ancestry with food sensitization. Analyses also examined associations with numbers of food sensitizations (0, 1 or 2, and ≥3 foods) and with logarithmically transformed allergen sIgE levels. RESULTS: In this predominantly minority cohort (60.9% black and 22.5% Hispanic), 35.5% of subjects exhibited food sensitizations. In multivariate models, both self-reported black race (odds ratio [OR]: 2.34 [95% confidence interval [CI]: 1.24–4.44]) and African ancestry (in 10% increments; OR: 1.07 [95% CI: 1.02–1.14]) were associated with food sensitization. Self-reported black race (OR: 3.76 [95% CI: 1.09–12.97]) and African ancestry (OR: 1.19 [95% CI: 1.07–1.32]) were associated with a high number (≥3) of food sensitizations. African ancestry was associated with increased odds of peanut sIgE levels of ≥5 kUA/L (OR: 1.25 [95% CI: 1.01–1.52]). Similar ancestry associations were seen for egg sIgE levels of ≥2 kUA/L (OR: 1.13 [95% CI: 1.01–1.27]) and milk sIgE levels of ≥5 kUA/L (OR: 1.24 [95% CI: 0.94–1.63]), although findings were not significant for milk. CONCLUSIONS: Black children were more likely to be sensitized to food allergens and were sensitized to more foods. African ancestry was associated with peanut sensitization. PMID:21890831
NASA Technical Reports Server (NTRS)
Lin, Xin; Zhang, Sara Q.; Zupanski, M.; Hou, Arthur Y.; Zhang, J.
2015-01-01
High-frequency TMI and AMSR-E radiances, which are sensitive to precipitation over land, are assimilated into the Goddard Weather Research and Forecasting Model-Ensemble Data Assimilation System (WRF-EDAS) for a few heavy rain events over the continental US. Independent observations of surface rainfall, satellite IR brightness temperatures, and ground-radar reflectivity profiles are used to evaluate the impact of assimilating rain-sensitive radiances on cloud and precipitation within WRF-EDAS. The evaluations go beyond comparisons of forecast skill and domain-mean statistics, and focus on studying cloud and precipitation features in the joint rain-radiance and rain-cloud space, with particular attention to vertical distributions of height-dependent cloud types and the collective effect of cloud hydrometeors. Such a methodology is very helpful for understanding limitations and sources of errors in rain-affected radiance assimilation. It is found that the assimilation of rain-sensitive radiances can reduce the mismatch between model analyses and observations by reasonably enhancing or reducing convective intensity over areas where the observations indicate precipitation, and suppressing convection over areas where the model forecast indicates rain but the observations do not. It is also noted that instead of generating sufficient low-level warm-rain clouds as in observations, the model analysis tends to produce many spurious upper-level clouds containing small amounts of ice water content. This discrepancy is associated with insufficient information in ice-water-sensitive radiances to constrain the vertical distribution of clouds with small amounts of ice water content. Such a problem will likely be mitigated when multi-channel, multi-frequency radiances and reflectivities are assimilated over land along with sufficiently accurate surface emissivity information to better constrain the vertical distribution of cloud hydrometeors.
Laser Therapy in the Treatment of Paresthesia: A Retrospective Study of 125 Clinical Cases.
de Oliveira, Renata Ferreira; da Silva, Alessandro Costa; Simões, Alyne; Youssef, Michel Nicolau; de Freitas, Patrícia Moreira
2015-08-01
The aim of this retrospective study was to evaluate the effectiveness of laser therapy for acceleration and recovery of nerve sensitivity after orthognathic or minor oral surgeries, by analysis of clinical records of patients treated at the Special Laboratory of Lasers in Dentistry (LELO, School of Dentistry, University of São Paulo), throughout the period 2007-2013. Nerve tissue lesions may occur during various dental and routine surgical procedures, resulting in paresthesia. Laser therapy has been shown to be able to accelerate and enhance the regeneration of the affected nerve tissue; however, there are few studies in the literature that evaluate the effects of treatment with low-power laser on neural changes after orthognathic or minor oral surgeries. A total of 125 clinical records were included, and the data on gender, age, origin of the lesion, nerve, interval between surgery and onset of laser therapy, frequency of laser irradiation (one or two times per week), final evolution, and if there was a need to change the irradiation protocol, were all recorded. These data were related to the recovery of sensitivity in the affected nerve area. Descriptive analyses and modeling for analysis of categorical data (α=5%) were performed. The results from both analyses showed that the recovery of sensitivity was correlated with patient age (p=0.015) and interval between surgery and onset of laser therapy (p=0.002). Within the limits of this retrospective study, it was found that low-power laser therapy with beam emission band in the infrared spectrum (808 nm) can positively affect the recovery of sensitivity after orthognathic or minor oral surgeries.
Li, Te; Liu, Maobai; Ben, He; Xu, Zhenxing; Zhong, Han; Wu, Bin
2015-06-01
Clopidogrel or aspirin are indicated for patients with recent ischemic stroke (IS) or established peripheral artery disease (PAD). We compared the cost effectiveness of clopidogrel with that of aspirin in Chinese patients with recent IS or established PAD. A discrete-event simulation was developed to evaluate the economic implications of secondary prevention with clopidogrel versus aspirin. All available evidence was derived from clinical studies. Costs from a Chinese healthcare perspective in 2013 US dollars and quality-adjusted life-years (QALYs) were projected over patients' lifetimes. Uncertainties were addressed using sensitivity analyses. Compared with aspirin, clopidogrel yielded marginal gains of 0.46 and 0.21 QALYs at incremental cost-effectiveness ratios of $US5246 and $US9890 per QALY in patients with recent IS and established PAD, respectively. One-way sensitivity analyses showed that the evaluation of patients with PAD and recent IS was robust except for the parameter of patient age. Given a willingness-to-pay of $US19,877 per QALY gained, clopidogrel had probabilities of 90% and 68% of being cost effective in the recent IS and established PAD subgroups compared with aspirin, respectively. The analysis suggests that clopidogrel for secondary prevention is cost effective for patients with either PAD or recent IS in a Chinese setting in comparison with aspirin.
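The incremental cost-effectiveness ratio (ICER) underlying these figures is a simple quotient of cost and QALY differences. A minimal sketch follows; the cost inputs are illustrative placeholders chosen so the quotient reproduces the reported IS figure, not the study's actual cost data.

```python
# Minimal ICER sketch for a clopidogrel-vs-aspirin style comparison.
# The lifetime cost inputs below are hypothetical, not the study's data.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical lifetime costs (US$) and QALYs per patient; the 0.46-QALY
# gain matches the abstract, the costs are back-filled illustrations.
ratio = icer(cost_new=7000.0, qaly_new=8.46, cost_old=4587.0, qaly_old=8.00)
print(round(ratio))  # → 5246, cost per QALY gained
```

A result below the stated willingness-to-pay threshold ($US19,877 per QALY) is what qualifies the intervention as cost effective.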
Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.
2004-01-01
Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite-differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
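The complex-variable approach the title refers to is commonly implemented as the complex-step derivative: evaluate f(x + ih) and take Im(f)/h. Because no subtraction of nearly equal quantities occurs, the step h can be made extremely small without the cancellation errors that plague finite differences. A minimal sketch with an analytic toy function standing in for a CFD cost function:

```python
import cmath

# Complex-step derivative: f'(x) ≈ Im(f(x + ih)) / h. No subtractive
# cancellation, so h can be tiny (here 1e-30) with near machine-precision
# accuracy. The function f is a toy stand-in, not a flow solver.

def complex_step_derivative(f, x, h=1e-30):
    return f(complex(x, h)).imag / h

f = lambda x: cmath.exp(x) / cmath.sqrt(x)   # analytic test function
x0 = 1.5
# Exact derivative of e^x / sqrt(x):  e^x (x - 1/2) / x^{3/2}
exact = (cmath.exp(x0) * (x0 - 0.5) / x0**1.5).real
approx = complex_step_derivative(f, x0)
print(abs(approx - exact) < 1e-12)  # True
```

The only requirement is that the code evaluating f accept complex arithmetic, which is why the method is attractive for existing CFD solvers.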
Barron, Daniel S; Fox, Peter T; Pardoe, Heath; Lancaster, Jack; Price, Larry R; Blackmon, Karen; Berry, Kristen; Cavazos, Jose E; Kuzniecky, Ruben; Devinsky, Orrin; Thesen, Thomas
2015-01-01
Noninvasive markers of brain function could yield biomarkers in many neurological disorders. Disease models constrained by coordinate-based meta-analysis are likely to increase this yield. Here, we evaluate a thalamic model of temporal lobe epilepsy that we proposed in a coordinate-based meta-analysis and extended in a diffusion tractography study of an independent patient population. Specifically, we evaluated whether thalamic functional connectivity (resting-state fMRI-BOLD) with temporal lobe areas can predict seizure onset laterality, as established with intracranial EEG. Twenty-four lesional and non-lesional temporal lobe epilepsy patients were studied. No significant differences in functional connection strength in patient and control groups were observed with Mann-Whitney Tests (corrected for multiple comparisons). Notwithstanding the lack of group differences, individual patient difference scores (from control mean connection strength) successfully predicted seizure onset zone as shown in ROC curves: discriminant analysis (two-dimensional) predicted seizure onset zone with 85% sensitivity and 91% specificity; logistic regression (four-dimensional) achieved 86% sensitivity and 100% specificity. The strongest markers in both analyses were left thalamo-hippocampal and right thalamo-entorhinal cortex functional connection strength. Thus, this study shows that thalamic functional connections are sensitive and specific markers of seizure onset laterality in individual temporal lobe epilepsy patients. This study also advances an overall strategy for the programmatic development of neuroimaging biomarkers in clinical and genetic populations: a disease model informed by coordinate-based meta-analysis was used to anatomically constrain individual patient analyses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebinger, M.H.; Beckman, R.J.; Myers, O.B.
1996-09-01
The purpose of this study was to evaluate the immediate and long-term consequences of depleted uranium (DU) in the environment at Aberdeen Proving Ground (APG) and Yuma Proving Ground (YPG) for the Test and Evaluation Command (TECOM) of the US Army. Specifically, we examined the potential for adverse radiological and toxicological effects to humans and ecosystems caused by exposure to DU at both installations. We developed contaminant transport models of aquatic and terrestrial ecosystems at APG and terrestrial ecosystems at YPG to assess potential adverse effects from DU exposure. Sensitivity and uncertainty analyses of the initial models showed the portions of the models that most influenced predicted DU concentrations, and the results of the sensitivity analyses were fundamental tools in designing field sampling campaigns at both installations. Results of uranium (U) isotope analyses of field samples provided data to evaluate the source of U in the environment and the toxicological and radiological doses to different ecosystem components and to humans. Probabilistic doses were estimated from the field data, and DU was identified in several components of the food chain at APG and YPG. Dose estimates from APG data indicated that U or DU uptake was insufficient to cause adverse toxicological or radiological effects. Dose estimates from YPG data indicated that U or DU uptake is insufficient to cause radiological effects in ecosystem components or in humans, but toxicological effects in small mammals (e.g., kangaroo rats and pocket mice) may occur from U or DU ingestion. The results of this study were used to modify environmental radiation monitoring plans at APG and YPG to ensure collection of adequate data for ongoing ecological and human health risk assessments.
Experimental Evaluation of Bifurcation in Sands
NASA Technical Reports Server (NTRS)
Alshibi, Khalid A.; Sture, Stein
2000-01-01
The basic principles of bifurcation analysis have been established by several investigators; however, several issues remain unresolved, specifically how stress level, grain size distribution, and boundary conditions affect general bifurcation phenomena in pressure-sensitive and dilatant materials. General geometrical and kinematic conditions for moving surfaces of discontinuity were derived and applied to problems of instability of solids. In 1962, a theoretical framework for bifurcation was presented by studying acceleration waves in elasto-plastic (J2) solids. Bifurcation analysis for more specific forms of constitutive behavior was examined by studying localization in pressure-sensitive, dilatant materials; however, those analyses were restricted to plane deformation states only. Bifurcation analyses were presented and applied to predict shear band formation in sand under plane strain conditions. The properties of discontinuous bifurcation solutions for elastic-plastic solids under axisymmetric and plane strain loading conditions were studied; that work focused on theory, but references and comparisons to experiments were also made. The current paper presents a summary of bifurcation analyses for biaxial and triaxial (axisymmetric) loading conditions. The Coulomb model is implemented using an incremental piecewise scheme to predict the constitutive relations and shear band inclination angles. A comprehensive evaluation of bifurcation phenomena is then presented based on data from triaxial experiments performed under microgravity conditions aboard the Space Shuttle at very low effective confining pressures (0.05 to 1.30 kPa), in which very high peak friction angles (47 to 75 degrees) and dilatancy angles (30 to 31 degrees) were measured. The evaluation will be extended to include biaxial experiments performed on the same material under low (10 kPa) and moderate (100 kPa) confining pressures.
A comparison between the behavior under biaxial and triaxial loading conditions will be presented, and related issues concerning influence of confining pressure will be discussed.
Schot, Marjolein J C; van Delft, Sanne; Kooijman-Buiting, Antoinette M J; de Wit, Niek J; Hopstaken, Rogier M
2015-05-18
Various point-of-care testing (POCT) urine analysers are commercially available for routine urine analysis in general practice. The present study compares analytical performance, agreement and user-friendliness of six different POCT urine analysers for diagnosing urinary tract infection in general practice. All testing procedures were performed at a diagnostic centre for primary care in the Netherlands. Urine samples were collected at four general practices. Analytical performance and agreement of the POCT analysers regarding nitrite, leucocytes and erythrocytes, with the laboratory reference standard, was the primary outcome measure, and analysed by calculating sensitivity, specificity, positive and negative predictive value, and Cohen's κ coefficient for agreement. Secondary outcome measures were the user-friendliness of the POCT analysers, in addition to other characteristics of the analysers. The following six POCT analysers were evaluated: Uryxxon Relax (Macherey Nagel), Urisys 1100 (Roche), Clinitek Status (Siemens), Aution 11 (Menarini), Aution Micro (Menarini) and Urilyzer (Analyticon). Analytical performance was good for all analysers. Compared with laboratory reference standards, overall agreement was good, but differed per parameter and per analyser. Concerning the nitrite test, the most important test for clinical practice, all but one showed perfect agreement with the laboratory standard. For leucocytes and erythrocytes specificity was high, but sensitivity was considerably lower. Agreement for leucocytes varied between good to very good, and for the erythrocyte test between fair and good. First-time users indicated that the analysers were easy to use. They expected higher productivity and accuracy when using these analysers in daily practice. The overall performance and user-friendliness of all six commercially available POCT urine analysers was sufficient to justify routine use in suspected urinary tract infections in general practice. 
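Agreement between a POCT analyser and the laboratory reference standard is summarised here with Cohen's κ coefficient. A minimal sketch of that computation follows; the 2×2 nitrite counts are hypothetical, not the study's data.

```python
# Cohen's kappa for a 2x2 agreement table between a POCT analyser and the
# laboratory reference. Counts below are hypothetical nitrite results.

def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement for a 2x2 table."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    # Expected agreement by chance, from the marginal totals
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical table: analyser positive/negative vs. lab positive/negative
print(round(cohens_kappa(tp=18, fp=2, fn=3, tn=77), 2))  # → 0.85
```

Values around 0.8 and above are conventionally read as very good agreement, which is the scale on which the nitrite, leucocyte and erythrocyte results were judged.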
Systematic review of publications on economic evaluations of caries prevention programs.
Mariño, R J; Khan, A R; Morgan, M
2013-01-01
The aim of this study was to perform a systematic review of economic evaluations (EEs) of dental caries prevention programs to objectively retrieve, synthesize and describe available information on the field. Several strategies were combined to search for literature published between January 1975 and April 2012. MEDLINE, EconoLit and ISI formed the basis of the literature search. The study selection was done using predefined inclusion and exclusion criteria. Bibliographic listings of all retrieved articles were hand-searched. The search identified 206 references. An evaluative framework was developed based on the Centre for Reviews and Dissemination's 'Guidance for undertaking reviews in health care' (York University, 2009). Background information included publication vehicle, year of publication, geographic focus, type of preventive program and type of economic analysis. 63 studies were included in the review. The most common preventive strategies evaluated were dental sealants (n = 13), water fluoridation (n = 12) and mixed interventions (n = 12). By type of EE undertaken, 30 were cost-effectiveness analyses, 22 were cost-benefit analyses, and 5 presented both cost-effectiveness and cost-benefit analyses. Few studies were cost-utility analyses (n = 5) or cost minimization analyses (n = 2). By year of publication, most were published after 2003. The review revealed that, although the number of publications reporting EEs has increased significantly in recent years, the quality of the reporting needs to be improved. The main methodological problems identified in the review were the limited information provided on adjustments for discounting in addition to inadequate sensitivity analyses. Attention also needs to be given to the analysis and interpretation of the results of the EEs.
Sensitivity analyses for sparse-data problems-using weakly informative bayesian priors.
Hamra, Ghassan B; MacLehose, Richard F; Cole, Stephen R
2013-03-01
Sparse-data problems are common, and approaches are needed to evaluate the sensitivity of parameter estimates based on sparse data. We propose a Bayesian approach that uses weakly informative priors to quantify sensitivity of parameters to sparse data. The weakly informative prior is based on accumulated evidence regarding the expected magnitude of relationships using relative measures of disease association. We illustrate the use of weakly informative priors with an example of the association of lifetime alcohol consumption and head and neck cancer. When data are sparse and the observed information is weak, a weakly informative prior will shrink parameter estimates toward the prior mean. Additionally, the example shows that when data are not sparse and the observed information is not weak, a weakly informative prior is not influential. Advancements in implementation of Markov Chain Monte Carlo simulation make this sensitivity analysis easily accessible to the practicing epidemiologist.
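The shrinkage behaviour described here can be illustrated with a normal approximation: given a normal prior N(m0, s0²) on a log odds ratio and an approximately normal likelihood for the estimated log-OR (estimate b, standard error se), the posterior mean is a precision-weighted average. A sketch with hypothetical numbers, not the head-and-neck cancer data:

```python
import math

# Conjugate-normal sketch of a weakly informative prior on a log odds ratio.
# Sparse data (large SE) are shrunk toward the prior mean; rich data
# (small SE) are barely moved. All numbers are hypothetical.

def posterior_log_or(b, se, m0=0.0, s0=1.0):
    """Posterior mean and SD for a normal prior N(m0, s0^2) on a log-OR."""
    w_data, w_prior = 1 / se**2, 1 / s0**2
    mean = (w_data * b + w_prior * m0) / (w_data + w_prior)
    sd = math.sqrt(1 / (w_data + w_prior))
    return mean, sd

# Sparse data: large SE, strong shrinkage toward the prior mean of 0
mean_sparse, _ = posterior_log_or(b=2.0, se=1.5)
# Rich data: small SE, the prior is not influential
mean_rich, _ = posterior_log_or(b=2.0, se=0.1)
print(round(mean_sparse, 2), round(mean_rich, 2))  # → 0.62 1.98
```

This closed-form special case mirrors the behaviour the paper reports from its MCMC implementation: influential when the observed information is weak, negligible otherwise.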
Optimizing chronic disease management mega-analysis: economic evaluation.
2013-01-01
As Ontario's population ages, chronic diseases are becoming increasingly common. There is growing interest in services and care models designed to optimize the management of chronic disease. To evaluate the cost-effectiveness and expected budget impact of interventions in chronic disease cohorts evaluated as part of the Optimizing Chronic Disease Management mega-analysis. Sector-specific costs, disease incidence, and mortality were calculated for each condition using administrative databases from the Institute for Clinical Evaluative Sciences. Intervention outcomes were based on literature identified in the evidence-based analyses. Quality-of-life and disease prevalence data were obtained from the literature. Analyses were restricted to interventions that showed significant benefit for resource use or mortality from the evidence-based analyses. An Ontario cohort of patients with each chronic disease was constructed and followed over 5 years (2006-2011). A phase-based approach was used to estimate costs across all sectors of the health care system. Utility values identified in the literature and effect estimates for resource use and mortality obtained from the evidence-based analyses were applied to calculate incremental costs and quality-adjusted life-years (QALYs). Given uncertainty about how many patients would benefit from each intervention, a system-wide budget impact was not determined. Instead, the difference in lifetime cost between an individual-administered intervention and no intervention was presented. Of 70 potential cost-effectiveness analyses, 8 met our inclusion criteria. All were found to result in QALY gains and cost savings compared with usual care. The models were robust to the majority of sensitivity analyses undertaken, but due to structural limitations and time constraints, few sensitivity analyses were conducted. 
Incremental cost savings per patient who received intervention ranged between $15 per diabetic patient with specialized nursing to $10,665 per patient with congestive heart failure receiving in-home care. Evidence used to inform estimates of effect was often limited to a single trial with limited generalizability across populations, interventions, and health care systems. Because of the low clinical fidelity of health administrative data sets, intermediate clinical outcomes could not be included. Cohort costs included an average of all health care costs and were not restricted to costs associated with the disease. Intervention costs were based on resource use specified in clinical trials. Applying estimates of effect from the evidence-based analyses to real-world resource use resulted in cost savings for all interventions. On the basis of quality-of-life data identified in the literature, all interventions were found to result in a greater QALY gain than usual care would. Implementation of all interventions could offer significant cost reductions. However, this analysis was subject to important limitations. Chronic diseases are the leading cause of death and disability in Ontario. They account for a third of direct health care costs across the province. This study aims to evaluate the cost-effectiveness of health care interventions that might improve the management of chronic diseases. The evaluated interventions led to lower costs and better quality of life than usual care. Offering these options could reduce costs per patient. However, the studies used in this analysis were of medium to very low quality, and the methods had many limitations.
Development of the multiple sclerosis (MS) early mobility impairment questionnaire (EMIQ).
Ziemssen, Tjalf; Phillips, Glenn; Shah, Ruchit; Mathias, Adam; Foley, Catherine; Coon, Cheryl; Sen, Rohini; Lee, Andrew; Agarwal, Sonalee
2016-10-01
The Early Mobility Impairment Questionnaire (EMIQ) was developed to facilitate early identification of mobility impairments in multiple sclerosis (MS) patients. We describe the initial development of the EMIQ with a focus on the psychometric evaluation of the questionnaire using classical and item response theory methods. The initial 20-item EMIQ was constructed by clinical specialists and qualitatively tested among people with MS and physicians via cognitive interviews. Data from an observational study was used to make additional updates to the instrument based on exploratory factor analysis (EFA) and item response theory (IRT) analysis, and psychometric analyses were performed to evaluate the reliability and validity of the final instrument's scores and screening properties (i.e., sensitivity and specificity). Based on qualitative interview analyses, a revised 15-item EMIQ was included in the observational study. EFA, IRT and item-to-item correlation analyses revealed redundant items which were removed leading to the final nine-item EMIQ. The nine-item EMIQ performed well with respect to: test-retest reliability (ICC = 0.858); internal consistency (α = 0.893); convergent validity; and known-groups methods for construct validity. A cut-point of 41 on the 0-to-100 scale resulted in sufficient sensitivity and specificity statistics for viably identifying patients with mobility impairment. The EMIQ is a content valid and psychometrically sound instrument for capturing MS patients' experience with mobility impairments in a clinical practice setting. Additional research is suggested to further confirm the EMIQ's screening properties over time.
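The screening properties reported for the EMIQ cut-point reduce to sensitivity and specificity against a reference classification. A minimal sketch follows; the scores and impairment labels are hypothetical illustrations, not EMIQ study data.

```python
# Screening cut-point sketch: sensitivity and specificity of a score
# threshold against a binary reference. All data below are hypothetical.

def sens_spec(scores, impaired, cutpoint):
    """Classify score >= cutpoint as screen-positive; return (sens, spec)."""
    tp = sum(s >= cutpoint and y for s, y in zip(scores, impaired))
    fn = sum(s < cutpoint and y for s, y in zip(scores, impaired))
    tn = sum(s < cutpoint and not y for s, y in zip(scores, impaired))
    fp = sum(s >= cutpoint and not y for s, y in zip(scores, impaired))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical 0-100 scores and reference impairment status
scores   = [10, 30, 35, 50, 70, 85, 20, 55]
impaired = [False, False, True, True, True, True, False, False]
print(sens_spec(scores, impaired, cutpoint=41))  # → (0.75, 0.75)
```

Sweeping the cut-point over the score range and plotting the resulting (1-specificity, sensitivity) pairs is what yields the ROC curve from which a threshold such as 41 is chosen.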
The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.
Kepes, Sven; McDaniel, Michael A
2015-01-01
Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.
Distributed Evaluation of Local Sensitivity Analysis (DELSA), with application to hydrologic models
Rakovec, O.; Hill, Mary C.; Clark, M.P.; Weerts, A. H.; Teuling, A. J.; Uijlenhoet, R.
2014-01-01
This paper presents a hybrid local-global sensitivity analysis method termed the Distributed Evaluation of Local Sensitivity Analysis (DELSA), which is used here to identify important and unimportant parameters and evaluate how model parameter importance changes as parameter values change. DELSA uses derivative-based “local” methods to obtain the distribution of parameter sensitivity across the parameter space, which promotes consideration of sensitivity analysis results in the context of simulated dynamics. This work presents DELSA, discusses how it relates to existing methods, and uses two hydrologic test cases to compare its performance with the popular global, variance-based Sobol' method. The first test case is a simple nonlinear reservoir model with two parameters. The second test case involves five alternative “bucket-style” hydrologic models with up to 14 parameters applied to a medium-sized catchment (200 km2) in the Belgian Ardennes. Results show that in both examples, Sobol' and DELSA identify similar important and unimportant parameters, with DELSA enabling more detailed insight at much lower computational cost. For example, in the real-world problem the time delay in runoff is the most important parameter in all models, but DELSA shows that for about 20% of parameter sets it is not important at all and alternative mechanisms and parameters dominate. Moreover, the time delay was identified as important in regions producing poor model fits, whereas other parameters were identified as more important in regions of the parameter space producing better model fits. The ability to understand how parameter importance varies through parameter space is critical to inform decisions about, for example, additional data collection and model development. The ability to perform such analyses with modest computational requirements provides exciting opportunities to evaluate complicated models as well as many alternative models.
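The DELSA first-order index at a given parameter set is the squared local derivative of the model output with respect to each parameter, scaled by that parameter's prior variance and normalized over all parameters. A sketch under stated assumptions (central finite differences, unit prior variances, and a toy two-parameter model standing in for the reservoir model; none of this is the authors' code):

```python
import numpy as np

def delsa_indices(model, theta, prior_var, h=1e-6):
    """First-order DELSA indices at one parameter set:
    S_j = (dpsi/dtheta_j)^2 * s_j^2 / sum_k (dpsi/dtheta_k)^2 * s_k^2."""
    theta = np.asarray(theta, float)
    grad = np.empty_like(theta)
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = h * max(abs(theta[j]), 1.0)
        grad[j] = (model(theta + step) - model(theta - step)) / (2 * step[j])
    contrib = grad**2 * np.asarray(prior_var, float)
    return contrib / contrib.sum()

# Toy nonlinear model: sensitivity to p[0] depends on where you are in space.
model = lambda p: p[0] ** 2 + 0.1 * p[1]

# Evaluate the indices across a sample of the parameter space,
# mirroring the "distributed" part of DELSA.
rng = np.random.default_rng(1)
samples = rng.uniform(0.5, 2.0, size=(100, 2))
S = np.array([delsa_indices(model, t, prior_var=[1.0, 1.0]) for t in samples])
```

The distribution of `S[:, 0]` across the sample is the point of the method: parameter importance is reported as it varies through parameter space, rather than as a single global number.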
Diagnosis of Fanconi anemia in patients with bone marrow failure
Pinto, Fernando O.; Leblanc, Thierry; Chamousset, Delphine; Le Roux, Gwenaelle; Brethon, Benoit; Cassinat, Bruno; Larghero, Jérôme; de Villartay, Jean-Pierre; Stoppa-Lyonnet, Dominique; Baruchel, André; Socié, Gérard; Gluckman, Eliane; Soulier, Jean
2009-01-01
Background Patients with bone marrow failure and undiagnosed underlying Fanconi anemia may experience major toxicity if given standard-dose conditioning regimens for hematopoietic stem cell transplant. Due to clinical variability and/or potential emergence of genetic reversion with hematopoietic somatic mosaicism, a straightforward Fanconi anemia diagnosis can be difficult to make, and diagnostic strategies combining different assays in addition to classical breakage tests in blood may be needed. Design and Methods We evaluated Fanconi anemia diagnosis on blood lymphocytes and skin fibroblasts from a cohort of 87 bone marrow failure patients (55 children and 32 adults) with no obvious full clinical picture of Fanconi anemia, by performing a combination of chromosomal breakage tests, FANCD2-monoubiquitination assays, a new flow cytometry-based mitomycin C sensitivity test in fibroblasts, and, when Fanconi anemia was diagnosed, complementation group and mutation analyses. The mitomycin C sensitivity test in fibroblasts was validated on control Fanconi anemia and non-Fanconi anemia samples, including other chromosomal instability disorders. Results When this diagnosis strategy was applied to the cohort of bone marrow failure patients, 7 Fanconi anemia patients were found (3 children and 4 adults). Classical chromosomal breakage tests in blood detected 4, but analyses on fibroblasts were necessary to diagnose 3 more patients with hematopoietic somatic mosaicism. Importantly, Fanconi anemia was excluded in all the other patients who were fully evaluated. Conclusions In this large cohort of patients with bone marrow failure our results confirmed that when any clinical/biological suspicion of Fanconi anemia remains after chromosome breakage tests in blood, based on physical examination, history or inconclusive results, then further evaluation including fibroblast analysis should be made. 
For that purpose, the flow cytometry-based mitomycin C sensitivity test described here proved to be a reliable alternative method for evaluating the Fanconi anemia phenotype in fibroblasts. This global strategy allowed early and accurate confirmation or rejection of a Fanconi anemia diagnosis, with immediate clinical impact for those who underwent hematopoietic stem cell transplant. PMID:19278965
NASA Technical Reports Server (NTRS)
1983-01-01
The baseline mission model used to develop the space station mission-related requirements is described, along with the 90 civil missions that were evaluated (including the 62 missions that formed the baseline model). Mission-related requirements for the space station baseline are defined and related to space station architectural development. Mission-related sensitivity analyses are discussed.
Eze, Ikenna C; Hemkens, Lars G; Bucher, Heiner C; Hoffmann, Barbara; Schindler, Christian; Künzli, Nino; Schikowski, Tamara; Probst-Hensch, Nicole M
2015-05-01
Air pollution is hypothesized to be a risk factor for diabetes. Epidemiological evidence is inconsistent and has not been systematically evaluated. We systematically reviewed epidemiological evidence on the association between air pollution and diabetes, and synthesized results of studies on type 2 diabetes mellitus (T2DM). We systematically searched electronic literature databases (last search, 29 April 2014) for studies reporting the association between air pollution (particle concentration or traffic exposure) and diabetes (type 1, type 2, or gestational). We systematically evaluated risk of bias and the role of potential confounders in all studies. We synthesized reported associations with T2DM in meta-analyses using random-effects models and conducted various sensitivity analyses. We included 13 studies (8 on T2DM, 2 on type 1, 3 on gestational diabetes), all conducted in Europe or North America. Five studies were longitudinal, five cross-sectional, two case-control, and one ecologic. Risk of bias, air pollution assessment, and confounder control varied across studies. Dose-response effects were not reported. Meta-analyses of 3 studies on PM2.5 (particulate matter ≤ 2.5 μm in diameter) and 4 studies on NO2 (nitrogen dioxide) showed increased risk of T2DM by 8-10% per 10-μg/m3 increase in exposure [PM2.5: 1.10 (95% CI: 1.02, 1.18); NO2: 1.08 (95% CI: 1.00, 1.17)]. Associations were stronger in females. Sensitivity analyses showed similar results. Existing evidence indicates a positive association between air pollution and T2DM risk, although the risk of bias is high. High-quality studies assessing dose-response effects are needed. Research should be expanded to developing countries where outdoor and indoor air pollution are high.
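Random-effects pooling of this kind is commonly done with the DerSimonian-Laird estimator: log relative risks are pooled with inverse-variance weights, after adding a moment-based between-study variance τ². A sketch with invented study inputs (the log-RRs and variances below are not the reviewed studies' data):

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / v
    y_fe = np.sum(w * y) / np.sum(w)               # fixed-effect mean
    Q = np.sum(w * (y - y_fe) ** 2)                # Cochran's Q statistic
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / c)        # between-study variance
    w_re = 1.0 / (v + tau2)                        # random-effects weights
    mu = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

# Hypothetical log-RRs per 10-ug/m3 PM2.5 increase from three studies.
log_rr = np.log([1.05, 1.12, 1.10])
var = np.array([0.002, 0.004, 0.003])
mu, se, tau2 = dersimonian_laird(log_rr, var)
pooled_rr = np.exp(mu)
ci = (np.exp(mu - 1.96 * se), np.exp(mu + 1.96 * se))
```

Results are pooled on the log scale and exponentiated at the end so the confidence interval stays multiplicative, matching how the abstract reports its pooled RRs.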
Lang, Pauline M; Jacinto, Rogério C; Dal Pizzol, Tatiane S; Ferreira, Maria Beatriz C; Montagner, Francisco
2016-11-01
Infected root canal or acute apical abscess exudates can harbour several species, including Fusobacterium, Porphyromonas, Prevotella, Parvimonas, Streptococcus, Treponema, Olsenella and not-yet cultivable species. A systematic review and meta-analysis was performed to assess resistance rates to antimicrobial agents in clinical studies that isolated bacteria from acute endodontic infections. Electronic databases and the grey literature were searched up to May 2015. Clinical studies in humans evaluating the antimicrobial resistance of primary acute endodontic infection isolates were included. PRISMA guidelines were followed. A random-effect meta-analysis was employed. The outcome was described as the pooled resistance rates for each antimicrobial agent. Heterogeneity and sensitivity analyses were performed. Subgroup analyses were conducted based upon report or not of the use of antibiotics prior to sampling as an exclusion factor (subgroups A and B, respectively). Data from seven studies were extracted. Resistance rates for 15 different antimicrobial agents were evaluated (range, 3.5-40.0%). Lower resistance rates were observed for amoxicillin/clavulanic acid and amoxicillin; higher resistance rates were detected for tetracycline. Resistance rates varied according to previous use of an antimicrobial agent as demonstrated by the subgroup analyses. Heterogeneity was observed for the resistance profiles of penicillin G in subgroup A and for amoxicillin, clindamycin, metronidazole and tetracycline in subgroup B. Sensitivity analyses demonstrated that resistance rates changed for metronidazole, clindamycin, tetracycline and amoxicillin. These findings suggest that clinical isolates had low resistance to β-lactams. Further well-designed studies are needed to clarify whether the differences in susceptibility among the antimicrobial agents may influence clinical responses to treatment. Copyright © 2016 Elsevier B.V. and International Society of Chemotherapy. 
All rights reserved.
Yatsenko, Svetlana A.; Shaw, Chad A.; Ou, Zhishuo; Pursley, Amber N.; Patel, Ankita; Bi, Weimin; Cheung, Sau Wai; Lupski, James R.; Chinault, A. Craig; Beaudet, Arthur L.
2009-01-01
In array-comparative genomic hybridization (array-CGH) experiments, the measurement of DNA copy number of sex chromosomal regions depends on the sex of the patient and the reference DNAs used. We evaluated the ability of bacterial artificial chromosome/P1-derived artificial chromosome and oligonucleotide array-CGH analyses to detect constitutional sex chromosome imbalances using sex-mismatched reference DNAs. Twenty-two samples with imbalances involving either the X or Y chromosome, including deletions, duplications, triplications, derivative or isodicentric chromosomes, and aneuploidy, were analyzed. Although concordant results were obtained for approximately one-half of the samples when using sex-mismatched and sex-matched reference DNAs, array-CGH analyses with sex-mismatched reference DNAs did not detect genomic imbalances that were detected using sex-matched reference DNAs in 6 of 22 patients. Small duplications and deletions of the X chromosome were most difficult to detect in female and male patients, respectively, when sex-mismatched reference DNAs were used. Using sex-matched reference DNAs in array-CGH analyses provides optimal sensitivity and enables an automated statistical evaluation for the detection of sex chromosome imbalances when compared with an experimental design using sex-mismatched reference DNAs. Using sex-mismatched reference DNAs in array-CGH analyses may generate false-negative, false-positive, and ambiguous results for sex chromosome-specific probes, thus masking potential pathogenic genomic imbalances. Therefore, to optimize detection of clinically relevant sex chromosome imbalances and to ensure proper experimental performance, we suggest that alternative internal controls be developed and used in place of sex-mismatched reference DNAs. PMID:19324990
Analysis of 238Pu and 56Fe Evaluated Data for Use in MYRRHA
NASA Astrophysics Data System (ADS)
Díez, C. J.; Cabellos, O.; Martínez, J. S.; Stankovskiy, A.; Van den Eynde, G.; Schillebeeckx, P.; Heyse, J.
2014-04-01
A sensitivity analysis of the multiplication factor, keff, to the cross section data has been carried out for the MYRRHA critical configuration in order to identify the most relevant reactions. With these results, a further analysis of the 238Pu and 56Fe cross sections has been performed, comparing the evaluations provided for these nuclides in the JEFF-3.1.2 and ENDF/B-VII.1 libraries. The effect in MYRRHA of the differences between the evaluations is then analysed and the source of the differences presented. On this basis, recommendations for the 56Fe and 238Pu evaluations are suggested. These calculations have been performed with SCALE6.1 and MCNPX-2.7e.
Banal, F; Dougados, M; Combescure, C; Gossec, L
2009-07-01
To evaluate the ability of the widely used ACR set of criteria (both list and tree format) to diagnose RA compared with expert opinion according to disease duration. A systematic literature review was conducted in the PubMed and Embase databases. All articles reporting the prevalence of RA according to ACR criteria and expert opinion in cohorts of early (<1 year duration) or established (>1 year) arthritis were analysed to calculate the sensitivity and specificity of the ACR 1987 criteria against the "gold standard" (expert opinion). A meta-analysis using a summary receiver operating characteristic (SROC) curve was performed and pooled sensitivity and specificity were calculated with confidence intervals. Of 138 publications initially identified, 19 were analysable (total 7438 patients, 3883 RA). In early arthritis, pooled sensitivity and specificity of the ACR set of criteria were 77% (68% to 84%) and 77% (68% to 84%) in the list format versus 80% (72% to 88%) and 33% (24% to 43%) in the tree format. In established arthritis, sensitivity and specificity were respectively 79% (71% to 85%) and 90% (84% to 94%) versus 80% (71% to 85%) and 93% (86% to 97%). The SROC meta-analysis confirmed the statistically significant differences, suggesting that the diagnostic performance of the ACR list criteria is better in established arthritis. The specificity of the ACR 1987 criteria in early RA is low, and these criteria should not be used as diagnostic tools. Sensitivity and specificity in established RA are higher, reflecting the criteria's intended role in classification rather than diagnosis.
Linear regression metamodeling as a tool to summarize and present simulation model results.
Jalal, Hawre; Dowd, Bryan; Sainfort, François; Kuntz, Karen M
2013-10-01
Modelers lack a tool to systematically and clearly present complex model results, including those from sensitivity analyses. The objective was to propose linear regression metamodeling as a tool to increase transparency of decision analytic models and better communicate their results. We used a simplified cancer cure model to demonstrate our approach. The model computed the lifetime cost and benefit of 3 treatment options for cancer patients. We simulated 10,000 cohorts in a probabilistic sensitivity analysis (PSA) and regressed the model outcomes on the standardized input parameter values in a set of regression analyses. We used the regression coefficients to describe measures of sensitivity analyses, including threshold and parameter sensitivity analyses. We also compared the results of the PSA to deterministic full-factorial and one-factor-at-a-time designs. The regression intercept represented the estimated base-case outcome, and the other coefficients described the relative parameter uncertainty in the model. We defined simple relationships that compute the average and incremental net benefit of each intervention. Metamodeling produced outputs similar to traditional deterministic 1-way or 2-way sensitivity analyses but was more reliable since it used all parameter values. Linear regression metamodeling is a simple, yet powerful, tool that can assist modelers in communicating model characteristics and sensitivity analyses.
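The metamodeling idea can be sketched directly: run a probabilistic sensitivity analysis, then regress the simulated outcome on the standardized inputs. The intercept estimates the base-case outcome and the coefficient magnitudes rank parameter influence. A minimal sketch with a synthetic model (the coefficients and the three input parameters are invented; this is not the authors' cancer cure model):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                # PSA iterations

# Hypothetical standardized model inputs (e.g., cure rate, cost, utility draws).
X = rng.normal(size=(n, 3))

# Simulated model output: a net-benefit function unknown to the analyst.
y = 5.0 + 1.2 * X[:, 0] - 0.4 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 0.1, n)

# Metamodel: ordinary least squares of outcome on standardized inputs.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

base_case = coef[0]             # intercept ~ estimated base-case outcome
importance = np.abs(coef[1:])   # |coefficients| rank parameter influence
```

Because the inputs are standardized, the coefficients are directly comparable across parameters, which is what makes the regression readable as a one-table summary of the sensitivity analysis.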
Saarela, Ville; Falck, Aura; Airaksinen, P Juhani; Tuulonen, Anja
2012-03-01
To evaluate the factors affecting the sensitivity and specificity of the stereometric optic nerve head (ONH) parameters of the Heidelberg Retina Tomograph (HRT) to glaucomatous progression in stereoscopic ONH photographs. The factors affecting the sensitivity and specificity of the vertical cup : disc ratio, the cup : disc area ratio, the cup volume, the rim area and a linear discriminant function to progression were analysed. These parameters were the best indicators of progression in a retrospective study of 476 eyes. The reference standard for progression was the masked evaluation of stereoscopic ONH photographs. The factors having the most significant effect on the sensitivity and specificity of the stereometric ONH parameters were the reference height difference and the mean topography standard deviation (TSD), indicating image quality. Also, the change in the TSD and age showed consistent, but variably significant, influence on all parameters tested. The sensitivity and specificity improved when there was little change in the reference height, the image quality was good and stable, and the patients were younger. The sensitivity and specificity of the vertical cup : disc ratio was improved by a large disc area and high baseline cup : disc area ratio. The rim area showed a better sensitivity and specificity for progression with a small disc area and low baseline cup : disc area ratio. The factors affecting the sensitivity and specificity of the stereometric ONH parameters to glaucomatous progression in disc photographs are essentially the same as those affecting the measurement variability of the HRT. © 2010 The Authors. Acta Ophthalmologica © 2010 Acta Ophthalmologica Scandinavica Foundation.
Optimization and validation of CEDIA drugs of abuse immunoassay tests in serum on Hitachi 912.
Kirschbaum, Katrin M; Musshoff, Frank; Schmithausen, Ricarda; Stockhausen, Sarah; Madea, Burkhard
2011-10-10
Due to the sensitive limits of detection of chromatographic methods and the low limit values applied to drug screening under the terms of impairment in safe driving (§ 24a StVG, Street Traffic Law in Germany), preliminary immunoassay (IA) tests should be able to detect even low concentrations of legal and illegal drugs in serum in forensic cases. False negatives should be avoided, and the rate of false-positive samples should be kept low for reasons of cost and time. An optimization of IA cutoff values and a validation of the assay are required for each laboratory. In a retrospective study, results for serum samples containing amphetamine, methylenedioxy derivatives, cannabinoids, benzodiazepines, cocaine (metabolites), methadone and opiates obtained with CEDIA drugs of abuse reagents on a Hitachi 912 autoanalyzer were compared with quantitative results of chromatographic methods (gas or liquid chromatography coupled with mass spectrometry (GC/MS or LC/MS)). Firstly, sensitivity, specificity, positive and negative predictive values and overall misclassification rates were evaluated by contingency tables and compared to ROC analyses and Youden indices. Secondly, ideal cutoffs were statistically calculated on the basis of sensitivity and specificity as decisive statistical criteria, with a focus on high sensitivity (low rates of false negatives), i.e. using the Youden index. Immunoassay (IA) and confirmatory results were available for 3014 blood samples. Sensitivity was 90% or more for nearly all analytes: amphetamines (IA cutoff 9.5 ng/ml), methylenedioxy derivatives (IA cutoff 5.5 ng/ml), cannabinoids (IA cutoff 14.5 ng/ml), benzodiazepines (IA cutoff >0 ng/ml). The test for opiates showed a sensitivity of 86% at an IA cutoff value of >0 ng/ml. Values for specificity ranged between 33% (methadone, IA cutoff 10 ng/ml) and 90% (cocaine, IA cutoff 20 ng/ml). Cutoff values lower than those recommended by ROC analyses were chosen for most tests to decrease the rate of false negatives.
Analyses enabled the definition of cutoff values with good values for sensitivity. Small rates of false-positives can be accepted in forensic cases. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
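Cutoff optimization via the Youden index, as used in the study above, amounts to scanning candidate cutoffs and maximizing J = sensitivity + specificity - 1. A sketch with simulated assay readings (the distributions, sample sizes, and resulting cutoff are invented, not the CEDIA data):

```python
import numpy as np

def best_cutoff_youden(values, truth):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    values, truth = np.asarray(values, float), np.asarray(truth, bool)
    best = (None, -1.0)
    for c in np.unique(values):
        pred = values >= c
        sens = np.mean(pred[truth])        # true-positive rate
        spec = np.mean(~pred[~truth])      # true-negative rate
        j = sens + spec - 1.0
        if j > best[1]:
            best = (c, j)
    return best

# Hypothetical immunoassay readings: positive samples shifted upward.
rng = np.random.default_rng(3)
neg = rng.normal(5, 2, 500)
pos = rng.normal(15, 4, 500)
values = np.concatenate([neg, pos])
truth = np.concatenate([np.zeros(500, bool), np.ones(500, bool)])
cutoff, j = best_cutoff_youden(values, truth)
```

In a forensic setting one might then deliberately lower the cutoff below this J-optimal value, trading specificity for sensitivity, which is the choice the abstract describes.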
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems are frequently encountered in medical decision making, which are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against the uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically in the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in that for a given willingness to pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
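The proposed confidence measure can be sketched for a toy MDP: solve for the base-case optimal policy, then re-solve under sampled parameter values and record how often the base policy remains optimal. Everything below (the two-state model, its transition matrices, and the uncertain reward parameter r) is a made-up illustration of the idea, not the authors' case study:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Solve a small MDP. P[a] is an (S, S) transition matrix, R[a] an (S,)
    reward vector. Returns the optimal action for each state."""
    V = np.zeros(R.shape[1])
    while True:
        Q = R + gamma * P @ V              # (A, S) action-values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=0)
        V = V_new

# Toy 2-state, 2-action model; action 1's reward r in state 0 is uncertain.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0 transitions
              [[0.5, 0.5], [0.5, 0.5]]])   # action 1 transitions
rewards = lambda r: np.array([[1.0, 0.0],  # action 0 rewards by state
                              [r,   0.0]]) # action 1 rewards by state

base_policy = value_iteration(P, rewards(1.2))   # base-case parameter r = 1.2

# Probabilistic sensitivity: sample r, record agreement with the base policy.
rng = np.random.default_rng(7)
agree = [np.array_equal(value_iteration(P, rewards(r)), base_policy)
         for r in rng.uniform(1.0, 2.0, 200)]
confidence = float(np.mean(agree))   # share of samples keeping the base policy
```

Sweeping the acceptance criterion over such samples is what traces out a policy acceptability curve; here a single agreement fraction stands in for one point on it.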
Methods for Probabilistic Radiological Dose Assessment at a High-Level Radioactive Waste Repository.
NASA Astrophysics Data System (ADS)
Maheras, Steven James
Methods were developed to assess and evaluate the uncertainty in offsite and onsite radiological dose at a high-level radioactive waste repository to show reasonable assurance that compliance with applicable regulatory requirements will be achieved. Uncertainty in offsite dose was assessed by employing a stochastic precode in conjunction with Monte Carlo simulation using an offsite radiological dose assessment code. Uncertainty in onsite dose was assessed by employing a discrete-event simulation model of repository operations in conjunction with an occupational radiological dose assessment model. Complementary cumulative distribution functions of offsite and onsite dose were used to illustrate reasonable assurance. Offsite dose analyses were performed for iodine-129, cesium-137, strontium-90, and plutonium-239. Complementary cumulative distribution functions of offsite dose were constructed; offsite dose was lognormally distributed with a two-order-of-magnitude range. However, plutonium-239 results were not lognormally distributed and exhibited a range of less than one order of magnitude. Onsite dose analyses were performed for the preliminary inspection, receiving and handling, and underground areas of the repository. Complementary cumulative distribution functions of onsite dose were constructed and exhibited ranges of less than one order of magnitude. A preliminary sensitivity analysis of the receiving and handling areas was conducted using a regression metamodel. Sensitivity coefficients and partial correlation coefficients were used as measures of sensitivity. Model output was most sensitive to parameters related to cask handling operations. Model output showed little sensitivity to parameters related to cask inspections.
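A complementary cumulative distribution function of dose is simply P(dose > x) estimated from the Monte Carlo sample. A sketch with hypothetical lognormal doses (the median, spread, and units are illustrative, not values from the repository analyses):

```python
import numpy as np

def ccdf(samples):
    """Empirical complementary CDF: P(dose > x) at each sorted sample value."""
    x = np.sort(np.asarray(samples, float))
    p = 1.0 - np.arange(1, x.size + 1) / x.size
    return x, p

# Hypothetical Monte Carlo doses: lognormal, spanning a wide range.
rng = np.random.default_rng(11)
doses = rng.lognormal(mean=np.log(1e-4), sigma=1.15, size=10_000)
x, p = ccdf(doses)
spread = x[-1] / x[0]   # ratio of largest to smallest simulated dose
```

Plotting `p` against `x` on a log axis gives the CCDF curves used to illustrate reasonable assurance: the probability of exceeding any candidate dose limit can be read off directly.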
Gili, Pablo; Flores-Rodríguez, Patricia; Yangüela, Julio; Orduña-Azcona, Javier; Martín-Ríos, María Dolores
2013-03-01
Evaluation of the efficacy of monochromatic photography of the ocular fundus in differentiating optic nerve head drusen (ONHD) and optic disc oedema (ODE). Sixty-six patients with ONHD, 31 patients with ODE and 70 healthy subjects were studied. Colour and monochromatic fundus photography with different filters (green, red and autofluorescence) were performed. The results were analysed blindly by two observers. The sensitivity, specificity and interobserver agreement (k) of each test were assessed. Colour photography offers 65.5 % sensitivity and 100 % specificity for the diagnosis of ONHD. Monochromatic photography improves sensitivity and specificity and provides similar results: green filter (71.20 % sensitivity, 96.70 % specificity), red filter (80.30 % sensitivity, 96.80 % specificity), and autofluorescence technique (87.8 % sensitivity, 100 % specificity). The interobserver agreement was good with all techniques used: autofluorescence (k = 0.957), green filter (k = 0.897), red filter (k = 0.818) and colour (k = 0.809). Monochromatic fundus photography permits ONHD and ODE to be differentiated, with good sensitivity and very high specificity. The best results were obtained with autofluorescence and red filter study.
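The interobserver agreement statistic reported above (k) is Cohen's kappa: observed agreement corrected for the agreement expected by chance. A sketch with made-up gradings from two observers (1 = drusen present, 0 = absent; not the study's data):

```python
import numpy as np

def cohens_kappa(a, b):
    """Interobserver agreement corrected for chance agreement."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    po = np.mean(a == b)                                   # observed agreement
    pe = sum(np.mean(a == l) * np.mean(b == l) for l in labels)  # chance level
    return (po - pe) / (1.0 - pe)

# Hypothetical gradings of ten fundus photographs by two masked observers.
obs1 = np.array([1, 1, 0, 0, 1, 0, 1, 1, 0, 0])
obs2 = np.array([1, 1, 0, 0, 1, 0, 1, 0, 0, 0])
kappa = cohens_kappa(obs1, obs2)
```

Here the observers agree on 9 of 10 photographs, but chance agreement is 0.5 given their marginal rates, so kappa lands at 0.8 rather than 0.9, the same "good agreement" band as the k values the abstract reports.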
Bowker, Julie C; Thomas, Katelyn K; Norman, Kelly E; Spencer, Sarah V
2011-05-01
Rejection sensitivity (RS) refers to the tendency to anxiously expect, readily perceive, and overreact to experiences of possible rejection. RS is a clear risk factor for psychological maladaptation during early adolescence. However, there is growing evidence of significant heterogeneity in the psychological correlates of RS. To investigate when RS poses the greatest psychological risk during early adolescence, this study examines mutual best friendship involvement (or lack thereof) and the best friends' RS as potential moderators of the associations between RS and psychological difficulties. Participants were 150 7th grade students (58 boys; M age = 13.05 years) who nominated their best friends, and reported on their RS, social anxiety, and self-esteem. Results from a series of hierarchical multiple regression analyses indicated that mutual best friendship involvement and best friends' RS were both significant moderators when fear of negative evaluation (a type of social anxiety) served as the dependent variable. The association between RS and fear of negative evaluation was stronger for adolescents without mutual best friends than adolescents with mutual best friends. In addition, the association between RS and fear of negative evaluation was the strongest for adolescents whose best friends were highly rejection sensitive (relative to adolescents whose best friends were moderately or low in RS). Findings highlight the importance of considering best friendships in studies of RS and strongly suggest that, although having mutual best friendships may be protective for rejection sensitive adolescents, having a rejection sensitive best friend may exacerbate difficulties. The significance of friends in the lives of rejection sensitive adolescents is discussed as well as possible applied implications of the findings and study limitations.
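Moderation in a hierarchical multiple regression of this kind is tested with a product term: the RS slope for each friendship group is recovered from the main effect and the interaction coefficient. A sketch on simulated data (the sample, effect sizes, and moderator coding are invented, not the study's):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 150
rs = rng.normal(size=n)              # rejection sensitivity (standardized)
mutual = rng.integers(0, 2, n)       # 1 = has a mutual best friend

# Simulated fear of negative evaluation: RS matters more without a mutual friend.
fne = 0.8 * rs - 0.5 * mutual - 0.5 * rs * mutual + rng.normal(0, 0.5, n)

# Moderated regression: outcome ~ RS + moderator + RS x moderator.
X = np.column_stack([np.ones(n), rs, mutual, rs * mutual])
beta, *_ = np.linalg.lstsq(X, fne, rcond=None)

slope_without = beta[1]              # RS slope when mutual == 0
slope_with = beta[1] + beta[3]       # RS slope when mutual == 1
```

A significant negative interaction coefficient is what a finding like "the RS-fear association is stronger for adolescents without mutual best friends" looks like in regression terms.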
Liu, Richard T; Burke, Taylor A; Abramson, Lyn Y; Alloy, Lauren B
2017-11-04
Behavioral Approach System (BAS) sensitivity has been implicated in the development of a variety of different psychiatric disorders. Prominent among these in the empirical literature are bipolar spectrum disorders (BSDs). Given that adolescence represents a critical developmental stage of risk for the onset of BSDs, it is important to clarify the latent structure of BAS sensitivity in this period of development. A statistical approach especially well-suited for delineating the latent structure of BAS sensitivity is taxometric analysis, which is designed to evaluate whether the latent structure of a construct is taxonic (i.e., categorical) or dimensional (i.e., continuous) in nature. The current study applied three mathematically non-redundant taxometric procedures (i.e., MAMBAC, MAXEIG, and L-Mode) to a large community sample of adolescents (n = 12,494) who completed two separate measures of BAS sensitivity: the BIS/BAS Scales (Carver and White, Journal of Personality and Social Psychology, 67, 319-333, 1994) and the Sensitivity to Reward and Sensitivity to Punishment Questionnaire (Torrubia et al., Personality and Individual Differences, 31, 837-862, 2001). Given the significant developmental changes in reward sensitivity that occur across adolescence, the current investigation aimed to provide a fine-grained evaluation of the data by performing taxometric analyses at an age-by-age level (14-19 years; n for each age ≥ 883). Results derived from taxometric procedures, across all ages tested, were highly consistent, providing strong evidence that BAS sensitivity is best conceptualized as dimensional in nature. Thus, the findings suggest that BAS-related vulnerability to BSDs exists along a continuum of severity, with no natural cut-point qualitatively differentiating high- and low-risk adolescents. Clinical and research implications for the assessment of BSD-related vulnerability are discussed.
Component fears of claustrophobia associated with mock magnetic resonance imaging.
McGlynn, F Dudley; Smitherman, Todd A; Hammel, Jacinda C; Lazarte, Alejandro A
2007-01-01
A conceptualization of claustrophobia [Rachman, S., & Taylor, S. (1993). Analyses of claustrophobia. Journal of Anxiety Disorders, 7, 281-291] was evaluated in the context of magnetic resonance imaging. One hundred eleven students responded to questionnaires that quantified fear of suffocation, fear of restriction, and sensitivity to anxiety symptoms. Sixty-four of them were then exposed to a mock magnetic resonance imaging assessment; maximum subjective fear during the mock assessment was self-reported, behavioral reactions to the mock assessment were characterized, and heart rates before and during the assessment were recorded. Scores for fear of suffocation, fear of restriction, and anxiety sensitivity were used to predict subjective, behavioral, and cardiac fear. Subjective fear during the mock assessment was predicted by fears of suffocation and public anxiousness. Behavioral fear (escape/avoidance) was predicted by fears of restriction and suffocation, and sensitivity to symptoms related to suffocation. Cardiac fear was predicted by fear of public anxiousness. The criterion variance predicted was impressive, clearly sufficient to legitimize both the research preparation and the conceptualization of claustrophobia that was evaluated.
Ou, Huang-Tz; Lee, Tsung-Ying; Chen, Yee-Chun; Charbonneau, Claudie
2017-07-10
Cost-effectiveness studies of echinocandins for the treatment of invasive candidiasis, including candidemia, are rare in Asia. No study has determined whether echinocandins are cost-effective for both Candida albicans and non-albicans Candida species. There have been no economic evaluations that compare non-echinocandins with the three available echinocandins. This study aimed to assess the cost-effectiveness of the individual echinocandins, namely caspofungin, micafungin, and anidulafungin, versus non-echinocandins for C. albicans and non-albicans Candida species, respectively. A decision tree model was constructed to assess the cost-effectiveness of echinocandins and non-echinocandins for invasive candidiasis. The probability of treatment success, mortality rate, and adverse drug events were extracted from published clinical trials. The cost variables (i.e., drug acquisition) were based on Taiwan's healthcare system from the perspective of a medical payer. One-way sensitivity analyses and probabilistic sensitivity analyses were conducted. For treating invasive candidiasis (all species), as compared to fluconazole, micafungin and caspofungin are dominated (less effective, more expensive), whereas anidulafungin is cost-effective (more effective, more expensive), costing US$3666.09 for each life-year gained, which was below the implicit threshold of the incremental cost-effectiveness ratio in Taiwan. For C. albicans, echinocandins are cost-saving as compared to non-echinocandins. For non-albicans Candida species, echinocandins are cost-effective as compared to non-echinocandins, costing US$652 for each life-year gained. The results were robust over a wide range of sensitivity analyses and were most sensitive to the clinical efficacy of antifungal treatment. Echinocandins, especially anidulafungin, appear to be cost-effective for invasive candidiasis caused by C. albicans and non-albicans Candida species in Taiwan.
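Comparisons like the one above reduce to incremental cost-effectiveness ratios computed along the efficiency frontier, with dominated strategies (more costly, no added effect) excluded. A sketch with invented costs and effects (extended dominance is ignored for brevity; the figures and strategy names are illustrative, not the study's):

```python
import numpy as np

def icer_table(costs, effects, names):
    """Rank strategies by cost, flag dominated ones, and compute ICERs
    versus the previous strategy on the efficiency frontier."""
    order = np.argsort(costs)
    results, frontier = [], []
    for i in order:
        if frontier and effects[i] <= effects[frontier[-1]]:
            results.append((names[i], None, "dominated"))  # costlier, no gain
            continue
        if frontier:
            j = frontier[-1]
            icer = (costs[i] - costs[j]) / (effects[i] - effects[j])
            results.append((names[i], icer, "on frontier"))
        else:
            results.append((names[i], None, "reference"))  # cheapest option
        frontier.append(i)
    return results

# Hypothetical per-patient costs (US$) and effects (life-years).
costs = np.array([1000.0, 4000.0, 4500.0])
effects = np.array([10.0, 10.0, 10.9])   # second strategy: pricier, no gain
out = icer_table(costs, effects, ["comparator", "strategy B", "strategy C"])
```

A strategy is then called cost-effective when its frontier ICER falls below the decision maker's willingness-to-pay threshold, which is how the abstract's per-life-year figures are interpreted.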
Subgroup Economic Evaluation of Radiotherapy for Breast Cancer After Mastectomy.
Wan, Xiaomin; Peng, Liubao; Ma, Jinan; Chen, Gannong; Li, Yuanjian
2015-11-01
A recent meta-analysis by the Early Breast Cancer Trialists' Collaborative Group found significant improvements achieved by postmastectomy radiotherapy (PMRT) for patients with breast cancer with 1 to 3 positive nodes (pN1-3). It is unclear whether PMRT is cost-effective for subgroups of patients with positive nodes. To determine the cost-effectiveness of PMRT for subgroups of patients with breast cancer with positive nodes. A semi-Markov model was constructed to estimate the expected lifetime costs, life expectancy, and quality-adjusted life-years for patients receiving or not receiving radiation therapy. Clinical and health utilities data were from meta-analyses by the Early Breast Cancer Trialists' Collaborative Group or randomized clinical trials. Costs were estimated from the perspective of Chinese society. One-way and probabilistic sensitivity analyses were performed. The incremental cost-effectiveness ratio was estimated as $7984, $4043, $3572, and $19,021 per quality-adjusted life-year for patients with positive nodes (pN+), patients with pN1-3, patients with pN1-3 who received systemic therapy, and patients with 4 or more positive nodes (pN4+), respectively. According to World Health Organization recommendations, these incremental cost-effectiveness ratios were judged as cost-effective. However, one-way sensitivity analyses suggested that the results were highly sensitive to the relative effectiveness of PMRT (rate ratio). Nevertheless, the addition of PMRT for patients with pN1-3 in China has a reasonable chance of being cost-effective and may be judged an efficient deployment of limited health resources, whereas the risk and uncertainty of PMRT are relatively greater for patients with pN4+. Copyright © 2015 Elsevier HS Journals, Inc. All rights reserved.
Dalziel, Kim; Round, Ali; Garside, Ruth; Stein, Ken
2005-01-01
To evaluate the cost utility of imatinib compared with interferon (IFN)-alpha or hydroxycarbamide (hydroxyurea) for first-line treatment of chronic myeloid leukaemia. A cost-utility (Markov) model within the setting of the UK NHS and viewed from a health system perspective was adopted. Transition probabilities and relative risks were estimated from published literature. Costs of drug treatment, outpatient care, bone marrow biopsies, radiography, blood transfusions and inpatient care were obtained from the British National Formulary and local hospital databases. Costs (£, year 2001-03 values) were discounted at 6%. Quality-of-life (QOL) data were obtained from the published literature and discounted at 1.5%. The main outcome measure was cost per QALY gained. Extensive one-way sensitivity analyses were performed along with probabilistic (stochastic) analysis. The incremental cost-effectiveness ratio (ICER) of imatinib, compared with IFN-alpha, was £26,180 per QALY gained (one-way sensitivity analyses ranged from £19,449 to £51,870) and compared with hydroxycarbamide was £86,934 per QALY (one-way sensitivity analyses ranged from £69,701 to £147,095) [£1 = $US1.691 = €1.535 as at 31 December 2002]. Based on the probabilistic sensitivity analysis, 50% of the ICERs for imatinib, compared with IFN-alpha, fell below a threshold of approximately £31,000 per QALY gained. Fifty percent of ICERs for imatinib, compared with hydroxycarbamide, fell below approximately £95,000 per QALY gained. This model suggests, given its underlying data and assumptions, that imatinib may be moderately cost effective when compared with IFN-alpha but considerably less cost effective when compared with hydroxycarbamide. There are, however, many uncertainties due to the lack of long-term data.
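The probabilistic (stochastic) analysis behind the "50% of ICERs fell below £31,000" statement can be sketched as a Monte Carlo loop over uncertain inputs: sample incremental costs and QALYs, form the ICER, and count the share below a willingness-to-pay threshold. The distributions below are illustrative assumptions; only the £26,180 point estimate and the £31,000 threshold come from the text:

```python
import random

random.seed(0)  # reproducible draws

def psa_fraction_below(threshold, n=10_000):
    """Share of simulated ICERs below a willingness-to-pay threshold.
    Normal distributions and their spreads are assumed for illustration."""
    below = 0
    for _ in range(n):
        d_cost = random.gauss(26_180, 8_000)  # incremental cost (GBP)
        d_qaly = random.gauss(1.0, 0.25)      # incremental QALYs
        if d_qaly > 0 and d_cost / d_qaly < threshold:
            below += 1
    return below / n

share = psa_fraction_below(31_000)
print(f"{share:.0%} of simulated ICERs fall below £31,000")
```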
Boundary Layer Depth In Coastal Regions
NASA Astrophysics Data System (ADS)
Porson, A.; Schayes, G.
Results of earlier studies of sea-breeze simulation have shown that the sea breeze is a feature of the planetary boundary layer that atmospheric models still have difficulty diagnosing properly. Based on observations made during the ESCOMPTE campaign over the Mediterranean Sea, different estimation procedures for the heights of the convective boundary layer (CBL) and stable boundary layer (SBL) have been tested with a meso-scale model, TVM. The aim was to compare the critical points of boundary-layer height determination computed from the turbulent kinetic energy profile with some other standard evaluations. These results have also been analysed with different mixing-length formulations, and the sensitivity to the formulation is examined for a simple coastal configuration.
Evaluation of NOx Emissions and Modeling
NASA Astrophysics Data System (ADS)
Henderson, B. H.; Simon, H. A.; Timin, B.; Dolwick, P. D.; Owen, R. C.; Eyth, A.; Foley, K.; Toro, C.; Baker, K. R.
2017-12-01
Studies focusing on ambient measurements of NOy have concluded that NOx emissions are overestimated, and some have attributed the error to the on-road mobile sector. We investigate this conclusion to identify the cause of the observed bias. First, we compare DISCOVER-AQ Baltimore ambient measurements to fine-scale modeling with NOy tagged by sector. Sector-based relationships with bias are present, but they are sensitive to simulated vertical mixing. This is evident both in the sensitivity to the mixing parameterization and in the seasonal patterns of bias. We also evaluate observation-based indicators, like CO:NOy ratios, that are commonly used to diagnose emissions inventories. Second, we examine the sensitivity of predicted NOx and NOy to the temporal allocation of emissions. We investigate alternative temporal allocations for EGUs without CEMS, on-road mobile sources, and several non-road categories. These results show some location-specific sensitivity and will lead to improved temporal allocations. Third, near-road studies have inherently fewer confounding variables and have been examined for more direct evaluation of emissions and dispersion models. From 2008 to 2011, the EPA and FHWA conducted near-road studies in Las Vegas and Detroit. These measurements are used to evaluate the emissions and dispersion models more directly using site-specific traffic data. In addition, the site-specific emissions are being compared to the emissions used in larger-scale photochemical modeling to identify key discrepancies. These efforts are part of a larger coordinated effort by EPA scientists to ensure the highest quality in emissions and model processes. We look forward to sharing the state of these analyses and expected updates.
Are Study and Journal Characteristics Reliable Indicators of "Truth" in Imaging Research?
Frank, Robert A; McInnes, Matthew D F; Levine, Deborah; Kressel, Herbert Y; Jesurum, Julia S; Petrcich, William; McGrath, Trevor A; Bossuyt, Patrick M
2018-04-01
Purpose To evaluate whether journal-level variables (impact factor, cited half-life, and Standards for Reporting of Diagnostic Accuracy Studies [STARD] endorsement) and study-level variables (citation rate, timing of publication, and order of publication) are associated with the distance between primary study results and summary estimates from meta-analyses. Materials and Methods MEDLINE was searched for meta-analyses of imaging diagnostic accuracy studies, published from January 2005 to April 2016. Data on journal-level and primary-study variables were extracted for each meta-analysis. Primary studies were dichotomized by variable as first versus subsequent publication, publication before versus after STARD introduction, STARD endorsement, or by median split. The mean absolute deviation of primary study estimates from the corresponding summary estimates for sensitivity and specificity was compared between groups. Means and confidence intervals were obtained by using bootstrap resampling; P values were calculated by using a t test. Results Ninety-eight meta-analyses summarizing 1458 primary studies met the inclusion criteria. There was substantial variability, but no significant differences, in deviations from the summary estimate between paired groups (P > .0041 in all comparisons). The largest difference found was in mean deviation for sensitivity, which was observed for publication timing, where studies published first on a topic demonstrated a mean deviation that was 2.5 percentage points smaller than subsequently published studies (P = .005). For journal-level factors, the greatest difference found (1.8 percentage points; P = .088) was in mean deviation for sensitivity in journals with impact factors above the median compared with those below the median. 
Conclusion Journal- and study-level variables considered important when evaluating diagnostic accuracy information to guide clinical decisions are not systematically associated with distance from the truth; critical appraisal of individual articles is recommended. © RSNA, 2017 Online supplemental material is available for this article.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mays, S.E.; Poloski, J.P.; Sullivan, W.H.
1982-07-01
A probabilistic risk assessment (PRA) was made of the Browns Ferry, Unit 1, nuclear plant as part of the Nuclear Regulatory Commission's Interim Reliability Evaluation Program (IREP). Specific goals of the study were to identify the dominant contributors to core melt, develop a foundation for more extensive use of PRA methods, expand the cadre of experienced PRA practitioners, and apply procedures for extension of IREP analyses to other domestic light water reactors. Event tree and fault tree analyses were used to estimate the frequency of accident sequences initiated by transients and loss of coolant accidents. External events such as floods, fires, earthquakes, and sabotage were beyond the scope of this study and were, therefore, excluded. From these sequences, the dominant contributors to probable core melt frequency were chosen. Uncertainty and sensitivity analyses were performed on these sequences to better understand the limitations associated with the estimated sequence frequencies. Dominant sequences were grouped according to common containment failure modes and corresponding release categories on the basis of comparison with analyses of similar designs rather than on the basis of detailed plant-specific calculations.
A cost-benefit analysis on the specialization in departments of obstetrics and gynecology in Japan.
Shen, Junyi; Fukui, On; Hashimoto, Hiroyuki; Nakashima, Takako; Kimura, Tadashi; Morishige, Kenichiro; Saijo, Tatsuyoshi
2012-03-27
In April 2008, specialization of the departments of obstetrics and gynecology was implemented in the Sennan area of Osaka prefecture, Japan, with the aim of solving problems in the regional provision of obstetric services. Under this specialization, the departments of obstetrics and gynecology in two city hospitals were combined into one medical center, with one hospital in charge of the department of gynecology and the other operating the department of obstetrics. In this paper, we implement a cost-benefit analysis to evaluate the validity of this specialization. The benefit-cost ratio is estimated at 1.367 under a basic scenario, indicating that the specialization can generate a net benefit. In addition, to account for various future uncertainties, a number of sensitivity analyses are conducted. The results of these sensitivity analyses suggest that the specialization is valid in the sense that all the estimated benefit-cost ratios are above 1.0 in any case.
E, Meng; Yu, Sufang; Dou, Jianrui; Jin, Wu; Cai, Xiang; Mao, Yiyang; Zhu, Daojian; Yang, Rumei
2016-08-01
The purpose of this study is to examine the association between alcohol consumption and amyotrophic lateral sclerosis. Published literature on the association between alcohol consumption and amyotrophic lateral sclerosis was retrieved from the PubMed and Embase databases. Two authors independently extracted the data. The quality of the identified studies was evaluated according to the Newcastle-Ottawa scale. Subgroup and sensitivity analyses were performed and publication bias was assessed. Five articles, including one cohort study and seven case-control studies, and a total of 431,943 participants, were identified. The odds ratio for the association between alcohol consumption and amyotrophic lateral sclerosis was 0.57 (95 % confidence interval 0.51-0.64). Subgroup and sensitivity analyses confirmed the result. Evidence for publication bias was detected. Alcohol consumption reduced the risk of developing amyotrophic lateral sclerosis compared with non-drinking. Alcohol, therefore, has a potentially neuroprotective effect on the development of amyotrophic lateral sclerosis.
The Cost of Penicillin Allergy Evaluation.
Blumenthal, Kimberly G; Li, Yu; Banerji, Aleena; Yun, Brian J; Long, Aidan A; Walensky, Rochelle P
2017-09-22
Unverified penicillin allergy leads to adverse downstream clinical and economic sequelae. Penicillin allergy evaluation can be used to identify true, IgE-mediated allergy. To estimate the cost of penicillin allergy evaluation using time-driven activity-based costing (TDABC). We implemented TDABC throughout the care pathway for 30 outpatients presenting for penicillin allergy evaluation. The base-case evaluation included penicillin skin testing and a 1-step amoxicillin drug challenge, performed by an allergist. We varied assumptions about the provider type, clinical setting, procedure type, and personnel timing. The base-case penicillin allergy evaluation costs $220 in 2016 US dollars: $98 for personnel, $119 for consumables, and $3 for space. In sensitivity analyses, lower cost estimates were achieved when only a drug challenge was performed (ie, no skin test; $84) and when a nurse practitioner provider was used ($170). Adjusting for the probability of anaphylaxis did not change the estimate ($220); although other analyses led to modest changes in the TDABC estimate ($214-$246), higher estimates were identified with a change to a low-demand practice setting ($268), a 50% increase in personnel times ($269), and the inclusion of clinician documentation time ($288). In least- and most-costly scenario analyses, the lowest TDABC estimate was $40 and the highest was $537. Using TDABC, penicillin allergy evaluation costs $220; even with varied assumptions adjusting for operational challenges, clinical setting, and expanded testing, penicillin allergy evaluation still costs only about $540. This modest investment may be offset for patients treated with costly alternative antibiotics that also may result in adverse consequences. Copyright © 2017 American Academy of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
A Systematic Review of the Cost-Effectiveness of Biologics for Ulcerative Colitis.
Stawowczyk, Ewa; Kawalec, Paweł
2018-04-01
Ulcerative colitis (UC) is a chronic autoimmune inflammation of the colon. The condition significantly decreases quality of life and generates a substantial economic burden for healthcare payers, patients and the society in which they live. Some patients require chronic pharmacotherapy, and access to novel biologic drugs might be crucial for long-term remission. Analyses of the cost-effectiveness of biologic drugs are necessary to assess their efficiency and provide the best available drugs to patients. Our aim was to collect and assess the quality of economic analyses carried out for biologic agents used in the treatment of UC, as well as to summarize evidence on the drivers of cost-effectiveness and evaluate the transferability and generalizability of conclusions. A systematic database review was conducted using MEDLINE (via PubMed), EMBASE, the Cost-Effectiveness Analysis Registry and CRD0. Both authors independently reviewed the identified articles to determine their eligibility for final review. Hand searching of references in collected papers was also performed to find any relevant articles. The reporting quality of the included economic analyses was evaluated by two reviewers using the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement checklist. We reviewed the sensitivity analyses in the cost-effectiveness analyses to identify the variables that may have changed the conclusions of a study. Key drivers of cost-effectiveness were selected by identifying the uncertain parameters that caused the largest change in the results of the analyses compared with base-case results. Of the 576 identified records, 87 were excluded as duplicates and 16 studies were included in the final review; evaluations were performed mostly for Canada, the UK and Poland.
Most of the identified evaluations were performed for infliximab (approximately 75% of the total); however, some assessments were also performed for adalimumab (50%) and golimumab (31%). Only three analyses were conducted for vedolizumab, whereas no relevant studies were found for etrolizumab and tofacitinib. The reporting quality of the included economic analyses was assessed as high, with an average score of 21 points of a maximum possible 24 (range 14-23 points according to the ISPOR CHEERS statement checklist). In most analyses, quality-adjusted life-years were used as the clinical outcome; endpoints such as remission, response and mucosal healing were less common. The identified analyses reported higher clinical effectiveness (based on response rates) for biological treatment than for non-biological treatments. The incremental cost-utility ratios for biologics, compared with standard care, varied significantly between the studies and ranged from US$36,309 to US$456,979. The lowest value was obtained for infliximab and the highest for the treatment scheme including infliximab 5 mg/kg and infliximab 10 mg/kg + adalimumab. Changes in utility weights and clinical parameters had the greatest influence on the results of the analyses; results were least sensitive to the surgery-related variable. Limited data on the cost-effectiveness of UC therapy were identified. In the majority of studies, biologics were found not to be cost-effective, which was associated with their high costs. Clinical outcomes are transferable to other countries and could be generalized; however, cost inputs are country-specific and therefore limit the transferability and generalizability of conclusions. The key drivers and variables that showed the greatest effect on the analysis results were utility weights and clinical parameters.
Gee Kee, E; Stockton, K; Kimble, R M; Cuttle, L; McPhail, S M
2017-06-01
Partial thickness burns of up to 10% total body surface area (TBSA) in children are common injuries primarily treated in the outpatient setting using expensive silver-containing dressings. However, economic evaluations in the paediatric burns population are lacking to assist healthcare providers when choosing which dressing to use. The aim of this study was to conduct a cost-effectiveness analysis of three silver dressings for partial thickness burns ≤10% TBSA in children aged 0-15 years, using days to full wound re-epithelialization as the health outcome. This study was a trial-based economic evaluation (incremental cost effectiveness) conducted from a healthcare provider perspective. Ninety-six children participated in the trial investigating Acticoat™, Acticoat™ with Mepitel™ or Mepilex Ag™. Costs directly related to the management of partial thickness burns ≤10% TBSA were collected during the trial from March 2013 to July 2014, over a time horizon extending to one year after re-epithelialization. Incremental cost-effectiveness ratios were estimated and dominance probabilities calculated by bootstrap resampling of the trial data. Sensitivity analyses were conducted to examine the potential effect of accounting for infrequent, but high cost, skin grafting surgical procedures. Costs (dressing, labour, analgesics, scar management) were considerably lower in the Mepilex Ag™ group (median AUD$94.45) compared to the Acticoat™ (median $244.90) and Acticoat™ with Mepitel™ (median $196.66) interventions. There was a 99% and 97% probability that Mepilex Ag™ dominated (was cheaper and more effective than) Acticoat™ and Acticoat™ with Mepitel™, respectively. This pattern of dominance was consistent across raw costs and effects, after a priori adjustments, and in sensitivity analyses. There was an 82% probability that Acticoat™ with Mepitel™ dominated Acticoat™ in the primary analysis, although this probability was sensitive to the effect of skin graft procedures. 
This economic evaluation has demonstrated that Mepilex Ag™ was the dominant dressing choice over both Acticoat™ and Acticoat™ with Mepitel™ in this trial-based economic evaluation and is recommended for treatment of paediatric partial thickness burns ≤10% TBSA. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
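The dominance probabilities reported for the dressings (e.g., a 99% probability that Mepilex Ag™ dominated Acticoat™) come from bootstrap resampling of trial data: resample patients with replacement and count how often one arm is simultaneously cheaper and faster to re-epithelialize. A hedged sketch with made-up per-patient values, not the trial's raw data:

```python
import random

random.seed(1)

def mean(xs):
    return sum(xs) / len(xs)

def dominance_probability(costs_a, days_a, costs_b, days_b, n_boot=2000):
    """Probability that arm A dominates arm B: lower mean cost AND fewer
    mean days to full re-epithelialization across bootstrap resamples."""
    wins = 0
    for _ in range(n_boot):
        ca = [random.choice(costs_a) for _ in costs_a]
        da = [random.choice(days_a) for _ in days_a]
        cb = [random.choice(costs_b) for _ in costs_b]
        db = [random.choice(days_b) for _ in days_b]
        if mean(ca) < mean(cb) and mean(da) < mean(db):
            wins += 1
    return wins / n_boot

# Hypothetical per-patient costs (AUD) and healing times (days):
p = dominance_probability([90, 100, 95], [8, 9, 10],
                          [240, 250, 245], [10, 11, 12])
print(p)
```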
Cost-Effectiveness of Diagnostic Strategies for Suspected Scaphoid Fractures.
Yin, Zhong-Gang; Zhang, Jian-Bing; Gong, Ke-Tong
2015-08-01
The aim of this study was to assess the cost effectiveness of multiple competing diagnostic strategies for suspected scaphoid fractures. With published data, the authors created a decision-tree model simulating the diagnosis of suspected scaphoid fractures. Clinical outcomes, costs, and cost effectiveness of immediate computed tomography (CT), day 3 magnetic resonance imaging (MRI), day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, week 2 radiographs-MRI, week 2 radiographs-bone scan, and immediate MRI were evaluated. The primary clinical outcome was the detection of scaphoid fractures. The authors adopted a societal perspective, including both the costs of healthcare and the cost of lost productivity. The incremental cost-effectiveness ratio (ICER), which expresses the incremental cost per incremental scaphoid fracture detected using a strategy, was calculated to compare these diagnostic strategies. Base case analysis, 1-way sensitivity analyses, and "worst case scenario" and "best case scenario" sensitivity analyses were performed. In the base case, the average cost per scaphoid fracture detected with immediate CT was $2553. The ICERs of immediate MRI and day 3 MRI compared with immediate CT were $7483 and $32,000 per scaphoid fracture detected, respectively. The ICER of week 2 radiographs-MRI was around $170,000. The day 3 bone scan, week 2 radiographs alone, week 2 radiographs-CT, and week 2 radiographs-bone scan strategies were dominated or extendedly dominated by the MRI strategies. The results were generally robust in multiple sensitivity analyses. Immediate CT and MRI were the most cost-effective strategies for diagnosing suspected scaphoid fractures. Economic and Decision Analyses Level II. See Instructions for Authors for a complete description of levels of evidence.
Li, Fuzhong; Harmer, Peter
2015-07-30
Exercise is effective in reducing falls in people with Parkinson disease. However, information on the cost effectiveness of this approach is lacking. We conducted a cost-effectiveness analysis of Tai Ji Quan for reducing falls among patients with mild-to-moderate Parkinson disease. We used data from a previous intervention trial to analyze resource use costs related to intervention delivery and number of falls observed during a 9-month study period. Cost effectiveness was estimated via incremental cost-effectiveness ratio (ICER) in which Tai Ji Quan was compared with 2 alternative interventions (Resistance training and Stretching) on the primary outcome of per fall prevented and the secondary outcome of per participant quality-adjusted life years (QALY) gained. We also conducted subgroup and sensitivity analyses. Tai Ji Quan was more effective than either Resistance training or Stretching; it had the lowest cost and was the most effective in improving primary and secondary outcomes. Compared with Stretching, Tai Ji Quan cost an average of $175 less for each additional fall prevented and produced a substantial improvement in QALY gained at a lower cost. Results from subgroup and sensitivity analyses showed no variation in cost-effectiveness estimates. However, sensitivity analyses demonstrated a much lower ICER ($27) when only intervention costs were considered. Tai Ji Quan represents a cost-effective strategy for optimizing spending to prevent falls and maximize health gains in people with Parkinson disease. While these results are promising, they warrant further validation.
A Circular Microstrip Antenna Sensor for Direction Sensitive Strain Evaluation †
Herbko, Michal
2018-01-01
In this paper, a circular microstrip antenna for stress evaluation is studied. This kind of microstrip sensor can be utilized in structural health monitoring systems. Reflection coefficient S11 is measured to determine deformation/strain value. The proposed sensor is adhesively connected to the studied sample. Applied strain causes a change in patch geometry and influences current distribution both in patch and ground plane. Changing the current flow in patch influences the value of resonant frequency. In this paper, two different resonant frequencies were analysed because in each case, different current distributions in patch were obtained. The sensor was designed for operating frequency of 2.5 GHz (at fundamental mode), which results in a diameter less than 55 mm. Obtained sensitivity was up to 1 MHz/100 MPa, resolution depends on utilized vector network analyser. Moreover, the directional characteristics for both resonant frequencies were defined, studied using numerical model and verified by measurements. Thus far, microstrip antennas have been used in deformation measurement only if the direction of external force was well known. Obtained directional characteristics of the sensor allow the determination of direction and value of stress by one sensor. This method of measurement can be an alternative to the rosette strain gauge. PMID:29361697
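The sub-55 mm diameter at 2.5 GHz is consistent with the standard cavity-model estimate for the fundamental TM11 mode of a circular patch, f = 1.8412·c / (2πa√εr). The sketch below assumes a substrate permittivity of εr = 4.4 (a common FR-4 value; the paper's actual substrate is not stated in the abstract) and neglects the fringing-field correction:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def patch_radius(f_res_hz, eps_r):
    """Physical radius for the fundamental TM11 mode of a circular patch,
    simple cavity model (no fringing): f = 1.8412*c / (2*pi*a*sqrt(eps_r))."""
    return 1.8412 * C / (2 * math.pi * f_res_hz * math.sqrt(eps_r))

a = patch_radius(2.5e9, eps_r=4.4)  # eps_r is an assumed substrate value
print(f"radius ≈ {a * 1000:.1f} mm, diameter ≈ {2 * a * 1000:.1f} mm")
```

With these assumptions the diameter comes out near 34 mm, comfortably below the 55 mm quoted; including the fringing correction would make the required physical radius slightly smaller.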
Sangchan, Apichat; Chaiyakunapruk, Nathorn; Supakankunti, Siripen; Pugkhem, Ake; Mairiang, Pisaln
2014-01-01
Endoscopic biliary drainage using metal and plastic stents in unresectable hilar cholangiocarcinoma (HCA) is widely used, but little is known about their cost-effectiveness. This study evaluated the cost-utility of endoscopic metal and plastic stent drainage in patients with unresectable complex (Bismuth type II-IV) HCA. A decision-analytic (Markov) model was used to evaluate the cost and quality-adjusted life years (QALYs) of endoscopic biliary drainage in unresectable HCA. Costs of treatment were retrieved from hospital charges, and utilities of each Markov state were obtained from unresectable HCA patients at a tertiary care hospital in Thailand. Transition probabilities were derived from the international literature. Base-case analyses and sensitivity analyses were performed. Under the base-case analysis, the metal stent is more effective but more expensive than the plastic stent. The incremental cost per additional QALY gained is 192,650 baht (US$ 6,318). From probabilistic sensitivity analysis, at willingness-to-pay thresholds of one and three times GDP per capita, or 158,000 baht (US$ 5,182) and 474,000 baht (US$ 15,546), the probability of the metal stent being cost-effective is 26.4% and 99.8%, respectively. Based on the WHO recommendation regarding cost-effectiveness threshold criteria, endoscopic metal stent drainage is cost-effective compared to plastic stent drainage in unresectable complex HCA.
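The Markov model named here can be sketched as a cohort simulation: a state-occupancy vector is advanced by transition probabilities each cycle while expected costs and QALYs accumulate. All states, probabilities, costs, and per-cycle utilities below are illustrative placeholders, not the study's inputs (and discounting is omitted for brevity):

```python
def markov_cohort(trans, costs, utilities, cycles, start):
    """Accumulate expected cost and QALYs for a cohort over fixed cycles."""
    dist = dict(start)
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += sum(dist[s] * costs[s] for s in dist)
        total_qaly += sum(dist[s] * utilities[s] for s in dist)
        new = {s: 0.0 for s in dist}
        for s, p in dist.items():
            for t, pr in trans[s].items():
                new[t] += p * pr
        dist = new
    return total_cost, total_qaly

# Hypothetical 3-state model for biliary drainage (monthly cycles):
trans = {
    "patent":   {"patent": 0.85, "occluded": 0.10, "dead": 0.05},
    "occluded": {"patent": 0.30, "occluded": 0.50, "dead": 0.20},
    "dead":     {"dead": 1.0},
}
costs = {"patent": 500.0, "occluded": 2000.0, "dead": 0.0}   # cost per cycle
utilities = {"patent": 0.07, "occluded": 0.04, "dead": 0.0}  # QALYs per cycle
cost, qaly = markov_cohort(trans, costs, utilities, cycles=12,
                           start={"patent": 1.0, "occluded": 0.0, "dead": 0.0})
print(round(cost), round(qaly, 3))
```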
Cost-effectiveness of breast cancer screening using mammography in Vietnamese women
2018-01-01
Background The incidence rate of breast cancer is increasing and has become the most common cancer in Vietnamese women while the survival rate is lower than that of developed countries. Early detection to improve breast cancer survival as well as reducing risk factors remains the cornerstone of breast cancer control according to the World Health Organization (WHO). This study aims to evaluate the costs and outcomes of introducing a mammography screening program for Vietnamese women aged 45–64 years, compared to the current situation of no screening. Methods Decision analytical modeling using Markov chain analysis was used to estimate costs and health outcomes over a lifetime horizon. Model inputs were derived from published literature and the results were reported as incremental cost-effectiveness ratios (ICERs) and/or incremental net monetary benefits (INMBs). One-way sensitivity analyses and probabilistic sensitivity analyses were performed to assess parameter uncertainty. Results The ICER per life year gained of the first round of mammography screening was US$3647.06 and US$4405.44 for women aged 50–54 years and 55–59 years, respectively. In probabilistic sensitivity analyses, mammography screening in the 50–54 age group and the 55–59 age group were cost-effective in 100% of cases at a threshold of three times the Vietnamese Gross Domestic Product (GDP) i.e., US$6332.70. However, less than 50% of the cases in the 60–64 age group and 0% of the cases in the 45–49 age group were cost effective at the WHO threshold. The ICERs were sensitive to the discount rate, mammography sensitivity, and transition probability from remission to distant recurrence in stage II for all age groups. Conclusion From the healthcare payer viewpoint, offering the first round of mammography screening to Vietnamese women aged 50–59 years should be considered, with the given threshold of three times the Vietnamese GDP per capita. PMID:29579131
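The incremental net monetary benefit (INMB) reported alongside the ICERs rescales incremental effects by the willingness-to-pay threshold: INMB = λ·ΔE − ΔC, with a positive value meaning cost-effective at threshold λ. A minimal sketch; the ΔC and ΔE inputs are hypothetical, while the US$6332.70 threshold (three times Vietnamese GDP per capita) is from the text:

```python
def inmb(delta_cost, delta_effect, wtp):
    """Incremental net monetary benefit; positive => cost-effective at wtp."""
    return wtp * delta_effect - delta_cost

# Hypothetical: screening adds 0.8 life-years at US$2900 extra cost.
print(round(inmb(delta_cost=2900.0, delta_effect=0.8, wtp=6332.70), 2))
```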
Naujokaitis-Lewis, Ilona; Curtis, Janelle M R
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. 
Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options.
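The global sensitivity analysis that GRIP 2.0 automates varies all inputs jointly (unlike one-at-a-time analyses) and ranks parameters by their influence on the predicted risk. A toy illustration of the idea using a Latin hypercube sample and correlation-based ranking; the three-parameter risk model below is invented for illustration and is not GRIP 2.0's actual model:

```python
import random

random.seed(42)

def latin_hypercube(n, ranges):
    """Latin hypercube sample: each parameter's range is split into n
    strata, each stratum is sampled once, then the columns are shuffled."""
    cols = {}
    for name, (lo, hi) in ranges.items():
        strata = [lo + (hi - lo) * (i + random.random()) / n for i in range(n)]
        random.shuffle(strata)
        cols[name] = strata
    return [{k: cols[k][i] for k in cols} for i in range(n)]

def risk(p):
    """Toy extinction risk: falls with habitat amount and adult survival,
    rises with disease prevalence (coefficients are arbitrary)."""
    r = 1.2 - 0.8 * p["habitat"] - 0.5 * p["survival"] + 0.4 * p["disease"]
    return max(0.0, min(1.0, r))

samples = latin_hypercube(200, {"habitat": (0.0, 1.0),
                                "survival": (0.5, 1.0),
                                "disease": (0.0, 1.0)})
outputs = [risk(s) for s in samples]

def corr(name):
    """Pearson correlation between one input and the risk output."""
    xs = [s[name] for s in samples]
    mx, my = sum(xs) / len(xs), sum(outputs) / len(outputs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, outputs))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in outputs) ** 0.5
    return cov / (sx * sy)

for name in ("habitat", "survival", "disease"):
    print(name, round(corr(name), 2))
```

In this toy model, habitat amount shows the strongest (negative) association with risk, mirroring the kind of ranking the abstract describes.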
Curtis, Janelle M.R.
2016-01-01
Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat attributes along with demographic parameters in sensitivity routines. GRIP 2.0 is an important decision-support tool that can be used to prioritize research, identify habitat-based thresholds and management intervention points to improve probability of species persistence, and evaluate trade-offs of alternative management options. PMID:27547529
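The core idea of a global sensitivity analysis like the one above is to vary all inputs jointly (rather than one at a time) and rank their influence on the output. A minimal sketch using Latin hypercube sampling over a toy two-parameter extinction-risk model; the model form, parameter names and effect sizes are hypothetical stand-ins, not GRIP 2.0's internals:

```python
import random

random.seed(1)

def latin_hypercube(n, dims):
    """One Latin hypercube sample: each dimension's [0, 1) range is split
    into n strata, each stratum is sampled once, and the strata are
    shuffled independently per dimension."""
    cols = []
    for _ in range(dims):
        col = [(i + random.random()) / n for i in range(n)]
        random.shuffle(col)
        cols.append(col)
    return list(zip(*cols))  # n points, each with `dims` coordinates

def toy_extinction_risk(habitat, survival):
    # Hypothetical stand-in for a coupled SDM-population model: risk falls
    # with habitat amount and adult survival, with an interaction term.
    return 1.0 - 0.5 * habitat - 0.3 * survival - 0.2 * habitat * survival

samples = latin_hypercube(200, 2)
risks = [toy_extinction_risk(h, s) for h, s in samples]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Rank input influence by the magnitude of correlation with the output.
influence = {
    "habitat": abs(corr([p[0] for p in samples], risks)),
    "survival": abs(corr([p[1] for p in samples], risks)),
}
```

Because all inputs vary simultaneously, interaction effects (like the habitat-survival interaction reported above) contribute to the ranking, which a one-at-a-time design would miss.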
Sensitivity Analysis of Multidisciplinary Rotorcraft Simulations
NASA Technical Reports Server (NTRS)
Wang, Li; Diskin, Boris; Biedron, Robert T.; Nielsen, Eric J.; Bauchau, Olivier A.
2017-01-01
A multidisciplinary sensitivity analysis of rotorcraft simulations involving tightly coupled high-fidelity computational fluid dynamics and comprehensive analysis solvers is presented and evaluated. An unstructured sensitivity-enabled Navier-Stokes solver, FUN3D, and a nonlinear flexible multibody dynamics solver, DYMORE, are coupled to predict the aerodynamic loads and structural responses of helicopter rotor blades. A discretely-consistent adjoint-based sensitivity analysis available in FUN3D provides sensitivities arising from unsteady turbulent flows and unstructured dynamic overset meshes, while a complex-variable approach is used to compute DYMORE structural sensitivities with respect to aerodynamic loads. The multidisciplinary sensitivity analysis is conducted by integrating the sensitivity components from each discipline of the coupled system. Numerical results verify the accuracy of the FUN3D/DYMORE system by conducting simulations for a benchmark rotorcraft test model and comparing solutions with established analyses and experimental data. Complex-variable implementation of sensitivity analysis of DYMORE and the coupled FUN3D/DYMORE system is verified by comparing with real-valued analysis and sensitivities. Correctness of adjoint formulations for FUN3D/DYMORE interfaces is verified by comparing adjoint-based and complex-variable sensitivities. Finally, sensitivities of the lift and drag functions obtained by complex-variable FUN3D/DYMORE simulations are compared with sensitivities computed by the multidisciplinary sensitivity analysis, which couples adjoint-based flow and grid sensitivities of FUN3D and FUN3D/DYMORE interfaces with complex-variable sensitivities of DYMORE structural responses.
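The complex-variable (complex-step) approach mentioned above has a compact general form: perturb the input by a tiny imaginary step and read the derivative from the imaginary part of the output, avoiding the subtractive cancellation of finite differences. A minimal sketch with a toy response function standing in for a structural functional (this is the general technique, not the DYMORE implementation):

```python
import cmath

def complex_step_derivative(f, x, h=1e-30):
    """Complex-step derivative: df/dx ~= Im(f(x + i*h)) / h.
    With no subtraction of nearly equal terms, h can be made tiny and
    the result is accurate to near machine precision."""
    return f(complex(x, h)).imag / h

# Toy stand-in for a structural response functional.
def response(x):
    return x ** 3 + cmath.sin(x)

x0 = 0.7
d_cs = complex_step_derivative(response, x0)
d_exact = 3 * x0 ** 2 + cmath.cos(x0).real  # analytic derivative
```

The same idea scales to whole solvers: promoting real arithmetic to complex arithmetic turns every output into a derivative carrier, which is why it is convenient for verifying adjoint-based sensitivities.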
The sperm motility pattern in ecotoxicological tests. The CRYO-Ecotest as a case study.
Fabbrocini, Adele; D'Adamo, Raffaele; Del Prete, Francesco; Maurizio, Daniela; Specchiulli, Antonietta; Oliveira, Luis F J; Silvestri, Fausto; Sansone, Giovanni
2016-01-01
Changes in environmental stressors inevitably lead to an increasing need for innovative and more flexible monitoring tools. The aim of this work has been the characterization of the motility pattern of cryopreserved sea bream semen after exposure to a dumpsite leachate sample, for the identification of the best representative parameters to be used as endpoints in an ecotoxicological bioassay. Sperm motility was evaluated both visually and by computer-assisted analysis; parameters concerning motility on activation and those describing it at times after activation (duration parameters) were assessed and compared in terms of sensitivity, reliability and methodology of assessment by means of multivariate analyses. The EC50 values of the evaluated endpoints ranged between 2.3 and 4.5 ml/L, except for the total motile percentage (aTM, 7.0 ml/L), which proved to be the least sensitive of all the tested parameters. According to the multivariate analyses, a difference in sensitivity between "activation" endpoints and "duration" ones can be inferred; by contrast, endpoints describing the total motile sperm and those describing the rapid sub-population appear equally informative, and the assessment methodology does not appear to be discriminating. In conclusion, the CRYO-Ecotest is a multi-endpoint bioassay that can be considered a promising and innovative ecotoxicological tool characterized by high plasticity, as its endpoints can easily be tailored to the needs of different environmental quality assessment programs. Copyright © 2015 Elsevier Inc. All rights reserved.
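An EC50 like those reported above is the dose producing a 50% response. A minimal sketch of one common estimator, log-linear interpolation between the two doses bracketing the 50% level; the dose-response values are hypothetical illustrations, not the paper's measurements:

```python
import math

# Hypothetical dose-response data: leachate dose (ml/L) vs % motile sperm
# relative to control. Illustration only, not the CRYO-Ecotest data.
doses = [0.5, 1.0, 2.0, 4.0, 8.0]
motile_pct = [95.0, 85.0, 62.0, 38.0, 12.0]

def ec50(doses, response):
    """EC50 by linear interpolation of the response against log-dose
    between the two doses that bracket the 50% level."""
    pairs = list(zip(doses, response))
    for (d0, r0), (d1, r1) in zip(pairs, pairs[1:]):
        if r0 >= 50.0 >= r1:
            t = (r0 - 50.0) / (r0 - r1)
            return math.exp(math.log(d0) + t * (math.log(d1) - math.log(d0)))
    raise ValueError("50% level not bracketed by the tested doses")

estimate = ec50(doses, motile_pct)
```

In practice a fitted log-logistic model is usually preferred over interpolation; the sketch shows only the bracketing idea.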
Soller, Jeffrey A; Eftim, Sorina E; Nappier, Sharon P
2018-01-01
Understanding pathogen risks is a critically important consideration in the design of water treatment, particularly for potable reuse projects. As an extension to our published microbial risk assessment methodology to estimate infection risks associated with Direct Potable Reuse (DPR) treatment train unit process combinations, herein, we (1) provide an updated compilation of pathogen density data in raw wastewater and dose-response models; (2) conduct a series of sensitivity analyses to consider potential risk implications using updated data; (3) evaluate the risks associated with log credit allocations in the United States; and (4) identify reference pathogen reductions needed to consistently meet currently applied benchmark risk levels. Sensitivity analyses illustrated changes in cumulative annual risk estimates, the significance of which depends on the pathogen group driving the risk for a given treatment train. For example, updates to norovirus (NoV) raw wastewater values and use of a NoV dose-response approach, capturing the full range of uncertainty, increased risks associated with one of the treatment trains evaluated, but not the other. Additionally, compared to traditional log-credit allocation approaches, our results indicate that the risk methodology provides more nuanced information about how consistently public health benchmarks are achieved. Our results indicate that viruses need to be reduced by 14 logs or more to consistently achieve currently applied benchmark levels of protection associated with DPR. The refined methodology, updated model inputs, and log credit allocation comparisons will be useful to regulators considering DPR projects and design engineers as they consider which unit treatment processes should be employed for particular projects. Published by Elsevier Ltd.
Marzulli, F; Maguire, H C
1982-02-01
Several guinea-pig predictive test methods were evaluated by comparison of results with those obtained with human predictive tests, using ten compounds that have been used in cosmetics. The method involves the statistical analysis of the frequency with which guinea-pig tests agree with the findings of tests in humans. In addition, the frequencies of false positive and false negative predictive findings are considered and statistically analysed. The results clearly demonstrate the superiority of adjuvant tests (complete Freund's adjuvant) in determining skin sensitizers and the overall superiority of the guinea-pig maximization test in providing results similar to those obtained by human testing. A procedure is suggested for utilizing adjuvant and non-adjuvant test methods for characterizing compounds as of weak, moderate or strong sensitizing potential.
Ciatto, Stefano; Bonardi, Rita; Lombardi, Claudio; Zappa, Marco; Gervasi, Ginetta
2002-01-01
To evaluate the sensitivity of transrectal ultrasonography (TRUS) for prostate cancer. A consecutive series of 170 prostate cancers was identified by matching the local cancer registry and TRUS archives at the Centro per lo Studio e la Prevenzione Oncologica of Florence. TRUS sensitivity was determined as the ratio of TRUS-positive cancers to total prostate cancers occurring at different intervals from the TRUS date. Univariate and multivariate analyses of sensitivity determinants were performed. Sensitivity at 6 months, 1, 2 and 3 years after the test was 94.1% (95% CI, 90-98), 89.8% (95% CI, 85-95), 80.4% (95% CI, 74-87) and 74.1% (95% CI, 68-81%), respectively. A statistically significantly higher sensitivity of TRUS was observed only when digital rectal examination was suspicious, whereas no association with sensitivity was observed for age, prostate-specific antigen or prostate-specific antigen density. The study provided a reliable estimate of TRUS sensitivity, particularly as it was checked against a cancer registry: observed sensitivity was high, at least of the same magnitude as that of other cancer screening tests. TRUS, which is known to allow for considerable diagnostic anticipation and is more specific than prostate-specific antigen, might still be considered for its contribution to a screening approach.
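Interval sensitivities with confidence limits like those above reduce to a proportion of detected cancers with a binomial confidence interval. A minimal sketch using the normal approximation; the counts are hypothetical illustrations, not the Florence registry data:

```python
import math

def sensitivity_with_ci(detected, total, z=1.96):
    """Sensitivity as detected/total with a normal-approximation 95% CI,
    clipped to [0, 1]."""
    p = detected / total
    se = math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical counts: 150 of 160 cancers TRUS-positive within 6 months.
sens, lo, hi = sensitivity_with_ci(150, 160)
```

For small denominators or proportions near 1 (as here), an exact or Wilson interval is the safer choice; the normal approximation is shown only for brevity.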
Simulation-based sensitivity analysis for non-ignorably missing data.
Yin, Peng; Shi, Jian Q
2017-01-01
Sensitivity analysis is popular in dealing with missing-data problems, particularly for non-ignorable missingness, where the full-likelihood method cannot be adopted. It analyses how sensitively the conclusions (output) depend on assumptions or parameters (input) about the missing data, i.e. the missing-data mechanism. We refer to models subject to this uncertainty as sensitivity models. To make conventional sensitivity analysis more useful in practice, we need to define simple and interpretable statistical quantities to assess the sensitivity models and enable evidence-based analysis. In this paper we propose a novel approach that investigates the plausibility of each missing-data mechanism model assumption by comparing datasets simulated from various MNAR models with the observed data non-parametrically, using K-nearest-neighbour distances. Some asymptotic theory is also provided. A key step of this method is to plug in a plausibility evaluation system for each sensitivity parameter, to select plausible values and reject unlikely ones, instead of considering all proposed values of the sensitivity parameters as in the conventional sensitivity analysis method. The method is generic and has been applied successfully to several specific models in this paper, including a meta-analysis model with publication bias, analysis of incomplete longitudinal data and mean estimation with non-ignorable missing data.
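The comparison step can be sketched concretely: simulate data under candidate values of the sensitivity parameter and score each candidate by the nearest-neighbour distance between simulated and observed samples. The toy shift models and distributions below are hypothetical illustrations of the idea, not the paper's models:

```python
import random

random.seed(42)

def nn_distance(sim, obs):
    """Mean distance from each observed point to its nearest simulated
    point; smaller values mean the simulation resembles the data more."""
    return sum(min(abs(o - s) for s in sim) for o in obs) / len(obs)

# Hypothetical observed sample (e.g. the non-missing responses).
observed = [random.gauss(1.0, 1.0) for _ in range(100)]

# Data simulated under two candidate sensitivity-parameter values of an
# assumed MNAR mechanism (toy location-shift models for illustration).
sim_small_shift = [random.gauss(1.0, 1.0) for _ in range(500)]
sim_large_shift = [random.gauss(4.0, 1.0) for _ in range(500)]

d_small = nn_distance(sim_small_shift, observed)
d_large = nn_distance(sim_large_shift, observed)
plausible = "small shift" if d_small < d_large else "large shift"
```

Candidates whose simulated data sit far from the observed data (large distance) would be rejected as implausible values of the sensitivity parameter.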
Rahman, Md Toufiq; Codlin, Andrew J; Rahman, Md Mahfuzur; Nahar, Ayenun; Reja, Mehdi; Islam, Tariqul; Qin, Zhi Zhen; Khan, Md Abdus Shakur; Banu, Sayera; Creswell, Jacob
2017-05-01
Computer-aided reading (CAR) of medical images is becoming increasingly common, but few studies exist for CAR in tuberculosis (TB). We designed a prospective study evaluating CAR for chest radiography (CXR) as a triage tool before Xpert MTB/RIF (Xpert). Consecutively enrolled adults in Dhaka, Bangladesh, with TB symptoms received CXR and Xpert. Each image was scored by CAR and graded by a radiologist. We compared CAR with the radiologist for sensitivity and specificity, area under the receiver operating characteristic curve (AUC), and calculated the potential Xpert tests saved. A total of 18 036 individuals were enrolled. TB prevalence by Xpert was 15%. The radiologist graded 49% of CXRs as abnormal, resulting in 91% sensitivity and 58% specificity. At a similar sensitivity, CAR had a lower specificity (41%), saving fewer (36%) Xpert tests. The AUC for CAR was 0.74 (95% CI 0.73-0.75). CAR performance declined with increasing age. The radiologist grading was superior across all sub-analyses. Using CAR can save Xpert tests, but the radiologist's specificity was superior. Differentiated CAR thresholds may be required for different populations. Access to, and costs of, human readers must be considered when deciding to use CAR software. More studies are needed to evaluate CAR using different screening approaches. Copyright ©ERS 2017.
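An AUC like the 0.74 reported for CAR can be computed directly from continuous scores and binary outcomes via the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with hypothetical scores (not the study's data):

```python
def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as the probability that a random positive score exceeds a
    random negative score (Mann-Whitney U / (n_pos * n_neg)),
    with ties counted as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical CAR abnormality scores for illustration only.
tb_positive = [0.9, 0.8, 0.7, 0.4, 0.6]
tb_negative = [0.5, 0.3, 0.6, 0.2, 0.1]

auc = auc_mann_whitney(tb_positive, tb_negative)  # -> 0.9
```

The O(n_pos * n_neg) double loop is fine for a sketch; rank-based implementations handle large cohorts like the 18 036 enrolled here.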
Jia, Yongliang; Leung, Siu-wai; Lee, Ming-Yuen; Cui, Guozhen; Huang, Xiaohui; Pan, Fongha
2013-01-01
Objective. The randomized controlled trials (RCTs) on Guanxinning injection (GXN) in treating angina pectoris were published only in Chinese and have not been systematically reviewed. This study aims to provide a PRISMA-compliant and internationally accessible systematic review to evaluate the efficacy of GXN in treating angina pectoris. Methods. The RCTs were included according to prespecified eligibility criteria. Meta-analysis was performed to evaluate the symptomatic (SYMPTOMS) and electrocardiographic (ECG) improvements after treatment. Odds ratios (ORs) were used to measure effect sizes. Subgroup analysis, sensitivity analysis, and metaregression were conducted to evaluate the robustness of the results. Results. Sixty-five RCTs published between 2002 and 2012 with 6064 participants were included. Overall ORs comparing GXN with other drugs were 3.32 (95% CI: [2.72, 4.04]) in SYMPTOMS and 2.59 (95% CI: [2.14, 3.15]) in ECG. Subgroup analysis, sensitivity analysis, and metaregression found no statistically significant dependence of overall ORs upon specific study characteristics. Conclusion. This meta-analysis of eligible RCTs provides evidence that GXN is effective in treating angina pectoris. This evidence warrants further RCTs of higher quality, longer follow-up periods, larger sample sizes, and multicentre/multicountry designs for more extensive subgroup, sensitivity, and metaregression analyses. PMID:23634167
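A pooled OR like the 3.32 reported above is typically an inverse-variance weighted average of per-trial log odds ratios. A minimal fixed-effect sketch; the 2x2 tables are hypothetical illustrations, not the included trials:

```python
import math

def pooled_or_inverse_variance(tables):
    """Fixed-effect pooled odds ratio: each study's log-OR is weighted by
    the inverse of its variance, estimated as 1/a + 1/b + 1/c + 1/d for a
    2x2 table (a, b, c, d)."""
    num = den = 0.0
    for a, b, c, d in tables:
        log_or = math.log((a * d) / (b * c))
        weight = 1.0 / (1 / a + 1 / b + 1 / c + 1 / d)
        num += weight * log_or
        den += weight
    return math.exp(num / den)

# Hypothetical tables: (improved, not improved) for treatment vs control.
tables = [(20, 10, 10, 20), (30, 15, 15, 30)]
pooled = pooled_or_inverse_variance(tables)
```

A random-effects model (e.g. DerSimonian-Laird) adds a between-study variance component to the weights; the fixed-effect version above shows only the weighting idea.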
Cook, Karon F; Kallen, Michael A; Bombardier, Charles; Bamer, Alyssa M; Choi, Seung W; Kim, Jiseon; Salem, Rana; Amtmann, Dagmar
2017-01-01
To evaluate whether items of three measures of depressive symptoms function differently in persons with spinal cord injury (SCI) than in persons from a primary care sample. This study was a retrospective analysis of responses to the Patient Health Questionnaire depression scale, the Center for Epidemiological Studies Depression scale, and the National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS®) version 1.0 eight-item depression short form 8b (PROMIS-D). The presence of differential item function (DIF) was evaluated using ordinal logistic regression. No items of any of the three target measures were flagged for DIF based on standard criteria. In follow-up sensitivity analyses, the criterion was changed to make the analysis more sensitive to potential DIF. Scores were corrected for DIF flagged under this criterion. Minimal differences were found between the original scores and those corrected for DIF under the sensitivity criterion. The three depression screening measures evaluated in this study did not perform differently in samples of individuals with SCI compared to general and community samples. Transdiagnostic symptoms did not appear to spuriously inflate depression severity estimates when administered to people with SCI.
Mondoulet, Lucie; Dioszeghy, Vincent; Busato, Florence; Plaquet, Camille; Dhelft, Véronique; Bethune, Kevin; Leclere, Laurence; Daviaud, Christian; Ligouis, Mélanie; Sampson, Hugh; Dupont, Christophe; Tost, Jörg
2018-05-19
Epicutaneous immunotherapy (EPIT) is a promising method for treating food allergies. In animal models, EPIT induces sustained unresponsiveness and prevents further sensitization mediated by Tregs. Here, we elucidate the mechanisms underlying the therapeutic effect of EPIT, by characterizing the kinetics of DNA methylation changes in sorted cells from spleen and blood and by evaluating its persistence and bystander effect compared to oral immunotherapy (OIT). BALB/c mice orally sensitized to peanut proteins (PPE) were treated by EPIT using a PPE-patch or by PPE-OIT. Another set of peanut-sensitized mice treated by EPIT or OIT were sacrificed following a protocol of sensitization to OVA. DNA methylation was analysed during immunotherapy and 8 weeks after the end of treatment in sorted cells from spleen and blood by pyrosequencing. Humoral and cellular responses were measured during and after immunotherapy. Analyses showed a significant hypermethylation of the Gata3 promoter detectable only in Th2 cells for EPIT from the 4th week and a significant hypomethylation of the Foxp3 promoter in CD62L + Tregs, which was sustained only for EPIT. In addition, mice treated with EPIT were protected from subsequent sensitization and maintained the epigenetic signature characteristic for EPIT. Our study demonstrates that EPIT leads to a unique and stable epigenetic signature in specific T cell compartments with downregulation of Th2 key regulators and upregulation of Treg transcription factors, likely explaining the sustainability of protection and the observed bystander effect. This article is protected by copyright. All rights reserved.
Evaluating the Discriminant Accuracy of a Grammatical Measure With Spanish-Speaking Children
Gutiérrez-Clellen, Vera F.; Restrepo, M. Adelaida; Simón-Cereijido, Gabriela
2012-01-01
Purpose The purpose of this study was to evaluate the discriminant accuracy of a grammatical measure for the identification of language impairment in Latino Spanish-speaking children. The authors hypothesized that if exposure to and use of English as a second language have an effect on the first language, bilingual children might exhibit lower rates of grammatical accuracy than their peers and be more likely to be misclassified. Method Eighty children with typical language development and 80 with language impairment were sampled from 4 different geographical regions and compared using linear discriminant function analysis. Results Results indicated fair-to-good sensitivity from 4;0 to 5;1 years, good sensitivity from 5;2 to 5;11 years, and poor sensitivity above age 6 years. The discriminant functions derived from the exploratory studies were able to predict group membership in confirmatory analyses with fair-to-excellent sensitivity up to age 6 years. Children who were bilingual did not show lower scores and were not more likely to be misclassified compared with their Spanish-only peers. Conclusions The measure seems to be appropriate for identifying language impairment in either Spanish-dominant or Spanish-only speakers between 4 and 6 years of age. However, for older children, supplemental testing is necessary. PMID:17197491
Bhatti, Aftab A; Chugtai, Aamir; Haslam, Philip; Talbot, David; Rix, David A; Soomro, Naeem A
2005-11-01
To prospectively compare the accuracy of multislice spiral computed tomographic angiography (CTA) and magnetic resonance angiography (MRA) in evaluating the renal vascular anatomy in potential living renal donors. Thirty-one donors underwent multislice spiral CTA and gadolinium-enhanced MRA. In addition to axial images, multiplanar reconstruction and maximum intensity projections were used to display the renal vascular anatomy. Twenty-four donors had a left laparoscopic donor nephrectomy (LDN), whereas seven had right open donor nephrectomy (ODN); LDN was only considered if the renal vascular anatomy was favourable on the left. CTA and MRA images were analysed by two radiologists independently. The radiological and surgical findings were correlated after the surgery. CTA showed 33 arteries and 32 veins (100% sensitivity) whereas MRA showed 32 arteries and 31 veins (97% sensitivity). CTA detected all five accessory renal arteries whereas MRA only detected one. CTA also identified all three accessory renal veins whereas MRA identified two. CTA had a sensitivity of 97% and 47% for left lumbar and left gonadal veins, whereas MRA had a sensitivity of 74% and 46%, respectively. Multislice spiral CTA with three-dimensional reconstruction was more accurate than MRA for both renal arterial and venous anatomy.
Heng, Siow-Chin; Slavin, Monica A; Al-Badriyeh, Daoud; Kirsa, Sue; Seymour, John F; Grigg, Andrew; Thursky, Karin; Bajel, Ashish; Nation, Roger L; Kong, David C M
2013-07-01
Fluconazole, posaconazole and voriconazole are used prophylactically in patients with acute myeloid leukaemia (AML). This study evaluated the clinical and economic outcomes of these agents when used in AML patients undergoing consolidation chemotherapy. A retrospective chart review (2003-10) of AML patients receiving consolidation chemotherapy was performed. Patients were followed through their first cycle of consolidation chemotherapy. Antifungal prescribing patterns, clinical outcomes and resource consumptions were recorded. A decision analytical model was developed to depict the downstream consequences of using each antifungal agent, with success defined as completion of the designated course of initial antifungal prophylaxis without developing invasive fungal disease (IFD). Cost-effectiveness and sensitivity analyses were performed. A total of 106 consecutive patients were analysed. Baseline characteristics and predisposing factors for IFD were comparable between groups. Three IFDs (one proven, one probable and one suspected) occurred, all in the posaconazole group. Patients receiving posaconazole had the highest rate of intolerance requiring drug cessation (13% versus 7% in each of the fluconazole and voriconazole groups). Fluconazole conferred overall savings per patient of 26% over posaconazole and 13% over voriconazole. Monte Carlo simulation demonstrated a mean cost saving with fluconazole of AU$8430 per patient (95% CI AU$5803-AU$11 054) versus posaconazole and AU$3681 per patient (95% CI AU$990-AU$6319) versus voriconazole. One-way sensitivity analyses confirmed the robustness of the model. This is the first study to show that, in the setting of consolidation therapy for AML, fluconazole is the most cost-effective approach to antifungal prophylaxis compared with posaconazole or voriconazole.
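Monte Carlo simulation like that used above propagates uncertainty in cost inputs into a distribution of per-patient savings, summarized by a mean and a percentile confidence interval. A minimal sketch; the cost distributions below are illustrative assumptions, not the study's model inputs:

```python
import random

random.seed(0)

def mc_cost_saving_ci(n=10000):
    """Monte Carlo estimate of per-patient cost saving: draw uncertain
    cost inputs for each arm, accumulate the difference, and report the
    mean with a 95% percentile interval."""
    savings = []
    for _ in range(n):
        cost_flu = random.gauss(5000, 800)     # hypothetical fluconazole arm
        cost_posa = random.gauss(13400, 1200)  # hypothetical posaconazole arm
        savings.append(cost_posa - cost_flu)
    savings.sort()
    mean = sum(savings) / n
    return mean, savings[int(0.025 * n)], savings[int(0.975 * n)]

mean_saving, ci_lo, ci_hi = mc_cost_saving_ci()
```

In a full decision-analytic model the draws would also cover probabilities of IFD, intolerance and downstream treatment costs, not just the two arm costs sketched here.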
Bamrungsawad, Naruemon; Upakdee, Nilawan; Pratoomsoot, Chayanin; Sruamsiri, Rosarin; Dilokthornsakul, Piyameth; Dechanont, Supinya; Wu, David Bin-Chia; Dejthevaporn, Charungthai; Chaiyakunapruk, Nathorn
2016-07-01
Intravenous immunoglobulin (IVIG) has been recommended for steroid-resistant chronic inflammatory demyelinating polyradiculoneuropathy (CIDP). The treatment, however, is very costly to the healthcare system, and there remains no evidence of its economic justifiability. This study aimed to conduct an economic evaluation (EE) of IVIG plus corticosteroids in steroid-resistant CIDP in Thailand. A Markov model was constructed to estimate the lifetime costs and outcomes for IVIG plus corticosteroids in comparison with immunosuppressants plus corticosteroids in steroid-resistant CIDP patients from a societal perspective. Efficacy and utility data were obtained from clinical literature, meta-analyses, medical record reviews, and patient interviews. Cost data were obtained from list prices, an electronic hospital database, published sources, and patient interviews. All costs [in 2015 US dollars (US$)] and outcomes were discounted at 3 % annually. One-way and probabilistic sensitivity analyses were conducted. In the base-case, the incremental costs and quality-adjusted life years (QALYs) of IVIG plus corticosteroids versus immunosuppressants plus corticosteroids were US$2112.02 and 1.263 QALYs, respectively, resulting in an incremental cost-effectiveness ratio (ICER) of US$1672.71 per QALY gained. Sensitivity analyses revealed that the utility value of disabled patients had the greatest influence on the ICER. At a societal willingness-to-pay threshold in Thailand of US$4672 per QALY gained, IVIG plus corticosteroids had a 92.1 % probability of being cost effective. At a threshold of US$4672 per QALY gained, IVIG plus corticosteroids is considered a cost-effective treatment for steroid-resistant CIDP patients in Thailand.
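The ICER itself is a simple ratio: incremental cost divided by incremental QALYs, compared against the willingness-to-pay threshold. A sketch using the base-case values from the abstract (dividing the rounded published inputs gives roughly, but not exactly, the published US$1672.71 per QALY; small rounding differences are expected):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# Base-case incremental values reported in the abstract (2015 US$).
ratio = icer(2112.02, 1.263)

# Thai societal willingness-to-pay threshold per QALY gained.
wtp_threshold = 4672.0
cost_effective = ratio <= wtp_threshold
```

The probabilistic sensitivity analysis reported above repeats this calculation over many draws of the uncertain inputs; the 92.1% figure is the fraction of draws in which the ICER falls below the threshold.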
Design of a residential microgrid in Lagos del Cacique, Bucaramanga, Colombia
NASA Astrophysics Data System (ADS)
Bellon, D.; González Estrada, O. A.; Martínez, A.
2017-12-01
This paper presents a model that analyses the options for providing energy to a grid-connected house in Lagos del Cacique, Bucaramanga, Colombia. Three power supplies were considered: a photovoltaic array, a 1 kW wind turbine and a 2.6 kW gasoline generator, as well as a battery for energy storage. The variables considered for the sensitivity analysis correspond to the price of gasoline and the variation in loads; these sensitivity variables were specified in order to evaluate the effect of uncertainty. The simulations, performed in the HOMER software, suggest an optimal microgrid configuration of generator, photovoltaic panel and battery.
Ropars, Pascale; Angers-Blondin, Sandra; Gagnon, Marianne; Myers-Smith, Isla H; Lévesque, Esther; Boudreau, Stéphane
2017-08-01
Shrub densification has been widely reported across the circumpolar arctic and subarctic biomes in recent years. Long-term analyses based on dendrochronological techniques applied to shrubs have linked this phenomenon to climate change. However, the multi-stemmed structure of shrubs makes them difficult to sample and therefore leads to non-uniform sampling protocols among shrub ecologists, who will favor either root collars or stems to conduct dendrochronological analyses. Through a comparative study of the use of root collars and stems of Betula glandulosa, a common North American shrub species, we evaluated the relative sensitivity of each plant part to climate variables and assessed whether this sensitivity is consistent across three different types of environments in northwestern Québec, Canada (terrace, hilltop and snowbed). We found that root collars had greater sensitivity to climate than stems and that these differences were maintained across the three types of environments. Growth at the root collar was best explained by spring precipitation and summer temperature, whereas stem growth showed weak and inconsistent responses to climate variables. Moreover, sensitivity to climate was not consistent among plant parts, as individuals having climate-sensitive root collars did not tend to have climate-sensitive stems. These differences in sensitivity of shrub parts to climate highlight the complexity of resource allocation in multi-stemmed plants. Whereas stem initiation and growth are driven by microenvironmental variables such as light availability and competition, root collars integrate the growth of all plant parts instead, rendering them less affected by mechanisms such as competition and more responsive to signals of global change. 
Although further investigations are required to determine the degree to which these findings are generalizable across the tundra biome, our results indicate that consistency and caution in the choice of plant parts are a key consideration for the success of future dendroclimatological studies on shrubs. © 2017 John Wiley & Sons Ltd.
A Laboratory-Based Evaluation of Four Rapid Point-of-Care Tests for Syphilis
Causer, Louise M.; Kaldor, John M.; Fairley, Christopher K.; Donovan, Basil; Karapanagiotidis, Theo; Leslie, David E.; Robertson, Peter W.; McNulty, Anna M.; Anderson, David; Wand, Handan; Conway, Damian P.; Denham, Ian; Ryan, Claire; Guy, Rebecca J.
2014-01-01
Background Syphilis point-of-care tests may reduce morbidity and ongoing transmission by increasing the proportion of people rapidly treated. Syphilis stage and co-infection with HIV may influence test performance. We evaluated four commercially available syphilis point-of-care devices in a head-to-head comparison using sera from laboratories in Australia. Methods Point-of-care tests were evaluated using sera stored at Sydney and Melbourne laboratories. Sensitivity and specificity were calculated by standard methods, comparing point-of-care results to treponemal immunoassay (IA) reference test results. Additional analyses by clinical syphilis stage, HIV status, and non-treponemal antibody titre were performed. Non-overlapping 95% confidence intervals (CI) were considered statistically significant differences in estimates. Results In total 1203 specimens were tested (736 IA-reactive, 467 IA-nonreactive). Point-of-care test sensitivities were: Determine 97.3%(95%CI:95.8–98.3), Onsite 92.5%(90.3–94.3), DPP 89.8%(87.3–91.9) and Bioline 87.8%(85.1–90.0). Specificities were: Determine 96.4%(94.1–97.8), Onsite 92.5%(90.3–94.3), DPP 98.3%(96.5–99.2), and Bioline 98.5%(96.8–99.3). Sensitivity of the Determine test was 100% for primary and 100% for secondary syphilis. The three other tests had reduced sensitivity among primary (80.4–90.2%) compared to secondary syphilis (94.3–98.6%). No significant differences in sensitivity were observed by HIV status. Test sensitivities were significantly higher among high-RPR titre (RPR≥8) (range: 94.6–99.5%) than RPR non-reactive infections (range: 76.3–92.9%). Conclusions The Determine test had the highest sensitivity overall. All tests were most sensitive among high-RPR titre infections. 
Point-of-care tests have a role in syphilis control programs; however, in developed countries with established laboratory infrastructure, the lower sensitivities of some tests in primary syphilis suggest that these would need to be supplemented with additional tests among populations where syphilis incidence is high, to avoid missing early syphilis cases. PMID:24618681
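The sensitivity/specificity estimates and non-overlapping-CI comparisons used in the study above can be reproduced with a short calculation. The sketch below uses the Wilson score interval; the counts are illustrative (chosen only to be consistent with the reported Determine figures), not the study's raw data:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a proportion."""
    p = successes / n
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity against the reference test."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts vs. the treponemal IA reference (736 reactive, 467 non-reactive)
sens, spec = sens_spec(tp=716, fn=20, tn=450, fp=17)
sens_lo, sens_hi = wilson_ci(716, 736)
# Two estimates are called significantly different when their 95% CIs do not overlap.
```

With these counts, sensitivity is about 97.3% with a Wilson 95% CI of roughly 95.8–98.3%.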
GEOS-3 phase B ground truth summary
NASA Technical Reports Server (NTRS)
Parsons, C. L.; Goodman, L. R.
1975-01-01
Ground truth data collected during the experiment systems calibration and evaluation phase of the Geodynamics Experimental Ocean Satellite (GEOS-3) experiment are summarized. Both National Weather Service analyses and aircraft sensor data are included. The data are structured to facilitate the use of the various data products in calibrating the GEOS-3 radar altimeter and in assessing the altimeter's sensitivity to geophysical phenomena. Brief statements are made concerning the quality and completeness of the included data.
Wu, Yiping; Yu, Wenfang; Yang, Benhong; Li, Pan
2018-05-15
The use of different food additives and their active metabolites has been found to cause serious problems to human health. Thus, considering the potential effects on human health, developing a sensitive and credible analytical method for different foods is important. Herein, the application of solvent-driven self-assembled Au nanoparticles (Au NPs) for the rapid and sensitive detection of food additives in different commercial products is reported. The assembled substrates are highly sensitive and exhibit excellent uniformity and reproducibility because of uniformly distributed and high-density hot spots. The sensitive analyses of ciprofloxacin (CF), diethylhexyl phthalate (DEHP), tartrazine and azodicarbonamide at the 0.1 ppm level using this surface-enhanced Raman spectroscopy (SERS) substrate are given, and the results show that Au NP arrays can serve as efficient SERS substrates for the detection of food additives. More importantly, SERS spectra of several commercial liquors and sweet drinks are obtained to evaluate the addition of illegal additives. This SERS active platform can be used as an effective strategy in the detection of prohibited additives in food.
Evaluation of risk factors for perforated peptic ulcer.
Yamamoto, Kazuki; Takahashi, Osamu; Arioka, Hiroko; Kobayashi, Daiki
2018-02-15
The aim of this study was to evaluate prediction factors for perforated peptic ulcer (PPU). A case-control study was performed at St. Luke's International Hospital in Tokyo, Japan, between August 2004 and March 2016. All patients diagnosed with PPU were included. As controls, patients whose age, sex and date of CT scan corresponded to those of the PPU subjects were included at a ratio of 2 controls for every PPU subject. All data, such as past medical histories, physical findings, and laboratory data, were collected through chart review. Univariate and multivariate analyses with logistic regression were conducted, and receiver operating characteristic (ROC) curves were calculated to assess validity. Sensitivity analyses using a stepwise method and conditional logistic regression were performed to confirm the results. A total of 408 patients were included in this study; 136 were patients with PPU and 272 were controls. Univariate analysis showed statistical significance in many categories. Four different multivariate models were fitted, and significant differences were found for muscular defense and a history of peptic ulcer disease (PUD) in all models. The conditional forced-entry analysis of muscular defense showed an odds ratio (OR) of 23.8 (95% confidence interval [CI]: 5.70-100.0), and the analysis of PUD history showed an OR of 6.40 (95% CI: 1.13-36.2). The sensitivity analyses showed consistent results, with an OR of 23.8-366.2 for muscular defense and an OR of 3.67-7.81 for PUD history. The area under the curve (AUC) of all models was high enough to confirm the results. However, anticoagulants, known risk factors for PUD, did not increase the risk of PPU in our study; the conditional forced-entry analysis of anticoagulant use showed an OR of 0.85 (95% CI: 0.03-22.3). 
The evaluation of prediction factors and development of a prediction rule for PPU may help our decision making in performing a CT scan for patients with acute abdominal pain.
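The odds ratios and CIs above come from (conditional) logistic regression; for a single 2x2 exposure table, a crude OR with a Woolf confidence interval can be computed directly. A minimal sketch with made-up counts (not the study's data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table with Woolf 95% CI.
    a = exposed cases, b = unexposed cases, c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, exp(log(or_) - z * se_log), exp(log(or_) + z * se_log)

# Hypothetical exposure counts for illustration only
or_, lo, hi = odds_ratio_ci(a=10, b=5, c=4, d=8)
```

The wide interval from small cell counts mirrors the wide CIs reported above (e.g. 5.70-100.0 for muscular defense).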
Validation of a portable nitric oxide analyzer for screening in primary ciliary dyskinesias.
Harris, Amanda; Bhullar, Esther; Gove, Kerry; Joslin, Rhiannon; Pelling, Jennifer; Evans, Hazel J; Walker, Woolf T; Lucas, Jane S
2014-02-10
Nasal nitric oxide (nNO) levels are very low in primary ciliary dyskinesia (PCD), and nNO measurement is used as a screening test. We assessed the reliability and usability of a hand-held analyser in comparison with a stationary nitric oxide (NO) analyser in 50 participants (15 healthy, 13 PCD, 22 other respiratory diseases; age 6-79 years). Nasal NO was measured using a stationary NO analyser during a breath-holding maneuver, and using a hand-held analyser during tidal breathing, sampling at 2 ml/sec or 5 ml/sec. The three methods were compared for their specificity and sensitivity as a screen for PCD, their success rate in different age groups, within-subject repeatability and acceptability. Correlation between methods was assessed. Valid nNO measurements were obtained in 94% of participants using the stationary analyser, 96% using the hand-held analyser at 5 ml/sec, and 76% at 2 ml/sec. The hand-held device at 5 ml/sec had excellent sensitivity and specificity as a screening test for PCD during tidal breathing (cut-off of 30 nL/min, 100% sensitivity, >95% specificity). The cut-off using the stationary analyser during breath-hold was 38 nL/min (100% sensitivity, 95% specificity). The stationary and hand-held analyser (5 ml/sec) showed reasonable within-subject repeatability (coefficient of variation = 15%). The hand-held NO analyser provides a promising screening tool for PCD.
SBML-SAT: a systems biology markup language (SBML) based sensitivity analysis tool
Zi, Zhike; Zheng, Yanan; Rundell, Ann E; Klipp, Edda
2008-01-01
Background It has long been recognized that sensitivity analysis plays a key role in modeling and analyzing cellular and biochemical processes. Systems biology markup language (SBML) has become a well-known platform for coding and sharing mathematical models of such processes. However, current SBML-compatible software tools are limited in their ability to perform global sensitivity analyses of these models. Results This work introduces a freely downloadable software package, SBML-SAT, which implements algorithms for simulation, steady-state analysis, robustness analysis, and local and global sensitivity analysis of SBML models. This software tool extends current capabilities through its execution of global sensitivity analyses using multi-parametric sensitivity analysis, partial rank correlation coefficient, Sobol's method, and weighted average of local sensitivity analyses, its ability to handle systems with discontinuous events, and its intuitive graphical user interface. Conclusion SBML-SAT provides the community of systems biologists a new tool for the analysis of their SBML models of biochemical and cellular processes. PMID:18706080
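One of the global methods listed above, the partial rank correlation coefficient (PRCC), measures the monotone association between a sampled parameter and the model output while controlling for the other parameters. A generic standard-library sketch for two parameters (an illustration of the idea, not SBML-SAT's implementation):

```python
import random

def rankdata(xs):
    """Ranks (1..n) of a sequence, assuming no ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for r, i in enumerate(order):
        ranks[i] = r + 1.0
    return ranks

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def residuals(y, x):
    """Residuals of y after removing its least-squares linear dependence on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

def prcc(param, other, output):
    """PRCC of `param` with `output`, controlling for `other`."""
    rp, ro, ry = rankdata(param), rankdata(other), rankdata(output)
    return pearson(residuals(rp, ro), residuals(ry, ro))

# Toy model with Monte Carlo samples: output driven strongly by k1, weakly by k2
random.seed(1)
k1 = [random.random() for _ in range(200)]
k2 = [random.random() for _ in range(200)]
out = [10 * a + 0.1 * b for a, b in zip(k1, k2)]
```

Here `prcc(k1, k2, out)` is close to 1 while `prcc(k2, k1, out)` is smaller, reflecting k1's dominant influence on the output.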
Further evaluation of leisure items in the attention condition of functional analyses.
Roscoe, Eileen M; Carreau, Abbey; MacDonald, Jackie; Pence, Sacha T
2008-01-01
Research suggests that including leisure items in the attention condition of a functional analysis may produce engagement that masks sensitivity to attention. In this study, 4 individuals' initial functional analyses indicated that behavior was maintained by nonsocial variables (n = 3) or by attention (n = 1). A preference assessment was used to identify items for subsequent functional analyses. Four conditions were compared, attention with and without leisure items and control with and without leisure items. Following this, either high- or low-preference items were included in the attention condition. Problem behavior was more probable during the attention condition when no leisure items or low-preference items were included, and lower levels of problem behavior were observed during the attention condition when high-preference leisure items were included. These findings suggest how preferred items may hinder detection of behavioral function.
An optimal search filter for retrieving systematic reviews and meta-analyses
2012-01-01
Background Health-evidence.ca is an online registry of systematic reviews evaluating the effectiveness of public health interventions. Extensive searching of bibliographic databases is required to keep the registry up to date. However, search filters have been developed to assist in searching the extensive amount of indexed published literature. Search filters can be designed to find literature related to a certain subject (i.e. a content-specific filter) or to particular study designs (i.e. a methodological filter). The objective of this paper is to describe the development and validation of the health-evidence.ca Systematic Review search filter and to compare its performance to other available systematic review filters. Methods This analysis of search filters was conducted in MEDLINE, EMBASE, and CINAHL. The performance of thirty-one search filters in total was assessed. A validation data set of 219 articles indexed between January 2004 and December 2005 was used to evaluate each filter's sensitivity, specificity, precision and number needed to read. Results Nineteen of the 31 search filters were effective in retrieving a high proportion of relevant articles (sensitivity scores greater than 85%). The majority achieved a high degree of sensitivity at the expense of precision and yielded large result sets. The main advantage of the health-evidence.ca Systematic Review search filter in comparison to the other filters was that it maintained the same level of sensitivity while reducing the number of articles that needed to be screened. Conclusions The health-evidence.ca Systematic Review search filter is a useful tool for identifying published systematic reviews, with further screening to identify those evaluating the effectiveness of public health interventions. The narrower filter saves considerable time and resources during updates of this online resource, without sacrificing sensitivity. PMID:22512835
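The four performance measures used above are simple functions of a 2x2 retrieval table against the hand-screened gold standard. A minimal sketch with hypothetical counts (not the validation set's actual figures):

```python
def filter_metrics(tp, fp, fn, tn):
    """Search-filter performance against a gold-standard screened set."""
    sensitivity = tp / (tp + fn)   # share of relevant articles the filter retrieves
    specificity = tn / (tn + fp)   # share of irrelevant articles the filter excludes
    precision = tp / (tp + fp)     # fraction of retrieved articles that are relevant
    nnr = 1 / precision            # number needed to read per relevant article
    return sensitivity, specificity, precision, nnr

# Hypothetical filter: retrieves 1000 articles of which 190 are truly relevant,
# out of a corpus containing 219 relevant articles in total
sens, spec, prec, nnr = filter_metrics(tp=190, fp=810, fn=29, tn=8971)
```

A sensitivity of about 0.87 would clear the 85% threshold mentioned above, at the cost of reading roughly 5.3 articles per relevant hit.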
Jahn, Ingeborg; Börnhorst, Claudia; Günther, Frauke; Brand, Tilman
2017-02-15
During the last decades, sex and gender biases have been identified in various areas of biomedical and public health research, compromising the validity of research findings. In response, methodological requirements have been developed, but these are rarely translated into research practice. The aim of this study is to provide good practice examples of sex/gender-sensitive health research. We conducted a systematic search of research articles published in JECH between 2006 and 2014. An instrument was constructed to evaluate sex/gender sensitivity in four stages of the research process (background, study design, statistical analysis, discussion). In total, 37 articles covering diverse topics were included. Of these, 22 were rated as good practice examples in at least one stage; two articles achieved the highest ratings across all stages. Good examples in the background referred to available knowledge on sex/gender differences and sex/gender-informed theoretical frameworks. Related to the study design, good examples calculated sample sizes to be able to detect sex/gender differences, selected sex/gender-sensitive outcome/exposure indicators, or chose different cut-off values for male and female participants. Good examples of statistical analyses used interaction terms with sex/gender or different shapes of the estimated relationship for men and women. Examples of good discussions interpreted their findings in relation to social and biological explanatory models or questioned the statistical methods used to detect sex/gender differences. The identified good practice examples may inspire researchers to critically reflect on the relevance of sex/gender issues in their studies and help them translate methodological recommendations of sex/gender sensitivity into research practice.
Evaluation of uncertainties in the CRCM-simulated North American climate
NASA Astrophysics Data System (ADS)
de Elía, Ramón; Caya, Daniel; Côté, Hélène; Frigon, Anne; Biner, Sébastien; Giguère, Michel; Paquin, Dominique; Harvey, Richard; Plummer, David
2008-02-01
This work is a first step in the analysis of uncertainty sources in the RCM-simulated climate over North America. Three main sets of sensitivity studies were carried out: the first estimates the magnitude of internal variability, which is needed to evaluate the significance of changes in the simulated climate induced by any model modification. The second is devoted to the role of CRCM configuration as a source of uncertainty, in particular the sensitivity to nesting technique, domain size, and driving reanalysis. The third study aims to assess the relative importance of the previously estimated sensitivities by performing two additional sensitivity experiments: one in which the reanalysis driving data are replaced by data generated by the second-generation Coupled Global Climate Model (CGCM2), and another in which a different CRCM version is used. Results show that the internal variability, triggered by differences in initial conditions, is much smaller than the sensitivity to any other source. Results also show that the levels of uncertainty originating from freedom of choice in configuration parameters are comparable among themselves and are smaller than those due to the choice of CGCM or CRCM version used. These results suggest that uncertainty arising from latitude in CRCM configuration (the freedom to choose among domain sizes, nesting techniques and reanalysis datasets), although important, does not seem to be a major obstacle to climate downscaling. Finally, with the aim of evaluating the combined effect of the different uncertainties, the ensemble spread is estimated for a subset of the analysed simulations. Results show that downscaled surface temperature is in general more uncertain in the northern regions, while precipitation is more uncertain in the central and eastern US.
Quantitative aspects of inductively coupled plasma mass spectrometry
NASA Astrophysics Data System (ADS)
Bulska, Ewa; Wagner, Barbara
2016-10-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
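The external-calibration approach mentioned above amounts to fitting a least-squares line through standards of known concentration and inverting it for unknown samples; matrix-matched calibration uses the same machinery with standards prepared in the sample matrix. A minimal sketch with hypothetical intensities:

```python
def calibration_line(concs, signals):
    """Least-squares calibration line: signal = intercept + slope * concentration."""
    n = len(concs)
    mx, my = sum(concs) / n, sum(signals) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(concs, signals)) \
            / sum((x - mx) ** 2 for x in concs)
    intercept = my - slope * mx
    return intercept, slope

def quantify(signal, intercept, slope):
    """Invert the calibration line to estimate an unknown concentration."""
    return (signal - intercept) / slope

# Hypothetical ICP-MS standards (concentration in ng/mL vs. signal, arbitrary units)
a, b = calibration_line([0.0, 1.0, 2.0, 4.0], [0.1, 1.1, 2.1, 4.1])
unknown = quantify(2.6, a, b)
```

The non-zero intercept here plays the role of a blank signal; traceability in practice rests on the certified standards used to build the line.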
Systems engineering and integration: Cost estimation and benefits analysis
NASA Technical Reports Server (NTRS)
Dean, ED; Fridge, Ernie; Hamaker, Joe
1990-01-01
Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. 
In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
Lee, I-Jung; Huang, Shih-Yu; Tsou, Mei-Yung; Chan, Kwok-Hon; Chang, Kuang-Yi
2010-10-01
Data collection systems are very important for the practice of patient-controlled analgesia (PCA). This study aimed to evaluate 3 PCA data collection systems and select the most favorable system with the aid of multiattribute utility (MAU) theory. We developed a questionnaire with 10 items to evaluate the PCA data collection systems and 1 item for overall satisfaction, based on MAU theory. Three systems were compared in the questionnaire: a paper record, an optic card reader and a personal digital assistant (PDA). A pilot study demonstrated good internal and test-retest reliability of the questionnaire. A weighted utility score, combining the relative importance of individual items assigned by each participant and their responses to each question, was calculated for each system. Sensitivity analyses with distinct weighting protocols were conducted to evaluate the stability of the final results. Thirty potential users of a PCA data collection system were recruited for the study. The item "easy to use" had the highest median rank and received the heaviest mean weight among all items. MAU analysis showed that the PDA system had a higher utility score than the other 2 systems. Sensitivity analyses revealed that both inverse and reciprocal weighting processes favored the PDA system. High correlations between overall satisfaction and MAU scores from miscellaneous weighting protocols suggested good predictive validity of our MAU-based questionnaire. The PDA system was selected as the most favorable PCA data collection system by the MAU analysis. The item "easy to use" was the most important attribute of the PCA data collection system. MAU theory can evaluate alternatives by taking into account the individual preferences of stakeholders and aid in better decision-making.
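The weighted utility score described above can be written in a few lines: each respondent's item weights are normalised and combined with that respondent's ratings of a system. A minimal sketch (hypothetical weights and ratings, not the study's data):

```python
def mau_score(weights, ratings):
    """Multiattribute utility: normalised item weights times item ratings."""
    total = sum(weights)
    return sum((w / total) * r for w, r in zip(weights, ratings))

# Hypothetical respondent: "easy to use" weighted 3x the second item,
# ratings on a 1-5 scale for each candidate system
systems = {
    "paper": mau_score([3, 1], [2, 5]),
    "optic card": mau_score([3, 1], [3, 4]),
    "PDA": mau_score([3, 1], [5, 4]),
}
best = max(systems, key=systems.get)
```

Sensitivity analyses of the kind reported above would recompute `best` under alternative weighting protocols (e.g. inverse or reciprocal weights).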
Blumenthal, Kimberly G.; Parker, Robert A.; Shenoy, Erica S.; Walensky, Rochelle P.
2015-01-01
Background. Methicillin-sensitive Staphylococcus aureus (MSSA) bacteremia is a morbid infection. First-line MSSA therapies (nafcillin, oxacillin, cefazolin) are generally avoided in the 10% of patients reporting penicillin (PCN) allergy, but most of these patients are not truly allergic. We used a decision tree with sensitivity analyses to determine the optimal evaluation and treatment for patients with MSSA bacteremia and reported PCN allergy. Methods. Our model simulates 3 strategies: (1) no allergy evaluation, give vancomycin (Vanc); (2) allergy history–guided treatment: if history excludes anaphylactic features, give cefazolin (Hx-Cefaz); and (3) complete allergy evaluation with history-appropriate PCN skin testing: if skin test negative, give cefazolin (ST-Cefaz). Model outcomes included 12-week MSSA cure, recurrence, and death; allergic reactions including major, minor, and potentially iatrogenic; and adverse drug reactions. Results. Vanc results in the fewest patients achieving MSSA cure and the highest rate of recurrence (67.3%/14.8% vs 83.4%/9.3% for Hx-Cefaz and 84.5%/8.9% for ST-Cefaz) as well as the greatest frequency of allergic reactions (3.0% vs 2.4% for Hx-Cefaz and 1.7% for ST-Cefaz) and highest rates of adverse drug reactions (5.2% vs 4.6% for Hx-Cefaz and 4.7% for ST-Cefaz). Even in a “best case for Vanc” scenario, Vanc yields the poorest outcomes. ST-Cefaz is preferred to Hx-Cefaz although sensitive to input variations. Conclusions. Patients with MSSA bacteremia and a reported PCN allergy should have the allergy addressed for optimal treatment. Full allergy evaluation with skin testing seems to be preferred, although more data are needed. PMID:25991471
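The decision tree above compares strategies by expected outcome: each strategy is a set of probability-weighted branches. A stripped-down sketch (the branch probability is a placeholder; the 67.3%/84.5% cure rates are borrowed from the results above purely for illustration):

```python
def expected_value(branches):
    """Expected outcome of a strategy: sum of branch probability x payoff."""
    return sum(p * v for p, v in branches)

p_true_allergy = 0.05  # placeholder: fraction of reported-PCN-allergy patients truly allergic

# Vancomycin for all, vs. a history-guided strategy giving cefazolin
# to patients whose history excludes anaphylactic features
vanc_all = expected_value([(1.0, 0.673)])
hx_guided = expected_value([(p_true_allergy, 0.673),       # truly allergic -> vancomycin
                            (1 - p_true_allergy, 0.845)])  # not allergic -> cefazolin
```

Under these placeholder inputs the history-guided strategy dominates, mirroring the model's conclusion; a one-way sensitivity analysis would sweep `p_true_allergy` and re-compare the strategies.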
Methods of recording and analysing cough sounds.
Subburaj, S; Parvez, L; Rajagopalan, T G
1996-01-01
Efforts have been directed toward developing a computerized system for acquisition and multi-dimensional analysis of the cough sound. The system consists of a PC-AT486 computer with an ADC board having 12-bit resolution. The audio cough sound is acquired using a sensitive miniature microphone at a sampling rate of 8 kHz and simultaneously recorded in real time using a digital audio tape recorder, which also serves as a backup. Analysis of the cough sound is done in the time and frequency domains using the digitized data, which provide numerical values for key parameters such as cough counts, bouts, their intensity and latency. In addition, the duration of each event and the cough patterns provide a unique tool that allows objective evaluation of antitussive and expectorant drugs. Both on-line and off-line checks ensure error-free performance over long periods of time. The entire system has been evaluated for sensitivity, accuracy, precision and reliability. Successful use of this system in clinical studies has established what is perhaps the first integrated approach to the objective evaluation of cough.
Monahan, M; Ensor, J; Moore, D; Fitzmaurice, D; Jowett, S
2017-08-01
Essentials Correct duration of treatment after a first unprovoked venous thromboembolism (VTE) is unknown. We assessed when restarting anticoagulation was worthwhile based on patient risk of recurrent VTE. When the risk over a one-year period is 17.5%, restarting is cost-effective. However, sensitivity analyses indicate large uncertainty in the estimates. Background Following at least 3 months of anticoagulation therapy after a first unprovoked venous thromboembolism (VTE), there is uncertainty about the duration of therapy. Further anticoagulation therapy reduces the risk of having a potentially fatal recurrent VTE but at the expense of a higher risk of bleeding, which can also be fatal. Objective An economic evaluation sought to estimate the long-term cost-effectiveness of using a decision rule for restarting anticoagulation therapy vs. no extension of therapy in patients based on their risk of a further unprovoked VTE. Methods A Markov patient-level simulation model was developed, which adopted a lifetime time horizon with monthly time cycles and was from a UK National Health Service (NHS)/Personal Social Services (PSS) perspective. Results Base-case model results suggest that treating patients with a predicted 1 year VTE risk of 17.5% or higher may be cost-effective if decision makers are willing to pay up to £20 000 per quality adjusted life year (QALY) gained. However, probabilistic sensitivity analysis shows that the model was highly sensitive to overall parameter uncertainty and caution is warranted in selecting the optimal decision rule on cost-effectiveness grounds. Univariate sensitivity analyses indicate variables such as anticoagulation therapy disutility and mortality risks were very influential in driving model results. Conclusion This represents the first economic model to consider the use of a decision rule for restarting therapy for unprovoked VTE patients. 
Better data are required to predict long-term bleeding risks during therapy in this patient group.
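The cost-effectiveness decision in the model above reduces to comparing an incremental cost-effectiveness ratio (ICER) against the willingness-to-pay threshold of £20 000 per QALY. A minimal sketch with hypothetical lifetime costs and QALYs (not the model's outputs):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(icer_value, wtp=20000):
    """Restarting therapy is worthwhile if the ICER is at or below the threshold."""
    return icer_value <= wtp

# Hypothetical: restarting anticoagulation costs 3000 more and gains 0.5 QALYs
value = icer(cost_new=12000, qaly_new=8.0, cost_old=9000, qaly_old=7.5)
```

Probabilistic sensitivity analysis, as described above, would re-draw all model inputs from their distributions and report how often each strategy is cost-effective at the threshold.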
García-Domene, M C; Luque, M J; Díez-Ajenjo, M A; Desco-Esteban, M C; Artigas, J M
2018-02-01
To analyse the relationship between choroidal thickness and the visual perception of patients with high myopia but without retinal damage. All patients underwent ophthalmic evaluation, including a slit-lamp examination and dilated ophthalmoscopy, subjective refraction, best-corrected visual acuity, axial length, optical coherence tomography, contrast sensitivity function and sensitivity of the visual pathways. We included eleven eyes of subjects with high myopia. There are statistical correlations between choroidal thickness and almost all the contrast sensitivity values. The sensitivity of the magnocellular and koniocellular pathways is the most affected, and the homogeneity of the sensitivity of the magnocellular pathway depends on the choroidal thickness; when the thickness decreases, the sensitivity impairment extends from the center to the periphery of the visual field. Patients with high myopia without any fundus changes have visual impairments. We have found that choroidal thickness correlates with perceptual parameters such as contrast sensitivity or the mean defect and pattern standard deviation of the visual fields of some visual pathways. Our study shows that the magnocellular and koniocellular pathways are the most affected, so these patients have impaired motion perception and blue-yellow contrast perception.
Anaphylaxis to Gelofusine confirmed by in vitro basophil activation test: a case series.
Apostolou, E; Deckert, K; Puy, R; Sandrini, A; de Leon, M P; Douglass, J A; Rolland, J M; O'Hehir, R E
2006-03-01
The plasma expander Gelofusine (succinylated gelatin) is a recognised cause of peri-operative anaphylaxis. Current diagnosis of Gelofusine sensitivity is by skin testing, a procedure that itself carries a risk of allergic reaction. We evaluated the reliability of the in vitro basophil activation test as a diagnostic assay for Gelofusine sensitivity in subjects with a clinical history highly suggestive of Gelofusine allergy. Six patients with peri-operative anaphylaxis clinically attributed to Gelofusine were skin tested to confirm sensitivity. Control subjects included three healthy subjects and five subjects allergic to a neuromuscular blocking drug, all negative on Gelofusine skin testing. Whole blood basophil activation to Gelofusine was analysed by flow cytometry for CD63 surface expression. All of the Gelofusine sensitive patients and one of the control allergic subjects showed positive basophil activation to Gelofusine. In this series of subjects, the basophil activation test for Gelofusine allergy had a sensitivity of 100% and a specificity of 87.5%. Our findings suggest that basophil activation testing is a safe and reliable in vitro assay for prediction or confirmation of Gelofusine sensitivity in patients with high clinical suspicion of Gelofusine-induced anaphylaxis.
Hess, Lisa M; Rajan, Narayan; Winfree, Katherine; Davey, Peter; Ball, Mark; Knox, Hediyyih; Graham, Christopher
2015-12-01
Health technology assessment is not required for regulatory submission or approval in either the United States (US) or Japan. This study was designed as a cross-country evaluation of cost analyses conducted in the US and Japan based on the PRONOUNCE phase III lung cancer trial, which compared pemetrexed plus carboplatin followed by pemetrexed (PemC) versus paclitaxel plus carboplatin plus bevacizumab followed by bevacizumab (PCB). Two cost analyses were conducted in accordance with International Society for Pharmacoeconomics and Outcomes Research good research practice standards. Costs were obtained based on local pricing structures; outcomes were considered equivalent based on the PRONOUNCE trial results. Other inputs were taken from the trial data (e.g., toxicity rates) or from local practice sources (e.g., toxicity management). The models were compared across key input and transferability factors. Despite differences in local input data, both models pointed in the same direction, with the cost of PemC being consistently lower than the cost of PCB. The variation in individual input parameters did affect some specific categories, such as toxicity, and impacted the sensitivity analyses, with the cost differential between comparators being greater in Japan than in the US. When economic models are based on clinical trial data, many inputs and outcomes are held consistent. The alterable inputs were not in and of themselves large enough to significantly change the results between countries, which were directionally consistent, with greater variation seen in sensitivity analyses. The factors that vary across jurisdictions, even when minor, can have an impact on trial-based economic analyses. Funding: Eli Lilly and Company.
Papageorgiou, Spyridon N; Konstantinidis, Ioannis; Papadopoulou, Konstantina; Jäger, Andreas; Bourauel, Christoph
2014-06-01
Fixed-appliance treatment is a major part of orthodontic treatment, but clinical evidence remains scarce. The objective of this systematic review was to investigate how the therapeutic effects and side-effects of brackets used during fixed-appliance orthodontic treatment are affected by their characteristics. SEARCH METHODS AND SELECTION CRITERIA: We searched MEDLINE and 18 other databases through April 2012 without restrictions for randomized controlled trials and quasi-randomized controlled trials investigating any bracket characteristic. After duplicate selection and extraction procedures, risk of bias was assessed, also in duplicate, according to Cochrane guidelines, and quality of evidence according to the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach. Random-effects meta-analyses, subgroup analyses, and sensitivity analyses were performed with the corresponding 95 per cent confidence intervals (CI) and 95 per cent prediction intervals (PI). We included 25 trials on 1321 patients, most comparing self-ligated (SL) and conventional brackets. Based on the meta-analyses, the duration of orthodontic treatment was on average 2.01 months longer among patients with SL brackets (95 per cent CI: 0.45 to 3.57). The 95 per cent PIs for a future trial indicated that the difference could be considerable (-1.46 to 5.47 months). Treatment characteristics, outcomes, and side-effects were clinically similar between SL and conventional brackets. For most bracket characteristics, evidence is insufficient. Some meta-analyses included trials with high risk of bias, but sensitivity analyses indicated robustness. Based on existing evidence, no clinical recommendation can be made regarding the bracket material or different ligation modules. For SL brackets, no conclusive benefits could be proven, while their use was associated with longer treatment durations.
Carr, Robert S.; Nipper, Marion; Field, Michael; Biedenbach, James M.
2006-01-01
Toxicity tests are commonly conducted as a measure of the bioavailability of toxic chemicals to biota in an environment. Chemical analyses alone are insufficient to determine whether contaminants pose a threat to biota. Porewater toxicity tests are extremely sensitive to a broad range of contaminants in marine environments and provide ecologically relevant data on sensitive life stages. The inclusion of porewater toxicity testing as an additional indicator of sediment quality provides a more comprehensive picture of contaminant effects in these sensitive habitats. In this study purple-spined sea urchin (Arbacia punctulata) fertilization and embryological development porewater toxicity tests were used to evaluate the sediments collected from the coastal environment around Hanalei Bay, Kaua’i, Hawaii. These tests have been used previously to assess the bioavailability of contaminants associated with sediments in the vicinity of coral reefs.
Kondrashina, Alina V; Papkovsky, Dmitri B; Dmitriev, Ruslan I
2013-09-07
Measurement of cell oxygenation and oxygen consumption is useful for studies of cell bioenergetics, metabolism, mitochondrial function, drug toxicity and common pathophysiological conditions. Here we present a new platform for such applications which uses commercial multichannel biochips (μ-slides, Ibidi) and phosphorescent O2 sensitive probes. This platform was evaluated with both extracellular and intracellular O2 probes, several different cell types and treatments including mitochondrial uncoupling and inhibition, depletion of extracellular Ca(2+) and inhibition of V-ATPase and histone deacetylases. The results show that compared to the standard microwell plates currently used, the μ-slide platform provides facile O2 measurements with both suspension and adherent cells, higher sensitivity and reproducibility, and faster measurement time. It also allows re-perfusion and multiple treatments of cells and multi-parametric analyses in conjunction with other probes. Optical measurements are conducted on standard fluorescence readers and microscopes.
Revenue Potential for Inpatient IR Consultation Services: A Financial Model.
Misono, Alexander S; Mueller, Peter R; Hirsch, Joshua A; Sheridan, Robert M; Siddiqi, Assad U; Liu, Raymond W
2016-05-01
Interventional radiology (IR) has historically failed to fully capture the value of evaluation and management services in the inpatient setting. Understanding the financial benefits of a formally incorporated billing discipline may yield meaningful insights for interventional practices. A revenue modeling tool was created deploying standard financial modeling techniques, including sensitivity and scenario analyses. Sensitivity analysis calculates revenue fluctuation related to dynamic adjustment of discrete variables. In scenario analysis, possible future scenarios as well as the revenue potential of different-size clinical practices are modeled. Assuming a hypothetical inpatient IR consultation service with a daily patient census of 35 patients and two new consults per day, the model estimates annual charges of $2.3 million and collected revenue of $390,000. Revenues are most sensitive to provider billing documentation rates and patient volume. A range of realistic scenarios, from cautious to optimistic, results in a range of annual charges of $1.8 million to $2.7 million and a collected revenue range of $241,000 to $601,000. Even a small practice with a daily patient census of 5 and 0.20 new consults per day may expect annual charges of $320,000 and collected revenue of $55,000. A financial revenue modeling tool is a powerful adjunct in understanding the economics of an inpatient IR consultation service. Sensitivity and scenario analyses demonstrate a wide range of revenue potential and uncover levers for financial optimization. Copyright © 2016 SIR. Published by Elsevier Inc. All rights reserved.
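The revenue model described above can be sketched in a few lines. The charge amounts, documentation rate, and collection rate below are illustrative assumptions, not the authors' actual inputs; they are chosen only so the base case lands near the reported ~$2.3 million in charges and ~$390,000 collected.

```python
# Illustrative sketch of an inpatient IR consultation revenue model with a
# one-way sensitivity sweep. All dollar amounts and rates are assumptions.

def annual_revenue(census, new_consults_per_day, charge_followup=180.0,
                   charge_consult=350.0, documentation_rate=0.85,
                   collection_rate=0.17, days=365):
    """Return (annual charges, collected revenue) for the service."""
    charges = days * documentation_rate * (
        census * charge_followup + new_consults_per_day * charge_consult)
    return charges, charges * collection_rate

charges, collected = annual_revenue(census=35, new_consults_per_day=2)
print(f"base case: charges ${charges:,.0f}, collected ${collected:,.0f}")

# One-way sensitivity analysis: vary the documentation rate, all else fixed
for doc_rate in (0.60, 0.85, 1.00):
    c, r = annual_revenue(35, 2, documentation_rate=doc_rate)
    print(f"doc rate {doc_rate:.0%}: charges ${c:,.0f}, collected ${r:,.0f}")
```

Scenario analysis amounts to re-running `annual_revenue` with a different parameter set, e.g. `census=5, new_consults_per_day=0.2` for the small-practice case.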
Lederer, David J; Bradford, Williamson Z; Fagan, Elizabeth A; Glaspole, Ian; Glassberg, Marilyn K; Glasscock, Kenneth F; Kardatzke, David; King, Talmadge E; Lancaster, Lisa H; Nathan, Steven D; Pereira, Carlos A; Sahn, Steven A; Swigris, Jeffrey J; Noble, Paul W
2015-07-01
FVC outcomes in clinical trials on idiopathic pulmonary fibrosis (IPF) can be substantially influenced by the analytic methodology and the handling of missing data. We conducted a series of sensitivity analyses to assess the robustness of the statistical finding and the stability of the estimate of the magnitude of treatment effect on the primary end point of FVC change in a phase 3 trial evaluating pirfenidone in adults with IPF. Source data included all 555 study participants randomized to treatment with pirfenidone or placebo in the Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis (ASCEND) study. Sensitivity analyses were conducted to assess whether alternative statistical tests and methods for handling missing data influenced the observed magnitude of treatment effect on the primary end point of change from baseline to week 52 in FVC. The distribution of FVC change at week 52 was systematically different between the two treatment groups and favored pirfenidone in each analysis. The method used to impute missing data due to death had a marked effect on the magnitude of change in FVC in both treatment groups; however, the magnitude of treatment benefit was generally consistent on a relative basis, with an approximate 50% reduction in FVC decline observed in the pirfenidone group in each analysis. Our results confirm the robustness of the statistical finding on the primary end point of change in FVC in the ASCEND trial and corroborate the estimated magnitude of the pirfenidone treatment effect in patients with IPF. ClinicalTrials.gov; No.: NCT01366209; URL: www.clinicaltrials.gov.
Population and High-Risk Group Screening for Glaucoma: The Los Angeles Latino Eye Study
Francis, Brian A.; Vigen, Cheryl; Lai, Mei-Ying; Winarko, Jonathan; Nguyen, Betsy; Azen, Stanley
2011-01-01
Purpose. To evaluate the ability of various screening tests, both individually and in combination, to detect glaucoma in the general Latino population and high-risk subgroups. Methods. The Los Angeles Latino Eye Study is a population-based study of eye disease in Latinos 40 years of age and older. Participants (n = 6082) underwent Humphrey visual field testing (HVF), frequency doubling technology (FDT) perimetry, measurement of intraocular pressure (IOP) and central corneal thickness (CCT), and independent assessment of optic nerve vertical cup disc (C/D) ratio. Screening parameters were evaluated for three definitions of glaucoma based on optic disc, visual field, and a combination of both. Analyses were also conducted for high-risk subgroups (family history of glaucoma, diabetes mellitus, and age ≥65 years). Sensitivity, specificity, and receiver operating characteristic curves were calculated for those continuous parameters independently associated with glaucoma. Classification and regression tree (CART) analysis was used to develop a multivariate algorithm for glaucoma screening. Results. Preset cutoffs for screening parameters yielded a generally poor balance of sensitivity and specificity (sensitivity/specificity for IOP ≥21 mm Hg and C/D ≥0.8 was 0.24/0.97 and 0.60/0.98, respectively). Assessment of high-risk subgroups did not improve the sensitivity/specificity of individual screening parameters. A CART analysis using multiple screening parameters—C/D, HVF, and IOP—substantially improved the balance of sensitivity and specificity (sensitivity/specificity 0.92/0.92). Conclusions. No single screening parameter is useful for glaucoma screening. However, a combination of vertical C/D ratio, HVF, and IOP provides the best balance of sensitivity/specificity and is likely to provide the highest yield in glaucoma screening programs. PMID:21245400
Goeree, Ron; Chiva-Razavi, Sima; Gunda, Praveen; Graham, Christopher N; Miles, LaStella; Nikoglou, Efthalia; Jugl, Steffen M; Gladman, Dafna D
2018-02-01
The study evaluates the cost-effectiveness of secukinumab, a fully human monoclonal antibody that selectively neutralizes interleukin (IL)-17A, vs currently licensed biologic treatments in patients with active psoriatic arthritis (PsA) from a Canadian healthcare system perspective. A decision analytic semi-Markov model evaluated the cost-effectiveness of secukinumab 150 mg and 300 mg compared to subcutaneous biologics adalimumab, certolizumab pegol, etanercept, golimumab, and ustekinumab, and intravenous biologics infliximab and infliximab biosimilar in biologic-naive and biologic-experienced patients over a lifetime horizon. The response to treatments was evaluated after 12 weeks by PsA Response Criteria (PsARC) response rates. Non-responders or patients discontinuing the initial-line biologic treatment were allowed to switch to subsequent-line biologics. Model input parameters (Psoriasis Area Severity Index [PASI], Health Assessment Questionnaire [HAQ], withdrawal rates, costs, and resource use) were collected from clinical trials, published literature, and other Canadian sources. Benefits were expressed as quality-adjusted life years (QALYs). An annual discount rate of 5% was applied to costs and benefits. The robustness of the study findings was evaluated via sensitivity analyses. Biologic-naive patients treated with secukinumab achieved the highest number of QALYs (8.54) at the lowest cost (CAD 925,387) over a lifetime horizon vs all comparators. Secukinumab dominated all treatments, except for infliximab and its biosimilar, which achieved minimally more QALYs (8.58). However, infliximab and its biosimilar incurred higher costs than secukinumab (infliximab: CAD 1,015,437; infliximab biosimilar: CAD 941,004), resulting in higher cost-effectiveness estimates relative to secukinumab. In the biologic-experienced population, secukinumab dominated all treatments, as it generated more QALYs (8.89) at lower costs (CAD 954,692).
Deterministic sensitivity analyses indicated the results were most sensitive to variation in PsARC response rates, change in HAQ, and utility values in both populations. Secukinumab is either dominant or cost-effective vs all licensed biologics for the treatment of active PsA in biologic-naive and biologic-experienced populations in Canada.
Huang, Yuan-sheng; Yang, Zhi-rong; Zhan, Si-yan
2015-06-18
To investigate the use of simple pooling and the bivariate model in meta-analyses of diagnostic test accuracy (DTA) published in Chinese journals (January to November, 2014), compare the differences in results between these two models, and explore the impact of between-study variability of sensitivity and specificity on the differences. DTA meta-analyses were searched through the Chinese Biomedical Literature Database (January to November, 2014). Details of the models and data for the fourfold tables were extracted. Descriptive analysis was conducted to investigate the prevalence of the use of the simple pooling method and the bivariate model in the included literature. Data were re-analyzed with the two models respectively. Differences in the results were examined by the Wilcoxon signed rank test. How the differences in results were affected by the between-study variability of sensitivity and specificity, expressed by I2, was explored. A total of 55 systematic reviews, containing 58 DTA meta-analyses, were included, and 25 DTA meta-analyses were eligible for re-analysis. Simple pooling was used in 50 (90.9%) systematic reviews and the bivariate model in 1 (1.8%). The remaining 4 (7.3%) articles used other models for pooling sensitivity and specificity or pooled neither of them. Of the reviews simply pooling sensitivity and specificity, 41 (82.0%) were at risk of wrongly using the Meta-DiSc software. The differences in medians of sensitivity and specificity between the two models were both 0.011 (P<0.001 and P=0.031, respectively). Greater differences could be found as the I2 of sensitivity or specificity became larger, especially when I2>75%. Most DTA meta-analyses published in Chinese journals (January to November, 2014) combine sensitivity and specificity by simple pooling. Meta-DiSc software can pool sensitivity and specificity only through a fixed-effect model, but a high proportion of authors think it can implement a random-effects model. Simple pooling tends to underestimate the results compared with the bivariate model. The greater the between-study variance, the larger the deviation of simple pooling is likely to be. It is necessary to increase the level of knowledge of statistical methods and software for meta-analyses of DTA data.
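The "simple pooling" at issue can be illustrated by collapsing the 2x2 tables of several studies into a single table, which discards between-study variability entirely. The study counts below are hypothetical.

```python
# Sketch of simple pooling for diagnostic test accuracy: sum the 2x2 cells
# across studies, then compute one overall sensitivity and specificity.
# Study data are hypothetical, for illustration only.
studies = [  # (TP, FN, FP, TN) per study
    (45, 5, 10, 40),
    (30, 10, 8, 52),
    (60, 15, 20, 105),
]
tp = sum(s[0] for s in studies)
fn = sum(s[1] for s in studies)
fp = sum(s[2] for s in studies)
tn = sum(s[3] for s in studies)
pooled_sens = tp / (tp + fn)
pooled_spec = tn / (tn + fp)
print(f"pooled sensitivity {pooled_sens:.3f}, pooled specificity {pooled_spec:.3f}")
```

A bivariate model would instead fit a random-effects model jointly to each study's (logit sensitivity, logit specificity) pair, which is why the two approaches diverge as between-study heterogeneity (I2) grows.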
Giljaca, Vanja; Nadarevic, Tin; Poropat, Goran; Nadarevic, Vesna Stefanac; Stimac, Davor
2017-03-01
To determine the diagnostic accuracy of abdominal ultrasound (US) for the diagnosis of acute appendicitis (AA), in terms of sensitivity, specificity and post-test probabilities for positive and negative results. A systematic search of MEDLINE, Embase, The Cochrane Library and Science Citation Index Expanded from January 1994 to October 2014 was performed. Two authors independently evaluated studies for inclusion, extracted data and performed analyses. The reference standard for evaluation of the final diagnosis was the pathohistological report on tissue obtained at appendectomy. Summary sensitivity, specificity and post-test probabilities of AA after positive and negative results of US, with corresponding 95% confidence intervals (CI), were calculated. Out of 3306 references identified through electronic searches, 17 reports met the inclusion criteria, with 2841 included participants. The summary sensitivity and specificity of US for diagnosis of AA were 69% (95% CI 59-78%) and 81% (95% CI 73-88%), respectively. At the median pretest probability of AA of 76.4%, the post-test probability for a positive and negative result of US was 92% (95% CI 88-95%) and 55% (95% CI 46-63%), respectively. Abdominal ultrasound does not seem to have a role in the diagnostic pathway for diagnosis of AA in suspected patients. The summary sensitivity and specificity of US do not exceed those of physical examination. Patients who require additional diagnostic workup should be referred to more sensitive and specific diagnostic procedures, such as computed tomography.
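The reported post-test probabilities follow directly from Bayes' theorem applied to the summary estimates in the abstract (sensitivity 0.69, specificity 0.81, median pretest probability 0.764); a minimal sketch reproduces the 92% and 55% figures.

```python
# Post-test probabilities from summary sensitivity/specificity via Bayes'
# theorem, using the values reported in the abstract.

def post_test_probabilities(pretest, sensitivity, specificity):
    """Return (P(disease | positive test), P(disease | negative test))."""
    p_pos = pretest * sensitivity + (1 - pretest) * (1 - specificity)
    after_positive = pretest * sensitivity / p_pos
    p_neg = pretest * (1 - sensitivity) + (1 - pretest) * specificity
    after_negative = pretest * (1 - sensitivity) / p_neg
    return after_positive, after_negative

pos, neg = post_test_probabilities(0.764, 0.69, 0.81)
print(f"{pos:.0%}, {neg:.0%}")  # → 92%, 55%, matching the abstract
```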
Holcombe, Andrea; Ammann, Eric; Espeland, Mark A; Kelley, Brendan J; Manson, JoAnn E; Wallace, Robert; Robinson, Jennifer
2017-10-01
To investigate the relationship between aspirin and subclinical cerebrovascular health, we evaluated the effect of chronic aspirin use on white matter lesion (WML) volume among women. Chronic aspirin use was assessed in 1365 women who participated in the Women's Health Initiative Memory Study of Magnetic Resonance Imaging. Differences in WML volumes between aspirin users and nonusers were assessed with linear mixed models. A number of secondary analyses were performed, including lobe-specific analyses, subgroup analyses based on participants' overall risk of cerebrovascular disease, and a dose-response relationship analysis. The mean age of the women at magnetic resonance imaging examination was 77.6 years. Sixty-one percent of participants were chronic aspirin users. After adjusting for demographic variables and comorbidities, chronic aspirin use was nonsignificantly associated with 4.8% (95% CI: -6.8%, 17.9%) larger WML volumes. These null findings were confirmed in secondary and sensitivity analyses, including an active comparator evaluation in which aspirin users were compared to users of nonaspirin nonsteroidal anti-inflammatory drugs or acetaminophen. There was a nonsignificant difference in WML volumes between aspirin users and nonusers. Further, our results suggest that chronic aspirin use may not have a clinically significant effect on WML volumes in women. Published by Elsevier Inc.
Zibaei, Mohammad; Sadjjadi, Seyed Mahmoud; Sarkari, Bahador; Uga, Shoji
2016-05-01
Toxocariasis is the clinical term applied to infection in the human host with Toxocara species larvae. Serological tests are important tools for the diagnosis of toxocariasis. The aim of this study was to evaluate the excretory-secretory (ES) antigens of T. cati larvae using enzyme-linked immunosorbent assay (ELISA) and also Western blotting for the serodiagnosis of human toxocariasis. The ES antigens were prepared from T. cati third-stage larvae. Serum samples were obtained from 33 confirmed cases of toxocariasis, 35 patients infected with other parasitic diseases, and 30 healthy individuals, and were tested with ELISA and immunoblotting. The ELISA showed appropriate performance in terms of specificity (96.7%) and sensitivity (97.0%). Electrophoretic analysis of T. cati ES antigens revealed a range of 20- to 150-kDa fractions. The highest sensitivity was achieved with the 42- and 50-kDa fractions. The ELISA analyses using T. cati ES antigens demonstrated good sensitivity and specificity compared to T. canis ES antigens for the diagnosis of human toxocariasis. Accordingly, application of Western blotting, based on the 42- and 50-kDa fractions of ES antigens, can be recommended for the accurate diagnosis of toxocariasis. © 2015 Wiley Periodicals, Inc.
Beyer, Sebastian E; Hunink, Myriam G; Schöberl, Florian; von Baumgarten, Louisa; Petersen, Steffen E; Dichgans, Martin; Janssen, Hendrik; Ertl-Wagner, Birgit; Reiser, Maximilian F; Sommer, Wieland H
2015-07-01
This study evaluated the cost-effectiveness of different noninvasive imaging strategies in patients with possible basilar artery occlusion. A Markov decision analytic model was used to evaluate long-term outcomes resulting from strategies using computed tomographic angiography (CTA), magnetic resonance imaging, nonenhanced CT, or duplex ultrasound with intravenous (IV) thrombolysis being administered after positive findings. The analysis was performed from the societal perspective based on US recommendations. Input parameters were derived from the literature. Costs were obtained from United States costing sources and published literature. Outcomes were lifetime costs, quality-adjusted life-years (QALYs), incremental cost-effectiveness ratios, and net monetary benefits, with a willingness-to-pay threshold of $80,000 per QALY. The strategy with the highest net monetary benefit was considered the most cost-effective. Extensive deterministic and probabilistic sensitivity analyses were performed to explore the effect of varying parameter values. In the reference case analysis, CTA dominated all other imaging strategies. CTA yielded 0.02 QALYs more than magnetic resonance imaging and 0.04 QALYs more than duplex ultrasound followed by CTA. At a willingness-to-pay threshold of $80,000 per QALY, CTA yielded the highest net monetary benefits. The probability that CTA is cost-effective was 96% at a willingness-to-pay threshold of $80,000/QALY. Sensitivity analyses showed that duplex ultrasound was cost-effective only for a prior probability of ≤0.02 and that these results were only minimally influenced by duplex ultrasound sensitivity and specificity. Nonenhanced CT and magnetic resonance imaging never became the most cost-effective strategy. Our results suggest that CTA in patients with possible basilar artery occlusion is cost-effective. © 2015 The Authors.
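Ranking strategies by net monetary benefit, as in the study above, reduces to computing NMB = WTP x QALYs - cost for each strategy at the $80,000/QALY threshold. The lifetime cost and QALY figures below are illustrative placeholders, not the model's outputs; only the ordering logic is the point.

```python
# Sketch of strategy ranking by net monetary benefit (NMB) at a fixed
# willingness-to-pay threshold. Costs and QALYs below are illustrative.
WTP = 80_000
strategies = {            # strategy: (lifetime cost in USD, QALYs)
    "CTA": (41_000, 9.60),
    "MRI": (42_500, 9.58),
    "DUS then CTA": (40_500, 9.56),
    "Nonenhanced CT": (43_000, 9.50),
}
nmb = {name: WTP * qalys - cost for name, (cost, qalys) in strategies.items()}
best = max(nmb, key=nmb.get)
print(best)  # → CTA (with these illustrative inputs)
```

With these inputs a small QALY advantage outweighs a moderate cost difference at this threshold, which is the mechanism by which CTA can dominate despite not being the cheapest test.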
Pang, Y-K; Ip, M; You, J H S
2017-01-01
Early initiation of antifungal treatment for invasive candidiasis is associated with a change in mortality. Beta-D-glucan (BDG) is a fungal cell wall component and a serum diagnostic biomarker of fungal infection. Clinical findings suggested an association between reduced invasive candidiasis incidence in intensive care units (ICUs) and BDG-guided preemptive antifungal therapy. We evaluated the potential cost-effectiveness of active BDG surveillance with preemptive antifungal therapy in patients admitted to adult ICUs from the perspective of Hong Kong healthcare providers. A Markov model was designed to simulate the outcomes of active BDG surveillance with preemptive therapy (surveillance group) and no surveillance (standard care group). Candidiasis-associated outcome measures included mortality rate, quality-adjusted life year (QALY) loss, and direct medical cost. Model inputs were derived from the literature. Sensitivity analyses were conducted to evaluate the robustness of model results. In the base-case analysis, the surveillance group was more costly (1387 USD versus 664 USD) (1 USD = 7.8 HKD), with a lower candidiasis-associated mortality rate (0.653 versus 1.426 per 100 ICU admissions) and QALY loss (0.116 versus 0.254) than the standard care group. The incremental cost per QALY saved by the surveillance group was 5239 USD/QALY. One-way sensitivity analyses found base-case results to be robust to variations of all model inputs. In probabilistic sensitivity analysis, the surveillance group was cost-effective in 50% and 100% of 10,000 Monte Carlo simulations at willingness-to-pay (WTP) thresholds of 7200 USD/QALY and ≥27,800 USD/QALY, respectively. Active BDG surveillance with preemptive therapy appears to be highly cost-effective to reduce the candidiasis-associated mortality rate and save QALYs in the ICU setting.
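The reported 5239 USD/QALY is simply the incremental cost divided by the QALYs saved (the reduction in QALY loss), using the base-case figures in the abstract:

```python
# Incremental cost-effectiveness ratio from the abstract's base-case numbers.

def icer(cost_new, cost_std, qaly_loss_new, qaly_loss_std):
    """Incremental cost-effectiveness ratio: extra cost per QALY saved."""
    return (cost_new - cost_std) / (qaly_loss_std - qaly_loss_new)

value = icer(1387, 664, 0.116, 0.254)
print(round(value))  # → 5239, matching the reported USD/QALY
```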
Cost-effectiveness analysis of neurocognitive-sparing treatments for brain metastases.
Savitz, Samuel T; Chen, Ronald C; Sher, David J
2015-12-01
Decisions regarding how to treat patients who have 1 to 3 brain metastases require important tradeoffs between controlling recurrences, side effects, and costs. In this analysis, the authors compared novel treatments versus usual care to determine the incremental cost-effectiveness ratio from a payer's (Medicare) perspective. Cost-effectiveness was evaluated using a microsimulation of a Markov model for 60 one-month cycles. The model used 4 simulated cohorts of patients aged 65 years with 1 to 3 brain metastases. The 4 cohorts had a median survival of 3, 6, 12, and 24 months to test the sensitivity of the model to different prognoses. The treatment alternatives evaluated included stereotactic radiosurgery (SRS) with 3 variants of salvage after recurrence (whole-brain radiotherapy [WBRT], hippocampal avoidance WBRT [HA-WBRT], SRS plus WBRT, and SRS plus HA-WBRT). The findings were tested for robustness using probabilistic and deterministic sensitivity analyses. Traditional radiation therapies remained cost-effective for patients in the 3-month and 6-month cohorts. In the cohorts with longer median survival, HA-WBRT and SRS plus HA-WBRT became cost-effective relative to traditional treatments. When the treatments that involved HA-WBRT were excluded, either SRS alone or SRS plus WBRT was cost-effective relative to WBRT alone. The deterministic and probabilistic sensitivity analyses confirmed the robustness of these results. HA-WBRT and SRS plus HA-WBRT were cost-effective for 2 of the 4 cohorts, demonstrating the value of controlling late brain toxicity with this novel therapy. Cost-effectiveness depended on patient life expectancy. SRS was cost-effective in the cohorts with short prognoses (3 and 6 months), whereas HA-WBRT and SRS plus HA-WBRT were cost-effective in the cohorts with longer prognoses (12 and 24 months). © 2015 American Cancer Society.
NASA Technical Reports Server (NTRS)
Bochem, J. H.; Mossman, D. C.; Lanier, P. D.
1977-01-01
The feasibility of incorporating optimal concepts into a practical system was determined. Various earlier theoretical analyses were confirmed, and insight was gained into the sensitivity of fuel conservation strategies to nonlinear and second order aerodynamic and engine characteristics. In addition to the investigation of optimal trajectories the study ascertained combined fuel savings by utilizing various procedure-oriented improvements such as delayed flap/decelerating approaches and great circle navigation.
Cost-effectiveness of bedaquiline in MDR and XDR tuberculosis in Italy
Codecasa, Luigi R.; Toumi, Mondher; D’Ausilio, Anna; Aiello, Andrea; Damele, Francesco; Termini, Roberta; Uglietti, Alessia; Hettle, Robert; Graziano, Giorgio; De Lorenzo, Saverio
2017-01-01
Objective: To evaluate the cost-effectiveness of bedaquiline plus background drug regimens (BR) for multidrug-resistant tuberculosis (MDR-TB) and extensively drug-resistant tuberculosis (XDR-TB) in Italy. Methods: A Markov model was adapted to the Italian setting to estimate the incremental cost-effectiveness ratio (ICER) of bedaquiline plus BR (BBR) versus BR in the treatment of MDR-TB and XDR-TB over 10 years, from both the National Health Service (NHS) and societal perspectives. Cost-effectiveness was evaluated in terms of life-years gained (LYG). Clinical data were sourced from trials; resource consumption for the compared treatments was modelled according to advice from a panel of expert clinicians. NHS tariffs for inpatient and outpatient resource consumption were retrieved from published Italian sources. Drug costs were provided by reference centres for disease treatment in Italy. A 3% annual discount was applied to both costs and effectiveness. Deterministic and probabilistic sensitivity analyses were conducted. Results: Over 10 years, BBR vs. BR alone is cost-effective, with ICERs of €16,639/LYG and €4081/LYG for the NHS and society, respectively. The sensitivity analyses confirmed the robustness of the results from both considered perspectives. Conclusion: In Italy, BBR vs. BR alone has proven to be cost-effective in the treatment of MDR-TB and XDR-TB under a range of scenarios. PMID:28265350
Sosic, Z; Gieler, U; Stangier, U
2008-06-01
To evaluate the German version of the Social Phobia Inventory (SPIN) as a screening device and to report corresponding cut-off scores for different populations. In Study 1, 2043 subjects from a representative sample completed the SPIN. Cut-off values were established on the basis of means and standard deviations. In Study 2, different aspects of validity were examined in a clinical sample comprising 164 subjects, including social phobic individuals, individuals with other anxiety disorders and depression, and non-clinical control subjects. Internal consistency was evaluated. Convergent and divergent validity were explored using several established measures. Finally, the sensitivity and specificity of the German SPIN with regard to social anxiety classification were investigated by means of receiver operating characteristics (ROC) analyses. In Study 1, mean scores and standard deviations were used to determine cut-off scores for the German SPIN. In Study 2, excellent internal consistency and good convergent and divergent validity were obtained. ROC analyses revealed that the German SPIN performed well in discriminating between social phobic individuals on the one hand and psychiatric and non-psychiatric controls on the other. A cut-off score of 25 represented the best balance between sensitivity and specificity. Comparable to the original version, the German SPIN demonstrates solid psychometric properties and shows promise as an economic, reliable, and valid screening device.
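Cut-off selection of the kind described can be sketched by scanning candidate cut-offs and keeping the one that maximizes Youden's J (sensitivity + specificity - 1). The SPIN-style scores below are hypothetical, so the resulting cut-off does not match the paper's value of 25; only the procedure is illustrated.

```python
# Sketch of ROC-style cut-off selection via Youden's J on hypothetical scores.
cases = [28, 35, 41, 22, 30, 27, 45, 33]   # hypothetical scores, social phobia
controls = [10, 14, 8, 21, 12, 26, 9, 15]  # hypothetical scores, controls

def sens_spec(cutoff):
    """Sensitivity and specificity when scores >= cutoff are called positive."""
    sens = sum(s >= cutoff for s in cases) / len(cases)
    spec = sum(s < cutoff for s in controls) / len(controls)
    return sens, spec

# Youden's J = sensitivity + specificity - 1; keep the maximizing cut-off
best = max(range(5, 50), key=lambda c: sum(sens_spec(c)) - 1)
sens, spec = sens_spec(best)
print(f"cutoff {best}: sensitivity {sens:.2f}, specificity {spec:.2f}")
```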
Studies on Early Allergic Sensitization in the Lithuanian Birth Cohort
Dubakiene, Ruta; Rudzeviciene, Odilija; Butiene, Indre; Sezaite, Indre; Petronyte, Malvina; Vaicekauskaite, Dalia; Zvirbliene, Aurelija
2012-01-01
Cohort studies are of great importance in defining the mechanism responsible for the development of allergy-associated diseases, such as atopic dermatitis, allergic asthma, and allergic rhinoconjunctivitis. Although these disorders share genetic and environmental risk factors, it is still under debate whether they are linked or develop sequentially along an atopic pathway. The current study was aimed to determine the pattern of allergy sensitization in the Lithuanian birth cohort “Alergemol” (n = 1558) established as a part of the multicenter European birth cohort “EuroPrevall”. Early sensitization to food allergens in the “Alergemol” birth cohort was analysed. The analysis revealed 1.3% and 2.8% of symptomatic-sensitized subjects at 6 and 12 months of age, respectively. The sensitization pattern in response to different allergens in the group of infants with food allergy symptoms was studied using allergological methods in vivo and in vitro. The impact of maternal and environmental risk factors on the early development of food allergy in at 6 and 12 months of age was evaluated. Our data showed that maternal diet, diseases, the use of antibiotics, and tobacco smoke during pregnancy had no significant impact on the early sensitization to food allergens. However, infants of atopic mothers were significantly more often sensitized to egg as compared to the infants of nonatopic mothers. PMID:22606067
Jia, Yongliang; Leung, Siu-Wai
2017-09-01
More than 230 randomized controlled trials (RCTs) of danshen dripping pill (DSP) and isosorbide dinitrate (ISDN) in treating angina pectoris have been published since the first Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-compliant comprehensive meta-analysis appeared in 2010. Other meta-analyses had flaws in study selection, statistical meta-analysis, and evidence assessment. This study completed the meta-analysis with an extensive assessment of the evidence. RCTs published from 1994 to 2016 on DSP and ISDN in treating angina pectoris for at least 4 weeks were included. The risk of bias (RoB) of included RCTs was assessed with the Cochrane tool for assessing RoB. Meta-analyses based on a random-effects model were performed on two outcome measures: symptomatic (SYM) and electrocardiographic (ECG) improvements. Subgroup analysis, sensitivity analysis, meta-regression, and publication bias analysis were also conducted. The strength of evidence was evaluated with the Grades of Recommendation, Assessment, Development, and Evaluation (GRADE) method. Among the 109 included RCTs with 11,973 participants, 49 RCTs and 5042 participants were new (published after 2010). The RoB of included RCTs was high in randomization and blinding. Overall effect sizes in odds ratios for DSP over ISDN were 2.94 (95% confidence interval [CI]: 2.53-3.41) on SYM (n = 108) and 2.37 (95% CI: 2.08-2.69) on ECG (n = 81), with significant heterogeneities (I² = 41%, p < 0.0001 on SYM and I² = 44%, p < 0.0001 on ECG). Subgroup, sensitivity, and meta-regression analyses showed consistent results without publication bias. However, the strength of evidence was low in GRADE. The efficacy of DSP remained better than that of ISDN in treating angina pectoris, but confidence in this finding is reduced by the high RoB and heterogeneities.
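The random-effects pooling of odds ratios used above can be sketched with the DerSimonian-Laird estimator; the 2x2 tables below are invented for illustration and are not trial data from the review.

```python
import math

# DerSimonian-Laird random-effects pooling of odds ratios, with I²
# as the share of variability attributable to between-study heterogeneity.
def pooled_or(tables):
    """tables: list of (a, b, c, d) = (treatment events, treatment
    non-events, control events, control non-events).
    Returns (pooled odds ratio, I² in percent)."""
    y = [math.log((a * d) / (b * c)) for a, b, c, d in tables]   # log ORs
    v = [1/a + 1/b + 1/c + 1/d for a, b, c, d in tables]         # variances
    w = [1 / vi for vi in v]                                     # FE weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    df = len(tables) - 1
    scale = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / scale)                            # between-study var
    w_re = [1 / (vi + tau2) for vi in v]                         # RE weights
    y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.exp(y_re), i2

tables = [(30, 20, 18, 32), (45, 55, 30, 70), (25, 25, 15, 35)]
print(pooled_or(tables))
```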
Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing
2016-01-01
Increasing evidence indicates that current dynamic global vegetation models (DGVMs) suffer from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important step towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with an LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108
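The classification step of a trained GMM — assigning a sample to the component with the highest posterior probability — can be sketched as follows. The component weights, means, and (diagonal) variances here are invented values in a two-trait space, not parameters fitted in the study.

```python
import math

# Classify a feature vector under a Gaussian mixture with diagonal
# covariances by picking the component with the largest posterior.
def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x, components):
    """components: list of (weight, means, variances).
    Returns the index of the most probable component for x."""
    posts = []
    for w, means, variances in components:
        lik = w
        for xi, m, v in zip(x, means, variances):
            lik *= gaussian_pdf(xi, m, v)   # independence across features
        posts.append(lik)
    total = sum(posts)
    posts = [p / total for p in posts]      # normalize to posteriors
    return max(range(len(posts)), key=posts.__getitem__)

# Two hypothetical vegetation classes in (LMA, LAI) trait space.
components = [
    (0.5, (60.0, 1.5), (100.0, 0.25)),    # e.g. a grassland-like class
    (0.5, (140.0, 4.5), (400.0, 1.00)),   # e.g. a forest-like class
]
print(classify((150.0, 4.0), components))
```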
Optimizing Chronic Disease Management Mega-Analysis
PATH-THETA Collaboration
2013-01-01
Background As Ontario’s population ages, chronic diseases are becoming increasingly common. There is growing interest in services and care models designed to optimize the management of chronic disease. Objective To evaluate the cost-effectiveness and expected budget impact of interventions in chronic disease cohorts evaluated as part of the Optimizing Chronic Disease Management mega-analysis. Data Sources Sector-specific costs, disease incidence, and mortality were calculated for each condition using administrative databases from the Institute for Clinical Evaluative Sciences. Intervention outcomes were based on literature identified in the evidence-based analyses. Quality-of-life and disease prevalence data were obtained from the literature. Methods Analyses were restricted to interventions that showed significant benefit for resource use or mortality from the evidence-based analyses. An Ontario cohort of patients with each chronic disease was constructed and followed over 5 years (2006–2011). A phase-based approach was used to estimate costs across all sectors of the health care system. Utility values identified in the literature and effect estimates for resource use and mortality obtained from the evidence-based analyses were applied to calculate incremental costs and quality-adjusted life-years (QALYs). Given uncertainty about how many patients would benefit from each intervention, a system-wide budget impact was not determined. Instead, the difference in lifetime cost between an individual-administered intervention and no intervention was presented. Results Of 70 potential cost-effectiveness analyses, 8 met our inclusion criteria. All were found to result in QALY gains and cost savings compared with usual care. The models were robust to the majority of sensitivity analyses undertaken, but due to structural limitations and time constraints, few sensitivity analyses were conducted. 
Incremental cost savings per patient who received an intervention ranged from $15 per diabetic patient with specialized nursing to $10,665 per patient with congestive heart failure receiving in-home care. Limitations Evidence used to inform estimates of effect was often limited to a single trial with limited generalizability across populations, interventions, and health care systems. Because of the low clinical fidelity of health administrative data sets, intermediate clinical outcomes could not be included. Cohort costs included an average of all health care costs and were not restricted to costs associated with the disease. Intervention costs were based on resource use specified in clinical trials. Conclusions Applying estimates of effect from the evidence-based analyses to real-world resource use resulted in cost savings for all interventions. On the basis of quality-of-life data identified in the literature, all interventions were found to result in a greater QALY gain than usual care would. Implementation of all interventions could offer significant cost reductions. However, this analysis was subject to important limitations. Plain Language Summary Chronic diseases are the leading cause of death and disability in Ontario. They account for a third of direct health care costs across the province. This study aimed to evaluate the cost-effectiveness of health care interventions that might improve the management of chronic diseases. The evaluated interventions led to lower costs and better quality of life than usual care. Offering these options could reduce costs per patient. However, the studies used in this analysis were of medium to very low quality, and the methods had many limitations. PMID:24228076
Can economic evaluation in telemedicine be trusted? A systematic review of the literature
Bergmo, Trine S
2009-01-01
Background Telemedicine has been advocated as an effective means to provide health care services over a distance. Systematic information on costs and consequences has been called for to support decision-making in this field. This paper provides a review of the quality, validity and generalisability of economic evaluations in telemedicine. Methods A systematic literature search in all relevant databases was conducted and forms the basis for addressing these issues. Only articles published in peer-reviewed journals and written in English in the period from 1990 to 2007 were analysed. The literature search identified 33 economic evaluations where both costs (resource use) and outcomes (non-resource consequences) were measured. Results This review shows that economic evaluations in telemedicine are highly diverse in terms of both the study context and the methods applied. The articles covered several medical specialities ranging from cardiology and dermatology to psychiatry. The studies analysed telemedicine in home care, and in primary and secondary care settings using a variety of different technologies including videoconferencing, still-images and monitoring (store-and-forward telemedicine). Most studies used multiple outcome measures and analysed the effects using disaggregated cost-consequence frameworks. Objectives, study design, and choice of comparators were mostly well reported. The majority of the studies lacked information on perspective and costing method, few used general statistics and sensitivity analysis to assess validity, and even fewer used marginal analysis. Conclusion As this paper demonstrates, the majority of the economic evaluations reviewed were not in accordance with standard evaluation techniques. Further research is needed to explore the reasons for this and to address how economic evaluation in telemedicine can best take account of local constraints while still producing valid and generalisable results. PMID:19852828
Study of quiet turbofan STOL aircraft for short-haul transportation. Volume 6: Systems analysis
NASA Technical Reports Server (NTRS)
1973-01-01
A systems analysis of the quiet turbofan aircraft for short-haul transportation was conducted. The purpose of the study was to integrate the representative data generated by the aircraft, market, and economic analyses. The study developed the approach and refined the methodologies for analytic tradeoff and sensitivity studies of propulsive-lift conceptual aircraft and their performance in simulated regional airlines. The operations of appropriate airlines in each of six geographic regions of the United States were simulated. The offshore domestic regions were also evaluated to provide a complete domestic evaluation of the applicability of the STOL concept.
Fluorescence quencher improves SCANSYSTEM for rapid bacterial detection.
Schmidt, M; Hourfar, M K; Wahl, A; Nicol, S-B; Montag, T; Roth, W K; Seifried, E
2006-05-01
The optimized SCANSYSTEM could detect contaminated platelet products within 24 h. However, the system's sensitivity was reduced by high background fluorescence even in sterile samples, so well-trained staff were needed to confirm results by microscopy. A new protocol for the optimized SCANSYSTEM with the addition of a fluorescence quencher was evaluated. Pooled platelet concentrates contaminated with five transfusion-relevant bacterial strains were tested in a blind study. In conjunction with new analysis software, the new quenching dye significantly reduced unspecific background fluorescence. Sensitivity was best for Bacillus cereus and Escherichia coli (3 CFU/ml). The application of a fluorescence quencher enables automated discrimination of positive and negative test results in 60% of all analysed samples.
Flat tensile specimen design for advanced composites
NASA Technical Reports Server (NTRS)
Worthem, Dennis W.
1990-01-01
Finite element analyses of flat, reduced gage section tensile specimens with various transition region contours were performed. Within dimensional constraints, such as maximum length, tab region width, gage width, gage length, and minimum tab length, a transition contour radius of 41.9 cm produced the lowest stress values in the specimen transition region. The stresses in the transition region were not sensitive to specimen material properties. The stresses in the tab region were sensitive to specimen composite and/or tab material properties. An evaluation of stresses with different specimen composite and tab material combinations must account for material nonlinearity of both the tab and the specimen composite. Material nonlinearity can either relieve stresses in the composite under the tab or elevate them to cause failure under the tab.
Correlation coefficients of three self-perceived orthodontic treatment need indices.
Eslamipour, Faezeh; Riahi, Farnaz Tajmir; Etemadi, Milad; Riahi, Alireza
2017-01-01
To determine patient orthodontic treatment need, appropriate self-perceived indices are required. The aim of this study was to assess the sensitivity and specificity of the esthetic component (AC) of the index of orthodontic treatment need (IOTN), the oral esthetic subjective index scale (OASIS), and a visual analog scale (VAS) against the dental health component (DHC) of the IOTN as a normative index, to determine the most appropriate self-perceived index among young adults. In this cross-sectional study, a sample of 993 was randomly selected from freshman students of Isfahan University. Those with a history of orthodontic treatment or current treatment were excluded. DHC was evaluated by two inter- and intra-calibrated examiners. Data for AC, OASIS, and VAS were collected through a questionnaire completed by the students. Descriptive statistics, the Mann-Whitney U-test, and the Spearman correlation test were used for data analyses. Sensitivity, specificity, and positive and negative predictive values of the self-perceived indices were calculated against DHC. The sensitivity of AC, OASIS, and VAS for evaluating definite orthodontic treatment need was 15.4%, 22.3%, and 44.6%, respectively. The specificity of these indices for evaluating definite orthodontic treatment need was 92.7%, 90.5%, and 76.2%, respectively. All self-perceived indices had significant correlations with each other and with DHC (P < 0.01). Among demographic factors, there was a weak but significant correlation only between mother's educational level and VAS (P < 0.01). Given the sensitivity and specificity of the three self-perceived indices, these indices are not recommended for population screening and should be used as adjuncts to a normative index for decision-making in orthodontic treatment planning.
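The four metrics reported above follow directly from a 2x2 table of the self-perceived index against the normative reference. A minimal sketch; the counts are invented for illustration, not the study's data.

```python
# Screening-test metrics from a 2x2 table against a normative
# reference standard (tp/fp/fn/tn = true/false positives/negatives).
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),   # positive predictive value
        "npv": tn / (tn + fn),   # negative predictive value
    }

m = screening_metrics(tp=29, fp=35, fn=36, tn=893)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common treatment need is in the sampled population.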
ERIC Educational Resources Information Center
Arnau, Randolph C.; Broman-Fulks, Joshua J.; Green, Bradley A.; Berman, Mitchell E.
2009-01-01
The most commonly used measure of anxiety sensitivity is the 36-item Anxiety Sensitivity Index--Revised (ASI-R). Exploratory factor analyses have produced several different factors structures for the ASI-R, but an acceptable fit using confirmatory factor analytic approaches has only been found for a 21-item version of the instrument. We evaluated…
Picard-Meyer, Evelyne; Peytavin de Garam, Carine; Schereffer, Jean Luc; Marchal, Clotilde; Robardet, Emmanuelle; Cliquet, Florence
2015-01-01
This study evaluates the performance of five two-step SYBR Green RT-qPCR kits and five one-step SYBR Green qRT-PCR kits using real-time PCR assays. Two real-time thermocyclers with different throughput capacities were used. The analysed performance evaluation criteria included the generation of the standard curve, reaction efficiency, analytical sensitivity, intra- and interassay repeatability, as well as the costs and practicability of the kits and thermocycling times. We found that the optimised one-step PCR assays had a higher detection sensitivity than the optimised two-step assays regardless of the machine used, while no difference was detected in reaction efficiency, R² values, or intra- and inter-reproducibility between the two methods. The limit of detection at the 95% confidence level ranged from 15 to 981 copies/µL for the one-step kits and from 41 to 171 copies/µL for the two-step kits. Of the ten kits tested, the most efficient was the Quantitect SYBR Green qRT-PCR kit, with a limit of detection at the 95% confidence level of 20 and 22 copies/µL on the Rotor gene Q MDx and MX3005P thermocyclers, respectively. The study demonstrated the pivotal influence of the thermocycler on PCR performance for the detection of rabies RNA, as well as that of the master mixes. PMID:25785274
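Reaction efficiency, one of the evaluation criteria above, is conventionally derived from the slope of the standard curve (Ct against log10 of template copies): efficiency = 10^(-1/slope) - 1, with a slope of about -3.32 corresponding to 100%. A sketch with an illustrative dilution series (not the study's measurements):

```python
import math

# Reaction efficiency from a qPCR standard curve: least-squares fit of
# Ct against log10(copies), then efficiency = 10^(-1/slope) - 1.
def qpcr_efficiency(copies, ct):
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(ct) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, ct))
             / sum((xi - mx) ** 2 for xi in x))
    return 10 ** (-1.0 / slope) - 1.0

# Ten-fold dilutions with Ct rising ~3.32 cycles per dilution,
# i.e. a near-perfectly efficient reaction (doubling each cycle).
copies = [1e6, 1e5, 1e4, 1e3, 1e2]
cts = [15.0, 18.32, 21.64, 24.96, 28.28]
print(round(qpcr_efficiency(copies, cts), 3))
```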
NASA Technical Reports Server (NTRS)
Bair, E. K.
1986-01-01
The System Trades Study and Design Methodology Plan is used to conduct trade studies to define the combination of Space Shuttle Main Engine features that will optimize candidate engine configurations. This is accomplished by using vehicle sensitivities and engine parametric data to establish engine chamber pressure and area ratio design points for candidate engine configurations. Engineering analyses are to be conducted to refine and optimize the candidate configurations at their design points. The optimized engine data and characteristics are then evaluated and compared against other candidates being considered. The Evaluation Criteria Plan is then used to compare and rank the optimized engine configurations on the basis of cost.
Detection of bladder metabolic artifacts in (18)F-FDG PET imaging.
Roman-Jimenez, Geoffrey; Crevoisier, Renaud De; Leseur, Julie; Devillers, Anne; Ospina, Juan David; Simon, Antoine; Terve, Pierre; Acosta, Oscar
2016-04-01
Positron emission tomography using (18)F-fluorodeoxyglucose ((18)F-FDG-PET) is a widely used imaging modality in oncology. It enables significant functional information to be included in analyses of anatomical data provided by other imaging modalities. Although PET offers high sensitivity in detecting suspected malignant metabolism, (18)F-FDG uptake is not tumor-specific and can also accumulate in surrounding healthy tissue, which may consequently be mistaken for cancerous tissue. PET analyses of pelvic-located cancers may be particularly hampered by the bladder's physiological uptake potentially obliterating the tumor uptake. In this paper, we propose a novel method for detecting (18)F-FDG bladder artifacts based on a multi-feature double-step classification approach. Using two manually defined seeds (tumor and bladder), the method consists of a semi-automated double-step clustering strategy that simultaneously takes into consideration standard uptake values (SUV) on PET, Hounsfield values on computed tomography (CT), and the distance to the seeds. The method was applied to 52 PET/CT images from patients treated for locally advanced cervical cancer. Manual delineations of the bladder on CT images were used to evaluate bladder uptake detection capability. Tumor preservation was evaluated using a manual segmentation of the tumor, with a threshold of 42% of the maximal uptake within the tumor. Robustness was assessed by randomly selecting different initial seeds. The classification averages were 0.94±0.09 for sensitivity, 0.98±0.01 for specificity, and 0.98±0.01 for accuracy. These results suggest that the method is able to detect most (18)F-FDG bladder metabolism artifacts while preserving tumor uptake, and could thus be used as a pre-processing step for further artifact-free PET analyses. Copyright © 2016. Published by Elsevier Ltd.
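The seeded classification idea — labelling each voxel by its proximity to the tumor or bladder seed in a joint SUV/Hounsfield/spatial feature space — can be sketched as a toy nearest-seed assignment. The feature scales, seed positions, and voxel values below are all invented for illustration; the actual method is a two-step clustering, not this single pass.

```python
# Toy seeded voxel labelling: assign each voxel to the nearest seed
# in a scaled feature space of (PET SUV, CT Hounsfield units, position).
def label_voxels(voxels, seeds, scale):
    """voxels: [(suv, hu, (x, y, z)), ...]
    seeds: {name: (suv, hu, (x, y, z))}; scale: per-feature scaling."""
    labels = []
    for suv, hu, pos in voxels:
        best = min(
            seeds,
            key=lambda name: (
                ((suv - seeds[name][0]) / scale[0]) ** 2
                + ((hu - seeds[name][1]) / scale[1]) ** 2
                + sum((a - b) ** 2 for a, b in zip(pos, seeds[name][2]))
                / scale[2] ** 2
            ),
        )
        labels.append(best)
    return labels

# Hypothetical seeds and two voxels, one near each seed.
seeds = {"tumor": (8.0, 40.0, (10.0, 10.0, 10.0)),
         "bladder": (12.0, 5.0, (40.0, 10.0, 10.0))}
scale = (5.0, 50.0, 20.0)
voxels = [(7.5, 38.0, (12.0, 10.0, 10.0)),
          (11.0, 8.0, (38.0, 11.0, 10.0))]
print(label_voxels(voxels, seeds, scale))
```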
Vemer, Pepijn; Rutten-van Mölken, Maureen P M H; Kaper, Janneke; Hoogenveen, Rudolf T; van Schayck, C P; Feenstra, Talitha L
2010-06-01
Smoking cessation can be encouraged by reimbursing the costs of smoking cessation support (SCS). The short-term efficiency of reimbursement has been evaluated previously. However, a thorough estimate of the long-term cost-utility is lacking. To evaluate long-term effects of reimbursement of SCS, results from a randomized controlled trial were extrapolated to long-term outcomes in terms of health care costs and (quality-adjusted) life years (QALYs) gained, using the Chronic Disease Model. Our first scenario was no reimbursement. In a second scenario, the short-term cessation rates from the trial were extrapolated directly. Sensitivity analyses were based on the trial's confidence intervals. In the third scenario, the additional use of SCS found in the trial was combined with cessation rates from international meta-analyses. Intervention costs per QALY gained compared to the reference scenario were approximately €1200 when extrapolating the trial effects directly, and €4200 when combining the trial's use of SCS with the cessation rates from the literature. Taking all health care effects into account, even costs in life years gained, resulted in an estimated incremental cost-utility of €4500 and €7400, respectively. In both scenarios, costs per QALY remained below €16,000 in sensitivity analyses using a lifetime horizon. Extrapolating the higher use of SCS due to reimbursement led to more successful quitters and a gain in life years and QALYs. Accounting for overheads, administration costs, and the costs of SCS, these health gains could be obtained at relatively low cost, even when including costs in life years gained. Hence, reimbursement of SCS appears to be cost-effective from a health care perspective.
Moriwaki, K; Noto, S
2017-02-01
A model-based cost-effectiveness analysis was performed to evaluate the cost-effectiveness of secondary fracture prevention by osteoporosis liaison service (OLS) relative to no therapy in patients with osteoporosis and a history of hip fracture. Secondary fracture prevention by OLS is cost-effective in Japanese women with osteoporosis who have suffered a hip fracture. The purpose of this study was to estimate, from the perspective of Japan's healthcare system, the cost-effectiveness of secondary fracture prevention by OLS relative to no therapy in patients with osteoporosis and a history of hip fracture. A patient-level state transition model was developed to predict lifetime costs and quality-adjusted life years (QALYs) in patients with or without secondary fracture prevention by OLS. The incremental cost-effectiveness ratio (ICER) of secondary fracture prevention compared with no therapy was estimated. Sensitivity analyses were performed to examine the influence of parameter uncertainty on the base case results. Compared with no therapy, secondary fracture prevention in patients aged 65 with T-score of -2.5 resulted in an additional lifetime cost of $3396 per person and conferred an additional 0.118 QALY, resulting in an ICER of $28,880 per QALY gained. Deterministic sensitivity analyses showed that treatment duration and offset time strongly affect the cost-effectiveness of OLS. According to the results of scenario analyses, secondary fracture prevention by OLS was cost-saving compared with no therapy in patients with a family history of hip fracture and high alcohol intake. Secondary fracture prevention by OLS is cost-effective in Japanese women with osteoporosis who have suffered a hip fracture. In addition, secondary fracture prevention is less expensive than no therapy in high-risk patients with multiple risk factors.
Sud, Sachin; Mittmann, Nicole; Cook, Deborah J; Geerts, William; Chan, Brian; Dodek, Peter; Gould, Michael K; Guyatt, Gordon; Arabi, Yaseen; Fowler, Robert A
2011-12-01
Venous thromboembolism is difficult to diagnose in critically ill patients and may increase morbidity and mortality. To evaluate the cost-effectiveness of strategies to reduce morbidity from venous thromboembolism in critically ill patients, a Markov decision-analytic model was used to compare weekly compression ultrasound screening (screening) plus investigation for clinically suspected deep vein thrombosis (DVT) (case finding) versus case finding alone, and a hypothetical program to increase adherence to DVT prevention. Probabilities were derived from a systematic review of venous thromboembolism in medical-surgical intensive care unit patients. Costs (in 2010 US$) were obtained from hospitals in Canada, Australia, and the United States, and from the medical literature. Analyses were conducted from a societal perspective over a lifetime horizon. Outcomes included costs, quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios. In the base case, the rate of proximal DVT was 85 per 1,000 patients. Screening resulted in three fewer pulmonary emboli than case finding alone but also two additional bleeding episodes, and cost $223,801 per QALY gained. In sensitivity analyses, screening cost less than $50,000 per QALY only if the probability of proximal DVT increased from the baseline of 8.5% to 16%. By comparison, increasing adherence to appropriate pharmacologic thromboprophylaxis by 10% resulted in 16 fewer DVTs, one fewer pulmonary embolus, one additional heparin-induced thrombocytopenia event, and one additional bleeding event, and cost $27,953 per QALY gained. Programs achieving increased adherence to best-practice venous thromboembolism prevention were cost-effective over a wide range of program costs and were robust in probabilistic sensitivity analyses. Appropriate prophylaxis provides better value in terms of costs and health gains than routine screening for DVT. Resources should be targeted at optimizing thromboprophylaxis.
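A Markov decision-analytic model of the kind used above tracks a cohort through health states with a per-cycle transition matrix, accumulating costs and utilities. A minimal cohort sketch; all states, probabilities, costs, and utilities are invented for illustration.

```python
# Minimal Markov cohort model: propagate the state distribution
# through a transition matrix, summing per-cycle costs and QALYs.
def run_markov(p, cost, utility, start, cycles):
    """p[i][j]: per-cycle transition probability from state i to j;
    cost/utility: per-cycle values per state; start: initial distribution."""
    dist = list(start)
    total_cost = total_qaly = 0.0
    for _ in range(cycles):
        total_cost += sum(d * c for d, c in zip(dist, cost))
        total_qaly += sum(d * u for d, u in zip(dist, utility))
        dist = [sum(dist[i] * p[i][j] for i in range(len(dist)))
                for j in range(len(dist))]
    return total_cost, total_qaly

# Hypothetical states: well, post-DVT, dead (each row sums to 1).
p = [[0.90, 0.05, 0.05],
     [0.00, 0.90, 0.10],
     [0.00, 0.00, 1.00]]
cost = [1000.0, 5000.0, 0.0]   # cost per state per cycle
utility = [0.85, 0.70, 0.0]    # utility weight per state
print(run_markov(p, cost, utility, start=[1.0, 0.0, 0.0], cycles=5))
```

In a full analysis the expected costs and QALYs of each strategy would be compared as an ICER, and discounting would be applied per cycle.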
Lionetti, Francesca; Aron, Arthur; Aron, Elaine N; Burns, G Leonard; Jagiellowicz, Jadzia; Pluess, Michael
2018-01-22
According to empirical studies and recent theories, people differ substantially in their reactivity or sensitivity to environmental influences with some being generally more affected than others. More sensitive individuals have been described as orchids and less-sensitive ones as dandelions. Applying a data-driven approach, we explored the existence of sensitivity groups in a sample of 906 adults who completed the highly sensitive person (HSP) scale. According to factor analyses, the HSP scale reflects a bifactor model with a general sensitivity factor. In contrast to prevailing theories, latent class analyses consistently suggested the existence of three rather than two groups. While we were able to identify a highly sensitive (orchids, 31%) and a low-sensitive group (dandelions, 29%), we also detected a third group (40%) characterised by medium sensitivity, which we refer to as tulips in keeping with the flower metaphor. Preliminary cut-off scores for all three groups are provided. In order to characterise the different sensitivity groups, we investigated group differences regarding the Big Five personality traits, as well as experimentally assessed emotional reactivity in an additional independent sample. According to these follow-up analyses, the three groups differed in neuroticism, extraversion and emotional reactivity to positive mood induction with orchids scoring significantly higher in neuroticism and emotional reactivity and lower in extraversion than the other two groups (dandelions also differed significantly from tulips). Findings suggest that environmental sensitivity is a continuous and normally distributed trait but that people fall into three distinct sensitive groups along a sensitivity continuum.
Liu, C Carrie; Jethwa, Ashok R; Khariwala, Samir S; Johnson, Jonas; Shin, Jennifer J
2016-01-01
(1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of nondiagnostic and indeterminate cytology with parotid FNA. Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I² statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I² point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509-0.982) and a specificity of 0.995 (95% CI, 0.960-0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030-0.075) and 0.147 (95% CI, 0.106-0.188), respectively. FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. © American Academy of Otolaryngology-Head and Neck Surgery Foundation 2015.
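The posttest probability behind a Bayesian nomogram follows from pre-test odds multiplied by a likelihood ratio. Using the pooled sensitivity (0.882) and specificity (0.995) reported above, with an assumed (hypothetical) 20% pretest probability of malignancy:

```python
# Posttest probability from a likelihood ratio: convert probability
# to odds, multiply by LR, convert back.
def posttest_probability(pretest_p, lr):
    pre_odds = pretest_p / (1.0 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# LR+ = sens / (1 - spec); LR- = (1 - sens) / spec,
# from the pooled prospective-subgroup estimates above.
sens, spec = 0.882, 0.995
lr_pos = sens / (1 - spec)
lr_neg = (1 - sens) / spec

# With a hypothetical 20% pretest probability of malignancy:
print(round(posttest_probability(0.20, lr_pos), 3))  # after a positive FNA
print(round(posttest_probability(0.20, lr_neg), 3))  # after a negative FNA
```

The high specificity makes a positive FNA strongly confirmatory, while the moderate sensitivity means a negative result lowers, but does not exclude, the probability of malignancy.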
Simoneau, Gabrielle; Levis, Brooke; Cuijpers, Pim; Ioannidis, John P A; Patten, Scott B; Shrier, Ian; Bombardier, Charles H; de Lima Osório, Flavia; Fann, Jesse R; Gjerdingen, Dwenda; Lamers, Femke; Lotrakul, Manote; Löwe, Bernd; Shaaban, Juwita; Stafford, Lesley; van Weert, Henk C P M; Whooley, Mary A; Wittkampf, Karin A; Yeung, Albert S; Thombs, Brett D; Benedetti, Andrea
2017-11-01
Individual patient data (IPD) meta-analyses are increasingly common in the literature. In the context of estimating the diagnostic accuracy of ordinal or semi-continuous scale tests, sensitivity and specificity are often reported for a given threshold or a small set of thresholds, and a meta-analysis is conducted via a bivariate approach to account for their correlation. When IPD are available, sensitivity and specificity can be pooled for every possible threshold. Our objective was to compare the bivariate approach, which can be applied separately at every threshold, to two multivariate methods: the ordinal multivariate random-effects model and the Poisson correlated gamma-frailty model. Our comparison was empirical, using IPD from 13 studies that evaluated the diagnostic accuracy of the 9-item Patient Health Questionnaire depression screening tool, and included simulations. The empirical comparison showed that the implementation of the two multivariate methods is more laborious in terms of computational time and sensitivity to user-supplied values compared to the bivariate approach. Simulations showed that ignoring the within-study correlation of sensitivity and specificity across thresholds did not worsen inferences with the bivariate approach compared to the Poisson model. The ordinal approach was not suitable for simulations because the model was highly sensitive to user-supplied starting values. We tentatively recommend the bivariate approach rather than more complex multivariate methods for IPD diagnostic accuracy meta-analyses of ordinal scale tests, although the limited type of diagnostic data considered in the simulation study restricts the generalization of our findings. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Liu, C. Carrie; Jethwa, Ashok R.; Khariwala, Samir S.; Johnson, Jonas; Shin, Jennifer J.
2016-01-01
Objectives (1) To analyze the sensitivity and specificity of fine-needle aspiration (FNA) in distinguishing benign from malignant parotid disease. (2) To determine the anticipated posttest probability of malignancy and probability of non-diagnostic and indeterminate cytology with parotid FNA. Data Sources Independently corroborated computerized searches of PubMed, Embase, and Cochrane Central Register were performed. These were supplemented with manual searches and input from content experts. Review Methods Inclusion/exclusion criteria specified diagnosis of parotid mass, intervention with both FNA and surgical excision, and enumeration of both cytologic and surgical histopathologic results. The primary outcomes were sensitivity, specificity, and posttest probability of malignancy. Heterogeneity was evaluated with the I2 statistic. Meta-analysis was performed via a 2-level mixed logistic regression model. Bayesian nomograms were plotted via pooled likelihood ratios. Results The systematic review yielded 70 criterion-meeting studies, 63 of which contained data that allowed for computation of numerical outcomes (n = 5647 patients; level 2a) and consideration of meta-analysis. Subgroup analyses were performed in studies that were prospective, involved consecutive patients, described the FNA technique utilized, and used ultrasound guidance. The I2 point estimate was >70% for all analyses, except within prospectively obtained and ultrasound-guided results. Among the prospective subgroup, the pooled analysis demonstrated a sensitivity of 0.882 (95% confidence interval [95% CI], 0.509–0.982) and a specificity of 0.995 (95% CI, 0.960–0.999). The probabilities of nondiagnostic and indeterminate cytology were 0.053 (95% CI, 0.030–0.075) and 0.147 (95% CI, 0.106–0.188), respectively. Conclusion FNA has moderate sensitivity and high specificity in differentiating malignant from benign parotid lesions. Considerable heterogeneity is present among studies. PMID:26428476
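The Bayesian nomograms mentioned above encode a simple likelihood-ratio update. A minimal sketch using the pooled prospective-subgroup estimates; the 20% pretest probability of malignancy is an illustrative assumption, not a figure from the review:

```python
def posttest_probability(pretest_p, sensitivity, specificity, test_positive=True):
    """Bayes' rule via likelihood ratios, as read off a Fagan nomogram."""
    if test_positive:
        lr = sensitivity / (1 - specificity)      # LR+
    else:
        lr = (1 - sensitivity) / specificity      # LR-
    pretest_odds = pretest_p / (1 - pretest_p)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Pooled prospective-subgroup estimates from the review; 0.20 pretest is assumed
sens, spec = 0.882, 0.995
p_after_positive = posttest_probability(0.20, sens, spec, test_positive=True)
p_after_negative = posttest_probability(0.20, sens, spec, test_positive=False)
```

With these inputs a positive FNA raises a 20% pretest probability to roughly 98%, while a negative result lowers it to about 3%, illustrating the high-specificity, moderate-sensitivity pattern the review reports.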
Van Limbergen, J; Kalima, P; Taheri, S; Beattie, T F
2006-01-01
Rapid streptococcal tests (RSTs) for streptococcal pharyngitis have made diagnosis at once simpler and more complicated. The American Academy of Pediatrics recommends that all RSTs be confirmed by a follow up throat culture unless local validation has proved the RST to be equally sensitive. To evaluate (a) RST as a single diagnostic tool, compared with RST with or without throat culture; (b) clinical diagnosis and the relative contribution of different symptoms. The study included 213 patients with clinical signs of pharyngitis. Throat swabs were analysed using Quickvue+ Strep A Test; negative RSTs were backed up by throat culture. Thirteen clinical features commonly associated with strep throat were analysed using backward stepwise logistic regression. Positive results (RST or throat culture) were obtained in 33 patients; RST correctly identified 21. Eleven samples were false negative on RST. At a strep throat prevalence of 15.9%, sensitivity of RST was 65.6% (95% CI 46.8% to 81.4%) and specificity 99.4% (96.7% to 99.9%). Sensitivity of clinical diagnosis alone was 57% (34% to 78%) and specificity 71% (61% to 80%). Clinically, only history of sore throat, rash, and pyrexia contributed to the diagnosis of strep throat (p<0.05). The high specificity of RST facilitates early diagnosis of strep throat. However, the low sensitivity of RST does not support its use as a single diagnostic tool. The sensitivity in the present study is markedly different from that reported by the manufacturer. Clinical examination is of limited value in the diagnosis of strep throat. It is important to audit the performance of new diagnostic tests, previously validated in different settings.
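The reported accuracy figures can be reconstructed from 2x2 counts. In this sketch, the 21 true positives and 11 false negatives are as stated, while the single false-positive RST among the 180 culture-negative patients is inferred from the 99.4% specificity (an assumption, since the abstract does not give that count directly):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from 2x2 diagnostic counts."""
    return tp / (tp + fn), tn / (tn + fp)

# 21 true positives and 11 false negatives as stated; 1 false positive inferred
sens, spec = sens_spec(tp=21, fn=11, tn=179, fp=1)
print(round(sens * 100, 1), round(spec * 100, 1))  # → 65.6 99.4
```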
Analysis of Consumers' Preferences and Price Sensitivity to Native Chickens.
Lee, Min-A; Jung, Yoojin; Jo, Cheorun; Park, Ji-Young; Nam, Ki-Chang
2017-01-01
This study analyzed consumers' preferences and price sensitivity to native chickens. A survey was conducted from Jan 6 to 17, 2014, and data were collected from consumers (n=500) living in Korea. Statistical analyses evaluated the consumption patterns of native chickens, preference marketing for native chicken breeds which will be newly developed, and price sensitivity measurement (PSM). Of the subjects who preferred broilers, 24.3% did not purchase native chickens because of the dryness and tough texture, while those who preferred native chickens liked their chewy texture (38.2%). Of the total subjects, 38.2% preferred fried native chickens for processed food, 38.4% preferred direct sales for native chicken distribution, 51.0% preferred native chickens to be slaughtered in specialty stores, and 32.4% wanted easy access to native chickens. Additionally, the price stress range (PSR) was 50 won, and the point of marginal cheapness (PMC) and point of marginal expensiveness (PME) were 6,980 won and 12,300 won, respectively. Evaluation of the segmented market revealed that consumers who prefer broilers to native chicken breeds were more sensitive to the chicken price. To accelerate the consumption of newly developed native chicken meat, it is necessary to develop a texture that each consumer needs, to increase the accessibility of native chickens, and to offer diverse menus and recipes as well as reasonable pricing for native chickens.
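The PMC and PME in van Westendorp-style price sensitivity measurement are crossing points of cumulative response curves. A sketch with invented survey shares (the price grid and percentages below are illustrative, not the study's data):

```python
def crossing(prices, curve_a, curve_b):
    """Linearly interpolated price where two cumulative curves intersect."""
    for i in range(1, len(prices)):
        d0 = curve_a[i - 1] - curve_b[i - 1]
        d1 = curve_a[i] - curve_b[i]
        if d0 == 0:
            return prices[i - 1]
        if d0 * d1 < 0:  # sign change between adjacent grid points
            t = d0 / (d0 - d1)
            return prices[i - 1] + t * (prices[i] - prices[i - 1])
    return None

# Hypothetical cumulative shares of respondents on a price grid (won)
prices        = [5000, 7000, 9000, 11000, 13000]
too_cheap     = [0.60, 0.35, 0.15, 0.05, 0.00]
cheap         = [0.90, 0.70, 0.45, 0.20, 0.05]
expensive     = [0.05, 0.30, 0.55, 0.75, 0.90]
too_expensive = [0.00, 0.10, 0.30, 0.60, 0.85]

pmc = crossing(prices, too_cheap, expensive)       # point of marginal cheapness
pme = crossing(prices, too_expensive, cheap)       # point of marginal expensiveness
```

PMC falls where "too cheap" and "expensive" intersect, PME where "too expensive" and "cheap" intersect; the acceptable price band lies between them.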
Analysis of beryllium and depleted uranium: An overview of detection methods in aerosols and soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Camins, I.; Shinn, J.H.
We conducted a survey of commercially available methods for analysis of beryllium and depleted uranium in aerosols and soils to find a reliable, cost-effective, and sufficiently precise method for researchers involved in environmental testing at the Yuma Proving Ground, Yuma, Arizona. Criteria used for evaluation include cost, method of analysis, specificity, sensitivity, reproducibility, applicability, and commercial availability. We found that atomic absorption spectrometry with graphite furnace meets these criteria for testing samples for beryllium. We found that this method can also be used to test samples for depleted uranium. However, atomic absorption with graphite furnace is not as sensitive a measurement method for depleted uranium as it is for beryllium, so we recommend that quality control of depleted uranium analysis be maintained by testing 10 of every 1000 samples by neutron activation analysis. We also evaluated 45 companies and institutions that provide analyses of beryllium and depleted uranium. 5 refs., 1 tab.
Economic evaluation of algae biodiesel based on meta-analyses
NASA Astrophysics Data System (ADS)
Zhang, Yongli; Liu, Xiaowei; White, Mark A.; Colosi, Lisa M.
2017-08-01
The objective of this study is to elucidate the economic viability of algae-to-energy systems at a large scale, by developing a meta-analysis of five previously published economic evaluations of systems producing algae biodiesel. Data from original studies were harmonised into a standardised framework using financial and technical assumptions. Results suggest that the selling price of algae biodiesel under the base case would be $5.00-10.31/gal, higher than the selected benchmarks: $3.77/gal for petroleum diesel, and $4.21/gal for commercial biodiesel (B100) from conventional vegetable oil or animal fat. However, the projected selling price of algal biodiesel ($2.76-4.92/gal), following anticipated improvements, would be competitive. A scenario-based sensitivity analysis reveals that the price of algae biodiesel is most sensitive to algae biomass productivity, algae oil content, and algae cultivation cost. This indicates that improvements in the yield, quality, and cost of algae feedstock could be the key factors in making algae-derived biodiesel economically viable.
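A one-way (scenario-based) sensitivity analysis of this kind perturbs one input at a time and tracks the implied selling price. The break-even model and every parameter value below are invented purely for illustration; they are not the study's cost model:

```python
# Toy break-even price model: price = cultivation cost / (biomass x oil content).
# The functional form and all numbers are illustrative assumptions.
def biodiesel_price(productivity, oil_content, cultivation_cost):
    return cultivation_cost / (productivity * oil_content)

base = dict(productivity=25.0, oil_content=0.25, cultivation_cost=45.0)
base_price = biodiesel_price(**base)

for param in base:  # perturb each input by +/-20%, holding the others fixed
    low, high = 0.8 * base[param], 1.2 * base[param]
    span = [round(biodiesel_price(**{**base, param: v}), 2) for v in (low, high)]
    print(param, span)
```

The spread of prices across each parameter's span is what ranks the inputs by influence, as in the study's tornado-style comparison.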
DOE Office of Scientific and Technical Information (OSTI.GOV)
Derewonko, H.; Bosella, A.; Pataut, G.
1996-06-01
An evaluation program of Thomson CSF-TCS GaAs low noise and power MMIC technologies to 1 MeV equivalent neutron fluence levels, up to 1 {times} 10{sup 15} n/cm{sup 2}, ionizing 1.17--1.33 MeV Co{sup 60} dose levels in excess of 200 Mrad(GaAs), and dose rate levels reaching 1.89 {times} 10{sup 11} rad(GaAs)/s is presented in terms of proper component and parameter choices, DC/RF electrical measurements, and test methods under irradiation. Experimental results are explained together with drift analyses of electrical parameters that have determined threshold limits of component degradation. Modelling the effects of radiation on GaAs components relies on degradation analysis of the active layer, which appears to be the most sensitive factor. MMIC degradation under neutron fluence was simulated from irradiated FET data. Finally, based on the sensitivity of technological parameters, rad-hard design including material, technology, and MMIC design enhancement is discussed.
Mathes, Tim; Jacobs, Esther; Morfeld, Jana-Carina; Pieper, Dawid
2013-09-30
The number of Health Technology Assessment (HTA) agencies is increasing. One component of HTAs is the consideration of economic aspects, which is commonly addressed by performing economic evaluations. A convergence of recommendations for methods of health economic evaluations between international HTA agencies would facilitate the adaptation of results to different settings and avoid unnecessary expense. A first step in this direction is a detailed analysis of existing similarities and differences in recommendations to identify potential for harmonization. The objective is to provide an overview and comparison of the methodological recommendations of international HTA agencies for economic evaluations. The webpages of 127 international HTA agencies were searched for guidelines containing recommendations on methods for the preparation of economic evaluations. Additionally, the HTA agencies were asked to provide information on methods for economic evaluations. Recommendations of the included guidelines were extracted into standardized tables according to 13 methodological aspects. All process steps were performed independently by two reviewers. Finally, 25 publications of 14 HTA agencies were included in the analysis. Methods for economic evaluations vary widely. The greatest accordance was found for the type of analysis and the comparator: cost-utility analyses or cost-effectiveness analyses are recommended, and the comparator should consistently be usual care. The greatest differences were found in the recommendations on the measurement/sources of effects, discounting, and sensitivity analysis. The main difference regarding effects is the focus either on efficacy or effectiveness. Recommended discounting rates range from 1.5%-5% for effects and 3%-5% for costs, whereby it is mostly recommended to use the same rate for costs and effects.
With respect to sensitivity analysis, the main difference is that often either the probabilistic or the deterministic approach is recommended exclusively. Methods for modeling are described only vaguely, mainly with the rationale that the "appropriate model" depends on the decision problem. For all other aspects, a comparison is challenging because recommendations vary in their detail and the issues they address. There is considerable unexplained variance in the recommendations. Further effort is needed to harmonize methods for preparing economic evaluations.
Economic evaluation of ezetimibe treatment in combination with statin therapy in the United States.
Davies, Glenn M; Vyas, Ami; Baxter, Carl A
2017-07-01
This study assessed the cost-effectiveness of ezetimibe with statin therapy vs statin monotherapy from a US payer perspective, assuming the impending patent expiration of ezetimibe. A Markov-like economic model consisting of 28 distinct health states was used. Model population data were obtained from US linked claims and electronic medical records, with inclusion criteria based on diagnostic guidelines. Inputs came from recent clinical trials, meta-analyses, and cost-effectiveness analyses. The base-case scenario was used to evaluate the cost-effectiveness of adding ezetimibe 10 mg to statin in patients aged 35-74 years with a history of coronary heart disease (CHD) and/or stroke, and with low-density lipoprotein cholesterol (LDL-C) levels ≥70 mg/dL over a lifetime horizon, assuming a 90% price reduction of ezetimibe after 1 year to take into account the impending patent expiration in the second quarter of 2017. Sub-group analyses included patients with LDL-C levels ≥100 mg/dL and patients with diabetes with LDL-C levels ≥70 mg/dL. The lifetime discounted incremental cost-effectiveness ratio (ICER) for ezetimibe added to statin was $9,149 per quality-adjusted life year (QALY) for the base-case scenario. For patients with LDL-C levels ≥100 mg/dL, the ICER was $839/QALY; for those with diabetes and LDL-C levels ≥70 mg/dL, it was $560/QALY. One-way sensitivity analyses showed that the model was sensitive to changes in cost of ezetimibe, rate reduction of non-fatal CHD, and utility weight for non-fatal CHD in the base-case and sub-group analyses. Indirect costs or treatment discontinuation estimation were not included. Compared with statin monotherapy, ezetimibe with statin therapy was cost-effective for secondary prevention of CHD and stroke and for primary prevention of these conditions in patients whose LDL-C levels are ≥100 mg/dL and in patients with diabetes, taking into account a 90% cost reduction for ezetimibe.
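The headline result here is an incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental QALYs. The inputs in this sketch are hypothetical, chosen only so the output lands near the reported $9,149/QALY base case:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical lifetime discounted totals for statin+ezetimibe vs statin alone
base_case = icer(cost_new=61_830, cost_old=60_000, qaly_new=10.7, qaly_old=10.5)
print(round(base_case))  # → 9150
```

An ICER is compared against a willingness-to-pay threshold; one-way sensitivity analysis then reruns this calculation while varying a single input, such as the drug price.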
Goutier, Wouter; Kloeze, Margreet; McCreary, Andrew C
2016-03-01
There are a number of approved therapeutics for the management of alcohol dependence, which might also convey the potential as smoking cessation aids. The present study investigated the effect of a few of these therapeutics and potential candidates (non-peptide vasopressin V1b antagonists) on the expression of nicotine-induced behavioral sensitization in Wistar rats. The following compounds were included in this evaluation: rimonabant, bupropion, topiramate, acamprosate, naltrexone, mecamylamine, nelivaptan (SSR-149415, V1b antagonist) and two novel V1b antagonists. Following the development of nicotine-induced locomotor sensitization and a withdrawal period, the expression of sensitization was assessed in the presence of one of the examined agents given 30 minutes prior to the nicotine challenge injection. Acamprosate, naltrexone, rimonabant, mecamylamine, nelivaptan and V1b antagonist 'compound 2' significantly antagonized the expression of nicotine-induced sensitization. Whereas topiramate showed a trend for effects, the V1b antagonist 'compound 1' did not show any significant effects. Bupropion failed to block sensitization but increased activity alone and was therefore tested in development and cross-sensitization studies. Taken together, these findings provide pre-clinical evidence that these molecules attenuated the expression of nicotine-induced sensitization and should be further investigated as putative treatments for nicotine addiction. Moreover, V1b antagonists should be further investigated as a potential novel smoking cessation aid. © 2014 Society for the Study of Addiction.
Jain, Siddharth; Kilgore, Meredith; Edwards, Rodney K; Owen, John
2016-07-01
Preterm birth (PTB) is a significant cause of neonatal morbidity and mortality. Studies have shown that vaginal progesterone therapy for women diagnosed with shortened cervical length can reduce the risk of PTB. However, published cost-effectiveness analyses of vaginal progesterone for short cervix have not considered an appropriate range of clinically important parameters. To evaluate the cost-effectiveness of universal cervical length screening in women without a history of spontaneous PTB, assuming that all women with shortened cervical length receive progesterone to reduce the likelihood of PTB. A decision analysis model was developed to compare universal screening and no-screening strategies. The primary outcome was the cost-effectiveness ratio of both the strategies, defined as the estimated patient cost per quality-adjusted life-year (QALY) realized by the children. One-way sensitivity analyses were performed by varying progesterone efficacy to prevent PTB. A probabilistic sensitivity analysis was performed to address uncertainties in model parameter estimates. In our base-case analysis, assuming that progesterone reduces the likelihood of PTB by 11%, the incremental cost-effectiveness ratio for screening was $158,000/QALY. Sensitivity analyses show that these results are highly sensitive to the presumed efficacy of progesterone to prevent PTB. In a 1-way sensitivity analysis, screening results in cost-saving if progesterone can reduce PTB by 36%. Additionally, for screening to be cost-effective at WTP=$60,000 in three clinical scenarios, progesterone therapy has to reduce PTB by 60%, 34% and 93%. Screening is never cost-saving in the worst-case scenario or when serial ultrasounds are employed, but could be cost-saving with a two-day hospitalization only if progesterone were 64% effective. 
Cervical length screening and treatment with progesterone is not a dominant, cost-effective strategy unless progesterone is more effective than available data for US women suggest. Until future trials demonstrate greater progesterone efficacy, and effectiveness studies confirm a benefit from screening and treatment, the cost-effectiveness of universal cervical length screening in the United States remains questionable. Copyright © 2016 Elsevier Inc. All rights reserved.
Seasonal Influenza Vaccination for Children in Thailand: A Cost-Effectiveness Analysis
Meeyai, Aronrag; Praditsitthikorn, Naiyana; Kotirum, Surachai; Kulpeng, Wantanee; Putthasri, Weerasak; Cooper, Ben S.; Teerawattananon, Yot
2015-01-01
Background Seasonal influenza is a major cause of mortality worldwide. Routine immunization of children has the potential to reduce this mortality through both direct and indirect protection, but has not been adopted by any low- or middle-income countries. We developed a framework to evaluate the cost-effectiveness of influenza vaccination policies in developing countries and used it to consider annual vaccination of school- and preschool-aged children with either trivalent inactivated influenza vaccine (TIV) or trivalent live-attenuated influenza vaccine (LAIV) in Thailand. We also compared these approaches with a policy of expanding TIV coverage in the elderly. Methods and Findings We developed an age-structured model to evaluate the cost-effectiveness of eight vaccination policies parameterized using country-level data from Thailand. For policies using LAIV, we considered five different age groups of children to vaccinate. We adopted a Bayesian evidence-synthesis framework, expressing uncertainty in parameters through probability distributions derived by fitting the model to prospectively collected laboratory-confirmed influenza data from 2005-2009, by meta-analysis of clinical trial data, and by using prior probability distributions derived from literature review and elicitation of expert opinion. We performed sensitivity analyses using alternative assumptions about prior immunity, contact patterns between age groups, the proportion of infections that are symptomatic, cost per unit vaccine, and vaccine effectiveness. Vaccination of children with LAIV was found to be highly cost-effective, with incremental cost-effectiveness ratios between about 2,000 and 5,000 international dollars per disability-adjusted life year averted, and was consistently preferred to TIV-based policies. These findings were robust to extensive sensitivity analyses. 
The optimal age group to vaccinate with LAIV, however, was sensitive both to the willingness to pay for health benefits and to assumptions about contact patterns between age groups. Conclusions Vaccinating school-aged children with LAIV is likely to be cost-effective in Thailand in the short term, though the long-term consequences of such a policy cannot be reliably predicted given current knowledge of influenza epidemiology and immunology. Our work provides a coherent framework that can be used for similar analyses in other low- and middle-income countries. PMID:26011712
Huang, Huan; Taylor, Douglas C A; Carson, Robyn T; Sarocco, Phil; Friedman, Mark; Munsell, Michael; Blum, Steven I; Menzin, Joseph
2015-04-01
To use techniques of decision-analytic modeling to evaluate the effectiveness and costs of linaclotide vs lubiprostone in the treatment of adult patients with irritable bowel syndrome with constipation (IBS-C). Using model inputs derived from published literature, linaclotide Phase III trial data and a physician survey, a decision-tree model was constructed. Response to therapy was defined as (1) a ≥ 14-point increase from baseline in IBS-Quality-of-Life (IBS-QoL) questionnaire overall score at week 12 or (2) one of the top two responses (moderately/significantly relieved) on a 7-point IBS symptom relief question in ≥ 2 of 3 months. Patients who do not respond to therapy are assumed to fail therapy and accrue costs associated with a treatment failure. Model time horizon is aligned with clinical trial duration of 12 weeks. Model outputs include number of responders, quality-adjusted life-years (QALYs), and total costs (including direct and indirect). Both one-way and probabilistic sensitivity analyses were conducted. Treatment for IBS-C with linaclotide produced more responders than lubiprostone for both response definitions (19.3% vs 13.0% and 61.8% vs 57.2% for IBS-QoL and symptom relief, respectively), lower per-patient costs ($803 vs $911 and $977 vs $1056), and higher QALYs (0.1921 vs 0.1917 and 0.1909 vs 0.1894) over the 12-week time horizon. Results were similar for most one-way sensitivity analyses. In probabilistic sensitivity analyses, the majority of simulations resulted in linaclotide having higher treatment response rates and lower per-patient costs. There are no available head-to-head trials that compare linaclotide with lubiprostone; therefore, placebo-adjusted estimates of relative efficacy were derived for model inputs. The time horizon for this model is relatively short, as it was limited to the duration of available clinical trial data. Linaclotide was found to be a less costly option vs lubiprostone for the treatment of adult patients with IBS-C.
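At its core, the decision tree reduces each arm to an expected cost over responders and treatment failures. A sketch in which only the response rates echo the abstract; the per-patient costs are hypothetical:

```python
def expected_cost(p_response, cost_response, cost_failure):
    """Decision-tree expectation: responder and failure branches, weighted by probability."""
    return p_response * cost_response + (1 - p_response) * cost_failure

# Response rates from the IBS-QoL definition above; the two costs are invented
arm_linaclotide_like = expected_cost(0.193, cost_response=600, cost_failure=850)
arm_comparator_like  = expected_cost(0.130, cost_response=600, cost_failure=850)
```

Because failure is costlier than response in this toy setup, the arm with the higher response rate ends up with the lower expected cost, mirroring the direction of the study's result.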
Quantitative aspects of inductively coupled plasma mass spectrometry
Wagner, Barbara
2016-01-01
Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644971
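Calibration with pure standards, as described above, comes down to fitting a response line and inverting it for unknowns. A self-contained sketch with synthetic data (the concentrations, counts, and assumed linear range are all invented):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

conc   = [0.0, 1.0, 5.0, 10.0, 20.0]     # ng/mL calibration standards
counts = [12, 1010, 5015, 10020, 19990]  # synthetic detector counts
a, b = fit_line(conc, counts)

# Invert the calibration for an unknown sample's signal
unknown_conc = (8030 - b) / a
```

Matrix-matched standards or internal standards follow the same arithmetic; they differ in how the standards are prepared, not in the fit itself.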
Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.
Noble, Daniel W A; Lagisz, Malgorzata; O'dea, Rose E; Nakagawa, Shinichi
2017-05-01
Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions. © 2017 John Wiley & Sons Ltd.
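One of the simplest sensitivity analyses in this spirit is leave-one-out: recompute the pooled estimate after dropping each study in turn. A sketch using a fixed-effect inverse-variance pool with hypothetical effect sizes (the third study is deliberately an outlier):

```python
def pooled(effects, variances):
    """Fixed-effect inverse-variance pooled estimate."""
    weights = [1 / v for v in variances]
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

effects   = [0.30, 0.25, 0.90, 0.28]   # hypothetical; index 2 is an outlier
variances = [0.01, 0.02, 0.05, 0.01]

overall = pooled(effects, variances)
leave_one_out = [
    pooled(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])
    for i in range(len(effects))
]
```

Dropping the outlier shifts the pooled estimate the most, which is exactly the kind of fragility a leave-one-out pass is meant to surface; handling nonindependent effect sizes properly requires multilevel models beyond this sketch.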
NASA Astrophysics Data System (ADS)
Stockton, T. B.; Black, P. K.; Catlett, K. M.; Tauxe, J. D.
2002-05-01
Environmental modeling is an essential component in the evaluation of regulatory compliance of radioactive waste management sites (RWMSs) at the Nevada Test Site in southern Nevada, USA. For those sites that are currently operating, further goals are to support integrated decision analysis for the development of acceptance criteria for future wastes, as well as site maintenance, closure, and monitoring. At these RWMSs, the principal pathways for release of contamination to the environment are upward towards the ground surface rather than downwards towards the deep water table. Biotic processes, such as burrow excavation and plant uptake and turnover, dominate this upward transport. A combined multi-pathway contaminant transport and risk assessment model was constructed using the GoldSim modeling platform. This platform facilitates probabilistic analysis of environmental systems, and is especially well suited for assessments involving radionuclide decay chains. The model employs probabilistic definitions of key parameters governing contaminant transport, with the goals of quantifying cumulative uncertainty in the estimation of performance measures and providing information necessary to perform sensitivity analyses. This modeling differs from previous radiological performance assessments (PAs) in that the modeling parameters are intended to be representative of the current knowledge, and the uncertainty in that knowledge, of parameter values rather than reflective of a conservative assessment approach. While a conservative PA may be sufficient to demonstrate regulatory compliance, a parametrically honest PA can also be used for more general site decision-making. In particular, a parametrically honest probabilistic modeling approach allows both uncertainty and sensitivity analyses to be explicitly coupled to the decision framework using a single set of model realizations. 
For example, sensitivity analysis provides a guide for analyzing the value of collecting more information by quantifying the relative importance of each input parameter in predicting the model response. However, in complex, high-dimensional ecosystem models such as the RWMS model, the system dynamics can be strongly non-linear. Quantitatively assessing the importance of input variables becomes more difficult as the dimensionality, non-linearity, and non-monotonicity of the model increase. Data-mining methods such as Multivariate Adaptive Regression Splines (MARS) and the Fourier Amplitude Sensitivity Test (FAST) provide tools for global sensitivity analysis in these high-dimensional, non-linear situations. The enhanced interpretability of model output provided by the quantitative measures these global sensitivity analysis tools estimate will be demonstrated using the RWMS model.
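The first-order variance-based index that methods like FAST estimate can be sketched in plain Python. The three-input test function below is a hypothetical stand-in for the RWMS transport model, and the binned conditional-variance estimator is a crude illustration of the idea S_i = Var(E[Y|X_i])/Var(Y), not the FAST algorithm itself:

```python
import math
import random
import statistics

random.seed(0)

def model(x1, x2, x3):
    # Toy non-linear, non-monotonic response standing in for the RWMS
    # transport model (a made-up surrogate for illustration only).
    return math.sin(x1) + 4.0 * x2 ** 2 + 0.1 * x3

N = 20000
xs = [(random.uniform(-math.pi, math.pi),
       random.uniform(-1.0, 1.0),
       random.uniform(-1.0, 1.0)) for _ in range(N)]
ys = [model(*x) for x in xs]
var_y = statistics.pvariance(ys)

def first_order_index(dim, bins=20):
    """S_i = Var(E[Y | X_i]) / Var(Y), estimated by binning X_i."""
    lo = min(x[dim] for x in xs)
    hi = max(x[dim] for x in xs)
    buckets = [[] for _ in range(bins)]
    for x, y in zip(xs, ys):
        b = min(int((x[dim] - lo) / (hi - lo) * bins), bins - 1)
        buckets[b].append(y)
    filled = [b for b in buckets if b]
    means = [statistics.fmean(b) for b in filled]
    weights = [len(b) / N for b in filled]
    grand = sum(w * m for w, m in zip(weights, means))
    return sum(w * (m - grand) ** 2 for w, m in zip(weights, means)) / var_y

for i in range(3):
    print(f"S{i + 1} ~ {first_order_index(i):.2f}")
```

For this additive toy model the indices should approximately sum to one, with the quadratic x2 term dominating; a strong interaction structure would instead leave a large unexplained remainder, which is exactly the situation where global methods earn their keep.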
Preliminary post-emplacement safety analysis of the subseabed disposal of high-level nuclear waste
NASA Astrophysics Data System (ADS)
Kaplan, M. F.; Koplik, C. M.; Klett, R. D.
1984-09-01
The radiological hazard from the disposal of high-level nuclear waste within the deep ocean sediments is evaluated, on a preliminary basis, for locations in the central North Pacific and in the northwestern Atlantic. Radionuclide transport in the sediment and water column and by marine organisms is considered. Peak doses to an individual are approximately five orders of magnitude below background levels for both sites. Sensitivity analyses for most aspects of the post-emplacement systems models are included.
Space shuttle navigation analysis
NASA Technical Reports Server (NTRS)
Jones, H. L.; Luders, G.; Matchett, G. A.; Sciabarrasi, J. E.
1976-01-01
A detailed analysis of space shuttle navigation for each of the major mission phases is presented. A covariance analysis program for prelaunch IMU calibration and alignment for the orbital flight tests (OFT) is described, and a partial error budget is presented. The ascent, orbital operations and deorbit maneuver study considered GPS-aided inertial navigation in the Phase III GPS (1984+) time frame. The entry and landing study evaluated navigation performance for the OFT baseline system. Detailed error budgets and sensitivity analyses are provided for both the ascent and entry studies.
Easily constructed spectroelectrochemical cell for batch and flow injection analyses.
Flowers, Paul A; Maynor, Margaret A; Owens, Donald E
2002-02-01
The design and performance of an easily constructed spectroelectrochemical cell suitable for batch and flow injection measurements are described. The cell is fabricated from a commercially available 5-mm quartz cuvette and employs 60 ppi reticulated vitreous carbon as the working electrode, resulting in a reasonable compromise between optical sensitivity and thin-layer electrochemical behavior. The spectroelectrochemical traits of the cell in both batch and flow modes were evaluated using aqueous ferricyanide and compare favorably to those reported previously for similar cells.
Jahn, I; Foraita, R
2008-01-01
In Germany, gender-sensitive approaches are part of the guidelines for good epidemiological practice as well as of health reporting. They are increasingly demanded in order to implement the gender mainstreaming strategy in research funding by the federal government and the federal states. This paper focuses on methodological aspects of data analysis, using as an empirical example the health report of Bremen, a population-based cross-sectional study. Health reporting requires analysis and reporting methods that can uncover the sex/gender aspects of a question on the one hand, and that allow results to be communicated adequately on the other. The core question is: what consequences does the differing inclusion of the category sex in different statistical analyses for the identification of potential target groups have for the results? Logistic regressions and a two-stage procedure combining graphical models with CHAID decision trees, which allows complex results to be visualized, were applied exploratively. Both methods were run stratified by sex/gender as well as adjusted for sex/gender, and the results were compared. As long as no prior knowledge is available, only stratified analyses are able to detect differences between the sexes and within the sex/gender groups; adjusted analyses can detect sex/gender differences only if interaction terms are included in the model. Results are discussed from a statistical-epidemiological perspective as well as in the context of health reporting. In conclusion, whether a statistical method is gender-sensitive can only be judged for a concrete research question under known conditions. Often, an appropriate statistical procedure can be chosen after conducting separate analyses for women and men.
Future gender studies require innovative study designs as well as conceptual clarity with regard to the biological and sociocultural components of the category sex/gender.
Hsu, Justine; Zinsou, Cyprien; Parkhurst, Justin; N'Dour, Marguerite; Foyet, Léger; Mueller, Dirk H
2013-01-01
Behavioural interventions have been widely integrated in HIV/AIDS social marketing prevention strategies and are considered valuable in settings with high levels of risk behaviours and low levels of HIV/AIDS awareness. Despite their widespread application, there is a lack of economic evaluations comparing different behaviour change communication methods. This paper analyses the costs of increasing awareness and the cost-effectiveness of influencing behaviour change for five interventions in Benin. Cost and cost-effectiveness analyses used economic costs and primary effectiveness data drawn from surveys. Costs were collected for the provider inputs required to implement the interventions in 2009 and analysed per 'person reached'. Cost-effectiveness was analysed per 'person reporting systematic condom use'. Sensitivity analyses were performed on all uncertain variables and major assumptions. Cost per person reached varies by method, with public outreach events the least costly (US$2.29) and billboards the most costly (US$25.07). Influence on reported behaviour was limited: only three of the five interventions had a statistically significant correlation with reported condom use (magazines, radio broadcasts and public outreach events). Cost-effectiveness ratios per person reporting systematic condom use produced the following ranking: magazines, radio and public outreach events. Sensitivity analyses indicate the rankings are insensitive to variation of key parameters, although the ratios must be interpreted with caution. This analysis suggests that while individual interventions are an attractive use of resources to raise awareness, this may not translate into a cost-effective impact on behaviour change. The study found that the extensive reach of public outreach events did not influence behaviour change as cost-effectively as magazines or radio broadcasts.
Behavioural interventions are context-specific and their effectiveness influenced by a multitude of factors. Further analyses using a quasi-experimental design would be useful to programme implementers and policy makers as they face decisions regarding which HIV prevention activities to prioritize.
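The distinction between cost per person reached and cost per person reporting behaviour change can be sketched as follows. Only the US$2.29 (public outreach) and US$25.07 (billboards) figures come from the record above; the radio and magazine costs and all behaviour-change probabilities are invented for illustration:

```python
# Hypothetical inputs: cost per person reached and the probability that a
# reached person reports systematic condom use (only 2.29 and 25.07 are
# taken from the study; everything else is made up).
data = {
    "public outreach events": {"cost_per_reached": 2.29,  "p_condom_use": 0.005},
    "radio broadcasts":       {"cost_per_reached": 3.50,  "p_condom_use": 0.020},
    "magazines":              {"cost_per_reached": 5.00,  "p_condom_use": 0.040},
    "billboards":             {"cost_per_reached": 25.07, "p_condom_use": 0.010},
}

def cost_per_user(d):
    # Cost per person reporting systematic condom use = cost per person
    # reached / probability that a reached person adopts the behaviour.
    return d["cost_per_reached"] / d["p_condom_use"]

ranking = sorted(data, key=lambda name: cost_per_user(data[name]))
for name in ranking:
    print(f"{name}: US${cost_per_user(data[name]):.2f} per reporting user")
```

With these illustrative numbers the cheapest intervention per person reached (outreach) is no longer the cheapest per person changing behaviour, which is the study's central point.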
Jia, Erik; Chen, Tianlu
2018-01-01
Left-censored missing values commonly exist in targeted metabolomics datasets and can be considered missing not at random (MNAR). Improper processing of missing values will adversely affect subsequent statistical analyses. However, few imputation methods have been developed for and applied to the MNAR situation in the field of metabolomics, so a practical left-censored missing value imputation method is urgently needed. We developed GSimp, an iterative Gibbs-sampler-based left-censored missing value imputation approach. We compared GSimp with three other imputation methods on two real-world targeted metabolomics datasets and one simulated dataset using our imputation evaluation pipeline. The results show that GSimp outperforms the other imputation methods in terms of imputation accuracy, observation distribution, univariate and multivariate analyses, and statistical sensitivity. Additionally, a parallel version of GSimp was developed for dealing with large-scale metabolomics datasets. The R code for GSimp, the evaluation pipeline, a tutorial, and the real-world and simulated targeted metabolomics datasets are available at: https://github.com/WandeRum/GSimp. PMID:29385130
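The core constraint GSimp enforces — imputed values must fall below the censoring limit — can be illustrated with a minimal sketch. This is not the GSimp algorithm (which iterates Gibbs draws around a prediction model); it is a one-shot draw from a normal fitted to the observed values and truncated below the limit of detection, with all data simulated:

```python
import random
import statistics

random.seed(1)

# Simulated metabolite intensities; values below the limit of detection (LOD)
# are left-censored and recorded as None (MNAR: missingness depends on value).
LOD = 8.0
true_vals = [random.gauss(10, 2) for _ in range(500)]
observed = [v if v >= LOD else None for v in true_vals]

def impute_left_censored(xs, lod):
    """Replace each censored entry with a draw from a normal fitted to the
    observed values, truncated to (-inf, lod) by rejection sampling."""
    seen = [v for v in xs if v is not None]
    mu, sd = statistics.fmean(seen), statistics.stdev(seen)
    out = []
    for v in xs:
        if v is not None:
            out.append(v)
            continue
        while True:  # rejection step enforces the left-censoring constraint
            draw = random.gauss(mu, sd)
            if draw < lod:
                out.append(draw)
                break
    return out

imputed = impute_left_censored(observed, LOD)
print(statistics.fmean(imputed))
```

Simpler substitutions (e.g. LOD/2 for every missing value) distort the observation distribution and downstream tests, which is the failure mode the paper's evaluation pipeline measures.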
Comparing top-down and bottom-up costing approaches for economic evaluation within social welfare.
Olsson, Tina M
2011-10-01
This study compares two approaches to the estimation of social welfare intervention costs, one "top-down" and the other "bottom-up", for a group of social welfare clients with severe problem behavior participating in a randomized trial. Intervention costs incurred over a two-year period were compared by intervention category (foster care placement, institutional placement, mentorship services, individual support services and structured support services), estimation method (price, micro costing, average cost) and treatment group (intervention, control). Analyses are based upon 2007 costs for 156 individuals receiving 404 interventions. Overall, both approaches were found to produce reliable estimates of intervention costs at the group level but not at the individual level. As the choice of approach can greatly impact the estimate of the mean difference, adjustment based on the estimation approach should be incorporated into sensitivity analyses. Analysts must take care in assessing the purpose and perspective of the analysis when choosing a costing approach for use within economic evaluation.
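The two approaches can be contrasted in a small sketch; the intervention categories echo the abstract, but every client record, unit price and budget figure below is invented for illustration:

```python
import statistics

# Hypothetical client-level service use (bottom-up micro-costing data).
clients = [
    {"foster_days": 120, "mentor_hours": 10},
    {"foster_days": 0,   "mentor_hours": 40},
    {"foster_days": 200, "mentor_hours": 0},
]
UNIT_COST = {"foster_days": 90.0, "mentor_hours": 35.0}  # price per unit

def bottom_up(client):
    # Micro-costing: price every resource each client actually used.
    return sum(qty * UNIT_COST[item] for item, qty in client.items())

# Top-down: divide one aggregate programme budget across all clients.
TOTAL_PROGRAMME_BUDGET = 40000.0
top_down_per_client = TOTAL_PROGRAMME_BUDGET / len(clients)

bottom_up_costs = [bottom_up(c) for c in clients]
print(f"top-down per client:  {top_down_per_client:.0f}")
print(f"bottom-up mean:       {statistics.fmean(bottom_up_costs):.0f}")
```

The group-level means are of the same order, but the individual-level estimates diverge sharply (the top-down figure assigns the same cost to a client who used almost no services), mirroring the study's finding that reliability holds at the group level only.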
Costing behavioral interventions: a practical guide to enhance translation.
Ritzwoller, Debra P; Sukhanova, Anna; Gaglio, Bridget; Glasgow, Russell E
2009-04-01
Cost and cost effectiveness of behavioral interventions are critical parts of dissemination and implementation into non-academic settings. Due to the lack of indicative data and policy makers' increasing demands for both program effectiveness and efficiency, cost analyses can serve as valuable tools in the evaluation process. To stimulate and promote broader use of practical techniques that can be used to efficiently estimate the implementation costs of behavioral interventions, we propose a set of analytic steps that can be employed across a broad range of interventions. Intervention costs must be distinguished from research, development, and recruitment costs. The inclusion of sensitivity analyses is recommended to understand the implications of implementation of the intervention into different settings using different intervention resources. To illustrate these procedures, we use data from a smoking reduction practical clinical trial to describe the techniques and methods used to estimate and evaluate the costs associated with the intervention. Estimated intervention costs per participant were $419, with a range of $276 to $703, depending on the number of participants.
Girman, Cynthia J; Faries, Douglas; Ryan, Patrick; Rotelli, Matt; Belger, Mark; Binkowitz, Bruce; O'Neill, Robert
2014-05-01
The use of healthcare databases for comparative effectiveness research (CER) is increasing exponentially despite its challenges. Researchers must understand their data source and whether outcomes, exposures and confounding factors are captured sufficiently to address the research question. They must also assess whether bias and confounding can be adequately minimized. Many study design characteristics may impact on the results; however, minimal if any sensitivity analyses are typically conducted, and those performed are post hoc. We propose pre-study steps for CER feasibility assessment and to identify sensitivity analyses that might be most important to pre-specify to help ensure that CER produces valid interpretable results.
Khedmat, S; Rouhi, N; Drage, N; Shokouhinejad, N; Nekoofar, M H
2012-11-01
To compare the accuracy of digital radiography (DR), multidetector computed tomography (MDCT) and cone beam computed tomography (CBCT) in detecting vertical root fractures (VRF) in the absence and presence of gutta-percha root filling. The root canals of 100 extracted human single-rooted teeth were prepared and randomly divided into four groups: two experimental groups with artificially fractured roots and two intact groups as controls. In one experimental and one control group, a size 40, 0.04 taper gutta-percha cone was inserted in the root canals. Then DR, MDCT and CBCT were performed and the images evaluated. The sensitivity, specificity and accuracy of each imaging technique in the presence and absence of gutta-percha were calculated and compared. In the absence of gutta-percha, the specificity of DR, MDCT and CBCT was similar, and CBCT was the most accurate and sensitive imaging technique (P < 0.05). In the presence of gutta-percha, the accuracy of MDCT was higher than that of the other imaging techniques (P < 0.05). The sensitivity of CBCT and MDCT was significantly higher than that of DR (P < 0.05), whereas CBCT was the least specific technique. Under the conditions of this ex vivo study, CBCT was the most sensitive imaging technique in detecting vertical root fracture. The presence of gutta-percha reduced the accuracy, sensitivity and specificity of CBCT but not of MDCT. The sensitivity of DR was also reduced in the presence of gutta-percha. The use of MDCT as an alternative technique may be recommended when VRF are suspected in root filled teeth. However, as the radiation dose of MDCT is higher than that of CBCT, the technique could be considered at variance with the principles of ALARA. © 2012 International Endodontic Journal.
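The three metrics compared across imaging techniques come straight from a 2x2 confusion table. A minimal sketch, with hypothetical counts since the abstract reports no raw numbers:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from a 2x2 confusion table,
    as used to compare each imaging technique against known fracture status."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),   # fractured roots correctly flagged
        "specificity": tn / (tn + fp),   # intact roots correctly cleared
        "accuracy": (tp + tn) / total,   # all correct calls
    }

# Hypothetical counts for one technique (not from the study).
m = diagnostic_metrics(tp=22, fp=2, tn=23, fn=3)
print(m)  # {'sensitivity': 0.88, 'specificity': 0.92, 'accuracy': 0.9}
```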
Tegegne, Banchamlak; Getie, Sisay; Lemma, Wossenseged; Mohon, Abu Naser; Pillai, Dylan R
2017-01-19
Malaria is a major public health problem and an important cause of maternal and infant morbidity in sub-Saharan Africa, including Ethiopia. Early and accurate diagnosis of malaria with effective treatment is the best strategy for prevention and control of complications during pregnancy and of infant morbidity and mortality. However, laboratory diagnosis has relied on the identification of malaria parasites and parasite antigens in peripheral blood using Giemsa-stained microscopy or rapid diagnostic tests (RDTs), which lack analytical and clinical sensitivity. The aim of this study was to evaluate the performance of loop-mediated isothermal amplification (LAMP) for the diagnosis of malaria among pregnant women with suspected malaria in Northwest Ethiopia. A cross-sectional study was conducted from January to April 2016. Pregnant women (n = 87) suspected of having malaria at six health centres were enrolled. A venous blood sample was collected from each study subject and analysed for Plasmodium parasites by microscopy, RDT and LAMP. Diagnostic accuracy outcome measures (sensitivity, specificity, predictive values and Kappa scores) of microscopy, RDT and LAMP were compared to nested polymerase chain reaction (nPCR) as the gold standard. Specimen processing and reporting times were documented. Using nPCR as the gold standard, the sensitivity of microscopy and RDT was 90 and 70%, and the specificity was 98.7 and 97.4%, respectively. The LAMP assay was 100% sensitive and 93.5% specific compared to nPCR. This study showed a higher sensitivity of LAMP than of microscopy and RDT for the detection of malaria in pregnancy, as well as ease of use in point-of-care testing. LAMP warrants further evaluation in intermittent screening and treatment programmes in pregnancy.
Shomaker, Lauren B; Kelly, Nichole R; Radin, Rachel M; Cassidy, Omni L; Shank, Lisa M; Brady, Sheila M; Demidowich, Andrew P; Olsen, Cara H; Chen, Kong Y; Stice, Eric; Tanofsky-Kraff, Marian; Yanovski, Jack A
2017-10-01
Depression is associated with poor insulin sensitivity. We evaluated the long-term effects of a cognitive behavioral therapy (CBT) program for prevention of depression on insulin sensitivity in adolescents at risk for type 2 diabetes (T2D) with depressive symptoms. One-hundred nineteen adolescent females with overweight/obesity, T2D family history, and mild-to-moderate depressive symptoms were randomized to a 6-week CBT group (n = 61) or 6-week health education (HE) control group (n = 58). At baseline, posttreatment, and 1 year, depressive symptoms were assessed, and whole body insulin sensitivity (WBISI) was estimated from oral glucose tolerance tests. Dual energy X-ray absorptiometry assessed fat mass at baseline and 1 year. Primary outcomes were 1-year changes in depression and insulin sensitivity, adjusting for adiposity and other relevant covariates. Secondary outcomes were fasting and 2-hr insulin and glucose. We also evaluated the moderating effect of baseline depressive symptom severity. Depressive symptoms decreased in both groups (P < .001). Insulin sensitivity was stable in CBT and HE (ΔWBISI: .1 vs. .3) and did not differ between groups (P = .63). However, among girls with greater (moderate) baseline depressive symptoms (N = 78), those in CBT developed lower 2-hr insulin than those in HE (Δ-16 vs. 16 μIU/mL, P < .05). Additional metabolic benefits of CBT were seen for this subgroup in post hoc analyses of posttreatment to 1-year change. Adolescent females at risk for T2D decreased depressive symptoms and stabilized insulin sensitivity 1 year following brief CBT or HE. Further studies are required to determine if adolescents with moderate depression show metabolic benefits after CBT. © 2017 Wiley Periodicals, Inc.
Wei, Zhenglun Alan; Trusty, Phillip M; Tree, Mike; Haggerty, Christopher M; Tang, Elaine; Fogel, Mark; Yoganathan, Ajit P
2017-01-04
Cardiovascular simulations have great potential as a clinical tool for planning and evaluating patient-specific treatment strategies for those suffering from congenital heart diseases, specifically Fontan patients. However, several bottlenecks have delayed wider deployment of the simulations for clinical use; the main obstacle is simulation cost. Currently, time-averaged clinical flow measurements are utilized as numerical boundary conditions (BCs) in order to reduce the computational power and time needed to offer surgical planning within a clinical time frame. Nevertheless, pulsatile blood flow is observed in vivo, and its significant impact on numerical simulations has been demonstrated. Therefore, it is imperative to carry out a comprehensive study analyzing the sensitivity of using time-averaged BCs. In this study, sensitivity is evaluated based on the discrepancies between hemodynamic metrics calculated using time-averaged and pulsatile BCs; smaller discrepancies indicate less sensitivity. The current study incorporates a comparison between 3D patient-specific CFD simulations using both the time-averaged and pulsatile BCs for 101 Fontan patients. The sensitivity analysis involves two clinically important hemodynamic metrics: hepatic flow distribution (HFD) and indexed power loss (iPL). Paired demographic group comparisons revealed that HFD sensitivity is significantly different between single and bilateral superior vena cava cohorts but no other demographic discrepancies were observed for HFD or iPL. Multivariate regression analyses show that the best predictors for sensitivity involve flow pulsatilities, time-averaged flow rates, and geometric characteristics of the Fontan connection. These predictors provide patient-specific guidelines to determine the effectiveness of analyzing patient-specific surgical options with time-averaged BCs within a clinical time frame. Copyright © 2016 Elsevier Ltd. All rights reserved.
Nanoamplifiers synthesized from gadolinium and gold nanocomposites for magnetic resonance imaging
NASA Astrophysics Data System (ADS)
Tian, Xiumei; Shao, Yuanzhi; He, Haoqiang; Liu, Huan; Shen, Yingying; Huang, Wenlin; Li, Li
2013-03-01
We have synthesized an efficient and highly sensitive nanoamplifier composed of gadolinium-doped silica nanoparticles and gold nanoparticles (AuNPs). Magnetic resonance imaging (MRI) assays in vitro and in vivo revealed enhancement of signal sensitivity, which may be explained by electron transfer between water and the gadolinium-doped nanoparticles, apparent in the presence of gold. In vitro and in vivo evaluation demonstrated that the nanoamplifier incurred minimal cytotoxicity and immunotoxicity, increased stability, and gradual excretion patterns. Tumor-targeting properties were preliminarily determined when the nanoamplifier was injected into mouse models of colon cancer liver metastasis. Furthermore, although the AuNPs departed from the nanoamplifiers in specific mouse tissues, optical and magnetic resonance imaging remained efficient, especially in metastatic tumors. These assays validate our nanoamplifier as an effective MRI signal enhancer with potential for sensitive cancer diagnosis.
Electronic supplementary information (ESI) available: Protocols for the characterization, immunotoxicity and pharmacokinetics analyses. Additional supporting figures. See DOI: 10.1039/c3nr00170a
Power and sensitivity of alternative fit indices in tests of measurement invariance.
Meade, Adam W; Johnson, Emily C; Braddy, Phillip W
2008-05-01
Confirmatory factor analytic tests of measurement invariance (MI) based on the chi-square statistic are known to be highly sensitive to sample size. For this reason, G. W. Cheung and R. B. Rensvold (2002) recommended using alternative fit indices (AFIs) in MI investigations. In this article, the authors investigated the performance of AFIs with simulated data known not to be invariant. The results indicate that AFIs are much less sensitive to sample size, and more sensitive to a lack of invariance, than chi-square-based tests of MI. The authors suggest reporting differences in the comparative fit index (CFI) and R. P. McDonald's (1989) noncentrality index (NCI) to evaluate whether MI exists. Although a general cutoff for the change in CFI (.002) performed well in the analyses, condition-specific cutoffs for the change in McDonald's NCI performed better than a single cutoff value. Tables of these values are provided, as are recommendations for best practices in MI testing. PsycINFO Database Record (c) 2008 APA, all rights reserved.
Srivastava, Amrita; Singh, Anumeha; Singh, Satya S; Mishra, Arun K
2017-04-16
Comparative microbial survival is most easily appreciated by evaluating adaptive strategies during stress. In the present work, variations in antioxidative enzymes and the whole-cell proteome of salt-tolerant and salt-sensitive Frankia strains were analysed by spectrophotometry, SDS-PAGE and two-dimensional gel electrophoresis. This is the first report on the proteomic basis of salt tolerance in these newly isolated Frankia strains from Hippophae salicifolia D. Don. The salt-tolerant strain HsIi10 shows a greater increase in superoxide dismutase, catalase and ascorbate peroxidase contents than the salt-sensitive strain HsIi8. 2-DGE revealed distinct protein profiles for the salt-tolerant and salt-sensitive strains. Proteomic confirmation of salt tolerance in strains with an inbuilt capacity to thrive in nitrogen-deficient locales is a definite advantage for these microbes, and would be equally beneficial for improving soil nitrogen status. The efficient protein regulation of HsIi10 suggests further exploration of its potential use as a biofertilizer in saline soils.
Silva, William P P; Stramandinoli-Zanicotti, Roberta T; Schussel, Juliana L; Ramos, Gyl H A; Ioshi, Sergio O; Sassi, Laurindo M
2016-11-01
Objective: This article evaluates the sensitivity, specificity and accuracy of FNAB for pre-surgical diagnosis of benign and malignant lesions of the major and minor salivary glands of patients treated in the Department of Head and Neck Surgery of Erasto Gartner Hospital. Methods: This retrospective study analyzed medical records from January 2006 to December 2011 of patients with salivary gland lesions who underwent preoperative FNAB and, after surgical excision of the lesion, histopathological examination. Results: The study had a cohort of 130 cases, but 34 cases (26.2%) were considered unsatisfactory regarding cytology analyses. Based on the data, sensitivity was 66.7% (6/9), specificity was 81.6% (71/87), accuracy was 80.2% (77/96), the positive predictive value was 66.7% (6/9) and the negative predictive value was 81.6% (71/87). Conclusion: Despite the high rate of inadequate samples obtained by FNAB in this study, the technique offers high specificity, accuracy and acceptable sensitivity.
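The reported fractions can be checked directly. The counts below are the ones implied by the abstract's own fractions (6/9, 71/87, 77/96); note, as the comment flags, that the reported PPV and NPV reuse the same fractions as sensitivity and specificity, whereas the textbook definitions differ:

```python
# Counts implied by the reported fractions:
# sensitivity 6/9 -> TP = 6, FN = 3; specificity 71/87 -> TN = 71, FP = 16.
tp, fn, tn, fp = 6, 3, 71, 16

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)

# Caveat: the abstract reports PPV = 6/9 and NPV = 71/87, i.e. the same
# fractions as sensitivity and specificity; the usual definitions would be
# PPV = TP/(TP+FP) and NPV = TN/(TN+FN).
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}")
```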
Mixed kernel function support vector regression for global sensitivity analysis
NASA Astrophysics Data System (ADS)
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the Sobol indices can be estimated by post-processing the coefficients of the SVR meta-model. The MKF combines an orthogonal polynomials kernel function with a Gaussian radial basis kernel function, so it possesses both the global characteristic of the polynomial kernel and the local characteristic of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Its performance is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
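The mixing idea itself is simple to sketch. The paper builds its MKF from an orthogonal-polynomial kernel; the version below substitutes an ordinary inhomogeneous polynomial kernel as a simplified stand-in, mixed with a Gaussian RBF kernel by a convex weight:

```python
import math

def poly_kernel(x, y, degree=2):
    # Global-behaviour component (ordinary polynomial kernel; the paper
    # uses an orthogonal-polynomial kernel instead).
    return (sum(a * b for a, b in zip(x, y)) + 1.0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    # Local-behaviour component (Gaussian radial basis kernel).
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mixed_kernel(x, y, w=0.5):
    """Convex combination of a global (polynomial) and a local (RBF) kernel."""
    return w * poly_kernel(x, y) + (1.0 - w) * rbf_kernel(x, y)

print(mixed_kernel([1.0, 0.0], [1.0, 0.0], w=0.3))
```

A convex combination of positive semi-definite kernels is itself positive semi-definite, which is what makes such mixing valid inside an SVR meta-model.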
Vegter, Stefan; Boersma, Cornelis; Rozenbaum, Mark; Wilffert, Bob; Navis, Gerjan; Postma, Maarten J
2008-01-01
The fields of pharmacogenetics and pharmacogenomics have become important practical tools for progressing goals in medical and pharmaceutical research and development. As more screening tests are developed, with some already used in clinical practice, consideration of their cost-effectiveness implications is important. A systematic review was performed on the content of, and adherence to pharmacoeconomic guidelines of, recent pharmacoeconomic analyses performed in the field of pharmacogenetics and pharmacogenomics. Economic analyses of screening strategies for genetic variations that were evidence based and assumed to be associated with drug efficacy or safety were included in the review. The 20 papers included cover a variety of healthcare issues, including screening tests for several cytochrome P450 (CYP) enzyme genes, thiopurine S-methyltransferase (TPMT) and angiotensin-converting enzyme insertion/deletion (ACE I/D) polymorphisms. Most economic analyses reported that genetic screening was cost effective and often even clearly dominated existing non-screening strategies. However, we found a lack of standardization regarding aspects such as the perspective of the analysis, the factors included in the sensitivity analysis and the applied discount rates. In particular, an important limitation of several studies was the failure to provide a sufficient evidence-based rationale for an association between genotype and phenotype. Future economic analyses should be conducted using correct methods, with adherence to guidelines and including extensive sensitivity analyses. Most importantly, genetic screening strategies should be based on good evidence-based rationales. To these ends, we provide a list of recommendations for good pharmacoeconomic practice deemed useful in the fields of pharmacogenetics and pharmacogenomics, regardless of the country and origin of the economic analysis.
Putsathit, Papanin; Morgan, Justin; Bradford, Damien; Engelhardt, Nelly; Riley, Thomas V
2015-02-01
The Becton Dickinson (BD) PCR-based GeneOhm Cdiff assay has demonstrated high sensitivity and specificity for detecting Clostridium difficile. Recently, the BD Max platform, using the same principles as BD GeneOhm, has become available in Australia. This study aimed to investigate the sensitivity and specificity of the BD Max Cdiff assay for the detection of toxigenic C. difficile in an Australian setting. Between December 2013 and January 2014, 406 stool specimens from 349 patients were analysed with the BD Max Cdiff assay. Direct and enrichment toxigenic culture were performed on bioMérieux ChromID C. difficile agar as a reference method. Isolates from specimens with discrepant results were further analysed with an in-house PCR to detect the presence of toxin genes. The overall prevalence of toxigenic C. difficile was 7.2%. Concordance between the BD Max assay and enrichment culture was 98.5%. The sensitivity, specificity, positive predictive value and negative predictive value of the BD Max Cdiff assay were 95.5%, 99.0%, 87.5% and 99.7%, respectively, when compared to direct culture, and 91.7%, 99.0%, 88.0% and 99.4%, respectively, when compared to enrichment culture. The new BD Max Cdiff assay appears to be an excellent platform for rapid and accurate detection of toxigenic C. difficile.
Hirano, Emi; Fuji, Hiroshi; Onoe, Tsuyoshi; Kumar, Vinay; Shirato, Hiroki; Kawabuchi, Koichi
2014-03-01
The aim of this study was to evaluate the cost-effectiveness of proton beam therapy with cochlear dose reduction compared with conventional X-ray radiotherapy for medulloblastoma in childhood. We developed a Markov model to describe the health states of 6-year-old children with medulloblastoma after treatment with proton or X-ray radiotherapy. The risks of hearing loss were calculated from the cochlear dose for each treatment. Three health-related quality of life (HRQOL) measures (EQ-5D, HUI3 and SF-6D) were used to estimate quality-adjusted life years (QALYs). The incremental cost-effectiveness ratio (ICER) for proton beam therapy compared with X-ray radiotherapy was calculated for each HRQOL measure. Sensitivity analyses were performed to model uncertainty in these parameters. The ICERs for EQ-5D, HUI3 and SF-6D were $21 716/QALY, $11 773/QALY, and $20 150/QALY, respectively. One-way sensitivity analyses found that the results were sensitive to the discount rate, the risk of hearing loss after proton therapy, and the costs of proton irradiation. Cost-effectiveness acceptability curve analysis revealed a 99% probability of proton therapy being cost effective at a societal willingness-to-pay value. Proton beam therapy with cochlear dose reduction improves health outcomes at a cost that is within the acceptable cost-effectiveness range from the payer's standpoint.
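The ICER computation itself is a one-liner; the study's Markov model supplies the lifetime costs and QALYs that feed it. A minimal sketch with invented per-patient figures (the abstract reports only the resulting ratios, e.g. $11 773-$21 716/QALY depending on the HRQOL measure):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical discounted lifetime figures per patient (illustration only).
proton = {"cost": 40000.0, "qaly": 16.0}
xray   = {"cost": 18000.0, "qaly": 15.0}

ratio = icer(proton["cost"], proton["qaly"], xray["cost"], xray["qaly"])
print(ratio)  # 22000.0

WILLINGNESS_TO_PAY = 50000.0  # assumed societal threshold per QALY
print("cost effective" if ratio < WILLINGNESS_TO_PAY else "not cost effective")
```

In the full analysis the inputs are sampled probabilistically, producing the cost-effectiveness acceptability curve cited above rather than a single point estimate.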
White, J M L; McFadden, J P; White, I R
2008-03-01
Active patch test sensitization is an uncommon phenomenon which may have undesirable consequences for those undergoing this gold-standard investigation for contact allergy. To perform a retrospective analysis of the results of 241 subjects who were patch tested twice at a single centre evaluating approximately 1500 subjects per year. Positivity to 11 common allergens in the recommended European Baseline Series of contact allergens was analysed: nickel sulphate; Myroxylon pereirae; fragrance mix I; para-phenylenediamine; colophonium; epoxy resin; neomycin; quaternium-15; thiuram mix; sesquiterpene lactone mix; and para-tert-butylphenol resin. Only fragrance mix I gave a statistically significant increased rate of positivity on the second reading compared with the first (P=0.011). This trend was maintained when separately analysing a subgroup of 42 subjects who had been repeat patch tested within 1 year; this analysis was done to minimize the potential confounding factor of increased usage of fragrances with a wide interval between both tests. To reduce the confounding effect of age on our data, we calculated expected frequencies of positivity to fragrance mix I based on previously published data from our centre. This showed a marked excess of observed cases over predicted ones, particularly in women in the age range 40-60 years. We suspect that active sensitization to fragrance mix I may occur. Similar published analysis from another large group using standard methodology supports our data.
Fishbein, Anna B; Lee, Todd A; Cai, Miao; Oh, Sam S; Eng, Celeste; Hu, Donglei; Huntsman, Scott; Farber, Harold J; Serebrisky, Denise; Silverberg, Jonathan; Williams, L Keoki; Seibold, Max A; Sen, Saunak; Borrell, Luisa N; Avila, Pedro; Rodriguez-Cintron, William; Rodriguez-Santana, Jose R; Burchard, Esteban G; Kumar, Rajesh
2016-07-01
Pest allergen sensitization is associated with asthma morbidity in urban youth but minimally explored in Latino populations. Specifically, the effect of mouse sensitization on the risk of asthma exacerbation has been unexplored in Latino subgroups. To evaluate whether pest allergen sensitization is a predictor of asthma exacerbations and poor asthma control in urban minority children with asthma. Latino and African American children (8-21 years old) with asthma were recruited from 4 sites across the United States. Logistic regression models evaluated the association of mouse or cockroach sensitization with asthma-related acute care visits or hospitalizations. A total of 1,992 children with asthma in the Genes-environments and Admixture in Latino Americans (GALA-II) and Study of African-Americans, Asthma, Genes, and Environments (SAGE-II) cohorts were studied. Asthmatic children from New York had the highest rate of pest allergen sensitization (42% mouse, 56% cockroach), with the lowest rate in San Francisco (4% mouse, 8% cockroach). Mouse sensitization, more than cockroach, was associated with increased odds of acute care visits (adjusted odds ratio [aOR], 1.47; 95% CI, 1.07-2.03) or hospitalizations (aOR, 3.07; 95% CI, 1.81-5.18), even after controlling for self-reported race and site of recruitment. In stratified analyses, Mexican youth sensitized to mouse allergen did not have higher odds of asthma exacerbation. Other Latino and Puerto Rican youth sensitized to mouse had higher odds of hospitalization for asthma (aORs, 4.57 [95% CI, 1.86-11.22] and 10.01 [95% CI, 1.77-56.6], respectively) but not emergency department visits. Pest allergen sensitization is associated with higher odds of asthma exacerbations in urban minority youth. Puerto Rican and Other Latino youth sensitized to mouse were more likely to have asthma-related hospitalizations than Mexican youth. Copyright © 2016 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.
Using archived ITS data for sensitivity analyses in the estimation of mobile source emissions
DOT National Transportation Integrated Search
2000-12-01
The study described in this paper demonstrates the use of archived ITS data from San Antonio's TransGuide traffic management center (TMC) for sensitivity analyses in the estimation of on-road mobile source emissions. Because of the stark comparison b...
The lymphocyte transformation test for the diagnosis of drug allergy: sensitivity and specificity.
Nyfeler, B; Pichler, W J
1997-02-01
The diagnosis of a drug allergy is mainly based upon a very detailed history and the clinical findings. In addition, several in vitro or in vivo tests can be performed to demonstrate a sensitization to a certain drug. One of the in vitro tests is the lymphocyte transformation test (LTT), which can reveal a sensitization of T-cells by an enhanced proliferative response of peripheral blood mononuclear cells to a certain drug. To evaluate the sensitivity and specificity of the LTT, 923 case histories of patients with suspected drug allergy in whom an LTT was performed were retrospectively analysed. Based on the history and provocation tests, the probability (P) of a drug allergy was estimated to be > 0.9, 0.5-0.9, 0.1-0.5 or < 0.1, and was put in relation to a positive or negative LTT. Seventy-eight of 100 patients with a very likely drug allergy (P > 0.9) had a positive LTT, which indicates a sensitivity of 78%. If allergies to betalactam-antibiotics were analysed separately, the sensitivity was 74.4%. Fifteen of 102 patients in whom a classical drug allergy could be excluded (P < 0.1) nevertheless had a positive LTT (a specificity of 85%). The majority of these cases were classified as so-called pseudo-allergic reactions to NSAIDs. Patients with a clear history and clinical findings for a cotrimoxazole-related allergy all had a positive LTT (6/6), and in patients who reacted to drugs containing proteins, sensitization could be demonstrated as well (e.g. hen's egg lysozyme, 7/7). In 632 of the 923 cases, skin tests were also performed (scratch and/or epicutaneous), for which we found a lower sensitivity than for the LTT (64%), while the specificity was the same (85%). Although our data are somewhat biased by the high number of penicillin allergies and cannot be generalized to drug allergies caused by other compounds, we conclude that the LTT is a useful diagnostic test in drug allergies, able to support the diagnosis of a drug allergy and to pinpoint the relevant drug.
Anderson, Stacey E.; Shane, Hillary; Long, Carrie; Lukomska, Ewa; Meade, B. Jean; Marshall, Nikki B.
2016-01-01
Didecyldimethylammonium chloride (DDAC) is a dialkyl-quaternary ammonium compound that is used in numerous products for its bactericidal, virucidal and fungicidal properties. There have been clinical reports of immediate and delayed hypersensitivity reactions in exposed individuals; however, the sensitization potential of DDAC has not been thoroughly investigated. The purpose of these studies was to evaluate the irritancy and sensitization potential of DDAC following dermal exposure in a murine model. DDAC induced significant irritancy (0.5 and 1%), evaluated by ear swelling in female Balb/c mice. Initial evaluation of the sensitization potential was conducted using the local lymph node assay (LLNA) at concentrations ranging from 0.0625–1%. A concentration-dependent increase in lymphocyte proliferation was observed with a calculated EC3 value of 0.17%. Dermal exposure to DDAC did not induce increased production of IgE as evaluated by phenotypic analysis of draining lymph node B-cells (IgE+B220+) and measurement of total serum IgE levels. Additional phenotypic analyses revealed significant and dose-responsive increases in the absolute number of B-cells, CD4+ T-cells, CD8+ T-cells and dendritic cells in the draining lymph nodes, along with significant increases in the percentage of B-cells (0.25% and 1% DDAC) at Day 10 following 4 days of dermal exposure. There was also a significant and dose-responsive increase in the number of activated CD44 + CD4 + and CD8+ T-cells and CD86+ B-cells and dendritic cells following exposure to all concentrations of DDAC. These results demonstrate the potential for development of irritation and hypersensitivity responses to DDAC following dermal exposure and raise concerns about the use of this chemical and other quaternary ammonium compounds that may elicit similar effects. PMID:27216637
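The EC3 value reported above is, by LLNA convention, the concentration interpolated to give a stimulation index (SI) of 3 between the two tested doses that bracket it. A minimal sketch of that linear interpolation follows; the SI values used are hypothetical, as the abstract reports only the resulting EC3 of 0.17%.

```python
# EC3 estimation for the local lymph node assay (LLNA): the concentration
# expected to give a stimulation index (SI) of 3, interpolated linearly
# between the dose-response points bracketing SI = 3.
# The SI values below are hypothetical; only the 0.17% EC3 is in the abstract.

def ec3(conc_lo, si_lo, conc_hi, si_hi):
    """Interpolate the concentration at SI = 3 (requires si_lo < 3 <= si_hi)."""
    return conc_lo + (3.0 - si_lo) / (si_hi - si_lo) * (conc_hi - conc_lo)

estimate = ec3(conc_lo=0.125, si_lo=2.4, conc_hi=0.25, si_hi=4.1)
```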
Hemke, Robert; Maas, Mario; van Veenendaal, Mira; Dolman, Koert M; van Rossum, Marion A J; van den Berg, J Merlijn; Kuijpers, Taco W
2014-02-01
To assess the value of magnetic resonance imaging (MRI) in discriminating between active and inactive juvenile idiopathic arthritis (JIA) patients and to compare physical examination outcomes with MRI outcomes in the assessment of disease status in JIA patients. Consecutive JIA patients with knee involvement were prospectively studied using an open-bore MRI. Imaging findings from 146 JIA patients were analysed (59.6% female; mean age, 12.9 years). Patients were classified as clinically active or inactive. MRI features were evaluated using the JAMRIS system, comprising validated scores for synovial hypertrophy, bone marrow oedema, cartilage lesions and bone erosions. Inter-reader reliability was good for all MRI features (intra-class correlation coefficient [ICC] = 0.87-0.94). No differences were found between the two groups regarding MRI scores of bone marrow oedema, cartilage lesions or bone erosions. Synovial hypertrophy scores differed significantly between groups (P = 0.016). Nonetheless, synovial hypertrophy was also present in 14 JIA patients (35.9%) with clinically inactive disease. Of JIA patients considered clinically active, 48.6% showed no signs of MRI-based synovitis. MRI can discriminate between clinically active and inactive JIA patients. However, physical examination is neither very sensitive nor specific in evaluating JIA disease activity compared with MRI. Subclinical synovitis was present in >35% of presumed clinically inactive patients. • MRI is sensitive for evaluating juvenile idiopathic arthritis (JIA) disease activity. • Contrast-enhanced MRI can distinguish clinically active and inactive JIA patients. • Subclinical synovitis is present in 35.9 % of presumed clinically inactive patients. • Physical examination is neither sensitive nor specific in evaluating JIA disease activity.
An enzyme-linked immunosorbent assay for detection of botulinum toxin-antibodies.
Dressler, Dirk; Gessler, Frank; Tacik, Pawel; Bigalke, Hans
2014-09-01
Antibodies against botulinum neurotoxin (BNT-AB) can be detected by the mouse protection assay (MPA), the hemidiaphragm assay (HDA), and by enzyme-linked immunosorbent assays (ELISA). Both MPA and HDA require sacrifice of experimental animals, and they are technically delicate and labor intensive. We introduce a specially developed ELISA for detection of BNT-A-AB and evaluate it against the HDA. Thirty serum samples were tested by HDA and by the new ELISA. Results were compared, and receiver operating characteristic analyses were used to optimize ELISA parameter constellation to obtain either maximal overall accuracy, maximal test sensitivity, or maximal test specificity. When the ELISA is optimized for sensitivity, a sensitivity of 100% and a specificity of 55% can be reached. When it is optimized for specificity, a specificity of 100% and a sensitivity of 90% can be obtained. We present an ELISA for BNT-AB detection that can be-for the first time-customized for special purposes. Adjusted for optimal sensitivity, it reaches the best sensitivity of all BNT-AB tests available. Using the new ELISA together with the HDA as a confirmation test allows testing for BNT-AB in large numbers of patients receiving BT drugs in an economical, fast, and more animal-friendly way. © 2014 International Parkinson and Movement Disorder Society.
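The receiver operating characteristic analysis described above amounts to sweeping the ELISA cut-off and, at each candidate threshold, computing the resulting sensitivity and specificity; the "optimal" threshold then depends on which quantity is prioritized. A minimal sketch follows, using hypothetical scores and labels rather than the study's serum data.

```python
# ROC-style threshold selection: each candidate cut-off yields a
# (sensitivity, specificity, accuracy) triple; the chosen threshold depends
# on which criterion is maximized. Scores and labels are hypothetical.

def metrics_at(threshold, scores, labels):
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and not y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / len(labels)
    return sens, spec, acc

scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.8, 0.9]
labels = [False, False, False, True, False, True, True, True]

# threshold maximizing overall accuracy (ties broken toward higher cut-off):
best_acc, best_threshold = max((metrics_at(t, scores, labels)[2], t) for t in scores)
```

Optimizing for sensitivity instead pushes the cut-off down (fewer missed positives, more false positives), which mirrors the sensitivity/specificity trade-off the abstract reports.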
VFMA: Topographic Analysis of Sensitivity Data From Full-Field Static Perimetry
Weleber, Richard G.; Smith, Travis B.; Peters, Dawn; Chegarnov, Elvira N.; Gillespie, Scott P.; Francis, Peter J.; Gardiner, Stuart K.; Paetzold, Jens; Dietzsch, Janko; Schiefer, Ulrich; Johnson, Chris A.
2015-01-01
Purpose: To analyze static visual field sensitivity with topographic models of the hill of vision (HOV), and to characterize several visual function indices derived from the HOV volume. Methods: A software application, Visual Field Modeling and Analysis (VFMA), was developed for static perimetry data visualization and analysis. Three-dimensional HOV models were generated for 16 healthy subjects and 82 retinitis pigmentosa patients. Volumetric visual function indices, which are measures of quantity and comparable regardless of perimeter test pattern, were investigated. Cross-validation, reliability, and cross-sectional analyses were performed to assess this methodology and compare the volumetric indices to conventional mean sensitivity and mean deviation. Floor effects were evaluated by computer simulation. Results: Cross-validation yielded an overall R2 of 0.68 and index of agreement of 0.89, which were consistent among subject groups, indicating good accuracy. Volumetric and conventional indices were comparable in terms of test–retest variability and discriminability among subject groups. Simulated floor effects did not negatively impact the repeatability of any index, but large floor changes altered the discriminability for regional volumetric indices. Conclusions: VFMA is an effective tool for clinical and research analyses of static perimetry data. Topographic models of the HOV aid the visualization of field defects, and topographically derived indices quantify the magnitude and extent of visual field sensitivity. Translational Relevance: VFMA assists with the interpretation of visual field data from any perimetric device and any test location pattern. Topographic models and volumetric indices are suitable for diagnosis, monitoring of field loss, patient counseling, and endpoints in therapeutic trials. PMID:25938002
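A volumetric index of the kind described is, at heart, an integral of sensitivity over the tested region of the visual field. The crude Riemann-sum sketch below is an assumption for illustration only; the actual VFMA surface fitting and integration are considerably more sophisticated, and the grid values shown are hypothetical.

```python
# A volumetric visual-function index in the spirit of VFMA: integrating
# sensitivity (dB) over the tested area of the visual field. This crude
# Riemann sum on a regular grid is a simplifying assumption; VFMA fits a
# continuous hill-of-vision surface before integrating.

def hov_volume(grid, cell_deg=6.0):
    """Approximate volume (dB * deg^2) under a hill-of-vision grid."""
    cell_area = cell_deg * cell_deg
    return sum(s * cell_area for row in grid for s in row)

# 3x3 patch of sensitivities in dB at 6-degree spacing (hypothetical):
patch = [[28, 30, 27],
         [31, 33, 30],
         [26, 29, 25]]
volume = hov_volume(patch)
```

Because the result is a quantity of sensitivity integrated over area, it is comparable across perimeters with different test-point patterns, which is the property the abstract emphasizes.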
Zhang, Xiang; Faries, Douglas E; Boytsov, Natalie; Stamey, James D; Seaman, John W
2016-09-01
Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential of unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting sensitivity analyses to investigate the impact of unmeasured confounding in observational studies. In a real-world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could potentially lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies. Copyright © 2016 John Wiley & Sons, Ltd.
Adrion, Christine; Mansmann, Ulrich
2012-09-10
A statistical analysis plan (SAP) is a critical link between how a clinical trial is conducted and the clinical study report. To secure objective study results, regulatory bodies expect that the SAP will meet requirements in pre-specifying inferential analyses and other important statistical techniques. To write a good SAP for model-based sensitivity and ancillary analyses involves non-trivial decisions on and justification of many aspects of the chosen setting. In particular, trials with longitudinal count data as primary endpoints pose challenges for model choice and model validation. In the random effects setting, frequentist strategies for model assessment and model diagnosis are complex and not easily implemented and have several limitations. Therefore, it is of interest to explore Bayesian alternatives which provide the needed decision support to finalize a SAP. We focus on generalized linear mixed models (GLMMs) for the analysis of longitudinal count data. A series of distributions with over- and under-dispersion is considered. Additionally, the structure of the variance components is modified. We perform a simulation study to investigate the discriminatory power of Bayesian tools for model criticism in different scenarios derived from the model setting. We apply the findings to the data from an open clinical trial on vertigo attacks. These data are seen as pilot data for an ongoing phase III trial. To fit GLMMs we use a novel Bayesian computational approach based on integrated nested Laplace approximations (INLAs). The INLA methodology enables the direct computation of leave-one-out predictive distributions. These distributions are crucial for Bayesian model assessment. We evaluate competing GLMMs for longitudinal count data according to the deviance information criterion (DIC) or probability integral transform (PIT), and by using proper scoring rules (e.g. the logarithmic score). 
The instruments under study provide excellent tools for preparing decisions within the SAP in a transparent way when structuring the primary analysis, sensitivity or ancillary analyses, and specific analyses for secondary endpoints. The mean logarithmic score and DIC discriminate well between different model scenarios. It becomes obvious that the naive choice of a conventional random effects Poisson model is often inappropriate for real-life count data. The findings are used to specify an appropriate mixed model employed in the sensitivity analyses of an ongoing phase III trial. The proposed Bayesian methods are not only appealing for inference but notably provide a sophisticated insight into different aspects of model performance, such as forecast verification or calibration checks, and can be applied within the model selection process. The mean of the logarithmic score is a robust tool for model ranking and is not sensitive to sample size. Therefore, these Bayesian model selection techniques offer helpful decision support for shaping sensitivity and ancillary analyses in a statistical analysis plan of a clinical trial with longitudinal count data as the primary endpoint.
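The mean logarithmic score used above for model ranking is the average negative log of each observation's leave-one-out predictive probability, with lower values indicating better predictive performance. A minimal sketch follows; the predictive probabilities are hypothetical placeholders for the leave-one-out predictive densities that INLA can compute.

```python
# Mean logarithmic score for model ranking: the average of -log(p_i) over
# leave-one-out predictive probabilities p_i (lower is better). The
# probabilities below are hypothetical placeholders for INLA's
# leave-one-out predictive densities.

import math

def mean_log_score(predictive_probs):
    """Mean of -log(p_i) over leave-one-out predictive densities p_i."""
    return sum(-math.log(p) for p in predictive_probs) / len(predictive_probs)

model_a = mean_log_score([0.21, 0.35, 0.18, 0.40])  # consistently moderate
model_b = mean_log_score([0.05, 0.30, 0.02, 0.45])  # penalized for surprises
```

The logarithmic score is a proper scoring rule: it heavily penalizes models that assign very low predictive probability to observed outcomes, which is why model B ranks below model A here despite some well-predicted observations.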
Cost-effectiveness of EOB-MRI for Hepatocellular Carcinoma in Japan.
Nishie, Akihiro; Goshima, Satoshi; Haradome, Hiroki; Hatano, Etsuro; Imai, Yasuharu; Kudo, Masatoshi; Matsuda, Masanori; Motosugi, Utaroh; Saitoh, Satoshi; Yoshimitsu, Kengo; Crawford, Bruce; Kruger, Eliza; Ball, Graeme; Honda, Hiroshi
2017-04-01
The objective of the study was to evaluate the cost-effectiveness of gadoxetic acid-enhanced magnetic resonance imaging (EOB-MRI) in the diagnosis and treatment of hepatocellular carcinoma (HCC) in Japan compared with extracellular contrast media-enhanced MRI (ECCM-MRI) and contrast media-enhanced computed tomography (CE-CT) scanning. A 6-stage Markov model was developed to estimate lifetime direct costs and clinical outcomes associated with EOB-MRI. Diagnostic sensitivity and specificity, along with clinical data on HCC survival, recurrence, treatment patterns, costs, and health state utility values, were derived from predominantly Japanese publications. Parameters unavailable from publications were estimated in a Delphi panel of Japanese clinical experts who also confirmed the structure and overall approach of the model. Sensitivity analyses, including one-way, probabilistic, and scenario analyses, were conducted to account for uncertainty in the results. Over a lifetime horizon, EOB-MRI was associated with lower direct costs (¥2,174,869) and generated a greater number of quality-adjusted life years (QALYs) (9.502) than either ECCM-MRI (¥2,365,421, 9.303 QALYs) or CE-CT (¥2,482,608, 9.215 QALYs). EOB-MRI was superior to the other diagnostic strategies considered, and this finding was robust over sensitivity and scenario analyses. A majority of the direct costs associated with HCC in Japan were found to be costs of treatment. The model results revealed the superior cost-effectiveness of the EOB-MRI diagnostic strategy compared with ECCM-MRI and CE-CT. EOB-MRI could be the first-choice imaging modality for medical care of HCC among patients with hepatitis or liver cirrhosis in Japan. Widespread implementation of EOB-MRI could reduce health care expenditures, particularly downstream treatment costs, associated with HCC. Copyright © 2017 Elsevier HS Journals, Inc. All rights reserved.
Bradford, Williamson Z.; Fagan, Elizabeth A.; Glaspole, Ian; Glassberg, Marilyn K.; Glasscock, Kenneth F.; King, Talmadge E.; Lancaster, Lisa H.; Nathan, Steven D.; Pereira, Carlos A.; Sahn, Steven A.; Swigris, Jeffrey J.; Noble, Paul W.
2015-01-01
BACKGROUND: FVC outcomes in clinical trials on idiopathic pulmonary fibrosis (IPF) can be substantially influenced by the analytic methodology and the handling of missing data. We conducted a series of sensitivity analyses to assess the robustness of the statistical finding and the stability of the estimate of the magnitude of treatment effect on the primary end point of FVC change in a phase 3 trial evaluating pirfenidone in adults with IPF. METHODS: Source data included all 555 study participants randomized to treatment with pirfenidone or placebo in the Assessment of Pirfenidone to Confirm Efficacy and Safety in Idiopathic Pulmonary Fibrosis (ASCEND) study. Sensitivity analyses were conducted to assess whether alternative statistical tests and methods for handling missing data influenced the observed magnitude of treatment effect on the primary end point of change from baseline to week 52 in FVC. RESULTS: The distribution of FVC change at week 52 was systematically different between the two treatment groups and favored pirfenidone in each analysis. The method used to impute missing data due to death had a marked effect on the magnitude of change in FVC in both treatment groups; however, the magnitude of treatment benefit was generally consistent on a relative basis, with an approximate 50% reduction in FVC decline observed in the pirfenidone group in each analysis. CONCLUSIONS: Our results confirm the robustness of the statistical finding on the primary end point of change in FVC in the ASCEND trial and corroborate the estimated magnitude of the pirfenidone treatment effect in patients with IPF. TRIAL REGISTRY: ClinicalTrials.gov; No.: NCT01366209; URL: www.clinicaltrials.gov PMID:25856121
Stevanović, Jelena; Pompen, Marjolein; Le, Hoa H.; Rozenbaum, Mark H.; Tieleman, Robert G.; Postma, Maarten J.
2014-01-01
Background Stroke prevention is the main goal of treating patients with atrial fibrillation (AF). Vitamin-K antagonists (VKAs) represent an effective treatment for stroke prevention; however, the risk of bleeding and the requirement for regular coagulation monitoring limit their use. Apixaban is a novel oral anticoagulant associated with significantly lower hazard rates for stroke, major bleedings and treatment discontinuations, compared to VKAs. Objective To estimate the cost-effectiveness of apixaban compared to VKAs in non-valvular AF patients in the Netherlands. Methods A previously published lifetime Markov model using efficacy data from the ARISTOTLE and the AVERROES trials was modified to reflect the use of oral anticoagulants in the Netherlands. Dutch-specific costs, baseline population stroke risk and coagulation monitoring levels were incorporated. Univariate, probabilistic sensitivity and scenario analyses on the impact of different coagulation monitoring levels were performed on the incremental cost-effectiveness ratio (ICER). Results Treatment with apixaban compared to VKAs resulted in an ICER of €10,576 per quality-adjusted life year (QALY). These findings correspond with the lower number of strokes and bleedings associated with the use of apixaban compared to VKAs. Univariate sensitivity analyses revealed model sensitivity to the absolute stroke risk with apixaban and the treatment discontinuation risks with apixaban and VKAs. The probability that apixaban is cost-effective at a willingness-to-pay threshold of €20,000/QALY was 68%. Results of the scenario analyses on the impact of different coagulation monitoring levels were quite robust. Conclusions In patients with non-valvular AF, apixaban is likely to be a cost-effective alternative to VKAs in the Netherlands. PMID:25093723
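The 68% probability quoted above is one point on a cost-effectiveness acceptability curve: the share of probabilistic-sensitivity-analysis draws in which the incremental net monetary benefit at the chosen willingness-to-pay is positive. A minimal sketch follows; the PSA samples are hypothetical, not the study's.

```python
# One point on a cost-effectiveness acceptability curve: the probability
# of being cost-effective at a willingness-to-pay (WTP) threshold is the
# fraction of PSA samples with positive incremental net monetary benefit,
# NMB = WTP * delta_QALY - delta_cost. The samples below are hypothetical.

def prob_cost_effective(samples, wtp):
    """samples: iterable of (delta_cost, delta_qaly) pairs from a PSA."""
    nmb = [wtp * dq - dc for dc, dq in samples]
    return sum(1 for b in nmb if b > 0) / len(nmb)

psa = [(4000, 0.30), (6000, 0.25), (5000, 0.10), (3000, 0.28), (7000, 0.20)]
p = prob_cost_effective(psa, wtp=20_000)
```

Evaluating this probability across a range of WTP values traces out the full acceptability curve reported in such analyses.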
Cost-effectiveness of pharmacist-participated warfarin therapy management in Thailand.
Saokaew, Surasak; Permsuwan, Unchalee; Chaiyakunapruk, Nathorn; Nathisuwan, Surakit; Sukonthasarn, Apichard; Jeanpeerapong, Napawan
2013-10-01
Although pharmacist-participated warfarin therapy management (PWTM) is well established, economic evaluation of PWTM is still lacking, particularly in the Asia-Pacific region. The objective of this study was to estimate the cost-effectiveness of PWTM in Thailand using local data where available. A Markov model was used to compare lifetime costs and quality-adjusted life years (QALYs) accrued to patients receiving warfarin therapy through PWTM or usual care (UC). The model was populated with relevant information from both health care system and societal perspectives. Input data were obtained from literature and database analyses. Incremental cost-effectiveness ratios (ICERs) were presented as year 2012 values. A base-case analysis was performed for patients at age 45 years. Sensitivity analyses, including one-way and probabilistic sensitivity analyses, were constructed to determine the robustness of the findings. From the societal perspective, PWTM and UC result in 39.5 and 38.7 QALYs, respectively. Thus, PWTM increases QALYs by 0.79 and increases costs by 92,491 THB (3,083 USD) compared with UC (ICER 116,468 THB [3,882.3 USD] per QALY gained). From the health care system perspective, PWTM likewise gains 0.79 QALYs and increases costs by 92,788 THB (3,093 USD) compared with UC (ICER 116,842 THB [3,894.7 USD] per QALY gained). Thus, PWTM was cost-effective compared with usual care, assuming a willingness-to-pay (WTP) threshold of 150,000 THB/QALY. Results were sensitive to the discount rate and the cost of clinic set-up. Our findings suggest that PWTM is a cost-effective intervention. Policy-makers may consider our findings as part of the information in their decision-making for implementing this strategy into the healthcare benefit package. Further updates when additional data become available are needed. © 2013.
NASA Astrophysics Data System (ADS)
Jacquin, A. P.; Shamseldin, A. Y.
2009-04-01
This study analyses the sensitivity of the parameters of Takagi-Sugeno-Kang rainfall-runoff fuzzy models previously developed by the authors. These models can be classified in two types, where the first type is intended to account for the effect of changes in catchment wetness and the second type incorporates seasonality as a source of non-linearity in the rainfall-runoff relationship. The sensitivity analysis is performed using two global sensitivity analysis methods, namely Regional Sensitivity Analysis (RSA) and Sobol's Variance Decomposition (SVD). In general, the RSA method has the disadvantage of not being able to detect sensitivities arising from parameter interactions. By contrast, the SVD method is suitable for analysing models where the model response surface is expected to be affected by interactions at a local scale and/or local optima, such as the case of the rainfall-runoff fuzzy models analysed in this study. The data of six catchments from different geographical locations and sizes are used in the sensitivity analysis. The sensitivity of the model parameters is analysed in terms of two measures of goodness of fit, assessing the model performance from different points of view. These measures are the Nash-Sutcliffe criterion and the index of volumetric fit. The results of the study show that the sensitivity of the model parameters depends on both the type of non-linear effects (i.e. changes in catchment wetness or seasonality) that dominates the catchment's rainfall-runoff relationship and the measure used to assess the model performance. Acknowledgements: This research was supported by FONDECYT, Research Grant 11070130. We would also like to express our gratitude to Prof. Kieran M. O'Connor from the National University of Ireland, Galway, for providing the data used in this study.
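Sobol's Variance Decomposition attributes a share of the output variance to each input: the first-order index of input i is Var(E[Y|Xi]) / Var(Y). A minimal Monte Carlo sketch using a pick-and-freeze estimator follows, applied to a toy linear model rather than the paper's fuzzy rainfall-runoff models; for y = x1 + 2*x2 with independent Uniform(0, 1) inputs, the indices are analytically S1 = 0.2 and S2 = 0.8.

```python
# Pick-and-freeze Monte Carlo estimate of first-order Sobol' indices for a
# model with independent Uniform(0, 1) inputs. The toy model below is an
# illustration, not the Takagi-Sugeno-Kang fuzzy models of the study.

import random

def sobol_first_order(model, dim, n=20_000, seed=1):
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [model(x) for x in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # AB_i: input i frozen from sample A, all other inputs taken from B;
        # E[yA * yAB_i] - mean^2 estimates Var(E[Y | X_i]).
        yABi = [model(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
        cov = sum(ya * yab for ya, yab in zip(yA, yABi)) / n - mean ** 2
        indices.append(cov / var)
    return indices

# Toy model y = x1 + 2*x2: variance shares scale with squared coefficients.
s1, s2 = sobol_first_order(lambda x: x[0] + 2 * x[1], dim=2)
```

For an additive model like this the first-order indices sum to one; interaction effects, which the abstract notes RSA cannot detect, would show up as a total less than one plus higher-order terms.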
Fabbrocini, Adele; D'Adamo, Raffaele; Del Prete, Francesco; Langellotti, Antonio Luca; Rinna, Francesca; Silvestri, Fausto; Sorrenti, Gerarda; Vitiello, Valentina; Sansone, Giovanni
2012-10-01
The aim of this study was to evaluate the feasibility of using cryopreserved S. aurata semen in spermiotoxicity tests. Cryopreservation is a biotechnology that can provide viable gametes and embryos on demand, rather than only in the spawning season, thus overcoming a limitation that has hindered the use of some species in ecotoxicological bioassays. Firstly, the sperm motility pattern of cryopreserved semen was evaluated after thawing by means of both visual and computer-assisted analyses. Motility parameters in the cryopreserved semen did not change significantly in the first hour after thawing, meaning that they were maintained for long enough to enable their use in spermiotoxicity tests. In the second phase of the research, bioassays were performed, using cadmium as the reference toxicant, in order to evaluate the sensitivity of cryopreserved S. aurata semen to ecotoxicological contamination. The sensitivity of the sperm motility parameters used as endpoints (motility percentages and velocities) proved to be comparable to what has been recorded for the fresh semen of other aquatic species (LOECs from 0.02 to 0.03 mg L(-1)). The test showed good reliability and was found to be rapid and easy to perform, requiring only a small volume of the sample. Moreover, cryopreserved semen is easy to store and transfer and makes it possible to perform bioassays in different sites or at different times with the same batch of semen. The proposed bioassay is therefore a promising starting point for the development of toxicity tests that are increasingly tailored to the needs of ecotoxicology and environmental quality evaluation strategies. Copyright © 2012 Elsevier Inc. All rights reserved.
Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design
NASA Technical Reports Server (NTRS)
Kuguoglu, Latife; Ludwiczak, Damian
2006-01-01
The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel, letting the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. A probabilistic structural performance evaluation of an HRS radiator sandwich panel was performed, assessing the panel's structural performance in the presence of uncertainties in the loading, fabrication process variables, and material properties. Stress and displacement contours from the deterministic structural analysis at mean probability were computed and the results presented, followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel's structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis were used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as the maximum displacement, the maximum tensile and compressive stresses of the facesheet in the x and y directions, and the maximum von Mises stresses of the tube, to the loading and design variables was determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.
Morgado, José Mário T; Sánchez-Muñoz, Laura; Teodósio, Cristina G; Jara-Acevedo, Maria; Alvarez-Twose, Iván; Matito, Almudena; Fernández-Nuñez, Elisa; García-Montero, Andrés; Orfao, Alberto; Escribano, Luís
2012-04-01
Aberrant expression of CD2 and/or CD25 by bone marrow, peripheral blood or other extracutaneous tissue mast cells is currently used as a minor World Health Organization diagnostic criterion for systemic mastocytosis. However, the diagnostic utility of CD2 versus CD25 expression by mast cells has not been prospectively evaluated in a large series of systemic mastocytosis. Here we evaluate the sensitivity and specificity of CD2 versus CD25 expression in the diagnosis of systemic mastocytosis. Mast cells from a total of 886 bone marrow and 153 other non-bone marrow extracutaneous tissue samples were analysed by multiparameter flow cytometry following the guidelines of the Spanish Network on Mastocytosis at two different laboratories. The 'CD25+ and/or CD2+ bone marrow mast cells' World Health Organization criterion showed an overall sensitivity of 100% with 99.0% specificity for the diagnosis of systemic mastocytosis, whereas CD25 expression alone presented a similar sensitivity (100%) with a slightly higher specificity (99.2%). Inclusion of CD2 did not improve the sensitivity of the test and it decreased its specificity. In tissues other than bone marrow, the mast cell phenotypic criterion proved less sensitive. In summary, CD2 expression does not contribute to improve the diagnosis of systemic mastocytosis when compared with aberrant CD25 expression alone, which supports the need to update and replace the minor World Health Organization 'CD25+ and/or CD2+' mast cell phenotypic diagnostic criterion with a major criterion based exclusively on CD25 expression.
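Sensitivity and specificity of a diagnostic criterion follow directly from the 2x2 table of test calls versus true disease status. The counts below are invented for illustration, chosen only so the rates resemble the percentages reported above:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a marker evaluated on 200 disease and 800
# control samples (illustrative only; not the study's actual data).
sens, spec = sensitivity_specificity(tp=200, fn=0, tn=792, fp=8)
```

With these counts sensitivity is 100% and specificity 99%, showing how adding a second, less specific marker (as with CD2) can only add false positives once sensitivity is already maximal.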
Barshes, Neal R; Flores, Everardo; Belkin, Michael; Kougias, Panos; Armstrong, David G; Mills, Joseph L
2016-12-01
Patients with diabetic foot ulcers (DFUs) should be evaluated for peripheral artery disease (PAD). We sought to estimate the overall diagnostic accuracy for various strategies that are used to identify PAD in this population. A Markov model with probabilistic and deterministic sensitivity analyses was used to simulate the clinical events in a population of 10,000 patients with diabetes. One of 14 different diagnostic strategies was applied to those who developed DFUs. Baseline data on diagnostic accuracy of individual noninvasive tests were based on a meta-analysis of previously reported studies. The overall sensitivity and cost-effectiveness of the 14 strategies were then compared. The overall sensitivity of various combinations of diagnostic testing strategies ranged from 32.6% to 92.6%. Cost-effective strategies included ankle-brachial indices for all patients; skin perfusion pressures (SPPs) or toe-brachial indices (TBIs) for all patients; and SPPs or TBIs to corroborate normal pulse examination findings, a strategy that lowered leg amputation rates by 36%. Strategies that used noninvasive vascular testing to investigate only abnormal pulse examination results had low overall diagnostic sensitivity and were weakly dominated in cost-effectiveness evaluations. Population prevalence of PAD did not alter strategy ordering by diagnostic accuracy or cost-effectiveness. TBIs or SPPs used uniformly or to corroborate a normal pulse examination finding are among the most sensitive and cost-effective strategies to improve the identification of PAD among patients presenting with DFUs. These strategies may significantly reduce leg amputation rates with only modest increases in cost. Published by Elsevier Inc.
Simoens, Steven
2013-01-01
Objectives This paper aims to assess the methodological quality of economic evaluations included in Belgian reimbursement applications for Class 1 drugs. Materials and Methods For 19 reimbursement applications submitted during 2011 and Spring 2012, a descriptive analysis assessed the methodological quality of the economic evaluation, evaluated the assessment of that economic evaluation by the Drug Reimbursement Committee and the response to that assessment by the company. Compliance with methodological guidelines issued by the Belgian Healthcare Knowledge Centre was assessed using a detailed checklist of 23 methodological items. The rate of compliance was calculated based on the number of economic evaluations for which the item was applicable. Results Economic evaluations tended to comply with guidelines regarding perspective, target population, subgroup analyses, comparator, use of comparative clinical data and final outcome measures, calculation of costs, incremental analysis, discounting and time horizon. However, more attention needs to be paid to the description of limitations of indirect comparisons, the choice of an appropriate analytic technique, the expression of unit costs in values for the current year, the estimation and valuation of outcomes, the presentation of results of sensitivity analyses, and testing the face validity of model inputs and outputs. Also, a large variation was observed in the scope and depth of the quality assessment by the Drug Reimbursement Committee. Conclusions Although general guidelines exist, pharmaceutical companies and the Drug Reimbursement Committee would benefit from the existence of a more detailed checklist of methodological items that need to be reported in an economic evaluation. PMID:24386474
Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes
NASA Astrophysics Data System (ADS)
Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias
2015-04-01
Land Surface Models (LSMs) use a multitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore mostly choose soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. These hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed, inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore the number of model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at the regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100).
Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km². This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for the different model output variables. The number of parameters is reduced substantially, to approximately 25, for all three model outputs. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage, because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
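The screening idea of ranking parameters by the mean absolute output change from one-at-a-time perturbations can be sketched as follows. This is a crude Morris-style illustration on a toy function, not the sequential screening algorithm used in the study:

```python
import random

def elementary_effects(model, n_params, n_traj=30, delta=0.25, seed=2):
    """Morris-style screening: mean absolute elementary effect per
    parameter over random one-at-a-time perturbations in [0, 1]^p."""
    rng = random.Random(seed)
    mu_star = [0.0] * n_params
    for _ in range(n_traj):
        # Random base point, kept away from 1 so x + delta stays in range.
        x = [rng.random() * (1 - delta) for _ in range(n_params)]
        y0 = model(x)
        for i in range(n_params):
            x2 = list(x)
            x2[i] += delta
            mu_star[i] += abs(model(x2) - y0) / delta
    return [m / n_traj for m in mu_star]

# Toy model with one influential, one weak and one inert parameter.
mu = elementary_effects(lambda x: 10 * x[0] + x[1] + 0 * x[2], n_params=3)
```

Parameters with negligible mean effects are dropped before the expensive Sobol analysis, which is the cost saving the screening step provides.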
Kurth, Laura; Doney, Brent; Weinmann, Sheila
2017-01-01
Objectives To compare the occupational exposure levels assigned by our National Institute for Occupational Safety and Health chronic obstructive pulmonary disease-specific job exposure matrix (NIOSH COPD JEM) and by expert evaluation of detailed occupational information for various jobs held by members of an integrated health plan in the Northwest USA. Methods We analysed data from a prior study examining COPD and occupational exposures. Jobs were assigned exposure levels using 2 methods: (1) the COPD JEM and (2) expert evaluation. Agreement (Cohen’s κ coefficients), sensitivity and specificity were calculated to compare exposure levels assigned by the 2 methods for 8 exposure categories. Results κ indicated slight to moderate agreement (0.19–0.51) between the 2 methods and was highest for organic dust and overall exposure. Sensitivity of the matrix ranged from 33.9% to 68.5% and was highest for sensitisers, diesel exhaust and overall exposure. Specificity ranged from 74.7% to 97.1% and was highest for fumes, organic dust and mineral dust. Conclusions This COPD JEM was compared with exposures assigned by experts and offers a generalisable approach to assigning occupational exposure. PMID:27777373
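Cohen's κ, as used above, corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch with invented JEM-versus-expert exposure calls (not the study's data) is:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two categorical ratings of the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies.
    p_exp = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical exposed/unexposed calls by a JEM vs. expert review.
jem    = ["exp", "exp", "none", "none", "exp", "none", "none", "none"]
expert = ["exp", "none", "none", "none", "exp", "exp", "none", "none"]
kappa = cohens_kappa(jem, expert)
```

For these toy ratings κ works out to about 0.47, which by convention is moderate agreement despite 75% raw agreement, illustrating the chance correction.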
Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization
NASA Technical Reports Server (NTRS)
Baysal, Oktay
1995-01-01
The overarching theme was domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives gravitated around the extensions and implementations of methodologies either previously developed or concurrently under development: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.
Applying Propensity Score Methods in Medical Research: Pitfalls and Prospects
Luo, Zhehui; Gardiner, Joseph C.; Bradley, Cathy J.
2012-01-01
The authors review experimental and nonexperimental causal inference methods, focusing on assumptions for the validity of instrumental variables and propensity score (PS) methods. They provide guidance in four areas for the analysis and reporting of PS methods in medical research and selectively evaluate mainstream medical journal articles from 2000 to 2005 in the four areas, namely, examination of balance, overlapping support description, use of estimated PS for evaluation of treatment effect, and sensitivity analyses. In spite of the many pitfalls, when appropriately evaluated and applied, PS methods can be powerful tools in assessing average treatment effects in observational studies. Appropriate PS applications can create experimental conditions using observational data when randomized controlled trials are not feasible and, thus, lead researchers to an efficient estimator of the average treatment effect. PMID:20442340
Capacity of clinical pathways--a strategic multi-level evaluation tool.
Cardoen, Brecht; Demeulemeester, Erik
2008-12-01
In this paper we strategically evaluate the efficiency of clinical pathways and their complex interdependencies with respect to joint resource usage and patient throughput. We propose a discrete-event simulation approach that allows for the simultaneous evaluation of multiple clinical pathways and the inherent uncertainty (resource, duration and arrival) that accompanies medical processes. Both the consultation suite and the surgery suite may be modeled and examined in detail by means of sensitivity or scenario analyses. Since each medical facility can be represented as a combination of clinical pathways, i.e. they are conceptually similar, the simulation model is generic in nature. Next to the formulation of the model, we illustrate its applicability by means of a case study that was conducted in a Belgian hospital.
Loveman, E; Cooper, K; Bryant, J; Colquitt, J L; Frampton, G K; Clegg, A
2012-01-01
The present report was commissioned as a supplement to an existing technology assessment report produced by the Peninsula Technology Assessment Group (PenTAG), which evaluated the clinical effectiveness and cost-effectiveness of dasatinib and nilotinib in patients who are either resistant or intolerant to standard-dose imatinib. This report evaluates the clinical effectiveness and cost-effectiveness of dasatinib, nilotinib and high-dose imatinib within their licensed indications for the treatment of people with chronic myeloid leukaemia (CML) who are resistant to standard-dose imatinib. Bibliographic databases were searched from inception to January 2011, including The Cochrane Library, MEDLINE (Ovid), EMBASE (Ovid), and MEDLINE In-Process & Other Non-Indexed Citations. Bibliographies of related papers were screened, key conferences were searched, and experts were contacted to identify additional published and unpublished references. This report includes systematic reviews of clinical effectiveness and cost-effectiveness studies, an independent appraisal of information submitted by drug manufacturers to the National Institute for Health and Clinical Excellence (NICE), an independent appraisal of the PenTAG economic evaluation, and new economic analyses adapting the PenTAG economic model. Standard systematic procedures involving two reviewers to maintain impartiality and transparency, and to minimise bias, were conducted. Eleven studies met the inclusion criteria. Four of these studies included new data published since the PenTAG report; all of these were in chronic-phase CML. No relevant studies on the clinical effectiveness of nilotinib were found. The clinical effectiveness studies on dasatinib [one arm of a randomised controlled trial (RCT)] and high-dose imatinib (one arm of a RCT and three single-arm cohort studies) had major methodological limitations. These limitations precluded a comparison of the different arms within the RCT. 
Data from the studies are summarised in this report, but caution in interpretation is required. One economic evaluation was identified that compared dasatinib with high-dose imatinib in patients with chronic-phase CML who were resistant to standard-dose imatinib. Two industry submissions and the PenTAG economic evaluation were critiqued and differences in the assumptions and results were identified. The PenTAG economic model was adapted and new analyses conducted for the interventions dasatinib, nilotinib and high-dose imatinib and the comparators interferon alfa, standard-dose imatinib, stem cell transplantation and hydroxycarbamide. The results suggest that the three interventions, dasatinib, nilotinib and high-dose imatinib, have similar costs and cost-effectiveness compared with hydroxycarbamide, with a cost-effectiveness of around £30,000 per quality-adjusted life-year gained. However, it is not possible to derive firm conclusions about the relative cost-effectiveness of the three interventions owing to great uncertainty around data inputs. Uncertainty was explored using deterministic sensitivity analyses, threshold analyses and probabilistic sensitivity analyses. The paucity of good-quality evidence should be considered when interpreting this report. This review has identified very limited new information on the clinical effectiveness of the interventions over that already shown in the PenTAG report. Limitations in the data exist; however, the results of single-arm studies suggest that the interventions can lead to improvements in haematological and cytogenetic responses in people with imatinib-resistant CML. The economic analyses do not highlight any one of the interventions as being the most cost-effective; however, the analysis results are highly uncertain owing to lack of agreement on appropriate assumptions. The recommendation made by PenTAG for future research, namely a good-quality RCT comparing the three treatments, remains.
Chopra, Amit; Kalkanis, Alexandros; Judson, Marc A
2016-11-01
Numerous biomarkers have been evaluated for the diagnosis, assessment of disease activity, prognosis, and response to treatment in sarcoidosis. In this report, we discuss the clinical and research utility of several biomarkers used to evaluate sarcoidosis. Areas covered: The sarcoidosis biomarkers discussed include serologic tests, imaging studies, identification of inflammatory cells and genetic analyses. Literature was obtained from medical databases including PubMed and Web of Science. Expert commentary: Most of the biomarkers examined in sarcoidosis are not adequately specific or sensitive to be used in isolation to make clinical decisions. However, several sarcoidosis biomarkers have an important role in the clinical management of sarcoidosis when they are coupled with clinical data including the results of other biomarkers.
Lee, Chang Won; Kwak, N K
2011-04-01
This paper deals with strategic enterprise resource planning (ERP) in a health-care system using a multicriteria decision-making (MCDM) model. The model is developed and analyzed on the basis of the data obtained from a leading patient-oriented provider of health-care services in Korea. Goal criteria and priorities are identified and established via the analytic hierarchy process (AHP). Goal programming (GP) is utilized to derive satisfying solutions for designing, evaluating, and implementing an ERP. The model results are evaluated and sensitivity analyses are conducted in an effort to enhance the model applicability. The case study provides management with valuable insights for planning and controlling health-care activities and services.
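The AHP step of the model above derives priority weights as the principal eigenvector of a pairwise-comparison matrix. The sketch below uses power iteration and an invented 3x3 matrix; neither the criteria nor the judgments come from the Korean case study:

```python
def ahp_priorities(M, iters=100):
    """Approximate the AHP priority vector (principal eigenvector of a
    reciprocal pairwise-comparison matrix) by power iteration,
    normalised to sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# Hypothetical comparison matrix on the Saaty 1-9 scale for three
# criteria, say cost vs. quality vs. access (illustrative only).
M = [[1,     3,   5],
     [1 / 3, 1,   2],
     [1 / 5, 1 / 2, 1]]
w = ahp_priorities(M)  # priority weights, summing to 1
```

The resulting weights would then serve as the goal priorities fed into the goal-programming stage.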
Wysham, Weiya Z; Schaffer, Elisabeth M; Coles, Theresa; Roque, Dario R; Wheeler, Stephanie B; Kim, Kenneth H
2017-05-01
AURELIA, a randomized phase III trial of adding bevacizumab (B) to single agent chemotherapy (CT) for the treatment of platinum-resistant recurrent ovarian cancer, demonstrated improved progression free survival (PFS) in the B+CT arm compared to CT alone. We aimed to evaluate the cost effectiveness of adding B to CT in the treatment of platinum-resistant recurrent ovarian cancer. A decision tree model was constructed to evaluate the cost effectiveness of adding bevacizumab (B) to single agent chemotherapy (CT) based on the arms of the AURELIA trial. Costs, quality-adjusted life years (QALYs), and progression free survival (PFS) were modeled over fifteen months. Model inputs were extracted from published literature and public sources. Incremental cost effectiveness ratios (ICERs) per QALY gained and ICERs per progression free life year saved (PF-LYS) were calculated. One-way sensitivity analyses were performed to evaluate the robustness of results. The ICER associated with B+CT is $410,455 per QALY gained and $217,080 per PF-LYS. At a willingness to pay (WTP) threshold of $50,000/QALY, adding B to single agent CT is not cost effective for this patient population. Even at a WTP threshold of $100,000/QALY, B+CT is not cost effective. These findings are robust to sensitivity analyses. Despite gains in QALY and PFS, the addition of B to single agent CT for treatment of platinum-resistant recurrent ovarian cancer is not cost effective. Benefits, risks, and costs associated with treatment should be taken into consideration when prescribing chemotherapy for this patient population. Copyright © 2017 Elsevier Inc. All rights reserved.
Kuznik, Andreas; Bégo-Le-Bagousse, Gaëlle; Eckert, Laurent; Gadkari, Abhijit; Simpson, Eric; Graham, Christopher N; Miles, LaStella; Mastey, Vera; Mahajan, Puneet; Sullivan, Sean D
2017-12-01
Dupilumab significantly improves signs and symptoms of atopic dermatitis (AD), including pruritus, symptoms of anxiety and depression, and health-related quality of life versus placebo in adults with moderate-to-severe AD. Since the cost-effectiveness of dupilumab has not been evaluated, the objective of this analysis was to estimate a value-based price range in which dupilumab would be considered cost-effective compared with supportive care (SC) for treatment of moderate-to-severe AD in an adult population. A health economic model was developed to evaluate from the US payer perspective the long-term costs and benefits of dupilumab treatment administered every other week (q2w). Dupilumab q2w was compared with SC; robustness of assumptions and results were tested using sensitivity and scenario analyses. Clinical data were derived from the dupilumab LIBERTY AD SOLO trials; healthcare use and cost data were from health insurance claims histories of adult patients with AD. The annual price of maintenance therapy with dupilumab to be considered cost-effective was estimated for decision thresholds of US$100,000 and $150,000 per quality-adjusted life-year (QALY) gained. In the base case, the annual maintenance price for dupilumab therapy to be considered cost-effective would be $28,770 at a $100,000 per QALY gained threshold, and $39,940 at a $150,000 threshold. Results were generally robust to parameter variations in one-way and probabilistic sensitivity analyses. Dupilumab q2w compared with SC is cost-effective for the treatment of moderate-to-severe AD in US adults at an annual price of maintenance therapy in the range of $29,000-$40,000 at the $100,000-$150,000 per QALY thresholds. Sanofi and Regeneron Pharmaceuticals, Inc.
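The value-based pricing logic, finding the price at which the ICER just meets the decision threshold, reduces to solving a linear equation. All inputs below are purely hypothetical and are not parameters of the LIBERTY AD SOLO-based model:

```python
def value_based_price(wtp, delta_qaly, nondrug_delta_cost, years_on_therapy):
    """Annual drug price at which the ICER equals the WTP threshold:
    solve wtp = (price * years + nondrug_delta_cost) / delta_qaly."""
    return (wtp * delta_qaly - nondrug_delta_cost) / years_on_therapy

# Hypothetical inputs: threshold $100,000/QALY, 1.5 QALYs gained, $20,000
# of other healthcare costs offset, five years of maintenance therapy.
price = value_based_price(wtp=100_000, delta_qaly=1.5,
                          nondrug_delta_cost=-20_000, years_on_therapy=5)
```

Running the same calculation at a second threshold (say $150,000/QALY) yields the upper end of a value-based price range, which is how a range like the one reported above arises.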
Validation of the Brazilian Portuguese Version of Geriatric Anxiety Inventory--GAI-BR.
Massena, Patrícia Nitschke; de Araújo, Narahyana Bom; Pachana, Nancy; Laks, Jerson; de Pádua, Analuiza Camozzato
2015-07-01
The Geriatric Anxiety Inventory (GAI) is a recently developed scale aiming to evaluate symptoms of anxiety in later life. This 20-item scale uses dichotomous answers highlighting the non-somatic anxiety complaints of elderly people. The present study aimed to evaluate the psychometric properties of the Brazilian Portuguese version of the GAI (GAI-BR) in a sample drawn from the community and an outpatient psychogeriatric clinic. A mixed convenience sample of 72 subjects was recruited to answer the research protocol. The structured interview comprised questionnaires on sociodemographic data and clinical health status, previously validated anxiety and depression instruments, the Mini-Mental State Examination, the Mini International Neuropsychiatric Interview, and the GAI-BR. Twenty-two percent of the sample were interviewed twice for test-retest reliability. Internal consistency was analysed with Cronbach's α, and the Spearman correlation test was applied to evaluate the test-retest reliability of the GAI-BR. A ROC (receiver operating characteristic) curve analysis was performed to estimate the GAI-BR area under the curve, cut-off points, sensitivity, and specificity for the diagnosis of Generalized Anxiety Disorder. The GAI-BR showed high internal consistency (Cronbach's α = 0.91) and strong, significant test-retest reliability (ρ = 0.85, p < 0.001). It also showed moderate, significant correlations with the Beck Anxiety Inventory (ρ = 0.68, p < 0.001) and the State-Trait Anxiety Inventory (ρ = 0.61, p < 0.001), providing evidence of concurrent validity. The cut-off point of 13 estimated by ROC curve analysis showed a sensitivity of 83.3% and a specificity of 84.6% for detecting Generalized Anxiety Disorder (DSM-IV). The GAI-BR has demonstrated very good psychometric properties and can be a reliable instrument for measuring anxiety in Brazilian elderly people.
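A ROC-derived cut-off is commonly the threshold maximising Youden's J (sensitivity + specificity - 1). The sketch below uses made-up score/diagnosis pairs, not the study's data, although the toy cut-off happens to land at 13 as well:

```python
def best_cutoff(scores, labels):
    """Threshold maximising Youden's J = sensitivity + specificity - 1,
    treating a score >= threshold as a positive screen."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        sens = sum(1 for s, y in zip(scores, labels) if s >= t and y) / pos
        spec = sum(1 for s, y in zip(scores, labels) if s < t and not y) / neg
        if sens + spec - 1 > best_j:
            best_t, best_j = t, sens + spec - 1
    return best_t, best_j

# Hypothetical GAI-style total scores with diagnosis labels (1 = GAD case).
scores = [4, 6, 8, 10, 12, 13, 14, 15, 17, 19]
labels = [0, 0, 0, 0,  0,  1,  0,  1,  1,  1]
cutoff, j = best_cutoff(scores, labels)
```

Sweeping the threshold in this way traces the ROC curve; the area under it summarises discrimination across all possible cut-offs.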
French, Simon D; Green, Sally E; Francis, Jill J; Buchbinder, Rachelle; O'Connor, Denise A; Grimshaw, Jeremy M; Michie, Susan
2015-01-01
Objectives Implementation intervention effects can only be fully realised and understood if they are faithfully delivered. However the evaluation of implementation intervention fidelity is not commonly undertaken. The IMPLEMENT intervention was designed to improve the management of low back pain by general medical practitioners. It consisted of a two-session interactive workshop, including didactic presentations and small group discussions by trained facilitators. This study aimed to evaluate the fidelity of the IMPLEMENT intervention by assessing: (1) observed facilitator adherence to planned behaviour change techniques (BCTs); (2) comparison of observed and self-reported adherence to planned BCTs and (3) variation across different facilitators and different BCTs. Design The study compared planned and actual, and observed versus self-assessed delivery of BCTs during the IMPLEMENT workshops. Method Workshop sessions were audiorecorded and transcribed verbatim. Observed adherence of facilitators to the planned intervention was assessed by analysing the workshop transcripts in terms of BCTs delivered. Self-reported adherence was measured using a checklist completed at the end of each workshop session and was compared with the ‘gold standard’ of observed adherence using sensitivity and specificity analyses. Results The overall observed adherence to planned BCTs was 79%, representing moderate-to-high intervention fidelity. There was no significant difference in adherence to BCTs between the facilitators. Sensitivity of self-reported adherence was 95% (95% CI 88 to 98) and specificity was 30% (95% CI 11 to 60). Conclusions The findings suggest that the IMPLEMENT intervention was delivered with high levels of adherence to the planned intervention protocol. Trial registration number The IMPLEMENT trial was registered in the Australian New Zealand Clinical Trials Registry, ACTRN012606000098538 (http://www.anzctr.org.au/trial_view.aspx?ID=1162). PMID:26155819
de Camargo, Kélvia Cristina; Alves, Rosane Ribeiro Figueiredo; Baylão, Luciano Augusto; Ribeiro, Andrea Alves; Araujo, Nadja Lindany Alves de Souza; Tavares, Suelene Brito do Nascimento; dos Santos, Sílvia Helena Rabelo
2015-05-01
To estimate the prevalence of bacterial vaginosis (BV), candidiasis and trichomoniasis and compare the findings of physical examination of the vaginal secretion with the microbiological diagnosis obtained by cytology study of a vaginal smear using the Papanicolaou method. A cross-sectional study of 302 women aged 20 to 87 years, interviewed and submitted to a gynecology test for the evaluation of vaginal secretion and collection of a cytology smear, from June 2012 to May 2013. Sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV), with their respective 95%CI, were determined to assess the accuracy of the characteristics of vaginal secretion in relation to the microbiological diagnosis of the cytology smear. The kappa index (k) was used to assess the degree of agreement between the clinical features of vaginal secretion and the microbiological findings obtained by cytology. RESULTS The prevalence of BV, candidiasis and trichomoniasis was 25.5, 9.3 and 2.0%, respectively. The sensitivity, specificity, PPV and NPV of the clinical characteristics of vaginal secretion for the cytological diagnosis of BV were 74, 78.6, 54.3 and 89.9%, respectively. The sensitivity, specificity, PPV and NPV of the clinical characteristics of vaginal secretion for the cytological diagnosis of candidiasis were 46.4, 86.2, 25.5 and 94%, respectively. The correlation between the clinical evaluation of vaginal secretion and the microbiological diagnosis of BV, candidiasis and trichomoniasis, assessed by the kappa index, was 0.47, 0.23 and 0.28, respectively. CONCLUSION The most common cause of abnormal vaginal secretion was BV. The clinical evaluation of vaginal secretion presented moderate to weak agreement with the microbiological diagnosis, indicating the need for complementary investigation of the clinical findings of abnormal vaginal secretion.
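The kappa index used here is Cohen's kappa for two raters. A minimal sketch (the paired ratings below are invented for illustration, not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters, e.g. clinical examination (a)
    versus cytology (b); 1 = finding present."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # positive rate per rater
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical paired findings (1 = BV present)
clinical = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
cytology = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
k = cohens_kappa(clinical, cytology)
print(round(k, 2))  # 0.58
```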
Hénaux, Viviane; Calavas, Didier
2017-01-01
Surveillance systems of exotic infectious diseases aim to ensure transparency about the country-specific animal disease situation (i.e. demonstrate disease freedom) and to identify any introductions. In a context of decreasing resources, evaluation of surveillance efficiency is essential to help stakeholders make relevant decisions about prioritization of measures and funding allocation. This study evaluated the efficiency (sensitivity related to cost) of the French bovine brucellosis surveillance system using stochastic scenario tree models. Cattle herds were categorized into three risk groups based on the annual number of purchases, given that trading is considered as the main route of brucellosis introduction in cattle herds. The sensitivity in detecting the disease and the costs of the current surveillance system, which includes clinical (abortion) surveillance, programmed serological testing and introduction controls, were compared to those of 19 alternative surveillance scenarios. Surveillance costs included veterinary fees and laboratory analyses. The sensitivity over a year of the current surveillance system was predicted to be 91±7% at a design prevalence of 0.01% for a total cost of 14.9±1.8 million €. Several alternative surveillance scenarios, based on clinical surveillance and random or risk-based serological screening in a sample (20%) of the population, were predicted to be at least as sensitive but for a lower cost. Such changes would reduce whole surveillance costs by 20 to 61% annually, and the costs for farmers only would be decreased from about 12.0 million € presently to 5.3-9.0 million € (i.e. 25-56% decrease). Besides, fostering the evolution of the surveillance system in one of these directions would be in agreement with the European regulations and farmers perceptions on brucellosis risk and surveillance.
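In scenario-tree terms, a multi-component surveillance system's yearly sensitivity is the probability that at least one component detects an infected herd. A much-simplified sketch, assuming independent components with invented sensitivities (the real model is stochastic and risk-group specific):

```python
def system_sensitivity(component_se):
    """P(at least one component detects), assuming independence --
    a strong simplification of the stochastic scenario-tree approach."""
    p_miss = 1.0
    for se in component_se:
        p_miss *= 1.0 - se       # probability every component misses
    return 1.0 - p_miss

# Hypothetical yearly component sensitivities at the 0.01% design prevalence
components = {"abortion surveillance": 0.60,
              "programmed serology": 0.70,
              "introduction controls": 0.25}
se_total = system_sensitivity(components.values())
print(round(se_total, 2))  # 0.91
```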
Elphic, Richard C.; Feldman, William C.; Funsten, Herbert O.; Prettyman, Thomas H.
2010-01-01
Orbital neutron spectroscopy has become a standard technique for measuring planetary surface compositions from orbit. While this technique has led to important discoveries, such as the deposits of hydrogen at the Moon and Mars, a limitation is its poor spatial resolution. For omni-directional neutron sensors, spatial resolutions are 1–1.5 times the spacecraft's altitude above the planetary surface (or 40–600 km for typical orbital altitudes). Neutron sensors with enhanced spatial resolution have been proposed, and one with a collimated field of view is scheduled to fly on a mission to measure lunar polar hydrogen. No quantitative studies or analyses have been published that evaluate in detail the detection and sensitivity limits of spatially resolved neutron measurements. Here, we describe two complementary techniques for evaluating the hydrogen sensitivity of spatially resolved neutron sensors: an analytic, closed-form expression that has been validated with Lunar Prospector neutron data, and a three-dimensional modeling technique. The analytic technique, called the Spatially resolved Neutron Analytic Sensitivity Approximation (SNASA), provides a straightforward method to evaluate spatially resolved neutron data from existing instruments as well as to plan for future mission scenarios. We conclude that the existing detector—the Lunar Exploration Neutron Detector (LEND)—scheduled to launch on the Lunar Reconnaissance Orbiter will have hydrogen sensitivities that are over an order of magnitude poorer than previously estimated. We further conclude that a sensor with a geometric factor of ∼100 cm² sr (compared to the LEND geometric factor of ∼10.9 cm² sr) could make substantially improved measurements of the lunar polar hydrogen spatial distribution. Key Words: Planetary instrumentation—Planetary science—Moon—Spacecraft experiments—Hydrogen. Astrobiology 10, 183–200. PMID:20298147
Probabilistic Evaluation of Advanced Ceramic Matrix Composite Structures
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2003-01-01
The objective of this report is to summarize the deterministic and probabilistic structural evaluation results of two structures made with advanced ceramic matrix composites (CMC): an internally pressurized tube and a uniformly loaded flange. The deterministic structural evaluation includes stress, displacement, and buckling analyses. It is carried out using the finite element code MHOST, developed for the 3-D inelastic analysis of structures that are made with advanced materials. The probabilistic evaluation is performed using the integrated probabilistic assessment of composite structures computer code IPACS. The effects of uncertainties in primitive variables related to the material, fabrication process, and loadings on the material property and structural response behavior are quantified. The primitive variables considered are: thermo-mechanical properties of fiber and matrix, fiber and void volume ratios, use temperature, and pressure. The probabilistic structural analysis and probabilistic strength results are used by IPACS to perform reliability and risk evaluation of the two structures. The results will show that the sensitivity information obtained for the two composite structures from the computational simulation can be used to alter the design process to meet desired service requirements. In addition to detailed probabilistic analysis of the two structures, the following were performed specifically on the CMC tube: (1) predicted the failure load and the buckling load, (2) performed coupled non-deterministic multi-disciplinary structural analysis, and (3) demonstrated that probabilistic sensitivities can be used to select a reduced set of design variables for optimization.
Prediction of coefficients of thermal expansion for unidirectional composites
NASA Technical Reports Server (NTRS)
Bowles, David E.; Tompkins, Stephen S.
1989-01-01
Several analyses for predicting the longitudinal, alpha(1), and transverse, alpha(2), coefficients of thermal expansion of unidirectional composites were compared with each other, and with experimental data on different graphite fiber reinforced resin, metal, and ceramic matrix composites. Analytical and numerical analyses that accurately accounted for Poisson restraining effects in the transverse direction were in consistently better agreement with experimental data for alpha(2), than the less rigorous analyses. All of the analyses predicted similar values of alpha(1), and were in good agreement with the experimental data. A sensitivity analysis was conducted to determine the relative influence of constituent properties on the predicted values of alpha(1), and alpha(2). As would be expected, the prediction of alpha(1) was most sensitive to longitudinal fiber properties and the prediction of alpha(2) was most sensitive to matrix properties.
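The longitudinal CTE alpha(1) referred to here is commonly predicted by a stiffness-weighted rule of mixtures (Schapery's expression), which makes visible why the prediction is fiber-dominated. A sketch with invented graphite/epoxy-like constituent values, not data from the paper:

```python
def alpha_longitudinal(Ef, af, Vf, Em, am):
    """Schapery's stiffness-weighted rule of mixtures for the
    longitudinal CTE of a unidirectional composite."""
    Vm = 1.0 - Vf
    return (Ef * af * Vf + Em * am * Vm) / (Ef * Vf + Em * Vm)

# Hypothetical constituents: moduli in GPa, CTEs in 1e-6/K
Ef, af, Vf = 230.0, -0.5, 0.6   # stiff fiber, slightly negative CTE
Em, am = 3.5, 55.0              # compliant matrix, high CTE
a1 = alpha_longitudinal(Ef, af, Vf, Em, am)
print(round(a1, 3))  # ~0.057: near zero, dominated by the fiber terms
```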
Sawaya, Helen; Atoui, Mia; Hamadeh, Aya; Zeinoun, Pia; Nahas, Ziad
2016-05-30
The Patient Health Questionnaire - 9 (PHQ-9) and Generalized Anxiety Disorder - 7 (GAD-7) are short screening measures used in medical and community settings to assess depression and anxiety severity. The aim of this study is to translate the screening tools into Arabic and evaluate their psychometric properties in an Arabic-speaking Lebanese psychiatric outpatient sample. The patients completed the questionnaires, among others, prior to being evaluated by a clinical psychiatrist or psychologist. The scales' internal consistency and factor structure were measured and convergent and discriminant validity were established by comparing the scores with clinical diagnoses and the Psychiatric Diagnostic Screening Questionnaire - MDD subset (PDSQ - MDD). Results showed that the PHQ-9 and GAD-7 are reliable screening tools for depression and anxiety and their factor structures replicated those reported in the literature. Sensitivity and specificity analyses showed that the PHQ-9 is sensitive but not specific at capturing depressive symptoms when compared to clinician diagnoses whereas the GAD-7 was neither sensitive nor specific at capturing anxiety symptoms. The implications of these findings are discussed in reference to the scales themselves and the cultural specificity of the Lebanese population. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Economic evaluation in chronic pain: a systematic review and de novo flexible economic model.
Sullivan, W; Hirst, M; Beard, S; Gladwell, D; Fagnani, F; López Bastida, J; Phillips, C; Dunlop, W C N
2016-07-01
There is unmet need in patients suffering from chronic pain, yet innovation may be impeded by the difficulty of justifying economic value in a field beset by data limitations and methodological variability. A systematic review was conducted to identify and summarise the key areas of variability and limitations in modelling approaches in the economic evaluation of treatments for chronic pain. The results of the literature review were then used to support the development of a fully flexible open-source economic model structure, designed to test structural and data assumptions and act as a reference for future modelling practice. The key model design themes identified from the systematic review included: time horizon; titration and stabilisation; number of treatment lines; choice/ordering of treatment; and the impact of parameter uncertainty (given reliance on expert opinion). Exploratory analyses using the model to compare a hypothetical novel therapy versus morphine as first-line treatments showed cost-effectiveness results to be sensitive to structural and data assumptions. Assumptions about the treatment pathway and choice of time horizon were key model drivers. Our results suggest structural model design and data assumptions may have driven previous cost-effectiveness results and ultimately decisions based on economic value. We therefore conclude that it is vital that future economic models in chronic pain are designed to be fully transparent and hope our open-source code is useful in order to aspire to a common approach to modelling pain that includes robust sensitivity analyses to test structural and parameter uncertainty.
Turgeon, Ricky D; Wilby, Kyle J; Ensom, Mary H H
2015-06-01
We conducted a systematic review with meta-analysis to evaluate the efficacy of antiviral agents on complete recovery of Bell's palsy. We searched CENTRAL, Embase, MEDLINE, International Pharmaceutical Abstracts, and sources of unpublished literature to November 1, 2014. Primary and secondary outcomes were complete and satisfactory recovery, respectively. To evaluate statistical heterogeneity, we performed subgroup analysis of baseline severity of Bell's palsy and between-study sensitivity analyses based on risk of allocation and detection bias. The 10 included randomized controlled trials (2419 patients; 807 with severe Bell's palsy at onset) had variable risk of bias, with 9 trials having a high risk of bias in at least 1 domain. Complete recovery was not statistically significantly greater with antiviral use versus no antiviral use in the random-effects meta-analysis of 6 trials (relative risk, 1.06; 95% confidence interval, 0.97-1.16; I(2) = 65%). Conversely, random-effects meta-analysis of 9 trials showed a statistically significant difference in satisfactory recovery (relative risk, 1.10; 95% confidence interval, 1.02-1.18; I(2) = 63%). Response to antiviral agents did not differ visually or statistically between patients with severe symptoms at baseline and those with milder disease (test for interaction, P = .11). Sensitivity analyses did not show a clear effect of bias on outcomes. Antiviral agents are not efficacious in increasing the proportion of patients with Bell's palsy who achieved complete recovery, regardless of baseline symptom severity. Copyright © 2015 Elsevier Inc. All rights reserved.
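The pooled effects in such meta-analyses are built from per-trial relative risks with log-scale confidence intervals. A minimal single-trial sketch (counts are invented; the review's pooling additionally weights trials and models between-study heterogeneity):

```python
import math

def relative_risk(a, n1, c, n2):
    """Relative risk (treated vs control) with a Wald 95% CI computed on
    the log scale; a/n1 and c/n2 are recoveries over arm sizes."""
    rr = (a / n1) / (c / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts: 80/100 recover on antivirals, 72/100 without
rr, lo, hi = relative_risk(80, 100, 72, 100)
print(f"RR={rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")  # RR=1.11 (95% CI 0.95 to 1.30)
```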
Loudon, B; Smith, M P
2005-08-01
Acute haemorrhage requiring large volume transfusion presents a costly and unpredictable risk to transfusion services. Recombinant factor VIIa (rFVIIa) (NovoSeven, Novo Nordisk, Bagsvaard, Denmark) may provide an important adjunctive haemostatic strategy for the management of patients requiring large volume blood transfusions. To review blood transfusion over a 12-month period and assess the major costs associated with haemorrhage management. A pharmacoeconomic evaluation of rFVIIa intervention for large volume transfusion was conducted to identify the most cost-effective strategy for using this haemostatic product. Audit and analysis of all patients admitted to Christchurch Public Hospital requiring > 5 units of red blood cells (RBC) during a single transfusion episode. Patients were stratified into groups dependent on RBC units received and further stratified with regard to ward category. Cumulative costs were derived to compare standard treatment with a hypothesized rFVIIa intervention for each transfusion group. Sensitivity analyses were performed by varying parameters and comparing the results to the original outcomes. Comparison of costs between the standard and hypothetical model indicated no statistically significant differences between groups (P < 0.05). Univariate and multivariate sensitivity analyses indicate that intervention with rFVIIa after transfusion of 14 RBC units may be cost-effective due to conservation of blood components and reduction in duration of intensive area stay. Intervention with rFVIIa for haemorrhage control is most cost-effective relatively early in the RBC transfusion period. Our hypothetical model indicates the optimal time point is when 14 RBC units have been transfused.
Tu, Zhanhai; Xiao, Zebin; Zheng, Yingyan; Huang, Hongjie; Yang, Libin; Cao, Dairong
2018-01-01
Background Little is known about the value of computed tomography (CT) and magnetic resonance imaging (MRI) combined with diffusion-weighted imaging (DWI) in distinguishing malignant from benign skull-involved lesions. Purpose To evaluate the discriminative value of DWI combined with conventional CT and MRI for differentiating between benign and malignant skull-involved lesions. Material and Methods CT and MRI findings of 58 patients with pathologically proven skull-involved lesions (43 benign and 15 malignant) were retrospectively reviewed. Conventional CT and MRI characteristics and the apparent diffusion coefficient (ADC) value of the two groups were evaluated and compared. Multivariate logistic regression and receiver operating characteristic (ROC) curve analyses were performed to assess the differential performance of each parameter separately and together. Results The presence of cortical defects or break-through and ill-defined margins were associated with malignant skull-involved lesions (both P < 0.05). Malignant skull-involved lesions demonstrated a significantly lower ADC (P = 0.016) than benign lesions. ROC curve analyses indicated that a combination of CT, MRI, and DWI with an ADC ≤ 0.703 × 10⁻³ mm²/s showed optimal sensitivity, while DWI alone showed an optimal specificity of 88.4% in differentiating between benign and malignant skull-involved lesions. Conclusion The combination of CT, MRI, and DWI can help to differentiate malignant from benign skull-involved lesions. CT + MRI + DWI offers optimal sensitivity, while DWI offers optimal specificity.
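The ROC analysis amounts to sweeping candidate ADC cut-offs and scoring each by sensitivity and specificity. A minimal sketch that picks the cut-off maximising Youden's J; the ADC values are invented toy data, not the study's measurements:

```python
def best_threshold(benign, malignant):
    """Sweep candidate ADC cut-offs; malignant lesions are expected at or
    below the cut-off. Returns (cut-off, Youden's J = sens + spec - 1)."""
    best = (None, -1.0)
    for t in sorted(set(benign + malignant)):
        sens = sum(v <= t for v in malignant) / len(malignant)
        spec = sum(v > t for v in benign) / len(benign)
        if sens + spec - 1 > best[1]:
            best = (t, sens + spec - 1)
    return best

# Hypothetical ADC values (x 1e-3 mm^2/s)
benign = [1.2, 0.9, 1.5, 0.8, 1.1, 0.75]
malignant = [0.55, 0.6, 0.7, 0.65]
t, j = best_threshold(benign, malignant)
print(t, j)  # 0.7 1.0 -- the toy groups are perfectly separable
```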
Kondo, Masahide; Hoshi, Shu-Ling; Ishiguro, Hiroshi; Toi, Masakazu
2012-06-01
The 70-gene prognosis-signature is validated as a good predictor of recurrence for hormone receptor-positive (ER+), lymph node-negative (LN-), human epidermal growth factor receptor type 2-negative (HER2-) early stage breast cancer (ESBC) in Japanese patient population. Its high cost and potential in avoiding unnecessary adjuvant chemotherapy arouse interest in its economic impact. This study evaluates the cost-effectiveness of including the assay into Japan's social health insurance benefit package. An economic decision tree and Markov model under Japan's health system from the societal perspective is constructed with clinical evidence from the pool analysis of validation studies. One-way sensitivity analyses are also performed. Incremental cost-effectiveness ratio is estimated as ¥3,873,922/quality adjusted life year (QALY) (US$43,044/QALY), which is not more than the suggested social willingness-to-pay for one QALY gain from an innovative medical intervention in Japan, ¥5,000,000/QALY (US$55,556/QALY). However, sensitivity analyses show the instability of this estimation. The introduction of the assay into Japanese practice of ER+, LN-, HER2- ESBC treatment by including it to Japan's social health insurance benefit package has a reasonable chance to be judged as cost-effective and may be justified as an efficient deployment of finite health care resources.
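The decision rule behind the abstract's conclusion is a comparison of the incremental cost-effectiveness ratio (ICER) with a willingness-to-pay threshold. A sketch with invented per-patient figures; only the ¥5,000,000/QALY threshold is taken from the abstract:

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical per-patient costs (yen) and QALYs, assay vs usual care
ratio = icer(cost_new=1_500_000, qaly_new=10.05,
             cost_old=1_100_000, qaly_old=9.95)
wtp = 5_000_000  # suggested willingness-to-pay per QALY gain in Japan
print(f"ICER = {ratio:,.0f} yen/QALY; cost-effective: {ratio <= wtp}")
```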
Silveira, Graciele Lurdes; Lima, Maria Gabriela Franco; Reis, Gabriela Barreto Dos; Palmieri, Marcel José; Andrade-Vieria, Larissa Fonseca
2017-07-01
Studies that help understand the mechanisms of action of environmental pollutants are extremely important in environmental toxicology. In this context, assays using plants as models stand out for their simplicity and low cost. Among the plants used for this purpose, Allium cepa L. is the model most commonly applied for cytogenotoxic tests, while Lactuca sativa L., already widely used in phytotoxic investigations, has been gaining prominence in cytotoxic analyses. The present study aimed to compare the responses of A. cepa and L. sativa via macroscopic (root growth) and microscopic analyses (cell cycle and DNA fragmentation via TdT-mediated dUTP nick-end labeling (TUNEL) and comet assays) after exposure of their roots to environmental pollutants with known cytogenotoxic mechanisms. Both species presented sensitive and efficient response to the applied tests after exposure to the DNA-alkylating agent Methyl Methanesulfonate (MMS), the heavy metal Cadmium, the aluminum industry waste Spent Potliner (SPL) and the herbicide Atrazine. However, they differed regarding the responses to the evaluated endpoints. Overall, A. cepa was more efficient in detecting clastogenic changes, arising from DNA breakage, while L. sativa rather detected aneugenic alterations, related to chromosome segregation in mitosis. In the tests applied to verify DNA fragmentation (comet and TUNEL assays), A. cepa presented higher sensitivity. In conclusion, both models are efficient to evaluate toxicological risks of environmental pollutants. Copyright © 2017 Elsevier Ltd. All rights reserved.
Effectiveness of a worksite mindfulness-based multi-component intervention on lifestyle behaviors
2014-01-01
Introduction Overweight and obesity are associated with an increased risk of morbidity. Mindfulness training could be an effective strategy to optimize lifestyle behaviors related to body weight gain. The aim of this study was to evaluate the effectiveness of a worksite mindfulness-based multi-component intervention on vigorous physical activity in leisure time, sedentary behavior at work, fruit intake and determinants of these behaviors. The control group received information on existing lifestyle behavior-related facilities that were already available at the worksite. Methods In a randomized controlled trial design (n = 257), 129 workers received a mindfulness training, followed by e-coaching, lunch walking routes and fruit. Outcome measures were assessed at baseline and after 6 and 12 months using questionnaires. Physical activity was also measured using accelerometers. Effects were analyzed using linear mixed effect models according to the intention-to-treat principle. Linear regression models (complete case analyses) were used as sensitivity analyses. Results There were no significant differences in lifestyle behaviors and determinants of these behaviors between the intervention and control group after 6 or 12 months. The sensitivity analyses showed effect modification for gender in sedentary behavior at work at 6-month follow-up, although the main analyses did not. Conclusions This study did not show an effect of a worksite mindfulness-based multi-component intervention on lifestyle behaviors and behavioral determinants after 6 and 12 months. The effectiveness of a worksite mindfulness-based multi-component intervention as a health promotion intervention for all workers could not be established. PMID:24467802
Modeling the atmospheric chemistry of TICs
NASA Astrophysics Data System (ADS)
Henley, Michael V.; Burns, Douglas S.; Chynwat, Veeradej; Moore, William; Plitz, Angela; Rottmann, Shawn; Hearn, John
2009-05-01
An atmospheric chemistry model that describes the behavior and disposition of environmentally hazardous compounds discharged into the atmosphere was coupled with the transport and diffusion model, SCIPUFF. The atmospheric chemistry model was developed by reducing a detailed atmospheric chemistry mechanism to a simple empirical effective degradation rate term (keff) that is a function of important meteorological parameters such as solar flux, temperature, and cloud cover. Empirically derived keff functions that describe the degradation of target toxic industrial chemicals (TICs) were derived by statistically analyzing data generated from the detailed chemistry mechanism run over a wide range of (typical) atmospheric conditions. To assess and identify areas to improve the developed atmospheric chemistry model, sensitivity and uncertainty analyses were performed to (1) quantify the sensitivity of the model output (TIC concentrations) with respect to changes in the input parameters and (2) improve, where necessary, the quality of the input data based on sensitivity results. The model predictions were evaluated against experimental data. Chamber data were used to remove the complexities of dispersion in the atmosphere.
Validity of the French form of the Somatosensory Amplification Scale in a Non-Clinical Sample
Bridou, Morgiane; Aguerre, Colette
2013-01-01
The SomatoSensory Amplification Scale (SSAS) is a 10-item self-report instrument designed to assess the tendency to detect somatic and visceral sensations and experience them as unusually intense, toxic and alarming. This study examines the psychometric properties of a French version of the SSAS in a non-clinical population and, more specifically, explores its construct, convergent and discriminant validities. The SSAS was completed by 375 university students, together with measures of somatization propensity (SCL-90-R somatization subscale) and trait anxiety (STAI Y form). The results of principal component and confirmatory factor analyses suggest that the French version of the SSAS evaluates essentially a single, robust factor (Somatosensory amplification) and two kinds of somatic sensitivity (Exteroceptive sensitivity and Interoceptive sensitivity). Somatosensory amplification correlated with somatization tendency and anxiety propensity. These results encourage further investigations in French of the determinants and consequences of somatosensory amplification, and its use as a therapeutic strategy. PMID:26973888
Streby, Ashleigh; Mull, Bonnie J; Levy, Karen; Hill, Vincent R
2015-05-01
Naegleria fowleri is a thermophilic free-living ameba found in freshwater environments worldwide. It is the cause of a rare but potentially fatal disease in humans known as primary amebic meningoencephalitis. Established N. fowleri detection methods rely on conventional culture techniques and morphological examination followed by molecular testing. Multiple alternative real-time PCR assays have been published for rapid detection of Naegleria spp. and N. fowleri. Four such assays were evaluated for the detection of N. fowleri from surface water and sediment. The assays were compared for thermodynamic stability, analytical sensitivity and specificity, detection limits, humic acid inhibition effects, and performance with seeded environmental matrices. Twenty-one ameba isolates were included in the DNA panel used for analytical sensitivity and specificity analyses. N. fowleri genotypes I and III were used for method performance testing. Two of the real-time PCR assays were determined to yield similar performance data for specificity and sensitivity for detecting N. fowleri in environmental matrices.
Kossack, Mandy; Juergensen, Lonny; Fuchs, Dieter; Katus, Hugo A.; Hassel, David
2015-01-01
Translucent zebrafish larvae represent an established model to analyze genetics of cardiac development and human cardiac disease. More recently, adult zebrafish have been utilized to evaluate mechanisms of cardiac regeneration and, by benefiting from recent genome editing technologies, including TALEN and CRISPR, adult zebrafish are emerging as a valuable in vivo model to evaluate novel disease genes and specifically validate disease causing mutations and their underlying pathomechanisms. However, methods to sensitively and non-invasively assess cardiac morphology and performance in adult zebrafish are still limited. We here present a standardized examination protocol to broadly assess cardiac performance in adult zebrafish by advancing conventional echocardiography with modern speckle-tracking analyses. This allows accurate detection of changes in cardiac performance and further enables highly sensitive assessment of regional myocardial motion and deformation in high spatio-temporal resolution. Combining conventional echocardiography measurements with radial and longitudinal velocity, displacement, strain, strain rate and myocardial wall delay rates after myocardial cryoinjury permitted to non-invasively determine injury dimensions and to longitudinally follow functional recovery during cardiac regeneration. We show that functional recovery of cryoinjured hearts occurs in three distinct phases. Importantly, the regeneration process after cryoinjury extends far beyond the proposed 45 days described for ventricular resection with reconstitution of myocardial performance up to 180 days post-injury (dpi). The imaging modalities evaluated here allow sensitive cardiac phenotyping and contribute to further establishing adult zebrafish as a valuable cardiac disease model beyond the larval developmental stage. PMID:25853735
Morris, Ulrika; Ding, Xavier C.; Jovel, Irina; Msellem, Mwinyi I.; Bergman, Daniel; Islam, Atiqul; Ali, Abdullah S.; Polley, Spencer; Gonzalez, Iveth J.; Mårtensson, Andreas; Björkman, Anders
2017-01-01
Background New field applicable diagnostic tools are needed for highly sensitive detection of residual malaria infections in pre-elimination settings. Field performance of a high throughput DNA extraction system for loop mediated isothermal amplification (HTP-LAMP) was therefore evaluated for detecting malaria parasites among asymptomatic individuals in Zanzibar. Methods HTP-LAMP performance was evaluated against real-time PCR on 3008 paired blood samples collected on filter papers in a community-based survey in 2015. Results The PCR and HTP-LAMP determined malaria prevalences were 1.6% (95%CI 1.3–2.4) and 0.7% (95%CI 0.4–1.1), respectively. The sensitivity of HTP-LAMP compared to PCR was 40.8% (CI95% 27.0–55.8) and the specificity was 99.9% (CI95% 99.8–100). For the PCR positive samples, there was no statistically significant difference between the geometric mean parasite densities among the HTP-LAMP positive (2.5 p/μL, range 0.2–770) and HTP-LAMP negative (1.4 p/μL, range 0.1–7) samples (p = 0.088). Two lab technicians analysed up to 282 samples per day and the HTP-LAMP method was experienced as user friendly. Conclusions Although field applicable, this high throughput format of LAMP as used here was not sensitive enough to be recommended for detection of asymptomatic low-density infections in areas like Zanzibar, approaching malaria elimination. PMID:28095434
Evaluation of IOTA Simple Ultrasound Rules to Distinguish Benign and Malignant Ovarian Tumours.
Garg, Sugandha; Kaur, Amarjit; Mohi, Jaswinder Kaur; Sibia, Preet Kanwal; Kaur, Navkiran
2017-08-01
IOTA stands for the International Ovarian Tumour Analysis group. Ovarian cancer is one of the common cancers in women and is diagnosed at a late stage in the majority of cases. The limiting factor for early diagnosis is the lack of standardized terms and procedures in gynaecological sonography. The introduction of the IOTA rules has provided some consistency in defining morphological features of ovarian masses through a standardized examination technique. The aim was to evaluate the efficacy of the IOTA simple ultrasound rules in distinguishing benign and malignant ovarian tumours and to establish their use as a tool in the early diagnosis of ovarian malignancy. A hospital-based, prospective case-control study was conducted. Patients with suspected ovarian pathology were evaluated using the IOTA ultrasound rules and designated as benign or malignant. Findings were correlated with histopathological findings. Collected data were statistically analysed using the chi-square test and the kappa statistic. Of the initial 55 patients, the 50 who underwent surgery were included in the final analysis. The IOTA simple rules were applicable in 45 of these 50 patients (90%). In cases where the IOTA simple rules were applicable, the sensitivity for the detection of malignancy was 91.66% and the specificity was 84.84%; accuracy was 86.66%. Classifying inconclusive cases as malignant, the sensitivity and specificity were 93% and 80%, respectively. Agreement between ultrasound and histopathological diagnosis yielded a kappa value of 0.323. The IOTA simple ultrasound rules were highly sensitive and specific in predicting ovarian malignancy preoperatively while being reproducible and easy to train and use.
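The kappa statistic used above measures agreement beyond chance between two raters (here, ultrasound classification versus histopathology). A minimal sketch of Cohen's kappa from a 2x2 table; the counts are hypothetical, sized only to roughly match the abstract's 45 applicable cases (12 malignant, 33 benign):

```python
def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa for a 2x2 agreement table."""
    n = tp + fp + fn + tn
    p_observed = (tp + tn) / n
    # Expected agreement by chance, from the marginal totals
    p_expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical counts: 11 true positives, 5 false positives,
# 1 false negative, 28 true negatives
kappa = cohens_kappa(tp=11, fp=5, fn=1, tn=28)
print(f"kappa = {kappa:.3f}")
```

Note that kappa can be substantially lower than raw percent agreement when one class dominates, which is why it is reported alongside sensitivity and specificity.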
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods differ strongly in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus the probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends. However, these were considerably lower in the weak- or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies. 
We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analyses. Finally, we recommend SCI and RCS, as these methods showed the highest reliability in detecting long-term growth trends. © 2014 John Wiley & Sons Ltd.
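The conservative detrending (CD) approach described above fits a mathematical function to the declining age trend in ring widths and divides observations by the fitted curve to obtain ring-width indices. A minimal sketch on synthetic data; the exponential model and all parameter values are illustrative assumptions, not those of the study:

```python
import numpy as np

# Synthetic ring-width series with an idealized declining age trend
age = np.arange(1, 101, dtype=float)      # cambial age in years
width = 2.0 * np.exp(-0.02 * age)         # ring width in mm

# Conservative detrending: fit log(width) = a*age + b,
# i.e. a negative exponential in width, then divide by the fit
a, b = np.polyfit(age, np.log(width), 1)
fitted = np.exp(a * age + b)

index = width / fitted                    # detrended ring-width index
print(index.mean())                       # ~1.0 when no residual trend exists
```

Any long-term growth change would then appear as a trend in the index series. As the abstract notes, this style of detrending can also remove genuine long-term changes along with the age trend, which is one reason CD failed to detect the decline the other methods found.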
Timmermans, Erik J; van der Pas, Suzan; Schaap, Laura A; Sánchez-Martínez, Mercedes; Zambon, Sabina; Peter, Richard; Pedersen, Nancy L; Dennison, Elaine M; Denkinger, Michael; Castell, Maria Victoria; Siviero, Paola; Herbolsheimer, Florian; Edwards, Mark H; Otero, Angel; Deeg, Dorly J H
2014-03-05
People with osteoarthritis (OA) frequently report that their joint pain is influenced by weather conditions. This study aimed to examine whether there are differences in perceived joint pain between older people with OA who reported to be weather-sensitive versus those who did not in six European countries with different climates and to identify characteristics of older persons with OA that are most predictive of perceived weather sensitivity. Baseline data from the European Project on OSteoArthritis (EPOSA) were used. ACR classification criteria were used to determine OA. Participants with OA were asked about their perception of weather as influencing their pain. Using a two-week follow-up pain calendar, average self-reported joint pain was assessed (range: 0 (no pain)-10 (greatest pain intensity)). Linear regression analyses, logistic regression analyses and an independent t-test were used. Analyses were adjusted for several confounders. The majority of participants with OA (67.2%) perceived the weather as affecting their pain. Weather-sensitive participants reported more pain than non-weather-sensitive participants (M = 4.1, SD = 2.4 versus M = 3.1, SD = 2.4; p < 0.001). After adjusting for several confounding factors, the association between self-perceived weather sensitivity and joint pain remained present (B = 0.37, p = 0.03). Logistic regression analyses revealed that women and more anxious people were more likely to report weather sensitivity. Older people with OA from Southern Europe were more likely to indicate themselves as weather-sensitive persons than those from Northern Europe. Weather (in)stability may have a greater impact on joint structures and pain perception in people from Southern Europe. The results emphasize the importance of considering weather sensitivity in daily life of older people with OA and may help to identify weather-sensitive older people with OA.
Salivary Pepsin Lacks Sensitivity as a Diagnostic Tool to Evaluate Extraesophageal Reflux Disease.
Dy, Fei; Amirault, Janine; Mitchell, Paul D; Rosen, Rachel
2016-10-01
To determine the sensitivity of salivary pepsin compared with multichannel intraluminal impedance with pH testing (pH-MII), endoscopy, and gastroesophageal reflux disease (GERD) questionnaires. We prospectively recruited 50 children from Boston Children's Hospital who were undergoing pH-MII to evaluate for GERD. The patients completed 24-hour pH-MII testing, completed symptom and quality of life questionnaires, and provided a saliva specimen that was analyzed using the PepTest lateral flow test. A subset of patients also underwent bronchoscopy and esophagogastroduodenoscopy. Receiver operating characteristic curve analyses were performed to determine the sensitivity of salivary pepsin compared with each reference standard. Twenty-one of the 50 patients (42%) were salivary pepsin-positive, with a median salivary pepsin concentration of 10 ng/mL (IQR, 10-55 ng/mL). There was no significant difference in the distributions of acid, nonacid, total reflux episodes, full column reflux, or any other reflux variable in patients who were pepsin-positive compared with those who were pepsin-negative (P > .50). There was no significant correlation between the number of reflux episodes and pepsin concentration (P > .10). There was no positive relationship between salivary pepsin positivity, any extraesophageal symptoms or quality of life scores, or inflammation on bronchoscopy or esophagogastroduodenoscopy (P > .30). Salivary pepsin measurement has a low sensitivity for predicting pathological gastroesophageal reflux in children. Copyright © 2016 Elsevier Inc. All rights reserved.
Galor, Anat; Small, Leslie; Feuer, William; Levitt, Roy C; Sarantopoulos, Konstantinos D; Yosipovitch, Gil
2017-08-01
To evaluate associations between sensations of ocular itch and dry eye (DE) symptoms, including ocular pain, and DE signs. A cross-sectional study of 324 patients seen in the Miami Veterans Affairs eye clinic was performed. The evaluation consisted of questionnaires regarding ocular itch, DE symptoms, descriptors of neuropathic-like ocular pain (NOP), and evoked pain sensitivity testing on the forehead and forearm, followed by a comprehensive ocular surface examination including corneal mechanical sensitivity testing. Analyses were performed to examine for differences between those with and without subjective complaints of ocular itch. The mean age was 62 years with 92% being male. Symptoms of DE and NOP were more frequent in patients with moderate-severe ocular itch compared to those with no or mild ocular itch symptoms. With the exception of ocular surface inflammation (abnormal matrix metalloproteinase 9 testing) which was less common in those with moderate-severe ocular itch symptoms, DE signs were not related to ocular itch. Individuals with moderate-severe ocular itch also demonstrated greater sensitivity to evoked pain on the forearm and had higher non-ocular pain, depression, and post-traumatic stress disorders scores, compared to those with no or mild itch symptoms. Subjects with moderate-severe ocular itch symptoms have more severe symptoms of DE, NOP, non-ocular pain and demonstrate abnormal somatosensory testing in the form of increased sensitivity to evoked pain at a site remote from the eye, consistent with generalized hypersensitivity.
Ni, W; Jiang, Y
2017-02-01
This study used a simulation model to determine the cost-effective threshold of fracture risk for treating osteoporosis among elderly Chinese women. Osteoporosis treatment is cost-effective among average-risk women who are at least 75 years old and above-average-risk women who are younger than 75 years old. Aging of the Chinese population is imposing an increasing economic burden of osteoporosis. This study evaluated the cost-effectiveness of osteoporosis treatment among senior Chinese women. A discrete event simulation model using age-specific probabilities of hip fracture, clinical vertebral fracture, wrist fracture, humerus fracture, and other fractures; costs (2015 US dollars); and quality-adjusted life years (QALYs) was used to assess the cost-effectiveness of osteoporosis treatment. The incremental cost-effectiveness ratio (ICER) was calculated. The willingness to pay (WTP) for a QALY in China was compared with the calculated ICER to determine cost-effectiveness. To determine the absolute 10-year hip fracture probability at which osteoporosis treatment became cost-effective, average age-specific probabilities for all fractures were multiplied by a relative risk (RR) that was systematically varied from 0 to 10 until the WTP threshold was reached for treatment relative to no intervention. Sensitivity analyses were also performed to evaluate the impact of the WTP threshold and annual treatment costs. In the baseline analysis, simulated ICERs were higher than the WTP threshold among Chinese women younger than 75, but much lower than the WTP among the older population. Sensitivity analyses indicated that cost-effectiveness could vary with a higher WTP threshold or a lower annual treatment cost. A 30% increase in WTP or a 30% reduction in annual treatment costs would make osteoporosis treatment cost-effective for Chinese women aged 55 to 85. 
The current study provides evidence that osteoporosis treatment is cost-effective among a subpopulation of Chinese senior women. The results also indicate that the cost-effectiveness of using osteoporosis treatment is sensitive to the WTP threshold and annual treatment costs.
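The ICER-versus-WTP decision rule used in this and the following cost-effectiveness abstracts reduces to a simple comparison. A minimal sketch; every cost, QALY and GDP figure below is a hypothetical placeholder, not a value from either study:

```python
def icer(cost_tx, qaly_tx, cost_ref, qaly_ref):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    return (cost_tx - cost_ref) / (qaly_tx - qaly_ref)

# Hypothetical inputs: treatment vs. no intervention
wtp_per_qaly = 3 * 8_000      # WTP threshold, e.g. 3x a notional per-capita GDP
ratio = icer(cost_tx=12_000, qaly_tx=9.5, cost_ref=8_000, qaly_ref=9.2)

verdict = "cost-effective" if ratio <= wtp_per_qaly else "not cost-effective"
print(f"ICER = ${ratio:,.0f}/QALY -> {verdict}")
```

One-way sensitivity analysis, as performed in the study, amounts to re-evaluating this ratio while varying one input (e.g. annual treatment cost) across a plausible range and checking where the verdict flips.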
Zeng, Xiaohui; Li, Jianhe; Peng, Liubao; Wang, Yunhua; Tan, Chongqing; Chen, Gannong; Wan, Xiaomin; Lu, Qiong; Yi, Lidan
2014-01-01
Maintenance gefitinib significantly prolonged progression-free survival (PFS) compared with placebo in patients from eastern Asia with locally advanced/metastatic non-small-cell lung cancer (NSCLC) who had completed four chemotherapeutic cycles (21 days per cycle) of first-line platinum-based combination chemotherapy without disease progression. The objective of the current study was to evaluate the cost-effectiveness of maintenance gefitinib therapy after four cycles of standard first-line platinum-based chemotherapy for patients with locally advanced or metastatic NSCLC with unknown EGFR mutations, from a Chinese health care system perspective. A semi-Markov model was designed to evaluate the cost-effectiveness of maintenance gefitinib treatment. Two-parameter Weibull and log-logistic distributions were fitted independently to the PFS and overall survival curves. One-way and probabilistic sensitivity analyses were conducted to assess the stability of the model. The base-case analysis suggested that maintenance gefitinib would increase benefits over a 1-, 3-, 6- or 10-year time horizon, at incremental costs of $184,829, $19,214, $19,328, and $21,308 per quality-adjusted life-year (QALY) gained, respectively. The most sensitive variable in the cost-effectiveness analysis was the utility of PFS plus rash, followed by the utility of PFS plus diarrhoea, the utility of progressed disease, the price of gefitinib, the cost of follow-up treatment in the progressed survival state, and the utility of PFS on oral therapy. The price of gefitinib is the parameter that could most significantly reduce the incremental cost per QALY. Probabilistic sensitivity analysis indicated that the probability of maintenance gefitinib being cost-effective was zero under the willingness-to-pay (WTP) threshold of $16,349 (3 × the per-capita gross domestic product of China). The sensitivity analyses all suggested that the model was robust. 
Maintenance gefitinib following first-line platinum-based chemotherapy for patients with locally advanced/metastatic NSCLC with unknown EGFR mutations is not cost-effective. Decreasing the price of gefitinib may be a preferential choice for meeting widespread treatment demand in China.
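The two-parameter Weibull distribution mentioned above is the standard way to extrapolate a trial's survival curve beyond follow-up in such models: S(t) = exp(-(t/scale)^shape), with mean survival equal to scale * Gamma(1 + 1/shape). A minimal sketch checking the numerical curve against that closed form; the shape and scale values are hypothetical, not those fitted in the study:

```python
import math
import numpy as np

shape, scale = 1.3, 12.0                      # hypothetical fit, time in months

# Weibull survival function evaluated on a fine grid
t = np.linspace(0.0, 300.0, 30001)
surv = np.exp(-(t / scale) ** shape)

# Mean survival = area under S(t); integrate with the trapezoid rule
mean_numeric = np.sum((surv[1:] + surv[:-1]) / 2.0 * np.diff(t))
mean_closed = scale * math.gamma(1.0 + 1.0 / shape)
print(mean_numeric, mean_closed)
```

In a partitioned-survival or semi-Markov model, the areas under (and between) such fitted curves, weighted by state utilities, yield the QALY totals that feed the ICER.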
Jussi, Liippo; Lammintausta, Kaija
2009-03-01
Contact sensitization to local anaesthetics often results from topical medicaments. Occupational sensitization to topical anaesthetics may occur in certain occupations. The aim of the study was to analyse the occurrence of contact sensitization to topical anaesthetics in general dermatology patients. Patch testing with topical anaesthetics was carried out in 620 patients. Possible sources of sensitization and the clinical histories of the patients were analysed. Positive patch test reactions to one or more topical anaesthetics were seen in 25/620 patients. Dibucaine reactions were most common (20/25), and lidocaine sensitization was seen in two patients. Six patients had concurrent reactions to ester-type and/or amide-type anaesthetics. Local preparations for perianal conditions were the most common sensitizers. One patient had developed occupational sensitization to procaine, with multiple cross-reactions and with concurrent penicillin sensitization from procaine penicillin. Dibucaine-containing perianal medicaments are the major source of contact sensitization to topical anaesthetics. Sensitization to multiple anaesthetics can be seen, and cross-reactions are also possible. Contact sensitization to lidocaine is not common, and possible cross-reactions should be determined when reactions to lidocaine are seen. Occupational procaine sensitization from veterinary medicaments is a risk among animal workers.
Performance of the Swedish version of the Revised Piper Fatigue Scale.
Jakobsson, Sofie; Taft, Charles; Östlund, Ulrika; Ahlberg, Karin
2013-12-01
The Revised Piper Fatigue Scale (RPFS) is one of the most widely used instruments internationally to assess cancer-related fatigue. The aim of the present study was to evaluate selected psychometric properties of a Swedish version of the RPFS (SPFS). An earlier translation of the SPFS was further evaluated and developed. The new version was mailed to 300 patients undergoing curative radiotherapy. Internal validity was assessed using principal axis factor analysis with oblimin rotation and multitrait analysis. External validity was examined in relation to the Multidimensional Fatigue Inventory-20 (MFI-20) and in known-groups analyses. In total, 196 patients (response rate = 65%) returned evaluable questionnaires. Principal axis factoring yielded three factors (74% of the variance) rather than four as in the original RPFS. Multitrait analyses confirmed the adequacy of scaling assumptions. Known-groups analyses failed to support the discriminative validity. Concurrent validity was satisfactory. The new Swedish version of the RPFS showed good acceptability, reliability, and convergent and discriminant item-scale validity. Our results converge with other international versions of the RPFS in failing to support the four-dimension conceptual model of the instrument. Hence, the suitability of the RPFS for use in international comparisons may be limited, which may also have implications for the cross-cultural validity of the newly released 12-item version of the RPFS. Further research on the Swedish version should address reasons for the high missing rates for certain items in the affective meaning subscale, further evaluation of the discriminative validity, and assessment of its sensitivity in detecting changes over time. Copyright © 2013 Elsevier Ltd. All rights reserved.
Overview of Sensitivity Analysis and Shape Optimization for Complex Aerodynamic Configurations
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Newman, James C., III; Barnwell, Richard W.; Taylor, Arthur C., III; Hou, Gene J.-W.
1998-01-01
This paper presents a brief overview of some of the more recent advances in steady aerodynamic shape-design sensitivity analysis and optimization, based on advanced computational fluid dynamics. The focus here is on those methods particularly well-suited to the study of geometrically complex configurations and their potentially complex associated flow physics. When nonlinear state equations are considered in the optimization process, difficulties are found in the application of sensitivity analysis. Some techniques for circumventing such difficulties are currently being explored and are included here. Attention is directed to methods that utilize automatic differentiation to obtain aerodynamic sensitivity derivatives for both complex configurations and complex flow physics. Various examples of shape-design sensitivity analysis for unstructured-grid computational fluid dynamics algorithms are demonstrated for different formulations of the sensitivity equations. Finally, the use of advanced, unstructured-grid computational fluid dynamics in multidisciplinary analyses and multidisciplinary sensitivity analyses within future optimization processes is recommended and encouraged.
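The automatic differentiation referred to above propagates exact derivatives through a computation alongside the values themselves. A minimal forward-mode sketch using dual numbers, as a toy illustration of the mechanism (not an aerodynamic solver; the function `f` is an arbitrary stand-in for a state-equation output):

```python
class Dual:
    """Number carrying a value and its derivative with respect to one input."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__

    def __mul__(self, other):  # product rule
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x       # stand-in output quantity

x = Dual(2.0, 1.0)             # seed dx/dx = 1
y = f(x)
print(y.value, y.deriv)        # value 10.0, exact derivative 2*x + 3 = 7.0
```

Production AD tools apply the same chain-rule bookkeeping to every operation in a full CFD code, which is how sensitivity derivatives are obtained without the truncation error of finite differences.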
Eguchi, Hisashi; Shimazu, Akihito; Kawakami, Norito; Inoue, Akiomi; Tsutsumi, Akizumi
2016-08-01
This study investigated the prospective association between source-specific workplace social support and high-sensitivity C-reactive protein (hs-CRP) levels in workers in Japan. We conducted a 1-year prospective cohort study with 1,487 men and 533 women aged 18-65 years. Participants worked at two manufacturing worksites in Japan and were free of major illness. We used multivariable linear regression analyses to evaluate the prospective association between supervisor and coworker support at baseline, and hs-CRP levels at follow-up. We conducted the analyses separately for men and women. For women, high supervisor support at baseline was significantly associated with lower hs-CRP levels at follow-up (β = -0.109, P < 0.01), whereas coworker support at baseline was not significantly associated with hs-CRP levels at follow-up. Associations between supervisor and coworker support and hs-CRP levels were not significant for men. Supervisor support may have beneficial effects on inflammatory markers in working women. Am. J. Ind. Med. 59:676-684, 2016. © 2016 Wiley Periodicals, Inc.
Cost-Effectiveness Analysis of Regorafenib for Metastatic Colorectal Cancer
Goldstein, Daniel A.; Ahmad, Bilal B.; Chen, Qiushi; Ayer, Turgay; Howard, David H.; Lipscomb, Joseph; El-Rayes, Bassel F.; Flowers, Christopher R.
2015-01-01
Purpose Regorafenib is a standard-care option for treatment-refractory metastatic colorectal cancer that increases median overall survival by 6 weeks compared with placebo. Given this small incremental clinical benefit, we evaluated the cost-effectiveness of regorafenib in the third-line setting for patients with metastatic colorectal cancer from the US payer perspective. Methods We developed a Markov model to compare the cost and effectiveness of regorafenib with those of placebo in the third-line treatment of metastatic colorectal cancer. Health outcomes were measured in life-years and quality-adjusted life-years (QALYs). Drug costs were based on Medicare reimbursement rates in 2014. Model robustness was addressed in univariable and probabilistic sensitivity analyses. Results Regorafenib provided an additional 0.04 QALYs (0.13 life-years) at a cost of $40,000, resulting in an incremental cost-effectiveness ratio of $900,000 per QALY. The incremental cost-effectiveness ratio for regorafenib was > $550,000 per QALY in all of our univariable and probabilistic sensitivity analyses. Conclusion Regorafenib provides minimal incremental benefit at high incremental cost per QALY in the third-line management of metastatic colorectal cancer. The cost-effectiveness of regorafenib could be improved by the use of value-based pricing. PMID:26304904
Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.
2013-01-01
Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the United States. We review inferential methods for partially synthetic data, and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. PMID:23670983
Lithium in drinking water and suicide mortality.
Kapusta, Nestor D; Mossaheb, Nilufar; Etzersdorfer, Elmar; Hlavin, Gerald; Thau, Kenneth; Willeit, Matthäus; Praschak-Rieder, Nicole; Sonneck, Gernot; Leithner-Dziubas, Katharina
2011-05-01
There is some evidence that natural levels of lithium in drinking water may have a protective effect on suicide mortality. To evaluate the association between local lithium levels in drinking water and suicide mortality at district level in Austria. A nationwide sample of 6460 lithium measurements was examined for association with suicide rates per 100,000 population and suicide standardised mortality ratios across all 99 Austrian districts. Multivariate regression models were adjusted for well-known socioeconomic factors known to influence suicide mortality in Austria (population density, per capita income, proportion of Roman Catholics, as well as the availability of mental health service providers). Sensitivity analyses and weighted least squares regression were used to challenge the robustness of the results. The overall suicide rate (R² = 0.15, β = -0.39, t = -4.14, P = 0.000073) as well as the suicide mortality ratio (R² = 0.17, β = -0.41, t = -4.38, P = 0.000030) were inversely associated with lithium levels in drinking water and remained significant after sensitivity analyses and adjustment for socioeconomic factors. In replicating and extending previous results, this study provides strong evidence that geographic regions with higher natural lithium concentrations in drinking water are associated with lower suicide mortality rates.
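The weighted least squares regression used above gives districts with more reliable measurements (or larger populations) more influence on the fitted slope. A minimal sketch on synthetic data; the lithium levels, rates, and weights are fabricated for illustration, with the negative association built in:

```python
import numpy as np

# Synthetic district-level data (hypothetical, for illustration only)
lithium = np.array([3., 5., 8., 12., 20., 33., 50., 80., 120., 160.])  # ug/L
rate = 22.0 - 0.05 * lithium          # suicide rate per 100,000 (synthetic)
weight = np.array([1, 2, 1, 3, 2, 1, 2, 1, 3, 2], dtype=float)

# np.polyfit minimizes sum(w_i^2 * residual_i^2), so pass sqrt of the
# desired weights for standard WLS
slope, intercept = np.polyfit(lithium, rate, 1, w=np.sqrt(weight))
print(slope)                          # negative: higher lithium, lower rate
```

The study's full models additionally adjust for the socioeconomic covariates listed above, which requires a multiple-regression design matrix rather than a single predictor.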
Performance comparison of Islamic and commercial banks in Malaysia
NASA Astrophysics Data System (ADS)
Azizud-din, Azimah; Hussin, Siti Aida Sheikh; Zahid, Zalina
2016-10-01
The steady growth in size and increase in the number of Islamic banks show that the Islamic banking system is considered an alternative to the conventional banking system. Comparisons of performance measurements and evaluations of financial health for both types of banks are therefore essential. The main purpose of this study is to analyse the differences between Islamic and commercial bank performance. Five years of secondary data were collected from the annual report of each bank. The return on assets ratio was chosen as the dependent variable, while capital adequacy, asset quality, management quality, earnings, liquidity and sensitivity to market risk (CAMELS) were the independent variables. Descriptive analyses were done to understand the data. The independent t-test and Mann-Whitney test show the differences between Islamic and commercial banks based on the financial variables. Stepwise and hierarchical multiple regressions were used to determine the factors that affect the profitability performance of banks. Results show that Islamic banks are better in terms of profitability, earning power, liquidity and sensitivity to market risk. The factors that affect profitability performance are the capital adequacy, earning power and liquidity variables.
Emotion-motion interactions in conversion disorder: an FMRI study.
Aybek, Selma; Nicholson, Timothy R; O'Daly, Owen; Zelaya, Fernando; Kanaan, Richard A; David, Anthony S
2015-01-01
To evaluate the neural correlates of implicit processing of negative emotions in motor conversion disorder (CD) patients. An event-related fMRI task was completed by 12 motor CD patients and 14 matched healthy controls using standardised stimuli of faces with fearful and sad emotional expressions in comparison to faces with neutral expressions. Temporal changes in the sensitivity to stimuli were also modelled and tested in the two groups. We found increased amygdala activation to negative emotions in CD compared to healthy controls in region-of-interest analyses, which persisted over time, consistent with previous findings using emotional paradigms. Furthermore, in whole-brain analyses we found significantly increased activation in CD patients in areas involved in the 'freeze response' to fear (periaqueductal grey matter) and areas involved in self-awareness and motor control (cingulate gyrus and supplementary motor area). In contrast to healthy controls, CD patients exhibited increased response amplitude to fearful stimuli over time, suggesting abnormal emotional regulation (failure of habituation/sensitization). Patients with CD also activated midbrain and frontal structures that could reflect an abnormal behavioural-motor response to negative, including threatening, stimuli. This suggests a mechanism linking emotions to motor dysfunction in CD.
Reliability and performance evaluation of systems containing embedded rule-based expert systems
NASA Technical Reports Server (NTRS)
Beaton, Robert M.; Adams, Milton B.; Harrison, James V. A.
1989-01-01
A method for evaluating the reliability of real-time systems containing embedded rule-based expert systems is proposed and investigated. It is a three-stage technique that addresses the impact of knowledge-base uncertainties on the performance of expert systems. In the first stage, a Markov reliability model of the system is developed which identifies the key performance parameters of the expert system. In the second stage, the evaluation method is used to determine the values of the expert system's key performance parameters. The performance parameters can be evaluated directly by using a probabilistic model of uncertainties in the knowledge-base or by using sensitivity analyses. In the third and final stage, the performance parameters of the expert system are combined with performance parameters for other system components and subsystems to evaluate the reliability and performance of the complete system. The evaluation method is demonstrated in the context of a simple expert system used to supervise the performance of an FDI algorithm associated with an aircraft longitudinal flight-control system.
Pignata, Maud; Chouaid, Christos; Le Lay, Katell; Luciani, Laura; McConnachie, Ceilidh; Gordon, James; Roze, Stéphane
2017-01-01
Background and aims Lung cancer has the highest mortality rate of all cancers worldwide. Non-small-cell lung cancer (NSCLC) accounts for 85% of all lung cancers and has an extremely poor prognosis. Afatinib is an irreversible ErbB family blocker designed to suppress cellular signaling and inhibit cellular growth and is approved in Europe after platinum-based therapy for squamous NSCLC. The objective of the present analysis was to evaluate the cost-effectiveness of afatinib after platinum-based therapy for squamous NSCLC in France. Methods The study population was based on the LUX-Lung 8 trial that compared afatinib with erlotinib in patients with squamous NSCLC. The analysis was performed from the perspective of all health care funders and affected patients. A partitioned survival model was developed to evaluate cost-effectiveness based on progression-free survival and overall survival in the trial. Life expectancy, quality-adjusted life expectancy and direct costs were evaluated over a 10-year time horizon. Future costs and clinical benefits were discounted at 4% annually. Deterministic and probabilistic sensitivity analyses were performed. Results Model projections indicated that afatinib was associated with greater life expectancy (0.16 years) and quality-adjusted life expectancy (0.094 quality-adjusted life years [QALYs]) than that projected for erlotinib. The total cost of treatment over a 10-year time horizon was higher for afatinib than erlotinib, EUR12,364 versus EUR9,510, leading to an incremental cost-effectiveness ratio of EUR30,277 per QALY gained for afatinib versus erlotinib. Sensitivity analyses showed that the base case findings were stable under variation of a range of model inputs. 
Conclusion Based on data from the LUX-Lung 8 trial, afatinib was projected to improve clinical outcomes versus erlotinib, with a 97% probability of being cost-effective assuming a willingness to pay of EUR70,000 per QALY gained, after platinum-based therapy in patients with squamous NSCLC in France. PMID:29123418
Chan, B
2015-01-01
Background Functional improvements have been seen in stroke patients who have received an increased intensity of physiotherapy. This requires additional costs in the form of increased physiotherapist time. Objectives The objective of this economic analysis is to determine the cost-effectiveness of increasing the intensity of physiotherapy (duration and/or frequency) during inpatient rehabilitation after stroke, from the perspective of the Ontario Ministry of Health and Long-term Care. Data Sources The inputs for our economic evaluation were extracted from articles published in peer-reviewed journals and from reports from government sources or the Canadian Stroke Network. Where published data were not available, we sought expert opinion and used inputs based on the experts' estimates. Review Methods The primary outcome we considered was cost per quality-adjusted life-year (QALY). We also evaluated functional strength training because of its similarities to physiotherapy. We used a 2-state Markov model to evaluate the cost-effectiveness of functional strength training and increased physiotherapy intensity for stroke inpatient rehabilitation. The model had a lifetime timeframe with a 5% annual discount rate. We then used sensitivity analyses to evaluate uncertainty in the model inputs. Results We found that functional strength training and higher-intensity physiotherapy resulted in lower costs and improved outcomes over a lifetime. However, our sensitivity analyses revealed high levels of uncertainty in the model inputs, and therefore in the results. Limitations There is a high level of uncertainty in this analysis due to the uncertainty in model inputs, with some of the major inputs based on expert panel consensus or expert opinion. In addition, the utility outcomes were based on a clinical study conducted in the United Kingdom (i.e., 1 study only, and not in an Ontario or Canadian setting). 
Conclusions Functional strength training and higher-intensity physiotherapy may result in lower costs and improved health outcomes. However, these results should be interpreted with caution. PMID:26366241
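A 2-state Markov cohort model with annual discounting, of the kind described above, can be sketched in a few lines. The transition probability, utilities and costs below are hypothetical placeholders, not the study's Ontario inputs.

```python
def markov_2state(p_die_annual, utility, cost_annual, years=40, disc=0.05):
    """Cohort simulation over two states (alive, dead): accumulate
    discounted QALYs and costs while the surviving fraction shrinks."""
    alive = 1.0
    qalys = costs = 0.0
    for t in range(years):
        df = 1.0 / (1.0 + disc) ** t     # 5% annual discount factor
        qalys += alive * utility * df
        costs += alive * cost_annual * df
        alive *= 1.0 - p_die_annual      # transition: alive -> dead
    return qalys, costs

# Hypothetical comparison: higher-intensity physiotherapy with slightly
# better utility and slightly lower downstream annual cost
q_std, c_std = markov_2state(0.05, utility=0.60, cost_annual=2000)
q_hi,  c_hi  = markov_2state(0.05, utility=0.64, cost_annual=1800)
dominant = c_hi < c_std and q_hi > q_std   # lower cost AND better outcome
```

Under these illustrative inputs the intervention "dominates" (cheaper and more effective), mirroring the base-case direction reported, though the abstract stresses that such results carry high input uncertainty.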
de Paiva, Paula Pereira; Delcorso, Mariana Cruz; Matheus, Valquíria Aparecida; de Queiroz, Sonia Claudia do Nascimento; Collares-Buzato, Carla Beatriz; Arana, Sarah
2017-01-01
Aim: The aim of this work was to evaluate the sensitivity of Pacu fingerlings (Piaractus mesopotamicus) by measuring the effects of the median lethal concentration (LC50) of atrazine (ATZ; 28.58 mg/L) after acute exposure (up to 96 h). Materials and Methods: The fish were exposed to the LC50 of ATZ (28.58 mg/L) for 96 h in a static system. During the experiment, the fingerlings were randomly distributed in glass tanks (50 L) containing dechlorinated water: four tanks for the control group and four for the ATZ-exposed group (n=4 per tank), giving a total of 16 animals tested per group. Genotoxicity was evaluated by the micronucleus (MN) test in erythrocytes from peripheral blood. Qualitative and semi-quantitative histopathological analyses, as well as an ultrastructural study, were performed on liver and kidney samples. Finally, the content of heat shock protein (Hsp70) in the liver was evaluated by western blotting. Results: The morphological alterations in the liver, which were associated with increased expression of Hsp70, included nuclear and cytoplasmic vacuolization, cytoplasmic hyaline inclusions, and necrosis. The kidney presented edema and tubular cell degeneration with cytoplasmic hyaline inclusions. The semi-quantitative histopathological analyses indicated that the liver was more sensitive than the kidney to ATZ-induced damage. Ultrastructural analysis showed that ATZ caused membrane alterations in several organelles and increased the number of lysosomes in hepatocytes and kidney proximal tubular cells.
Nevertheless, no significant difference in MN frequency was observed in erythrocytes between the treated and control groups. Conclusion: These results indicated that ATZ induced damage to kidney and liver function, that ATZ at the concentration tested did not induce a significant difference in MN frequency in Pacu erythrocytes, and that Pacu fingerlings may be a good bioindicator for testing freshwater contamination. PMID:29062187
Oostdam, Nicolette; Bosmans, Judith; Wouters, Maurice G A J; Eekhoff, Elisabeth M W; van Mechelen, Willem; van Poppel, Mireille N M
2012-07-04
The prevalence of gestational diabetes mellitus (GDM) is increasing worldwide. GDM and the risks associated with GDM lead to increased health care costs and losses in productivity. The objective of this study is to evaluate whether the FitFor2 exercise program during pregnancy is cost-effective from a societal perspective as compared to standard care. A randomised controlled trial (RCT) and simultaneous economic evaluation of the FitFor2 program were conducted. Pregnant women at risk for GDM were randomised to an exercise program to prevent high maternal blood glucose (n = 62) or to standard care (n = 59). The exercise program consisted of two sessions of aerobic and strengthening exercises per week. Clinical outcome measures were maternal fasting blood glucose levels, insulin sensitivity and infant birth weight. Quality of life was measured using the EuroQol 5-D and quality-adjusted life-years (QALYs) were calculated. Resource utilization and sick leave data were collected by questionnaires. Data were analysed according to the intention-to-treat principle. Missing data were imputed using multiple imputation. Bootstrapping techniques were used to estimate the uncertainty surrounding the cost differences and incremental cost-effectiveness ratios. There were no statistically significant differences in any outcome measure. During pregnancy, differences in total health care costs and costs of productivity losses were statistically non-significant (mean difference €1308; 95% CI €-229 to €3204). The cost-effectiveness analyses showed that the exercise program was not cost-effective in comparison to the control group for blood glucose levels, insulin sensitivity, infant birth weight or QALYs. The twice-weekly exercise program for pregnant women at risk for GDM evaluated in the present study was not cost-effective compared to standard care. Based on these results, implementation of this exercise program for the prevention of GDM cannot be recommended. Trial registration: NTR1139.
Hsu, Chung-Jen; Jones, Elizabeth G
2017-02-01
This paper performs sensitivity analyses of stopping distance for connected vehicles (CVs) at active highway-rail grade crossings (HRGCs). Stopping distance is the major safety factor at active HRGCs. A sensitivity analysis is performed for each variable in the stopping distance function, with each variable treated as a probability density function in Monte Carlo simulations. The results show that stopping distance is most sensitive to initial speed, for both CVs and non-CVs. The safety of CVs can be further improved by the early provision of onboard train information and warnings to reduce initial speeds. Copyright © 2016 Elsevier Ltd. All rights reserved.
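The Monte Carlo approach described (each input a probability distribution, sensitivity judged from the simulated stopping distances) can be sketched as below. The distributions are illustrative assumptions, not the paper's calibrated inputs, and Pearson correlation is used as a crude sensitivity index.

```python
import random
import statistics

def stopping_distance(v0, t_reaction, decel):
    """Stopping distance (m) = reaction distance + braking distance."""
    return v0 * t_reaction + v0 ** 2 / (2.0 * decel)

def corr(xs, ys):
    """Pearson correlation between an input sample and the output sample."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

rng = random.Random(42)
samples = [(rng.gauss(25.0, 3.0),   # initial speed, m/s (illustrative)
            rng.gauss(1.5, 0.3),    # driver reaction time, s
            rng.gauss(3.4, 0.3))    # braking deceleration, m/s^2
           for _ in range(10_000)]
dists = [stopping_distance(v, t, a) for v, t, a in samples]
v, t, a = zip(*samples)
sens_v, sens_t, sens_a = corr(v, dists), corr(t, dists), corr(a, dists)
```

Because initial speed enters both linearly (reaction distance) and quadratically (braking distance), its correlation with stopping distance dominates the other inputs, consistent with the paper's finding.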
Balancing data sharing requirements for analyses with data sensitivity
Jarnevich, C.S.; Graham, J.J.; Newman, G.J.; Crall, A.W.; Stohlgren, T.J.
2007-01-01
Data sensitivity can pose a formidable barrier to data sharing. Knowledge of species' current distributions gained from data sharing is critical for the creation of watch lists, for an early warning/rapid response system, and for modelling the spread of invasive species. We have created an on-line system to synthesize disparate datasets of non-native species locations that includes a mechanism to account for data sensitivity. Data contributors are able to mark their data as sensitive. These data are then 'fuzzed' to quarter-quadrangle grid cells in mapping applications and downloaded files, but the actual locations remain available for analyses. We propose that this system overcomes the hurdles to data sharing posed by sensitive data. © 2006 Springer Science+Business Media B.V.
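Snapping sensitive coordinates to a coarse grid cell is straightforward. The sketch below assumes a 0.0625° (3.75-arcminute) cell as a rough stand-in for a quarter-quadrangle; the actual system's gridding rules may differ.

```python
import math

def fuzz_location(lat, lon, cell=0.0625):
    """Snap a coordinate to the center of its grid cell, so the published
    point reveals only which cell the record falls in, not the exact site."""
    return (math.floor(lat / cell) * cell + cell / 2,
            math.floor(lon / cell) * cell + cell / 2)

# Two nearby sensitive records collapse to the same published cell center
pub_a = fuzz_location(40.5872, -105.0844)
pub_b = fuzz_location(40.5800, -105.0800)
```

The original precise coordinates stay in the database for analyses; only the fuzzed value reaches maps and downloads.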
The Sensitivity of Genetic Connectivity Measures to Unsampled and Under-Sampled Sites
Koen, Erin L.; Bowman, Jeff; Garroway, Colin J.; Wilson, Paul J.
2013-01-01
Landscape genetic analyses assess the influence of landscape structure on genetic differentiation. It is rarely possible to collect genetic samples from all individuals on the landscape and thus it is important to assess the sensitivity of landscape genetic analyses to the effects of unsampled and under-sampled sites. Network-based measures of genetic distance, such as conditional genetic distance (cGD), might be particularly sensitive to sampling intensity because pairwise estimates are relative to the entire network. We addressed this question by subsampling microsatellite data from two empirical datasets. We found that pairwise estimates of cGD were sensitive to both unsampled and under-sampled sites, and FST, Dest, and deucl were more sensitive to under-sampled than unsampled sites. We found that the rank order of cGD was also sensitive to unsampled and under-sampled sites, but not enough to affect the outcome of Mantel tests for isolation by distance. We simulated isolation by resistance and found that although cGD estimates were sensitive to unsampled sites, by increasing the number of sites sampled the accuracy of conclusions drawn from landscape genetic analyses increased, a feature that is not possible with pairwise estimates of genetic differentiation such as FST, Dest, and deucl. We suggest that users of cGD assess the sensitivity of this measure by subsampling within their own network and use caution when making extrapolations beyond their sampled network. PMID:23409155
Zhu, Hongmei; Zhu, Yanan; Leung, Siu-wai
2016-01-01
Objective The present study aimed to verify the effectiveness of self-monitoring of blood glucose (SMBG) in patients with non-insulin-treated type 2 diabetes (T2D). Methods A comprehensive literature search was conducted in PubMed, Cochrane Library, Web of Science, ScienceDirect and ClinicalTrials.gov from their respective inception dates to 26 October 2015. Eligible randomised controlled trials (RCTs) were included according to prespecified criteria. The quality of the included RCTs was evaluated according to the Cochrane risk of bias tool, and the evidence quality of meta-analyses was assessed by the Grading of Recommendation, Assessment, Development, and Evaluation (GRADE) criteria. A meta-analysis of primary and secondary outcome measures was performed. Sensitivity and subgroup analyses were carried out to evaluate the robustness and heterogeneity of the findings. Begg's and Egger's tests were used to quantify publication biases. Results A total of 15 RCTs, comprising 3383 patients with non-insulin-treated T2D, met the inclusion criteria. The SMBG intervention improved glycated haemoglobin (HbA1c) (mean difference −0.33; 95% CI −0.45 to −0.22; p=3.0730e−8; n=18), body mass index (BMI; −0.65; −1.18 to −0.12; p=0.0164; n=9) and total cholesterol (TC; −0.12; −0.20 to −0.04; p=0.0034; n=8) more effectively than the control in overall effect. The sensitivity analysis revealed little difference in overall effect, indicating the robustness of the results. SMBG moderated HbA1c levels better than the control in all subgroup analyses. Most of the RCTs had high risk of bias in blinding, while the overall quality of evidence for HbA1c was moderate according to the GRADE criteria. Publication bias was moderate for BMI. Conclusions SMBG improved HbA1c levels in the short term (≤6-month follow-up) and long term (≥12-month follow-up) in patients with T2D who were not using insulin. Trial registration number CRD42015019099. PMID:27591016
Dahabreh, Issa J; Trikalinos, Thomas A; Lau, Joseph; Schmid, Christopher H
2017-03-01
To compare statistical methods for meta-analysis of sensitivity and specificity of medical tests (e.g., diagnostic or screening tests). We constructed a database of PubMed-indexed meta-analyses of test performance from which 2 × 2 tables for each included study could be extracted. We reanalyzed the data using univariate and bivariate random effects models fit with inverse variance and maximum likelihood methods. Analyses were performed using both normal and binomial likelihoods to describe within-study variability. The bivariate model using the binomial likelihood was also fit using a fully Bayesian approach. We use two worked examples (thoracic computerized tomography to detect aortic injury, and rapid prescreening of Papanicolaou smears to detect cytological abnormalities) to highlight that different meta-analysis approaches can produce different results. We also present results from reanalysis of 308 meta-analyses of sensitivity and specificity. Models using the normal approximation produced sensitivity and specificity estimates closer to 50% and smaller standard errors compared to models using the binomial likelihood; absolute differences of 5% or greater were observed in 12% and 5% of meta-analyses for sensitivity and specificity, respectively. Results from univariate and bivariate random effects models were similar, regardless of estimation method. Maximum likelihood and Bayesian methods produced almost identical summary estimates under the bivariate model; however, Bayesian analyses indicated greater uncertainty around those estimates. Bivariate models produced imprecise estimates of the between-study correlation of sensitivity and specificity. Differences between methods were larger with increasing proportion of studies that were small or required a continuity correction. The binomial likelihood should be used to model within-study variability. Univariate and bivariate models give similar estimates of the marginal distributions for sensitivity and specificity.
Bayesian methods fully quantify uncertainty and their ability to incorporate external evidence may be useful for imprecisely estimated parameters. Copyright © 2017 Elsevier Inc. All rights reserved.
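The shrinkage toward 50% under the normal approximation can be illustrated with a minimal inverse-variance pooling of sensitivities on the logit scale. The continuity correction and toy counts below are illustrative, not the authors' method or data.

```python
import math

def pooled_logit_normal(tp_fn_pairs, cc=0.5):
    """Inverse-variance pooling of study sensitivities on the logit scale,
    using the normal approximation with a continuity correction."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        tp_c, fn_c = tp + cc, fn + cc        # continuity correction
        logit = math.log(tp_c / fn_c)        # log-odds of sensitivity
        var = 1.0 / tp_c + 1.0 / fn_c        # approximate variance of logit
        num += logit / var
        den += 1.0 / var
    return 1.0 / (1.0 + math.exp(-num / den))  # back-transform to [0, 1]

# Three small studies with near-perfect sensitivity (tp detected, fn missed);
# one study has zero false negatives and so needs the correction
studies = [(19, 1), (20, 0), (18, 2)]
est = pooled_logit_normal(studies)
naive = sum(tp for tp, _ in studies) / sum(tp + fn for tp, fn in studies)
```

The corrected, normal-approximation estimate falls below the raw pooled proportion (57/60 = 0.95): small studies near the boundary are pulled toward 50%, the direction of bias the reanalysis reports.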
Vintzileos, A M; Ananth, C V; Fisher, A J; Smulian, J C; Day-Salvatore, D; Beazoglou, T; Knuppel, R A
1998-11-01
The objective of this study was to perform an economic evaluation of second-trimester genetic ultrasonography for prenatal detection of Down syndrome. More specifically, we sought to determine the following: (1) the diagnostic accuracy requirements (from the cost-benefit point of view) of genetic ultrasonography versus genetic amniocentesis for women at increased risk for fetal Down syndrome and (2) the possible economic impact of second-trimester genetic ultrasonography for the US population on the basis of the ultrasonographic accuracies reported in previously published studies. A cost-benefit equation was developed from the hypothesis that the cost of universal genetic amniocentesis of patients at increased risk for carrying a fetus with Down syndrome should be at least equal to the cost of universal genetic ultrasonography with amniocentesis used only for those with abnormal ultrasonographic results. The main components of the equation included the diagnostic accuracy of genetic ultrasonography (sensitivity and specificity for detecting Down syndrome), the costs of the amniocentesis package and genetic ultrasonography, and the lifetime cost of Down syndrome cases not detected by the genetic ultrasonography. After appropriate manipulation of the equation a graph was constructed, representing the balance between sensitivity and false-positive rate of genetic ultrasonography; this was used to examine the accuracy of previously published studies from the cost-benefit point of view. Sensitivity analyses included individual risks for Down syndrome ranging from 1:261 (risk of a 35-year-old at 18 weeks' gestation) to 1:44 (risk of a 44-year-old at 18 weeks' gestation). This economic evaluation was conducted from the societal perspective. Genetic ultrasonography was found to be economically beneficial only if the overall sensitivity for detecting Down syndrome was >74%. Even then, the cost-benefit ratio depended on the corresponding false-positive rate. 
Of the 7 published studies that used multiple ultrasonographic markers for genetic ultrasonography, 6 had accuracies compatible with benefits. The required ultrasonographic accuracy (sensitivity and false-positive rate) varied according to the prevalence of Down syndrome in the population tested. The cost-benefit ratio of second-trimester genetic ultrasonography depends on its diagnostic accuracy, and it is beneficial only when its overall sensitivity for Down syndrome is >74%.
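The cost-benefit balance described above can be sketched as a per-patient comparison of the two strategies. All costs and the 1/261 risk below are illustrative placeholders, not the study's actual figures, so the break-even sensitivity they imply differs from the reported 74%.

```python
def net_benefit(sens, fpr, prev, c_us, c_amnio, c_missed):
    """Per-patient saving of the ultrasound-triage strategy over universal
    amniocentesis; positive means triage is cheaper overall."""
    triage = (c_us
              + (prev * sens + (1 - prev) * fpr) * c_amnio  # amnio only after an abnormal scan
              + prev * (1 - sens) * c_missed)               # lifetime cost of missed cases
    return c_amnio - triage

prev = 1 / 261   # risk of a 35-year-old at 18 weeks' gestation
nb_hi = net_benefit(0.80, 0.10, prev, c_us=200, c_amnio=1500, c_missed=500_000)
nb_lo = net_benefit(0.60, 0.10, prev, c_us=200, c_amnio=1500, c_missed=500_000)
```

Net benefit rises monotonically with sensitivity (each extra detected case avoids the large lifetime cost at only the price of one amniocentesis), which is why the evaluation reduces to a sensitivity threshold at a given false-positive rate.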
New developments in supra-threshold perimetry.
Henson, David B; Artes, Paul H
2002-09-01
To describe a series of recent enhancements to supra-threshold perimetry. Computer simulations were used to develop an improved algorithm (HEART) for the setting of the supra-threshold test intensity at the beginning of a field test, and to evaluate the relationship between various pass/fail criteria and the test's performance (sensitivity and specificity) and how they compare with modern threshold perimetry. Data were collected in optometric practices to evaluate HEART and to assess how the patient's response times can be analysed to detect false positive response errors in visual field test results. The HEART algorithm shows improved performance (reduced between-eye differences) over current algorithms. A pass/fail criterion of '3 stimuli seen of 3-5 presentations' at each test location reduces test/retest variability and combines high sensitivity and specificity. A large percentage of false positive responses can be detected by comparing their latencies to the average response time of a patient. Optimised supra-threshold visual field tests can perform as well as modern threshold techniques. Such tests may be easier to perform for novice patients, compared with the more demanding threshold tests.
Space shuttle orbiter digital data processing system timing sensitivity analysis OFT ascent phase
NASA Technical Reports Server (NTRS)
Lagas, J. J.; Peterka, J. J.; Becker, D. A.
1977-01-01
Dynamic loads were investigated to provide simulation and analysis of the space shuttle orbiter digital data processing system (DDPS). Segments of the orbital flight test (OFT) ascent configuration were modeled utilizing the information management system interpretive model (IMSIM) in a computerized simulation of the OFT hardware and software workload. System requirements for simulation of the OFT configuration were defined, and sensitivity analyses determined areas of potential data flow problems in DDPS operation. Based on the defined system requirements and these sensitivity analyses, a test design was developed for adapting, parameterizing, and executing IMSIM, using varying load and stress conditions for model execution. Analyses of the computer simulation runs are documented, including results, conclusions, and recommendations for DDPS improvements.
Herrero Babiloni, Alberto; Nixdorf, Donald R; Law, Alan S; Moana-Filho, Estephan J; Shueb, Sarah S; Nguyen, Ruby H; Durham, Justin
2017-01-01
To evaluate the accuracy of a questionnaire modified for the identification of intraoral pain with neuropathic characteristics in a clinical orofacial pain sample population. 136 participants with at least one of four orofacial pain diagnoses (temporomandibular disorders [TMD, n = 41], acute dental pain [ADP, n = 41], trigeminal neuralgia [TN, n = 19], persistent dentoalveolar pain disorder [PDAP, n = 14]) and a group of pain-free controls (n = 21) completed the modified S-LANSS, a previously adapted version of the original questionnaire devised to detect patients suffering from intraoral pain with neuropathic characteristics. Psychometric properties (sensitivity, specificity, positive predictive value [PPV], negative predictive value [NPV]) were calculated in two analyses with two different thresholds: (1) detection of pain with neuropathic characteristics: PDAP + TN were considered positive, and TMD + ADP + controls were considered negative per the gold standard (expert opinion); (2) detection of PDAP: PDAP was considered positive and TMD + ADP were considered negative per the gold standard. For both analyses, target values for adequate sensitivity and specificity were defined as ≥ 80%. For detection of orofacial pain with neuropathic characteristics (PDAP + TN), the modified S-LANSS at the most optimistic threshold showed a sensitivity of 52% (95% confidence interval [CI], 34-69), specificity of 70% (95% CI, 60-79), PPV of 35% (95% CI, 22-51), and NPV of 82% (95% CI, 72-89). For detection of PDAP only, the most optimistic threshold yielded a sensitivity of 64% (95% CI, 35-87), specificity of 63% (95% CI, 52-74), PPV of 23% (95% CI, 11-39) and NPV of 91% (95% CI, 81-97). Based on a priori defined criteria, the modified S-LANSS did not show adequate accuracy to detect intraoral pain with neuropathic characteristics in a clinical orofacial pain sample.
An economic evaluation of intravenous versus oral iron supplementation in people on haemodialysis.
Wong, Germaine; Howard, Kirsten; Hodson, Elisabeth; Irving, Michelle; Craig, Jonathan C
2013-02-01
Iron supplementation can be administered either intravenously or orally in patients with chronic kidney disease (CKD) and iron deficiency anaemia, but practice varies widely. The aim of this study was to estimate the health care costs and benefits of parenteral iron compared with oral iron in haemodialysis patients receiving erythropoiesis-stimulating agents (ESAs). Using a broad health care funder perspective, a probabilistic Markov model was constructed to compare the cost-effectiveness and cost-utility of parenteral iron therapy versus oral iron for the management of haemodialysis patients with relative iron deficiency. A series of one-way, multi-way and probabilistic sensitivity analyses were conducted to assess the robustness of the model structure and the extent to which the model's assumptions were sensitive to the uncertainties within the input variables. Compared with oral iron, the incremental cost-effectiveness ratios (ICERs) for parenteral iron were $74,760 per life year saved and $34,660 per quality-adjusted life year (QALY) gained. A series of one-way sensitivity analyses showed that the ICER is most sensitive to the probability of achieving haemoglobin (Hb) targets using supplemental iron, with a consequential decrease in the standard ESA doses, and to the relative increase in all-cause mortality associated with low Hb levels (Hb < 9.0 g/dL). If the willingness-to-pay threshold was set at $50,000/QALY, the proportion of simulations in which parenteral iron was cost-effective compared with oral iron was over 90%. Assuming that there is an overall increased mortality risk associated with very low Hb levels (< 9.0 g/dL), using parenteral iron to achieve an Hb target between 9.5 and 12 g/dL is cost-effective compared with oral iron therapy among haemodialysis patients with relative iron deficiency.
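The statement "over 90% of simulations were cost-effective at $50,000/QALY" comes from summarising probabilistic sensitivity analysis draws with a cost-effectiveness acceptability curve. A minimal sketch, using hypothetical PSA output rather than the study's model:

```python
import random

def ceac(draws, wtp):
    """Fraction of PSA draws in which the intervention is cost-effective
    at a willingness-to-pay: net monetary benefit = wtp*dQALY - dCost > 0."""
    return sum(1 for d_cost, d_qaly in draws if wtp * d_qaly - d_cost > 0) / len(draws)

# Hypothetical PSA output: (incremental cost, incremental QALYs) per draw
rng = random.Random(1)
draws = [(rng.gauss(17_000, 5_000), rng.gauss(0.5, 0.15)) for _ in range(10_000)]
probs = {wtp: ceac(draws, wtp) for wtp in (20_000, 50_000, 80_000)}
```

Evaluating the curve across willingness-to-pay thresholds shows how the probability of cost-effectiveness grows as the decision-maker values a QALY more highly.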
Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data.
Kroenke, Candyce H; Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J
2016-03-01
The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women's Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms-one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV-using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this "triangulation." Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. © The Author 2015. Published by Oxford University Press. All rights reserved. 
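The "triangulation" step (two algorithms, with medical-record review only where they disagree) can be sketched as follows. The toy cohort is illustrative, not the study's data; note that when both algorithms agree on a wrong call, triangulation cannot catch it.

```python
def triangulate(high_sens_flags, high_spec_flags, chart_review):
    """Combine a high-sensitivity and a high-specificity algorithm; send only
    disagreements to medical-record review (the costly gold standard)."""
    calls, reviewed = {}, 0
    for pid in high_sens_flags:
        a, b = high_sens_flags[pid], high_spec_flags[pid]
        if a == b:
            calls[pid] = a              # algorithms agree: accept their call
        else:
            calls[pid] = chart_review[pid]
            reviewed += 1               # disagree: resolve by chart review
    return calls, reviewed

# Toy cohort: algorithm A over-calls recurrences, B under-calls them
truth = {1: True, 2: False, 3: True, 4: False, 5: False}
alg_a = {1: True, 2: True, 3: True, 4: False, 5: True}
alg_b = {1: True, 2: False, 3: False, 4: False, 5: False}
calls, n_reviewed = triangulate(alg_a, alg_b, truth)
```

Only the 3 of 5 discordant patients require record review here, mirroring how the study improved classification while reviewing about 10.6% of records.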
Monte Carlo simulations for generic granite repository studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Shaoping; Lee, Joon H; Wang, Yifeng
In a collaborative study between Los Alamos National Laboratory (LANL) and Sandia National Laboratories (SNL) for the DOE-NE Office of Fuel Cycle Technologies Used Fuel Disposition (UFD) Campaign project, we have conducted preliminary system-level analyses to support the development of a long-term strategy for geologic disposal of high-level radioactive waste. A general modeling framework consisting of a near-field and a far-field submodel for a granite GDSE was developed. A representative far-field transport model for a generic granite repository was merged with an integrated systems (GoldSim) near-field model. Integrated Monte Carlo model runs with the combined near- and far-field transport models were performed, and the parameter sensitivities were evaluated for the combined system. In addition, a subset of radionuclides that are potentially important to repository performance was identified and evaluated for a series of model runs. The analyses were conducted with different waste inventory scenarios. Analyses were also conducted for different repository radionuclide release scenarios. While the results to date are for a generic granite repository, the work establishes the method to be used in the future to provide guidance on the development of a strategy for long-term disposal of high-level radioactive waste in a granite repository.
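Monte Carlo parameter-sensitivity screening of this kind can be illustrated with a toy surrogate: sample the uncertain inputs, propagate each sample through the model, and rank inputs by rank (Spearman-type) correlation with the output. Everything below (the surrogate dose model, parameter names, and ranges) is an invented stand-in for illustration, not the LANL/SNL GoldSim implementation.

```python
import random

def toy_dose_model(solubility, kd, flow):
    # Hypothetical surrogate for near-/far-field transport: output rises with
    # solubility and groundwater flow, falls with sorption coefficient kd.
    return solubility * flow / (1.0 + kd)

def rank(xs):
    """Return 0-based ranks of xs (no tie handling; fine for continuous draws)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(1)
params = ("solubility", "kd", "flow")
samples = [{p: random.uniform(0.1, 1.0) for p in params} for _ in range(500)]
doses = [toy_dose_model(**s) for s in samples]
# Rank correlation of each input with the output: a crude sensitivity index.
sens = {p: pearson(rank([s[p] for s in samples]), rank(doses)) for p in params}
```

The sign and magnitude of each entry in `sens` indicate the direction and relative strength of that parameter's influence on the output.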
Sreeja, S; Krishnan Nair, C K
2018-02-15
To evaluate the therapeutic efficacy of hypoxic cell-sensitizer Sanazole (SAN)-directed targeting of the cytotoxic drug Berberine (BBN) and iron-oxide nanoparticle (NP) complexes to solid tumor in Swiss albino mice. NP-BBN-SAN complexes were characterized by FTIR, XRD, TEM and a nano-size analyzer. The complex was orally administered to mice bearing a solid tumor in the hind limb. Tumor regression was analysed by measuring tumor volume. Cellular DNA damage was assessed by comet assay. Transcriptional expression of genes related to tumor hypoxia and apoptosis was evaluated by quantitative real-time PCR, and morphological changes in tissues were analysed by histopathology. Levels of antioxidants and tumor markers in tissues, and serum biochemical parameters, were also analysed. Administration of NP-BBN-SAN complexes reduced tumor volume, and studies focussed on the underlying mechanisms. Extensive damage to cellular DNA; down-regulated transcription of hif-1α, vegf, akt and bcl2; and up-regulated expression of bax and caspases were observed in tumor tissue. Results on tumor markers, antioxidant status and serum parameters corroborated the molecular findings. Histopathology of tumor, liver and kidney revealed the therapeutic specificity of NP-BBN-SAN. Thus SAN and NP can be used for specific targeting of drugs to hypoxic solid tumors to improve therapeutic efficacy. Copyright © 2017. Published by Elsevier Inc.
Lateral flow urine lipoarabinomannan assay for detecting active tuberculosis in HIV-positive adults.
Shah, Maunank; Hanrahan, Colleen; Wang, Zhuo Yu; Dendukuri, Nandini; Lawn, Stephen D; Denkinger, Claudia M; Steingart, Karen R
2016-05-10
Rapid detection of tuberculosis (TB) among people living with human immunodeficiency virus (HIV) is a global health priority. HIV-associated TB may have different clinical presentations and is challenging to diagnose. Conventional sputum tests have reduced sensitivity in HIV-positive individuals, who have higher rates of extrapulmonary TB compared with HIV-negative individuals. The lateral flow urine lipoarabinomannan assay (LF-LAM) is a new, commercially available point-of-care test that detects lipoarabinomannan (LAM), a lipopolysaccharide present in mycobacterial cell walls, in people with active TB disease. To assess the accuracy of LF-LAM for the diagnosis of active TB disease in HIV-positive adults who have signs and symptoms suggestive of TB (TB diagnosis). To assess the accuracy of LF-LAM as a screening test for active TB disease in HIV-positive adults irrespective of signs and symptoms suggestive of TB (TB screening). We searched the following databases without language restriction on 5 February 2015: the Cochrane Infectious Diseases Group Specialized Register; MEDLINE (PubMed, from 1966); EMBASE (OVID, from 1980); Science Citation Index Expanded (SCI-EXPANDED, from 1900), Conference Proceedings Citation Index-Science (CPCI-S, from 1900), and BIOSIS Previews (from 1926) (all three using the Web of Science platform); MEDION; LILACS (BIREME, from 1982); SCOPUS (from 1995); the metaRegister of Controlled Trials (mRCT); the search portal of the World Health Organization International Clinical Trials Registry Platform (WHO ICTRP); and ProQuest Dissertations & Theses A&I (from 1861). Eligible study types included randomized controlled trials, cross-sectional studies, and cohort studies that determined LF-LAM accuracy for TB against a microbiological reference standard (culture or nucleic acid amplification test from any body site).
A higher quality reference standard was one in which two or more specimen types were evaluated for TB, and a lower quality reference standard was one in which only one specimen type was evaluated for TB. Participants were HIV-positive people aged 15 years and older. Two review authors independently extracted data from each included study using a standardized form. We appraised the quality of studies using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. We evaluated the test at two different cut-offs (grade 1 or 2, based on the reference card scale of five intensity bands). Most analyses used grade 2, the manufacturer's currently recommended cut-off for positivity. We carried out meta-analyses to estimate pooled sensitivity and specificity using a bivariate random-effects model and estimated the models using a Bayesian approach. We determined accuracy of LF-LAM combined with sputum microscopy or Xpert® MTB/RIF. In addition, we explored the influence of CD4 count on the accuracy estimates. We assessed the quality of the evidence using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach. We included 12 studies: six studies evaluated LF-LAM for TB diagnosis and six studies evaluated the test for TB screening. All studies were cross-sectional or cohort studies. Studies for TB diagnosis were largely conducted among inpatients (median CD4 range 71 to 210 cells per µL) and studies for TB screening were largely conducted among outpatients (median CD4 range 127 to 437 cells per µL). All studies were conducted in low- or middle-income countries. Only two studies for TB diagnosis (33%) and one study for TB screening (17%) used a higher quality reference standard. LF-LAM for TB diagnosis (grade 2 cut-off): meta-analyses showed median pooled sensitivity and specificity (95% credible interval (CrI)) of 45% (29% to 63%) and 92% (80% to 97%) (five studies, 2313 participants, 35% with TB, low quality evidence).
The pooled sensitivity of a combination of LF-LAM and sputum microscopy (either test positive) was 59% (47% to 70%), which represented a 19% (4% to 36%) increase over sputum microscopy alone, while the pooled specificity was 92% (73% to 97%), which represented a 6% (1% to 24%) decrease from sputum microscopy alone (four studies, 1876 participants, 38% with TB). The pooled sensitivity of a combination of LF-LAM and sputum Xpert® MTB/RIF (either test positive) was 75% (61% to 87%) and represented a 13% (1% to 37%) increase over Xpert® MTB/RIF alone. The pooled specificity was 93% (81% to 97%) and represented a 4% (1% to 16%) decrease from Xpert® MTB/RIF alone (three studies, 909 participants, 36% with TB). Pooled sensitivity and specificity of LF-LAM were 56% (41% to 70%) and 90% (81% to 95%) in participants with a CD4 count of less than or equal to 100 cells per µL (five studies, 859 participants, 47% with TB) versus 26% (16% to 46%) and 92% (78% to 97%) in participants with a CD4 count greater than 100 cells per µL (five studies, 1410 participants, 30% with TB). LF-LAM for TB screening (grade 2 cut-off): for individual studies, sensitivity estimates (95% CrI) were 44% (30% to 58%), 28% (16% to 42%), and 0% (0% to 71%) and corresponding specificity estimates were 95% (92% to 97%), 94% (90% to 97%), and 95% (92% to 97%) (three studies, 1055 participants, 11% with TB, very low quality evidence). There were limited data for additional analyses. The main limitations of the review were the use of a lower quality reference standard in most included studies, and the small number of studies and participants included in the analyses. The results should, therefore, be interpreted with caution. We found that LF-LAM has low sensitivity to detect TB in adults living with HIV whether the test is used for diagnosis or screening.
For TB diagnosis, the combination of LF-LAM with sputum microscopy suggests an increase in sensitivity for TB compared to either test alone, but with a decrease in specificity. In HIV-positive individuals with low CD4 counts who are seriously ill, LF-LAM may help with the diagnosis of TB.
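The "either test positive" rule trades specificity for sensitivity in a predictable way. Under an assumed conditional independence of the two tests given TB status, the algebra is simple; the function below is an illustrative approximation only, and the second test's inputs (sensitivity 0.40, specificity 0.98 for smear, back-calculated from the reported increments) are assumptions. The review's observed combined sensitivity (59%) is lower than the ~67% independence would predict, which indicates the two tests' errors are correlated.

```python
def either_positive(sens1, spec1, sens2, spec2):
    """Accuracy of an 'either test positive' rule, assuming the two tests
    are conditionally independent given disease status (an approximation:
    real tests are often correlated, making the true combined sensitivity lower)."""
    sens = 1 - (1 - sens1) * (1 - sens2)  # a case is missed only if both tests miss it
    spec = spec1 * spec2                  # a non-case is negative only if both are negative
    return sens, spec

# LF-LAM (45%/92%, from the review) combined with assumed smear values (40%/98%).
sens, spec = either_positive(0.45, 0.92, 0.40, 0.98)
```

Note how specificity can only fall under this rule (it is a product of two numbers below 1), while sensitivity can only rise.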
The local lymph node assay and the assessment of relative potency: status of validation.
Basketter, David A; Gerberick, Frank; Kimber, Ian
2007-08-01
For the prediction of skin sensitization potential, the local lymph node assay (LLNA) is a fully validated alternative to guinea-pig tests. More recently, information from LLNA dose-response analyses has been used to assess the relative potency of skin sensitizing chemicals. These data are then deployed for risk assessment and risk management. In this commentary, the utility and validity of these relative potency measurements are reviewed. It is concluded that the LLNA does provide a valuable assessment of relative sensitizing potency in the form of the estimated concentration of a chemical required to produce a threefold stimulation of draining lymph node cell proliferation compared with concurrent controls (EC3 value) and that all reasonable validation requirements have been addressed successfully. EC3 measurements are reproducible in both intra- and interlaboratory evaluations and are stable over time. It has been shown also, by several independent groups, that EC3 values correlate closely with data on relative human skin sensitization potency. Consequently, the recommendation made here is that LLNA EC3 measurements should now be regarded as a validated method for the determination of the relative potency of skin sensitizing chemicals, a conclusion that has already been reached by a number of independent expert groups.
Gounoue-Kamkumo, Raceline; Nana-Djeunga, Hugues C; Bopda, Jean; Akame, Julie; Tarini, Ann; Kamgno, Joseph
2015-12-23
Diagnostic tools for lymphatic filariasis (LF) elimination programs are useful in mapping the distribution of the disease, delineating areas where mass drug administrations (MDA) are required, and determining when to stop MDA. The prevalence and burden of LF have been drastically reduced following mass treatments, and the evaluation of the performance of circulating filarial antigen (CFA)-based assays was acknowledged to be of high interest in areas with low residual LF endemicity rates after multiple rounds of MDA. The objective of this study was therefore to evaluate the sensitivity of the immunochromatographic test (ICT) in low endemicity settings and, specifically, in individuals with low intensity of lymphatic filariasis infection. To perform this study, calibrated thick blood smears, ICT and Og4C3 enzyme-linked immunosorbent assay (ELISA) were carried out at night to identify Wuchereria bancrofti microfilarial and circulating filarial antigen carriers. A threshold determination assay for ICT and ELISA was performed using serial plasma dilutions from individuals with positive microfilarial counts. All individuals harbouring microfilariae (positive blood films) were detected by ICT and ELISA, but among individuals positive by ELISA, only 35.7 % were detected using ICT (Chi square: 4.57; p-value = 0.03), indicating a moderate agreement between the two tests (kappa statistic = 0.49). Threshold determination analyses showed that ELISA was still positive at the last plasma dilution with a negative ICT result. These findings suggest a loss of sensitivity for ICT in low endemicity settings, especially in people exhibiting low levels of circulating filarial antigen, raising serious concern regarding the monitoring and evaluation procedures in the framework of the LF elimination program.
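The "moderate agreement" figure quoted above (kappa = 0.49) is Cohen's kappa, which corrects raw percent agreement between two tests for the agreement expected by chance alone. A minimal sketch on hypothetical paired results (not the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for agreement between two binary tests on the same subjects.
    a and b are equal-length sequences of 0/1 (or False/True) results."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    pa = sum(a) / n                             # proportion positive by test A
    pb = sum(b) / n                             # proportion positive by test B
    pe = pa * pb + (1 - pa) * (1 - pb)          # chance agreement
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement, 0 when agreement is no better than chance, and values around 0.4 to 0.6 are conventionally read as "moderate".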
2015-01-01
The rapidly expanding availability of high-resolution mass spectrometry has substantially enhanced ion-current-based relative quantification techniques. Despite the increasing interest in ion-current-based methods, quantitative sensitivity, accuracy, and false discovery rate remain major concerns; consequently, comprehensive evaluation and development in these regards are urgently needed. Here we describe an integrated, new procedure for data normalization and protein ratio estimation, termed ICan, for improved ion-current-based analysis of data generated by high-resolution mass spectrometry (MS). ICan achieved significantly better accuracy and precision, and a lower false-positive rate for discovering altered proteins, than current popular pipelines. A spike-in experiment was used to evaluate the performance of ICan in detecting small changes. In this study E. coli extracts were spiked with moderate-abundance proteins from human plasma (MAP, enriched by an IgY14-SuperMix procedure) at two different levels to set a small change of 1.5-fold. Forty-five (92%, with an average ratio of 1.71 ± 0.13) of 49 identified MAP proteins (i.e., the true positives) and none of the reference proteins (1.0-fold) were determined as significantly altered proteins, with cutoff thresholds of ≥1.3-fold change and p ≤ 0.05. This is the first study to evaluate and prove competitive performance of the ion-current-based approach for assigning significance to proteins with small changes. By comparison, other methods showed remarkably inferior performance. ICan can be broadly applied to reliable and sensitive proteomic surveys of multiple biological samples with the use of high-resolution MS. Moreover, many key features evaluated and optimized here, such as normalization, protein ratio determination, and statistical analyses, are also valuable for data analysis by isotope-labeling methods. PMID:25285707
Eze, Ikenna C.; Hemkens, Lars G.; Bucher, Heiner C.; Hoffmann, Barbara; Schindler, Christian; Künzli, Nino; Schikowski, Tamara
2015-01-01
Background Air pollution is hypothesized to be a risk factor for diabetes. Epidemiological evidence is inconsistent and has not been systematically evaluated. Objectives We systematically reviewed epidemiological evidence on the association between air pollution and diabetes, and synthesized results of studies on type 2 diabetes mellitus (T2DM). Methods We systematically searched electronic literature databases (last search, 29 April 2014) for studies reporting the association between air pollution (particle concentration or traffic exposure) and diabetes (type 1, type 2, or gestational). We systematically evaluated risk of bias and role of potential confounders in all studies. We synthesized reported associations with T2DM in meta-analyses using random-effects models and conducted various sensitivity analyses. Results We included 13 studies (8 on T2DM, 2 on type 1, 3 on gestational diabetes), all conducted in Europe or North America. Five studies were longitudinal, 5 cross-sectional, 2 case–control, and 1 ecologic. Risk of bias, air pollution assessment, and confounder control varied across studies. Dose–response effects were not reported. Meta-analyses of 3 studies on PM2.5 (particulate matter ≤ 2.5 μm in diameter) and 4 studies on NO2 (nitrogen dioxide) showed increased risk of T2DM by 8–10% per 10-μg/m3 increase in exposure [PM2.5: 1.10 (95% CI: 1.02, 1.18); NO2: 1.08 (95% CI: 1.00, 1.17)]. Associations were stronger in females. Sensitivity analyses showed similar results. Conclusion Existing evidence indicates a positive association of air pollution and T2DM risk, albeit with a high risk of bias. High-quality studies assessing dose–response effects are needed. Research should be expanded to developing countries where outdoor and indoor air pollution are high. Citation Eze IC, Hemkens LG, Bucher HC, Hoffmann B, Schindler C, Künzli N, Schikowski T, Probst-Hensch NM. 2015.
Association between ambient air pollution and diabetes mellitus in Europe and North America: systematic review and meta-analysis. Environ Health Perspect 123:381–389; http://dx.doi.org/10.1289/ehp.1307823 PMID:25625876
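Random-effects pooling of the kind used above is commonly done with the DerSimonian-Laird estimator: study-level log relative risks are weighted by inverse variance, inflated by an estimate of between-study heterogeneity (tau²). The sketch below uses invented study estimates, not the review's data.

```python
import math

def dersimonian_laird(log_rr, se):
    """Pool study log relative risks with a DerSimonian-Laird random-effects model.
    Returns the pooled RR and its 95% confidence interval on the RR scale."""
    w = [1 / s**2 for s in se]                       # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    df = len(log_rr) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1 / (s**2 + tau2) for s in se]         # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_rr)) / sum(w_star)
    se_pooled = (1 / sum(w_star)) ** 0.5
    lo = math.exp(pooled - 1.96 * se_pooled)
    hi = math.exp(pooled + 1.96 * se_pooled)
    return math.exp(pooled), lo, hi
```

When Q falls below its degrees of freedom, tau² truncates to zero and the result collapses to the fixed-effect estimate, which is why the `max(0.0, ...)` guard is there.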
2017-01-01
Objective To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Methods Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between the dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. Results VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than in the healthy group. UES minimal pressure was higher in the dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. Conclusion We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia. PMID:29201816
Park, Chul-Hyun; Kim, Don-Kyu; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Kim, Kunwoo; Park, Jung Ho; Yoon, Kyung Jae
2017-10-01
To compare swallowing function between healthy subjects and patients with pharyngeal dysphagia using high resolution manometry (HRM) and to evaluate the usefulness of HRM for detecting pharyngeal dysphagia. Seventy-five patients with dysphagia and 28 healthy subjects were included in this study. Diagnosis of dysphagia was confirmed by videofluoroscopy. HRM was performed to measure pressure and timing information at the velopharynx (VP), tongue base (TB), and upper esophageal sphincter (UES). HRM parameters were compared between the dysphagia and healthy groups. Optimal threshold values of significant HRM parameters for dysphagia were determined. VP maximal pressure, TB maximal pressure, UES relaxation duration, and UES resting pressure were lower in the dysphagia group than in the healthy group. UES minimal pressure was higher in the dysphagia group than in the healthy group. Receiver operating characteristic (ROC) analyses were conducted to validate optimal threshold values for significant HRM parameters to identify patients with pharyngeal dysphagia. With maximal VP pressure at a threshold value of 144.0 mmHg, dysphagia was identified with 96.4% sensitivity and 74.7% specificity. With maximal TB pressure at a threshold value of 158.0 mmHg, dysphagia was identified with 96.4% sensitivity and 77.3% specificity. At a threshold value of 2.0 mmHg for UES minimal pressure, dysphagia was diagnosed at 74.7% sensitivity and 60.7% specificity. Lastly, UES relaxation duration of <0.58 seconds had 85.7% sensitivity and 65.3% specificity, and UES resting pressure of <75.0 mmHg had 89.3% sensitivity and 90.7% specificity for identifying dysphagia. We present evidence that HRM could be a useful evaluation tool for detecting pharyngeal dysphagia.
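Optimal-threshold determination of the kind described above is typically a sweep over candidate cut-offs, scoring each by sensitivity and specificity (for example via Youden's J = sensitivity + specificity - 1). A minimal sketch, assuming lower measured values indicate dysphagia (as for maximal VP pressure here); the data below are invented, not the study's:

```python
def sens_spec_at(values, labels, threshold):
    """Sensitivity and specificity when values BELOW the threshold are called
    positive (matching parameters that are lowered in dysphagia)."""
    tp = sum(v < threshold and y for v, y in zip(values, labels))
    fn = sum(v >= threshold and y for v, y in zip(values, labels))
    tn = sum(v >= threshold and not y for v, y in zip(values, labels))
    fp = sum(v < threshold and not y for v, y in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

def best_threshold(values, labels):
    """Sweep the observed values as candidate cut-offs; keep the one
    maximising Youden's J = sensitivity + specificity - 1."""
    best, best_j = None, -1.0
    for t in sorted(set(values)):
        se, sp = sens_spec_at(values, labels, t)
        j = se + sp - 1
        if j > best_j:
            best, best_j = t, j
    return best, best_j
```

Sweeping only observed values suffices because sensitivity and specificity are step functions that change only at data points.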
Cighetti, Giuliana; Bamonti, Fabrizia; Aman, Caroline S; Gregori, Dario; De Giuseppe, Rachele; Novembrino, Cristina; de Liso, Federica; Maiavacca, Rita; Paroni, Rita
2015-01-01
To test the performance of different analytical approaches in highlighting the occurrence of deregulated redox status in various physio-pathological situations. 35 light and 61 heavy smokers, 19 chronic renal failure patients, 59 kidney transplant patients, and 87 healthy controls were retrospectively considered for the study. Serum oxidative stress and antioxidant status, assessed by the spectrophotometric Reactive Oxygen Metabolites (d-ROMs) and Total Antioxidant Capacity (TAC) tests, respectively, were compared with plasma free (F-MDA) and total (T-MDA) malondialdehyde, both quantified by isotope-dilution gas chromatography-mass spectrometry (ID-GC-MS). Sensitivity, specificity and cut-off points of T-MDA, F-MDA, d-ROMs and TAC were evaluated by Receiver Operating Characteristic (ROC) analyses and the area under the ROC curve (AUC). Only the T-MDA assay showed a clear absence of oxidative stress in controls and a significant increase in all patients (AUC 1.00, sensitivity and specificity 100%). Accuracy was good for d-ROMs (AUC 0.87, sensitivity 72.8%, specificity 100%) and F-MDA (AUC 0.82, sensitivity 74.7%, specificity 83.9%), but not high enough for TAC to reveal impaired antioxidant defense in patients (AUC 0.66, sensitivity 52.0%, specificity 92.9%). This study reveals T-MDA as the best marker to detect oxidative stress, shows the ability of d-ROMs to identify modified oxidative status, particularly when damage is extensive, and demonstrates the poor performance of TAC. The d-ROMs and TAC assays could be useful for routine purposes; however, for accurate clinical data evaluation, their comparison against a "gold standard" method is required. Copyright © 2014 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Patterson, Dorothy; Begley, Ann; Nolan, Ann
2016-03-01
Facilitating emotional intelligence and insight in midwifery can be challenging, and the purpose of this paper is to illustrate how this can be nurtured through the use of poetry, in particular Seamus Heaney's poem Elegy for a Still Born Child. Students' ability to gain insight into the experience of bereaved parents and to achieve an emotional grasp of the situation through vicarious experience was evaluated. Qualitative data from the evaluations were content-analysed and significant themes emerged. Students' comments clearly support the suggestion that use of this poem has enhanced emotional intelligence. The data also indicate that vicarious experience gained through reading this poem has helped to nurture sensitivity and professional insight into the impact of stillbirth on a father. Copyright © 2016 Elsevier Ltd. All rights reserved.
Solid state SPS microwave generation and transmission study. Volume 2, phase 2: Appendices
NASA Technical Reports Server (NTRS)
Maynard, O. E.
1980-01-01
The solid state sandwich concept for SPS was further defined. The design effort concentrated on the spacetenna, but included some system analysis for parametric comparison purposes. Basic solid state microwave devices were defined and modeled. An initial conceptual subsystem and system design was performed, as well as sidelobe control and system selection. The selected system concept and parametric solid state microwave power transmission system data were assessed relevant to the SPS concept. Although device efficiency was not a goal, the sensitivity of the design to this efficiency was treated parametrically. Sidelobe control consisted of various single-step tapers, multistep tapers and Gaussian tapers. A hybrid concept using tubes and solid state was evaluated. Thermal analyses are included, with emphasis on sensitivities to waste heat radiator form factor, emissivity, absorptivity, amplifier efficiency, material and junction temperature.
von Kobyletzki, Laura B.; Janson, Staffan; Hasselgren, Mikael; Bornehag, Carl-Gustaf; Svensson, Åke
2012-01-01
Aim. To develop and validate a questionnaire for detecting atopic dermatitis in infants and small children from the age of 2 months. Methods. Parents to 60 children answered a written questionnaire prior to a physical examination and individual semistructured interview. Qualitative and quantitative analyses of validity, sensitivity, specificity, and predictive values of the questionnaire were performed. Results. A total of 27 girls and 33 boys, aged 2 to 71 months, 35 with and 25 without physician-diagnosed eczema, participated. Validation of the questionnaire by comparisons with physicians' diagnoses showed a sensitivity of 0.91 (95% CI 0.77–0.98) and a specificity of 1 (95% CI 0.86–1). Conclusions. Three questions in a parental questionnaire were sufficient for diagnosing eczema in infants and small children. PMID:22500189
Low-hazard metallography of moisture-sensitive electrochemical cells.
Wesolowski, D E; Rodriguez, M A; McKenzie, B B; Papenguth, H W
2011-08-01
A low-hazard approach is presented to prepare metallographic cross-sections of moisture-sensitive battery components. The approach is tailored for evaluation of thermal (molten salt) batteries composed of thin pressed-powder pellets, but has general applicability to other battery electrochemistries. Solution-cast polystyrene is used to encapsulate cells before embedding in epoxy. Nonaqueous grinding and polishing are performed in an industrial dry room to increase throughput. Lapping oil is used as a lubricant throughout grinding. Hexane is used as the solvent throughout processing; occupational exposure levels are well below the limits. Light optical and scanning electron microscopy on cross-sections are used to analyse a thermal battery cell. Spatially resolved X-ray diffraction on oblique angle cut cells complement the metallographic analysis. Published 2011. This article is a US Government work and is in the public domain in the USA.
He, Ye; Lin, Huazhen; Tu, Dongsheng
2018-06-04
In this paper, we introduce a single-index threshold Cox proportional hazard model to select and combine biomarkers to identify patients who may be sensitive to a specific treatment. A penalized smoothed partial likelihood is proposed to estimate the parameters in the model. A simple, efficient, and unified algorithm is presented to maximize this likelihood function. The estimators based on this likelihood function are shown to be consistent and asymptotically normal. Under mild conditions, the proposed estimators also achieve the oracle property. The proposed approach is evaluated through simulation analyses and application to the analysis of data from two clinical trials, one involving patients with locally advanced or metastatic pancreatic cancer and one involving patients with resectable lung cancer. Copyright © 2018 John Wiley & Sons, Ltd.
de Groot, Hans; Patiwael, Jiska A; de Jong, Nicolette W; Burdorf, Alex; van Wijk, Roy Gerth
2013-01-01
To compare the prevalence of sensitization and allergy to natural rubber latex amongst Erasmus Medical Centre (MC) operating theatre employees before and 10 years after the introduction of powder-free latex gloves. Descriptive study. Employees working permanently in the operating theatre were evaluated in 1998 (n = 163) and in 2009 (n = 178) for sensitization and allergies to natural latex by means of questionnaires, serological analyses and skin testing. The prevalence of sensitization and allergies within these 2 groups was then established and compared. The two groups were comparable in terms of gender, smoking habits, job classification, work-related symptoms and the number of individuals who had atopy. In 2009, the prevalence of sensitization to latex was statistically significantly lower than in 1998 (4.5 vs. 14.1%). Allergy to latex was also established significantly less often in 2009 than in 1998 (2.8 vs. 9.8%). The same trend was observed in the subgroup that participated in both years (n = 49). Individuals with an atopic constitution had a statistically significant higher risk of developing hypersensitivity to natural latex; the risk of developing an allergy to latex was also higher, but not significantly so. After the study in 1998, the introduction of sterile, powder-free latex gloves very likely led to a decline in the prevalence of sensitization and allergy to natural latex by 2009.
Psychobiological Influences on Maternal Sensitivity in the Context of Adversity
Finegood, Eric D.; Blair, Clancy; Granger, Douglas A.; Hibel, Leah C.; Mills-Koonce, Roger
2016-01-01
This study evaluated prospective longitudinal relations among an index of poverty-related cumulative risk, maternal salivary cortisol, child negative affect, and maternal sensitivity across the first two postpartum years. Participants included 1,180 biological mothers residing in rural and predominantly low-income communities in the US. Multilevel growth curve analyses indicated that the index of cumulative risk was positively associated with maternal cortisol across the postpartum period (study visits occurring at approximately 7, 15, and 24 months postpartum) over and above effects for African American ethnicity, time of day of saliva collection, age, parity status, having given birth to another child, contraceptive use, tobacco smoking, body mass index, and breastfeeding. Consistent with a psychobiological theory of mothering, maternal salivary cortisol was negatively associated with maternal sensitivity observed during parent-child interactions across the first two postpartum years, over and above effects for poverty-related cumulative risk, child negative affect, and a large number of covariates associated with cortisol and maternal sensitivity. Child negative affect expressed during parent-child interactions was negatively associated with observed maternal sensitivity at late (24 months) but not early (7 months) time points of observation. Cumulative risk was negatively associated with maternal sensitivity across the postpartum period, and this effect strengthened over time. Results advance our understanding of the dynamic, transactional, and psychobiological influences on parental caregiving behaviors across the first two postpartum years. PMID:27337514
Ciok-Pater, Emilia; Mikucka, Agnieszka; Gospodarek, Eugenia
2005-01-01
Lipophilic species of Corynebacterium are an increasing problem in hospital infections. The aim of this study was to evaluate the occurrence of these microorganisms in materials taken from patients on the day of admission and during hospitalization, as well as to compare their antibiotic sensitivity. The investigation included 65 strains isolated from hospitalized patients and 48 strains isolated from unchanged skin. Using the Api Coryne test, 5 species were identified. C. urealyticum dominated; the others were C. subsp. lipophilum and C. jeikeium. Among strains isolated from unchanged skin, C. jeikeium and C. accolens occurred most often. All strains were sensitive to glycopeptides and quinupristin/dalfopristin. The strains isolated from hospitalized patients were usually sensitive to fusidic acid, doxycycline and tetracycline. Strains isolated from unchanged skin were sensitive to almost all tested antibiotics. In the group of 65 strains isolated from hospitalized patients, 99.0% were multiresistant. In the group of strains isolated from unchanged skin, only two strains were multiresistant. Differences in antibiotic sensitivity among the analysed Corynebacterium sp. were confirmed. The majority of the "hospital strains" were characterized by multiresistance. Based on these results, it is possible to suppose that multiresistance is the main factor favouring lipophilic Corynebacterium species in the colonization of mucous membranes and skin, as well as in developing infections.
Abscisic acid (ABA) sensitivity regulates desiccation tolerance in germinated Arabidopsis seeds.
Maia, Julio; Dekkers, Bas J W; Dolle, Miranda J; Ligterink, Wilco; Hilhorst, Henk W M
2014-07-01
During germination, orthodox seeds lose their desiccation tolerance (DT) and become sensitive to extreme drying. Yet, DT can be rescued, in a well-defined developmental window, by the application of a mild osmotic stress before dehydration. A role for abscisic acid (ABA) has been implicated in this stress response and in DT re-establishment. However, the path from the sensing of an osmotic cue and its signaling to DT re-establishment is still largely unknown. Analyses of DT, ABA sensitivity, ABA content and gene expression were performed in desiccation-sensitive (DS) and desiccation-tolerant Arabidopsis thaliana seeds. Furthermore, loss and re-establishment of DT in germinated Arabidopsis seeds was studied in ABA-deficient and ABA-insensitive mutants. We demonstrate that the developmental window in which DT can be re-established correlates strongly with the window in which ABA sensitivity is still present. Using ABA biosynthesis and signaling mutants, we show that this hormone plays a key role in DT re-establishment. Surprisingly, re-establishment of DT depends on the modulation of ABA sensitivity rather than enhanced ABA content. In addition, the evaluation of several ABA-insensitive mutants, which can still produce normal desiccation-tolerant seeds, but are impaired in the re-establishment of DT, shows that the acquisition of DT during seed development is genetically different from its re-establishment during germination. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.
Hofer, Florian; Achelrod, Dmitrij; Stargardt, Tom
2016-12-01
Chronic obstructive pulmonary disease (COPD) poses major challenges for health care systems. Previous studies suggest that telemonitoring could be effective in preventing hospitalisations and hence reduce costs. The aim was to evaluate whether telemonitoring interventions for COPD are cost-effective from the perspective of German statutory sickness funds. A cost-utility analysis was conducted using a combination of a Markov model and a decision tree. Telemonitoring as add-on to standard treatment was compared with standard treatment alone. The model consisted of four transition stages to account for COPD severity, and a terminal stage for death. Within each cycle, the frequency of exacerbations as well as outcomes for 2015 costs and quality adjusted life years (QALYs) for each stage were calculated. Values for input parameters were taken from the literature. Deterministic and probabilistic sensitivity analyses were conducted. In the base case, telemonitoring led to an increase in incremental costs (€866 per patient) but also in incremental QALYs (0.05 per patient). The incremental cost-effectiveness ratio (ICER) was thus €17,410 per QALY gained. A deterministic sensitivity analysis showed that hospitalisation rate and costs for telemonitoring equipment greatly affected results. The probabilistic ICER averaged €34,432 per QALY (95 % confidence interval 12,161-56,703). We provide evidence that telemonitoring may be cost-effective in Germany from a payer's point of view. This holds even after deterministic and probabilistic sensitivity analyses.
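The base-case figures above follow the standard ICER identity, ICER = incremental cost / incremental QALYs. A minimal sketch, noting that the €866 and 0.05 QALY inputs are the rounded values quoted in the abstract, so the result only approximates the published €17,410/QALY:

```python
# Incremental cost-effectiveness ratio: difference in costs divided by
# difference in effects (QALYs) between the compared strategies.
def icer(incremental_cost: float, incremental_qalys: float) -> float:
    return incremental_cost / incremental_qalys

# Rounded base-case inputs from the abstract (telemonitoring vs standard care)
print(round(icer(866, 0.05)))  # 17320 -- close to the reported €17,410/QALY
```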
Shen, Nicole T; Leff, Jared A; Schneider, Yecheskel; Crawford, Carl V; Maw, Anna; Bosworth, Brian; Simon, Matthew S
2017-01-01
Systematic reviews with meta-analyses and meta-regression suggest that timely probiotic use can prevent Clostridium difficile infection (CDI) in hospitalized adults receiving antibiotics, but the cost effectiveness is unknown. We sought to evaluate the cost effectiveness of probiotic use for prevention of CDI versus no probiotic use in the United States. We programmed a decision analytic model using published literature and national databases with a 1-year time horizon. The base case was modeled as a hypothetical cohort of hospitalized adults (mean age 68) receiving antibiotics with and without concurrent probiotic administration. Projected outcomes included quality-adjusted life-years (QALYs), costs (2013 US dollars), incremental cost-effectiveness ratios (ICERs; $/QALY), and cost per infection avoided. One-way, two-way, and probabilistic sensitivity analyses were conducted, and scenarios of different age cohorts were considered. ICERs less than $100,000 per QALY were considered cost effective. Probiotic use dominated (more effective and less costly) no probiotic use. Results were sensitive to probiotic efficacy (relative risk <0.73), the baseline risk of CDI (>1.6%), the risk of probiotic-associated bacteremia/fungemia (<0.26%), probiotic cost (<$130), and age (>65). In probabilistic sensitivity analysis, at a willingness-to-pay threshold of $100,000/QALY, probiotics were the optimal strategy in 69.4% of simulations. Our findings suggest that probiotic use may be a cost-effective strategy to prevent CDI in hospitalized adults receiving antibiotics aged 65 or older, or when the baseline risk of CDI exceeds 1.6%.
NASA Astrophysics Data System (ADS)
Borowiak, Klaudia; Zbierska, Janina; Budka, Anna; Kayzer, Dariusz
2014-06-01
Three plant species were assessed in this study: ozone-sensitive and -resistant tobacco, ozone-sensitive petunia, and bean. Plants were exposed to ambient air for several weeks at two sites differing in tropospheric ozone concentration during the growing season of 2009. Chlorophyll contents were analysed every week. Cumulative ozone effects on chlorophyll content in relation to other meteorological parameters were evaluated using principal component analysis, while relations between particular days of measurement were analysed using multivariate analysis of variance. Results revealed variability in the responses of the plant species, although some similarities were noted. Positive relations of all chlorophyll forms to cumulative ozone concentration (AOT40) were found for all the plant species examined. The chlorophyll b/a ratio showed an inverse relation to ozone concentration only in the ozone-resistant tobacco cultivar. In all the plant species the highest average chlorophyll content was noted after the 7th day of the experiment; afterwards, the plants usually showed varying responses. Ozone-sensitive tobacco showed a decrease in chlorophyll content followed, after a few weeks of decline, by an increase, probably due to acclimation to the stress factor, whereas ozone-resistant tobacco maintained relatively high chlorophyll contents during the first three weeks. Petunia showed a slow decrease of chlorophyll content, with the lowest values at the end of the experiment. A comparison between the plant species revealed the highest chlorophyll contents in ozone-resistant tobacco.
Pre-clinical cognitive phenotypes for Alzheimer disease: a latent profile approach.
Hayden, Kathleen M; Kuchibhatla, Maragatha; Romero, Heather R; Plassman, Brenda L; Burke, James R; Browndyke, Jeffrey N; Welsh-Bohmer, Kathleen A
2014-11-01
Cognitive profiles for pre-clinical Alzheimer disease (AD) can be used to identify groups of individuals at risk for disease and better characterize pre-clinical disease. Profiles or patterns of performance as pre-clinical phenotypes may be more useful than individual test scores or measures of global decline. To evaluate patterns of cognitive performance in cognitively normal individuals to derive latent profiles associated with later onset of disease using a combination of factor analysis and latent profile analysis. The National Alzheimer Coordinating Centers collect data, including a battery of neuropsychological tests, from participants at 29 National Institute on Aging-funded Alzheimer Disease Centers across the United States. Prior factor analyses of this battery demonstrated a four-factor structure comprising memory, attention, language, and executive function. Factor scores from these analyses were used in a latent profile approach to characterize cognition among a group of cognitively normal participants (N = 3,911). Associations between latent profiles and disease outcomes an average of 3 years later were evaluated with multinomial regression models. Similar analyses were used to determine predictors of profile membership. Four groups were identified; each with distinct characteristics and significantly associated with later disease outcomes. Two groups were significantly associated with development of cognitive impairment. In post hoc analyses, both the Trail Making Test Part B, and a contrast score (Delayed Recall - Trails B), significantly predicted group membership and later cognitive impairment. Latent profile analysis is a useful method to evaluate patterns of cognition in large samples for the identification of preclinical AD phenotypes; comparable results, however, can be achieved with very sensitive tests and contrast scores. Copyright © 2014 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
Ferko, Nicole; Ferrante, Giuseppe; Hasegawa, James T; Schikorr, Tanya; Soleas, Ireena M; Hernandez, John B; Sabaté, Manel; Kaiser, Christoph; Brugaletta, Salvatore; de la Torre Hernandez, Jose Maria; Galatius, Soeren; Cequier, Angel; Eberli, Franz; de Belder, Adam; Serruys, Patrick W; Valgimigli, Marco
2017-05-01
Second-generation drug-eluting stents (DES) may reduce costs and improve clinical outcomes compared with first-generation DES, with improved cost effectiveness compared with bare metal stents (BMS). The aim was to conduct a cost-effectiveness analysis (CEA) of a cobalt-chromium everolimus eluting stent (Co-Cr EES) versus BMS in percutaneous coronary intervention (PCI). A Markov state transition model with a 2-year time horizon was applied from a US Medicare setting with patients undergoing PCI with Co-Cr EES or BMS. Baseline characteristics, treatment effects, and safety measures were taken from a patient-level meta-analysis of 5 RCTs (n = 4,896). The base-case analysis evaluated stent-related outcomes; a secondary analysis considered the broader set of outcomes reported in the meta-analysis. The base-case and secondary analyses reported an additional 0.018 and 0.013 quality-adjusted life years (QALYs) and cost savings of $236 and $288, respectively, with Co-Cr EES versus BMS. Results were robust to sensitivity analyses and were most sensitive to the price of clopidogrel. In the probabilistic sensitivity analysis, Co-Cr EES was associated with a greater than 99% chance of being cost saving or cost effective (at a cost per QALY threshold of $50,000) versus BMS. Using data from a recent patient-level meta-analysis and contemporary cost data, this analysis found that PCI with Co-Cr EES is more effective and less costly than PCI with BMS. © 2016 The Authors. Catheterization and Cardiovascular Interventions Published by Wiley Periodicals, Inc.
Buoro, Sabrina; Vavassori, Mauro; Pipitone, Silvia; Benegiamo, Anna; Lochis, Eleonora; Fumagalli, Sabina; Falanga, Anna; Marchetti, Marina; Crippa, Alberto; Ottomano, Cosimo; Lippi, Giuseppe
2015-10-01
Current haematology analysers have variable sensitivity and accuracy for counting nucleated red blood cells in samples with low values and in all those conditions characterised by altered sensitivity of red blood cells to the lysing process, such as in beta-thalassaemia or sickle-cell diseases and in neonates. The aim of our study was to evaluate the performance of the automated analyser XE-2100 at counting nucleated red blood cells in the above-mentioned three categories of subjects with potentially altered red blood cell lysis sensitivity and yet a need for accurate nucleated red blood cell counts. We measured the nucleated red blood cell count by XE-2100 in peripheral blood samples of 187 subjects comprising 55 patients with beta-thalassaemia (40 major and 15 traits), 26 sickle-cell patients, 56 neonates and 50 normal subjects. Results were compared with those obtained by optical microscopy. Agreement between average values of the two methods was estimated by means of Pearson's correlation and bias analysis, whereas diagnostic accuracy was estimated by analysis of receiver operating characteristic curves. The comparison between the two methods showed a Pearson's correlation of 0.99 (95% CI, 0.98-0.99; p<0.001) and a bias of -0.61 (95% CI, -1.5-0.3). The area under the curve of the nucleated red blood cell count in all samples was 0.98 (95% CI, 0.96-1.00; p<0.001). Sub-analysis revealed an area under the curve of 0.99 (95% CI, 0.98-1.00; p<0.001) for patients with thalassaemia, 0.94 (95% CI, 0.85-1.00; p<0.001) for patients with sickle cell anaemia, and 1.00 (95% CI, 1.0-1.0) for neonates. The XE-2100 has excellent performance for nucleated red blood cell counting, especially in critical populations such as patients with haemoglobinopathies and neonates.
Negash, Markos; Kassu, Afework; Amare, Bemnet; Yismaw, Gizachew; Moges, Beyene
2018-01-01
Helicobacter pylori antibody titres fall very slowly even after successful treatment; therefore, tests detecting H. pylori antibodies lack specificity and sensitivity. H. pylori stool antigen tests, on the other hand, are reported to be a reliable and simple alternative. However, the comparative performance of H. pylori stool antigen tests for detecting the presence of the bacterium in clinical specimens in the study area had not been assessed. Therefore, in this study we evaluated the performance of the SD BIOLINE H. pylori Ag rapid test with reference to the commercially available EZ-STEP ELISA and SD BIOLINE H. pylori Ag ELISA tests. Stool samples were collected to analyse the diagnostic performance of the SD BIOLINE H. pylori Ag rapid test kit using the SD H. pylori Ag ELISA kit and EZ-STEP ELISA tests as the gold standard. Serum samples were also collected from each patient to test for the presence of H. pylori antibodies using the dBest H. pylori Test Disk. Sensitivity, specificity, predictive values and the kappa value were assessed. P values < 0.05 were considered statistically significant. Stool and serum samples were collected from 201 dyspeptic patients and analysed. The sensitivity, specificity, positive and negative predictive values of the SD BIOLINE H. pylori Ag rapid test were 95.6% (95% CI, 88.8-98.8), 92.5% (95% CI, 89-94.1), 86.7% (95% CI, 80.5-89.6), and 97.6% (95% CI, 93.9-99.3), respectively. The performance of the SD BIOLINE H. pylori Ag rapid test was better than that of the currently available antibody test in the study area. Therefore, the SD BIOLINE Ag rapid stool test could replace the antibody test for diagnosing active H. pylori infection before the commencement of therapy among dyspeptic patients.
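The accuracy metrics reported above all derive from a 2x2 table of index-test results against the gold standard. A minimal sketch with hypothetical counts (the study's raw table is not given in the abstract) chosen to land near the reported values:

```python
# Diagnostic accuracy from a 2x2 table: tp/fp/fn/tn are counts of
# true positives, false positives, false negatives, true negatives.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # positives detected among diseased
        "specificity": tn / (tn + fp),   # negatives detected among healthy
        "ppv":         tp / (tp + fp),   # positive predictive value
        "npv":         tn / (tn + fn),   # negative predictive value
    }

m = diagnostic_metrics(tp=87, fp=13, fn=4, tn=97)  # hypothetical counts
print({k: round(v, 3) for k, v in m.items()})
```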
Yin, Zhi; Zou, Jin; Li, Qiongxuan; Chen, Lizhang
2017-04-04
This study aimed to evaluate the diagnostic value of FIB-4 for liver fibrosis in patients with hepatitis B through a meta-analysis of diagnostic tests. We conducted a comprehensive search in the PubMed, Embase, Web of Science, and Chinese National Knowledge Infrastructure databases before October 31, 2016. Stata 14.0 software was used for calculations and statistical analyses. We used the sensitivity, specificity, positive and negative likelihood ratios (PLR, NLR), diagnostic odds ratio (DOR) and 95% confidence intervals (CIs) to evaluate the diagnostic value of FIB-4 for liver fibrosis in patients with hepatitis B. Twenty-six studies were included in the final analyses, with a total of 8274 individuals. The pooled parameters calculated from all studies were: sensitivity of 0.69 (95% CI: 0.63-0.75), specificity of 0.81 (95% CI: 0.73-0.87), PLR of 3.63 (95% CI: 2.66-4.94), NLR of 0.38 (95% CI: 0.32-0.44), DOR of 9.57 (95% CI: 6.67-13.74), and area under the curve (AUC) of 0.80 (95% CI: 0.76-0.83). We also conducted subgroup analyses based on the range of cut-off values. Results from the subgroup analysis showed that the cut-off value was the source of heterogeneity in the present study. The sensitivity and specificity for cut-off values >2 were 0.69 and 0.95, with an AUC of 0.90 (95% CI: 0.87-0.92). The overall diagnostic value of FIB-4 for liver fibrosis in patients with hepatitis B is not very high; however, it is affected by the cut-off value. FIB-4 has relatively high diagnostic value for detecting liver fibrosis in patients with hepatitis B when the diagnostic threshold is greater than 2.0.
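The pooled likelihood ratios and DOR follow from the standard identities PLR = sens/(1 − spec), NLR = (1 − sens)/spec and DOR = PLR/NLR. A quick consistency check against the pooled estimates above (the computed DOR differs slightly from the reported 9.57 because the meta-analysis pools each parameter separately):

```python
# Likelihood ratios and diagnostic odds ratio from pooled
# sensitivity and specificity.
def likelihood_ratios(sens: float, spec: float):
    plr = sens / (1 - spec)        # positive likelihood ratio
    nlr = (1 - sens) / spec        # negative likelihood ratio
    return plr, nlr, plr / nlr     # (PLR, NLR, DOR)

plr, nlr, dor = likelihood_ratios(0.69, 0.81)  # pooled values from the abstract
print(round(plr, 2), round(nlr, 2), round(dor, 1))  # 3.63 0.38 9.5
```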
Acute stress selectively reduces reward sensitivity
Berghorst, Lisa H.; Bogdan, Ryan; Frank, Michael J.; Pizzagalli, Diego A.
2013-01-01
Stress may promote the onset of psychopathology by disrupting reward processing. However, the extent to which stress impairs reward processing, rather than incentive processing more generally, is unclear. To evaluate the specificity of stress-induced reward processing disruption, 100 psychiatrically healthy females were administered a probabilistic stimulus selection task (PSST) that enabled comparison of sensitivity to reward-driven (Go) and punishment-driven (NoGo) learning under either “no stress” or “stress” (threat-of-shock) conditions. Cortisol samples and self-report measures were collected. Contrary to hypotheses, the groups did not differ significantly in task performance or cortisol reactivity. However, further analyses focusing only on individuals under “stress” who were high responders with regard to both cortisol reactivity and self-reported negative affect revealed reduced reward sensitivity relative to individuals tested in the “no stress” condition; importantly, these deficits were reward-specific. Overall, findings provide preliminary evidence that stress-reactive individuals show diminished sensitivity to reward, but not punishment, under stress. While such results highlight the possibility that stress-induced anhedonia might be an important mechanism linking stress to affective disorders, future studies are necessary to confirm this conjecture. PMID:23596406
Li, Xin; Kaattari, Stephen L; Vogelbein, Mary A; Vadas, George G; Unger, Michael A
2016-03-01
Immunoassays based on monoclonal antibodies (mAbs) are highly sensitive for the detection of polycyclic aromatic hydrocarbons (PAHs) and can be employed to determine concentrations in near real-time. A sensitive generic mAb against PAHs, named 2G8, was developed by a three-step screening procedure. It exhibited nearly uniformly high sensitivity against 3-ring to 5-ring unsubstituted PAHs and their common environmental methylated derivatives, with IC50 values between 1.68 and 31 μg/L (ppb). 2G8 has been successfully applied on the KinExA Inline Biosensor system for quantifying 3-5 ring PAHs in aqueous environmental samples. PAHs were detected at concentrations as low as 0.2 μg/L. Furthermore, the analyses required only 10 min per sample. To evaluate the accuracy of the 2G8-based biosensor, the total PAH concentrations in a series of environmental samples analyzed by the biosensor and by GC-MS were compared. In most cases, the results showed good correlation between the methods. This indicates that the 2G8-based generic antibody biosensor holds significant promise as a low-cost, rapid method for PAH determination in aqueous samples.
Economic Evaluations of Pathology Tests, 2010-2015: A Scoping Review.
Watts, Rory D; Li, Ian W; Geelhoed, Elizabeth A; Sanfilippo, Frank M; St John, Andrew
2017-09-01
Concerns about pathology testing such as the value provided by new tests and the potential for inappropriate utilization have led to a greater need to assess costs and benefits. Economic evaluations are a formal method of analyzing costs and benefits, yet for pathology tests, questions remain about the scope and quality of the economic evidence. To describe the extent and quality of published evidence provided by economic evaluations of pathology tests from 2010 to 2015. Economic evaluations relating to pathology tests from 2010 to 2015 were reviewed. Eight databases were searched for published studies, and details recorded for the country, clinical focus, type of testing, and consideration of sensitivity, specificity, and false test results. The reporting quality of studies was assessed using the Consolidated Health Economic Evaluation Reporting Standards checklist and cost-effectiveness ratios were analyzed for publication bias. We found 356 economic evaluations of pathology tests, most of which regarded developed countries. The most common economic evaluations were cost-utility analyses and the most common clinical focus was infectious diseases. More than half of the studies considered sensitivity and specificity, but few studies considered the impact of false test results. The average Consolidated Health Economic Evaluation Reporting Standards checklist score was 17 out of 24. Cost-utility ratios were commonly less than $10,000/quality-adjusted life-year or more than $200,000/quality-adjusted life-year. The number of economic evaluations of pathology tests has increased in recent years, but the rate of increase has plateaued. Furthermore, the quality of studies in the past 5 years was highly variable, and there is some question of publication bias in reporting cost-effectiveness ratios. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Imširović, Bilal; Zerem, Enver; Efendić, Alma; Mekić Abazović, Alma; Zerem, Omar; Djedović, Muhamed
2018-08-01
Aim To determine the capabilities and potential of contrast-enhanced magnetic resonance imaging (MRI) enterography for establishing the diagnosis and evaluating the severity and activity of intestinal inflammation. Methods Fifty-five patients with suspected Crohn's disease were evaluated. All patients underwent contrast-enhanced MRI enterography and diffusion weighted imaging (DWI), followed by endoscopic examination or surgical treatment. Four parameters were analysed: thickening of the bowel wall, and presence of abscess, fistula and lymphadenopathy. Results A significant difference was found between DWI and histopathological results, as well as between DWI and contrast-enhanced MRI enterography results. MRI enterography sensitivity for bowel wall thickening was 97.7% and specificity 70%, whilst DWI sensitivity for bowel wall thickening was 84% and specificity 100%. In the diagnosis of abscess and fistula there was no significant difference between DWI and MRI enterography, while for lymphadenopathy a significant difference between contrast-enhanced MRI enterography and DWI was found. Conclusion Contrast-enhanced MRI enterography in combination with DWI allows excellent evaluation of disease activity as well as of its complications. The examination can be repeated and monitored, and it can contribute to the follow-up of patients with this disease. Copyright© by the Medical Association of Zenica-Doboj Canton.
Moloney, Niamh; Beales, Darren; Azoory, Roxanne; Hübscher, Markus; Waller, Robert; Gibbons, Rebekah; Rebbeck, Trudy
2018-06-14
Pain sensitivity and psychosocial issues are prognostic of poor outcome in acute neck disorders. However, knowledge of associations between pain sensitivity and ongoing pain and disability in chronic neck pain is lacking. We aimed to investigate associations of pain sensitivity with pain and disability at the 12-month follow-up in people with chronic neck pain. The predictor variables were: clinical and quantitative sensory testing (cold, pressure); neural tissue sensitivity; neuropathic symptoms; comorbidities; sleep; psychological distress; pain catastrophizing; pain intensity (for the model explaining disability at 12 months only); and disability (for the model explaining pain at 12 months only). Data were analysed using uni- and multivariate regression models to assess associations with pain and disability at the 12-month follow-up (n = 64 at baseline, n = 51 at follow-up). Univariable associations between all predictor variables and pain and disability were evident (r > 0.3; p < 0.05), except for cold and pressure pain thresholds and cold sensitivity. For disability at the 12-month follow-up, 24.0% of the variance was explained by psychological distress and comorbidities. For pain at 12 months, 39.8% of the variance was explained primarily by baseline disability. Neither clinical nor quantitative measures of pain sensitivity were meaningfully associated with long-term patient-reported outcomes in people with chronic neck pain, limiting their clinical application in evaluating prognosis. Copyright © 2018 John Wiley & Sons, Ltd.
Alduraywish, S A; Lodge, C J; Campbell, B; Allen, K J; Erbas, B; Lowe, A J; Dharmage, S C
2016-01-01
There is growing evidence for an increase in food allergies. The question of whether early life food sensitization, a primary step in food allergies, leads to other allergic disease is a controversial but important issue. Birth cohorts are an ideal design to answer this question. We aimed to systematically investigate and meta-analyse the evidence for associations between early food sensitization and allergic disease in birth cohorts. MEDLINE and SCOPUS databases were searched for birth cohorts that have investigated the association between food sensitization in the first 2 years and subsequent wheeze/asthma, eczema and/or allergic rhinitis. We performed meta-analyses using random-effects models to obtain pooled estimates, stratified by age group. The search yielded fifteen original articles representing thirteen cohorts. Early life food sensitization was associated with an increased risk of infantile eczema, childhood wheeze/asthma, eczema and allergic rhinitis and young adult asthma. Meta-analyses demonstrated that early life food sensitization is related to an increased risk of wheeze/asthma (pooled OR 2.9; 95% CI 2.0-4.0), eczema (pooled OR 2.7; 95% CI 1.7-4.4) and allergic rhinitis (pooled OR 3.1; 95% CI 1.9-4.9) from 4 to 8 years. Food sensitization in the first 2 years of life can identify children at high risk of subsequent allergic disease who may benefit from early life preventive strategies. However, due to potential residual confounding in the majority of studies combined with lack of follow-up into adolescence and adulthood, further research is needed. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Zhang, Xinke; Hay, Joel W; Niu, Xiaoli
2015-01-01
The aim of the study was to compare the cost effectiveness of fingolimod, teriflunomide, dimethyl fumarate, and intramuscular (IM) interferon (IFN)-β(1a) as first-line therapies in the treatment of patients with relapsing-remitting multiple sclerosis (RRMS). A Markov model was developed to evaluate the cost effectiveness of disease-modifying drugs (DMDs) from a US societal perspective. The time horizon in the base case was 5 years. The primary outcome was incremental net monetary benefit (INMB), and the secondary outcome was incremental cost-effectiveness ratio (ICER). The base case INMB willingness-to-pay (WTP) threshold was assumed to be US$150,000 per quality-adjusted life year (QALY), and the costs were in 2012 US dollars. One-way sensitivity analyses and probabilistic sensitivity analysis were conducted to test the robustness of the model results. Dimethyl fumarate dominated all other therapies over the range of WTPs, from US$0 to US$180,000. Compared with IM IFN-β(1a), at a WTP of US$150,000, INMBs were estimated at US$36,567, US$49,780, and US$80,611 for fingolimod, teriflunomide, and dimethyl fumarate, respectively. The ICER of fingolimod versus teriflunomide was US$3,201,672. One-way sensitivity analyses demonstrated the model results were sensitive to the acquisition costs of DMDs and the time horizon, but in most scenarios, cost-effectiveness rankings remained stable. Probabilistic sensitivity analysis showed that for more than 90% of the simulations, dimethyl fumarate was the optimal therapy across all WTP values. The three oral therapies were favored in the cost-effectiveness analysis. Of the four DMDs, dimethyl fumarate was a dominant therapy to manage RRMS. Apart from dimethyl fumarate, teriflunomide was the most cost-effective therapy compared with IM IFN-β(1a), with an ICER of US$7,115.
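The primary outcome above, incremental net monetary benefit, is defined as INMB = WTP × ΔQALY − Δcost; a positive INMB at a given willingness-to-pay threshold favours the therapy. A minimal sketch with hypothetical inputs (the study's underlying ΔQALY and Δcost values are not reported in the abstract):

```python
# Incremental net monetary benefit at a willingness-to-pay threshold:
# converts the QALY gain to money and subtracts the extra cost.
def inmb(wtp: float, d_qalys: float, d_cost: float) -> float:
    return wtp * d_qalys - d_cost

# e.g. a therapy gaining 0.5 QALYs at $10,000 extra cost, WTP $150,000/QALY
print(inmb(wtp=150_000, d_qalys=0.5, d_cost=10_000))  # 65000.0
```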
Cost-effectiveness of minimally invasive sacroiliac joint fusion.
Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W
2016-01-01
Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. To determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.
Bovine herpesvirus 1: within-herd seroprevalence and antibody levels in bulk-tank milk.
Martínez, S; Yus, E; Sanjuán, M L; Camino, F; Eiras, M C; Arnaiz, I; Diéguez, F J
2016-12-01
The aim of the present study was to establish a relationship between the results obtained with the enzyme-linked immunosorbent assay (ELISA) technique for antibodies against bovine herpesvirus 1 in serum and those in milk at the herd level. For this purpose, 275 samples of bulk-tank milk were analysed with a glycoprotein E (gE) antibody ELISA and 207 more were analysed with a glycoprotein B (gB) antibody ELISA (482 in total). All of these samples came from dairy herds whose seroprevalence was also evaluated. The results of this study were then used to analyse the sensitivity of the bulk-tank milk test in detecting herds with a high risk of active infection (>60% seroprevalence) and its specificity in detecting those with few (<20%) or no seropositive animals. With regard to the reference test (results in blood serum), the sensitivity of the bulk-tank milk test in detecting herds with >60% seropositive animals was 100% for both the gE and gB ELISAs. The specificity figures, for the gE and gB ELISAs respectively, were 88.4% and 99.1% for infection-free herds and 72.6% and 96% for herds with <20% seroprevalence. In a quantitative approach, Pearson's correlation coefficients, reported as a measure of linear association between herd seroprevalences and transformed optical density values recorded in bulk-tank milk, were -0.63 for the gE ELISA and 0.67 for the gB ELISA. © OIE (World Organisation for Animal Health), 2016.
Santelmann, Hanno; Franklin, Jeremy; Bußhoff, Jana; Baethge, Christopher
2016-10-01
Schizoaffective disorder is a common diagnosis in clinical practice, but its nosological status has been subject to debate ever since it was conceptualized. Although sufficient diagnostic reliability is essential, schizoaffective disorder has been reported to have low interrater reliability. Evidence based on systematic review and meta-analysis methods, however, is lacking. Using a highly sensitive literature search in Medline, Embase, and PsycInfo we identified studies measuring the interrater reliability of schizoaffective disorder in comparison to schizophrenia, bipolar disorder, and unipolar disorder. Out of 4126 records screened we included 25 studies reporting on 7912 patients diagnosed by different raters. The interrater reliability of schizoaffective disorder was moderate (meta-analytic estimate of Cohen's kappa 0.57 [95% CI: 0.41-0.73]), and substantially lower than that of its main differential diagnoses (difference in kappa between 0.22 and 0.19). Although there was considerable heterogeneity, analyses revealed that the interrater reliability of schizoaffective disorder was consistently lower in the overwhelming majority of studies. The results remained robust in subgroup and sensitivity analyses (e.g., diagnostic manual used) as well as in meta-regressions (e.g., publication year) and analyses of publication bias. Clinically, the results highlight the particular importance of diagnostic re-evaluation in patients diagnosed with schizoaffective disorder. They also quantify a widely held clinical impression of lower interrater reliability and agree with an earlier meta-analysis reporting low test-retest reliability. Copyright © 2016. Published by Elsevier B.V.
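Cohen's kappa, the agreement statistic meta-analysed above, corrects the observed rater agreement for the agreement expected by chance alone. A minimal sketch with a hypothetical two-rater confusion matrix (not study data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa for two raters from a square confusion matrix
    (rows = rater A's diagnoses, columns = rater B's)."""
    n = sum(sum(row) for row in confusion)
    p_obs = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Expected agreement for independent raters with the same marginals
    p_exp = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n**2
    return (p_obs - p_exp) / (1 - p_exp)

# Illustrative 2x2 table (schizoaffective vs. other diagnosis), not study data:
k = cohens_kappa([[40, 10], [15, 35]])
print(round(k, 2))  # → 0.5
```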
Cost-effectiveness of minimally invasive sacroiliac joint fusion
Cher, Daniel J; Frasco, Melissa A; Arnold, Renée JG; Polly, David W
2016-01-01
Background Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. Objective To determine the cost-effectiveness of minimally invasive SIJ fusion. Methods Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. Results SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162–$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Conclusion Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption. PMID:26719717
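The ICER reported above is simply the incremental cost divided by the incremental QALYs of SIJ fusion versus non-surgical treatment. A minimal sketch using the abstract's headline figures (the implied incremental cost is an assumption derived from them):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qaly

# The abstract's figures imply roughly $13,313/QALY at a 0.74-QALY gain,
# i.e. an incremental cost of about 0.74 * 13,313 dollars:
print(round(icer(0.74 * 13313, 0.74)))  # → 13313
```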
Huppertz-Hauss, Gert; Aas, Eline; Lie Høivik, Marte; Langholz, Ebbe; Odes, Selwyn; Småstuen, Milada; Stockbrugger, Reinhold; Hoff, Geir; Moum, Bjørn; Bernklev, Tomm
2016-01-01
Background. The treatment of chronic inflammatory bowel disease (IBD) is costly, and limited resources call for analyses of the cost effectiveness of therapeutic interventions. The present study evaluated the equivalency of the Short Form 6D (SF-6D) and the Euro QoL (EQ-5D), two preference-based HRQoL instruments that are broadly used in cost-effectiveness analyses, in an unselected IBD patient population. Methods. IBD patients from seven European countries were invited to a follow-up visit ten years after their initial diagnosis. Clinical and demographic data were assessed, and the Short Form 36 (SF-36) was employed. Utility scores were obtained by calculating the SF-6D index values from the SF-36 data for comparison with the scores obtained with the EQ-5D questionnaire. Results. The SF-6D and EQ-5D provided good sensitivities for detecting disease activity-dependent utility differences. However, the single-measure intraclass correlation coefficient was 0.58, and the Bland-Altman plot indicated numerous values beyond the limits of agreement. Conclusions. There was poor agreement between the measures retrieved from the EQ-5D and the SF-6D utility instruments. Although both instruments may provide good sensitivity for the detection of disease activity-dependent utility differences, the instruments cannot be used interchangeably. Cost-utility analyses performed with only one utility instrument must be interpreted with caution.
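The Bland-Altman comparison mentioned above judges agreement by the 95% limits of agreement, i.e. the mean paired difference plus or minus 1.96 standard deviations of the differences. A minimal sketch with hypothetical paired utility scores (not study data):

```python
from statistics import mean, stdev

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two utility instruments
    (e.g. paired EQ-5D and SF-6D scores from the same patients)."""
    diffs = [x - y for x, y in zip(a, b)]
    d, s = mean(diffs), stdev(diffs)
    return d - 1.96 * s, d + 1.96 * s

# Illustrative paired utilities (hypothetical, not the study's data):
eq5d = [0.80, 0.70, 0.95, 0.60, 0.85]
sf6d = [0.72, 0.68, 0.80, 0.65, 0.78]
lo, hi = limits_of_agreement(eq5d, sf6d)
print(round(lo, 3), round(hi, 3))
```

Values falling outside these limits, as noted in the abstract, indicate that the two instruments cannot be used interchangeably.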
Westerhout, K Y; Verheggen, B G; Schreder, C H; Augustin, M
2012-01-01
An economic evaluation was conducted to assess the outcomes, costs, and cost-effectiveness of the grass-pollen immunotherapies OA (Oralair; Stallergenes S.A., Antony, France), GRZ (Grazax; ALK-Abelló, Hørsholm, Denmark), and ALD (Alk Depot SQ; ALK-Abelló), each given alongside symptomatic medication, compared with symptomatic treatment alone for grass pollen allergic rhinoconjunctivitis. The costs and outcomes of 3-year treatment were assessed for a period of 9 years using a Markov model. Treatment efficacy was estimated using an indirect comparison of available clinical trials with placebo as a common comparator. Estimates for immunotherapy discontinuation, occurrence of asthma, health state utilities, drug costs, resource use, and healthcare costs were derived from published sources. The analysis was conducted from the insurant's perspective including public and private health insurance payments and co-payments by insurants. Outcomes were reported as quality-adjusted life years (QALYs) and symptom-free days. The uncertainty around incremental model results was tested by means of extensive deterministic univariate and probabilistic multivariate sensitivity analyses. In the base case analysis the model predicted a cost-utility ratio of OA vs symptomatic treatment of €14,728 per QALY; incremental costs were €1356 (95%CI: €1230; €1484) and incremental QALYs 0.092 (95%CI: 0.052; 0.140). OA was the dominant strategy compared to GRZ and ALD, with estimated incremental costs of -€1142 (95%CI: -€1255; -€1038) and -€54 (95%CI: -€188; €85) and incremental QALYs of 0.015 (95%CI: -0.025; 0.056) and 0.027 (95%CI: -0.022; 0.075), respectively. At a willingness-to-pay threshold of €20,000, the probability of OA being the most cost-effective treatment was predicted to be 79%. Univariate sensitivity analyses show that incremental outcomes were moderately sensitive to changes in efficacy estimates. 
The main study limitation was the requirement of an indirect comparison involving several steps to assess relative treatment effects. The analysis suggests that OA is cost-effective compared with GRZ, ALD, and symptomatic treatment alone. Sensitivity analyses showed that uncertainty surrounding treatment efficacy estimates affected the model outcomes.
Losina, Elena; Dervan, Elizabeth E.; Paltiel, A. David; Dong, Yan; Wright, R. John; Spindler, Kurt P.; Mandl, Lisa A.; Jones, Morgan H.; Marx, Robert G.; Safran-Norton, Clare E.; Katz, Jeffrey N.
2015-01-01
Background Arthroscopic partial meniscectomy (APM) is extensively used to relieve pain in patients with symptomatic meniscal tear (MT) and knee osteoarthritis (OA). Recent studies have failed to show the superiority of APM compared to other treatments. We aim to examine whether existing evidence is sufficient to reject use of APM as a cost-effective treatment for MT+OA. Methods We built a patient-level microsimulation using Monte Carlo methods and evaluated three strategies: Physical therapy (‘PT’) alone; PT followed by APM if subjects continued to experience pain (‘Delayed APM’); and ‘Immediate APM’. Our subject population was US adults with symptomatic MT and knee OA over a 10 year time horizon. We assessed treatment outcomes using societal costs, quality-adjusted life years (QALYs), and calculated incremental cost-effectiveness ratios (ICERs), incorporating productivity costs as a sensitivity analysis. We also conducted a value-of-information analysis using probabilistic sensitivity analyses. Results ICERs were estimated to be $12,900/QALY for Delayed APM as compared to PT and $103,200/QALY for Immediate APM as compared to Delayed APM. In sensitivity analyses, inclusion of time costs made Delayed APM cost-saving as compared to PT. Improving efficacy of Delayed APM led to higher incremental costs and lower incremental effectiveness of Immediate APM in comparison to Delayed APM. Probabilistic sensitivity analyses indicated that PT had a 3.0% probability of being cost-effective at a willingness-to-pay (WTP) threshold of $50,000/QALY. Delayed APM was cost-effective 57.7% of the time at WTP = $50,000/QALY and 50.2% at WTP = $100,000/QALY. The probability of Immediate APM being cost-effective did not exceed 50% unless WTP exceeded $103,000/QALY. Conclusions We conclude that current cost-effectiveness evidence does not support unqualified rejection of either Immediate or Delayed APM for the treatment of MT+OA. 
The amount society would be willing to pay for additional information on treatment outcomes greatly exceeds the cost of conducting another randomized controlled trial on APM. PMID:26086246
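The probabilistic sensitivity analyses summarized above can be read as cost-effectiveness acceptability: the share of simulation draws in which a strategy's incremental net monetary benefit at a given willingness-to-pay threshold is positive. A minimal sketch with hypothetical draws (not the study's simulation output):

```python
def prob_cost_effective(draws, wtp):
    """Fraction of probabilistic-sensitivity-analysis draws in which the
    incremental net monetary benefit (wtp * dQALY - dCost) is positive."""
    return sum(1 for d_cost, d_qaly in draws if wtp * d_qaly - d_cost > 0) / len(draws)

# Four hypothetical PSA draws of (incremental cost, incremental QALYs):
draws = [(400, 0.03), (1200, 0.02), (900, 0.025), (-100, 0.01)]
print(prob_cost_effective(draws, wtp=50_000))  # → 0.75
```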
Hayati Rezvan, Panteha; Lee, Katherine J; Simpson, Julie A
2015-04-07
Missing data are common in medical research, which can lead to a loss in statistical power and potentially biased results if not handled appropriately. Multiple imputation (MI) is a statistical method, widely adopted in practice, for dealing with missing data. Many academic journals now emphasise the importance of reporting information regarding missing data and proposed guidelines for documenting the application of MI have been published. This review evaluated the reporting of missing data, the application of MI including the details provided regarding the imputation model, and the frequency of sensitivity analyses within the MI framework in medical research articles. A systematic review of articles published in the Lancet and New England Journal of Medicine between January 2008 and December 2013 in which MI was implemented was carried out. We identified 103 papers that used MI, with the number of papers increasing from 11 in 2008 to 26 in 2013. Nearly half of the papers specified the proportion of complete cases or the proportion with missing data by each variable. In the majority of the articles (86%) the imputed variables were specified. Of the 38 papers (37%) that stated the method of imputation, 20 used chained equations, 8 used multivariate normal imputation, and 10 used alternative methods. Very few articles (9%) detailed how they handled non-normally distributed variables during imputation. Thirty-nine papers (38%) stated the variables included in the imputation model. Less than half of the papers (46%) reported the number of imputations, and only two papers compared the distribution of imputed and observed data. Sixty-six papers presented the results from MI as a secondary analysis. Only three articles carried out a sensitivity analysis following MI to assess departures from the missing at random assumption, with details of the sensitivity analyses only provided by one article. 
This review outlined deficiencies in the documenting of missing data and the details provided about imputation. Furthermore, only a few articles performed sensitivity analyses following MI even though this is strongly recommended in guidelines. Authors are encouraged to follow the available guidelines and provide information on missing data and the imputation process.
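Results from the m imputed datasets produced by MI are combined with Rubin's rules, which pool the point estimates and inflate the variance to account for between-imputation uncertainty. A minimal sketch with hypothetical estimates (not data from the reviewed articles):

```python
from statistics import mean, variance

def pool_rubin(estimates, variances):
    """Rubin's rules: pool point estimates and variances across m imputations."""
    m = len(estimates)
    q_bar = mean(estimates)              # pooled point estimate
    w_bar = mean(variances)              # average within-imputation variance
    b = variance(estimates)              # between-imputation variance
    total_var = w_bar + (1 + 1 / m) * b  # total variance of the pooled estimate
    return q_bar, total_var

# Hypothetical estimates and variances from m = 5 imputed datasets:
q, v = pool_rubin([1.9, 2.1, 2.0, 2.2, 1.8], [0.04, 0.05, 0.04, 0.05, 0.04])
print(round(q, 2), round(v, 3))  # → 2.0 0.074
```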
Validation of tungsten cross sections in the neutron energy region up to 100 keV
NASA Astrophysics Data System (ADS)
Pigni, Marco T.; Žerovnik, Gašper; Leal, Luiz. C.; Trkov, Andrej
2017-09-01
Following a series of recent cross section evaluations on tungsten isotopes performed at Oak Ridge National Laboratory (ORNL), this paper presents the validation work carried out to test the performance of the evaluated cross sections based on lead-slowing-down (LSD) benchmarks conducted in Grenoble. ORNL completed the resonance parameter evaluation of four tungsten isotopes - 182,183,184,186W - in August 2014 and submitted it as an ENDF-compatible file to be part of the next release of the ENDF/B-VIII.0 nuclear data library. The evaluations were performed with support from the US Nuclear Criticality Safety Program in an effort to provide improved tungsten cross section and covariance data for criticality safety sensitivity analyses. The validation analysis based on the LSD benchmarks showed an improved agreement with the experimental response when the ORNL tungsten evaluations were included in the ENDF/B-VII.1 library. Comparisons with the results obtained with the JEFF-3.2 nuclear data library are also discussed.
Steady-State Thermal-Hydraulics Analyses for the Conversion of BR2 to Low Enriched Uranium Fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Licht, J.; Bergeron, A.; Dionne, B.
The code PLTEMP/ANL version 4.2 was used to perform the steady-state thermal-hydraulic analyses of the BR2 research reactor for conversion from Highly-Enriched to Low Enriched Uranium fuel (HEU and LEU, respectively). Calculations were performed to evaluate different fuel assemblies with respect to the onset of nucleate boiling (ONB), flow instability (FI), critical heat flux (CHF) and fuel temperature at beginning of cycle conditions. The fuel assemblies were characteristic of fresh fuel (0% burnup), highest heat flux (16% burnup), highest power (32% burnup) and highest burnup (46% burnup). Results show that the high heat flux fuel element is limiting for ONB, FI, and CHF, for both HEU and LEU fuel, but that the high power fuel element produces similar margin in a few cases. The maximum fuel temperature similarly occurs in both the high heat flux and high power fuel assemblies for both HEU and LEU fuel. A sensitivity study was also performed to evaluate the variation in fuel temperature due to uncertainties in the thermal conductivity degradation associated with burnup.
Sleep System Sensitization: Evidence for Changing Roles of Etiological Factors in Insomnia
Kalmbach, David A.; Pillai, Vivek; Arnedt, J. Todd; Anderson, Jason R.; Drake, Christopher L.
2016-01-01
Objectives To test for sensitization of the sleep system in response to insomnia development and major life stress. In addition, to evaluate the impact on depression and anxiety associated with sleep system sensitization. Methods A longitudinal study with three annual assessments. The community-based sample included 262 adults with no history of insomnia or depression who developed insomnia 1 year after baseline (67.6% female; 44.0±13.4y). Measures included the Ford Insomnia Response to Stress Test to assess sleep reactivity, Quick Inventory of Depressive Symptomatology, and Beck Anxiety Inventory. Insomnia classification was based on DSM-IV criteria. Sleep system sensitization was operationally defined as significant increases in sleep reactivity. Results Sensitization of the sleep system was observed from baseline to insomnia onset at 1-y follow-up among insomniacs with low premorbid vulnerability (p<.001), resulting in 68.3% of these individuals re-classified as highly sleep reactive. Major life stress was associated with greater sleep system sensitization (p=.02). Results showed that sleep reactivity at 2-y follow-up remained elevated among those with low premorbid vulnerability, even after insomnia remission (p<.01). Finally, analyses revealed that increases in sleep reactivity predicted greater depression (p<.001) and anxiety (p<.001) at insomnia onset. The impact of sensitization on depression was stable at 2-y follow-up (p=.01). Conclusions Evidence supports sensitization of the sleep system as a consequence of insomnia development and major life stress among individuals with low premorbid sleep reactivity. Sleep system sensitization may serve as a mechanism by which insomnia is perpetuated. Harmful effects of the sensitization process may increase risk for insomnia-related depression and anxiety. PMID:27448474
Jiao, Y.; Lapointe, N.W.R.; Angermeier, P.L.; Murphy, B.R.
2009-01-01
Models of species' demographic features are commonly used to understand population dynamics and inform management tactics. Hierarchical demographic models are ideal for the assessment of non-indigenous species because our knowledge of non-indigenous populations is usually limited, data on demographic traits often come from a species' native range, these traits vary among populations, and traits are likely to vary considerably over time as species adapt to new environments. Hierarchical models readily incorporate this spatiotemporal variation in species' demographic traits by representing demographic parameters as multi-level hierarchies. As is done for traditional non-hierarchical matrix models, sensitivity and elasticity analyses are used to evaluate the contributions of different life stages and parameters to estimates of population growth rate. We applied a hierarchical model to northern snakehead (Channa argus), a fish currently invading the eastern United States. We used a Monte Carlo approach to simulate uncertainties in the sensitivity and elasticity analyses and to project future population persistence under selected management tactics. We gathered key biological information on northern snakehead natural mortality, maturity and recruitment in its native Asian environment. We compared the model performance with and without hierarchy of parameters. Our results suggest that ignoring the hierarchy of parameters in demographic models may result in poor estimates of population size and growth and may lead to erroneous management advice. In our case, the hierarchy used multi-level distributions to simulate the heterogeneity of demographic parameters across different locations or situations. The probability that the northern snakehead population will increase and harm the native fauna is considerable. 
Our elasticity and prognostic analyses showed that intensive control efforts immediately prior to spawning and/or juvenile-dispersal periods would be more effective (and probably require less effort) than year-round control efforts. Our study demonstrates the importance of considering the hierarchy of parameters in estimating population growth rate and evaluating different management strategies for non-indigenous invasive species. © 2009 Elsevier B.V.
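The sensitivity and elasticity analyses described above are standard eigen-analyses of a stage-structured projection matrix: the population growth rate is the dominant eigenvalue, and elasticities give the proportional contribution of each matrix entry to it. A minimal sketch with an illustrative matrix (hypothetical rates, not the northern snakehead parameters):

```python
import numpy as np

# Illustrative 3-stage projection matrix (hypothetical rates, not the
# northern snakehead parameters): stages = juvenile, subadult, adult.
A = np.array([
    [0.0, 1.5, 6.0],   # stage-specific fecundities
    [0.3, 0.0, 0.0],   # juvenile -> subadult survival
    [0.0, 0.5, 0.8],   # subadult -> adult survival; adult survival
])

# Population growth rate = dominant eigenvalue of the projection matrix.
eigvals, right = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
lam = eigvals[i].real
w = np.abs(right[:, i].real)        # stable stage distribution

eigvalsT, left = np.linalg.eig(A.T)
j = int(np.argmax(eigvalsT.real))
v = np.abs(left[:, j].real)         # reproductive values (left eigenvector)

# Sensitivity s_ij = v_i * w_j / <v, w>; elasticity e_ij = (a_ij / lam) * s_ij.
S = np.outer(v, w) / (v @ w)
E = A * S / lam
print(round(lam, 3), round(E.sum(), 3))  # elasticities sum to 1
```

Entries of E with the largest values point to the life stages where control effort most reduces population growth, which is the logic behind the spawning- and dispersal-period recommendation above.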
Economic evaluation of pneumococcal conjugate vaccination in The Gambia.
Kim, Sun-Young; Lee, Gene; Goldie, Sue J
2010-09-03
The Gambia is the second GAVI support-eligible country to introduce the 7-valent pneumococcal conjugate vaccine (PCV7), but a country-specific cost-effectiveness analysis of the vaccine is not available. Our objective was to assess the potential impact of PCVs of different valences in The Gambia. We synthesized the best available epidemiological and cost data using a state-transition model to simulate the natural histories of various pneumococcal diseases. For the base-case, we estimated incremental cost (in 2005 US dollars) per disability-adjusted life year (DALY) averted under routine vaccination using PCV9 compared to no vaccination. We extended the base-case results for PCV9 to estimate the cost-effectiveness of PCV7, PCV10, and PCV13, each compared to no vaccination. To explore parameter uncertainty, we performed both deterministic and probabilistic sensitivity analyses. We also explored the impact of vaccine efficacy waning, herd immunity, and serotype replacement, as a part of the uncertainty analyses, by assuming alternative scenarios and extrapolating empirical results from different settings. Assuming 90% coverage, a program using a 9-valent PCV (PCV9) would prevent approximately 630 hospitalizations, 40 deaths, and 1000 DALYs, over the first 5 years of life of a birth cohort. Under base-case assumptions ($3.5 per vaccine), compared to no intervention, a PCV9 vaccination program would cost $670 per DALY averted in The Gambia. The corresponding values for PCV7, PCV10, and PCV13 were $910, $670, and $570 per DALY averted, respectively. Sensitivity analyses that explored the implications of the uncertain key parameters showed that model outcomes were most sensitive to vaccine price per dose, discount rate, case-fatality rate of primary endpoint pneumonia, and vaccine efficacy against primary endpoint pneumonia. Based on the information available now, infant PCV vaccination would be expected to reduce pneumococcal diseases caused by S. pneumoniae in The Gambia. 
Assuming a cost-effectiveness threshold of three times GDP per capita, all PCVs examined would be cost-effective at the tentative Advance Market Commitment (AMC) price of $3.5 per dose. Because the cost-effectiveness of a PCV program could be affected by potential serotype replacement or herd immunity effects that may not be known until after a large-scale introduction, type-specific surveillance and iterative evaluation will be critical.
Hoshi, Shu-Ling; Kondo, Masahide; Okubo, Ichiro
2017-05-31
The extended use of varicella vaccine in adults aged 50 and older against herpes zoster (HZ) was recently approved in Japan, which has raised the need to evaluate its value for money. We conducted a cost-effectiveness analysis with Markov modelling to evaluate the efficiency of a varicella vaccine immunisation programme for the elderly in Japan. Four strategies, differing in the age range eligible to receive one dose of vaccine, were compared, namely: (1) 65-84, (2) 70-84, (3) 75-84 and (4) 80-84 years old (y.o.). Incremental cost-effectiveness ratios (ICERs) compared with no programme were calculated from the societal perspective. The health states followed by the target cohort are: without any HZ-related disease, acute HZ followed by recovery, post-herpetic neuralgia (PHN) followed by recovery, post HZ/PHN, and general death. The transition probabilities, utility weights used to estimate quality-adjusted life years (QALYs), and disease treatment costs were either calculated or cited from the literature. The cost per course of vaccination was assumed to be ¥10,000 (US$91). The model, with a one-year cycle, runs until surviving individuals reach 100 y.o. ICERs ranged from ¥2,812,000 (US$25,680) to ¥3,644,000 (US$33,279) per QALY gained, with the 65-84 y.o. strategy having the lowest ICER and the 80-84 y.o. strategy the highest. None of the alternatives was strongly dominated by another, while the 80-84 y.o. and 70-84 y.o. strategies were extendedly dominated by the 65-84 y.o. strategy. Probabilistic sensitivity analyses showed that the probability that the ICER falls under ¥5,000,000 (US$45,662) per QALY gained was 100% for the 65-84, 70-84, and 75-84 y.o. strategies and 98.4% for the 80-84 y.o. strategy. We found that vaccinating individuals aged 65-84, 70-84, 75-84, and 80-84 with varicella vaccine to prevent HZ-associated disease in Japan can be cost-effective from the societal perspective, with the 65-84 y.o. strategy as the optimal alternative. 
Results are supported by one-way sensitivity analyses and probabilistic sensitivity analyses. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simvastatin after orthotopic heart transplantation. Costs and consequences.
Krobot, K J; Wenke, K; Reichart, B
1999-03-01
Recent data indicate that the combination of a low cholesterol diet and simvastatin following heart transplantation is associated with significant reduction of serum cholesterol levels, lower incidence of graft vessel disease (GVD) and significantly higher 4-year survival rates than dietary treatment alone. On the basis of this first randomised long-term study evaluating survival as the clinical end-point, we investigated the cost effectiveness of the above regimens as well as the long-term consequences for the patient and for heart transplantation as a high-tech procedure. The perspective of the economic analysis was that of the German health insurance fund. Life-years gained were calculated on the basis of the Kaplan-Meier survival curves from the 4-year clinical trial and from the International Society for Heart and Lung Transplantation (ISHLT) overall survival statistics. Incremental costs and incremental cost-effectiveness ratios were determined using various sources of data, and both costs and consequences were discounted by 3% per year. Sensitivity analyses using alternative assumptions were conducted in addition to the base-case analysis. As in the original clinical trial, the target population of the economic evaluation comprised all heart transplant recipients on standard triple immunosuppression consisting of cyclosporin, azathioprine and prednisolone, regardless of the postoperative serum lipid profile. The therapeutic regimens investigated in the analysis were the American Heart Association (AHA) step II diet plus simvastatin (titrated to a maximum dosage of 20 mg/day) and AHA step II diet alone. Four years of treatment with simvastatin (mean dosage 8.11 mg/day) translated into an undiscounted survival benefit per patient of 2.27 life-years; 0.64 life-years within the trial period and 1.63 life-years thereafter. 
Discounted costs per year of life gained were $US1050 (sensitivity analyses $US800 to $US15,400) for simvastatin plus diet versus diet alone and $US18,010 (sensitivity analyses $US17,130 to $US21,090) for heart transplantation plus simvastatin versus no transplantation (all costs reflect 1997 values; $US1 = 1.747 Deutschmarks). Prevention of GVD with simvastatin after heart transplantation was cost effective in all the scenarios examined with impressive prolongation of life expectancy for the heart recipient. Simvastatin also achieved an internationally robust 21% improvement in the cost effectiveness of heart transplantation compared with historical cost-effectiveness data.
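Both costs and life-years in the analysis above are discounted at 3% per year before the cost-effectiveness ratios are formed. A minimal sketch of present-value discounting for a yearly stream:

```python
def discounted(stream, rate=0.03):
    """Present value of a yearly stream (costs or life-years),
    discounted at 3% per year as in the analysis above."""
    return sum(x / (1 + rate) ** t for t, x in enumerate(stream))

# One undiscounted life-year accruing in each of 4 years (illustrative):
print(round(discounted([1, 1, 1, 1]), 3))  # → 3.829
```

Dividing the discounted incremental cost by the discounted life-years gained yields the cost per life-year figures quoted in the abstract.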
Cooper, Andrew J P; Lettis, Sally; Chapman, Charlotte L; Evans, Stephen J W; Waller, Patrick C; Shakir, Saad; Payvandi, Nassrin; Murray, Alison B
2008-05-01
Following the adoption of the ICH E2E guideline, risk management plans (RMPs) defining the cumulative safety experience and identifying limitations in safety information are now required for marketing authorisation applications (MAAs). A collaborative research project was conducted to gain experience with tools for presenting and evaluating data in the safety specification. This paper presents those tools found to be useful and the lessons learned from their use. Archive data from a successful MAA were utilised. Methods were assessed for demonstrating the extent of clinical safety experience, evaluating the sensitivity of the clinical trial data to detect treatment differences and identifying safety signals from adverse event and laboratory data to define the extent of safety knowledge with the drug. The extent of clinical safety experience was demonstrated by plots of patient exposure over time. Adverse event data were presented using dot plots, which display the percentages of patients with the events of interest, the odds ratio, and 95% confidence interval. Power and confidence interval plots were utilised for evaluating the sensitivity of the clinical database to detect treatment differences. Box and whisker plots were used to display laboratory data. This project enabled us to identify new evidence-based methods for presenting and evaluating clinical safety data. These methods represent an advance in the way safety data from clinical trials can be analysed and presented. This project emphasises the importance of early and comprehensive planning of the safety package, including evaluation of the use of epidemiology data.
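The dot plots described above display, for each adverse event, the percentage of patients affected together with the odds ratio and its 95% confidence interval; the interval is commonly computed on the log scale (Woolf's method, assumed here). A minimal sketch with illustrative counts (not the MAA data):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for an adverse event: a/b = events/non-events
    on drug, c/d = events/non-events on comparator (Woolf's log method)."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Illustrative counts for one adverse event of interest (hypothetical):
or_, lo, hi = odds_ratio_ci(20, 180, 10, 190)
print(round(or_, 2), round(lo, 2), round(hi, 2))  # → 2.11 0.96 4.63
```

A confidence interval crossing 1 (as in this illustration) would appear on the dot plot as an event with no clear treatment difference.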
Application of High-Temperature Extrinsic Fabry-Perot Interferometer Strain Sensor
NASA Technical Reports Server (NTRS)
Piazza, Anthony
2008-01-01
In this presentation to the NASA Aeronautics Sensor Working Group the application of a strain sensor is outlined. The high-temperature extrinsic Fabry-Perot interferometer (EFPI) strain sensor was developed due to a need for robust strain sensors that operate accurately and reliably beyond 1800 F. Specifically, the new strain sensor would provide data for validating finite element models and thermal-structural analyses. Sensor attachment techniques were also developed to improve methods of handling and protecting the fragile sensors during the harsh installation process. It was determined that thermal sprayed attachments are preferable even though cements are simpler to apply, as cements are more prone to bond failure and are often corrosive. Previous thermal/mechanical cantilever beam testing of EFPI yielded very little change to 1200 F, with excellent correlation with strain gauges (SG) to 550 F. Current combined thermal/mechanical loading for sensitivity testing is accomplished by a furnace/cantilever beam loading system. Dilatometer testing can also be used in sensor characterization to evaluate bond integrity, sensitivity and accuracy, and sensor-to-sensor scatter, repeatability, hysteresis and drift. Future fiber optic testing will examine single-mode silica EFPIs in a combined thermal/mechanical load fixture on C-C and C-SiC substrates, develop a multi-mode sapphire strain sensor, test and evaluate high-temperature fiber Bragg gratings for use as strain and temperature sensors, and attach and evaluate a high-temperature heat flux gauge.
Tran, Linda Diem; Xu, Haiyong; Azocar, Francisca; Ettner, Susan L
2018-05-01
This study examined specialty behavioral health treatment patterns among employer-insured adults in same- and different-gender domestic partnerships and marriages. The study used behavioral health service claims (2008-2013) from Optum to estimate gender-stratified penetration rates of behavioral health service use by couple type and partnership status among partnered adults ages 18-64 (N=12,727,292 person-years) and levels of use among those with any use (conditional analyses). Least-squares, logistic, and zero-truncated negative binomial regression analyses adjusted for age, employer and plan characteristics, and provider supply and for sociodemographic factors in sensitivity analyses. Generalized estimating equations were used to address within-group correlation of adults clustered in employer groups. Both women and men in same-gender marriages or domestic partnerships had higher rates of behavioral health service use, particularly diagnostic evaluation, individual psychotherapy, and medication management, and those in treatment had, on average, more psychotherapy visits than those in different-gender marriages. Behavioral health treatment patterns were similar between women in same-gender domestic partnerships and same-gender marriages, but they diverged between men in same-gender domestic partnerships and same-gender marriages. Moderation analysis results indicated that adults with same-gender partners living in states with fewer legal protections for lesbian, gay, bisexual, and transgender persons were less likely than adults with same-gender partners in LGBT-friendly states to receive behavioral health treatment. Sensitivity analyses did not affect findings. Behavioral health treatment patterns varied by couple type, partnership status, and gender. Results highlight the importance of increasing service acceptability and delivering inclusive, culturally relevant behavioral health treatment for lesbian, gay, and bisexual persons.
Laursen, S B; Leontiadis, G I; Stanley, A J; Hallas, J; Schaffalitzky de Muckadell, O B
2017-08-01
Observational studies have consistently shown an increased risk of upper gastrointestinal bleeding in users of selective serotonin reuptake inhibitors (SSRIs), probably explained by their inhibition of platelet aggregation. Therefore, treatment with SSRIs is often temporarily withheld in patients with peptic ulcer bleeding. However, abrupt discontinuation of SSRIs is associated with development of withdrawal symptoms in one-third of patients. Further data are needed to clarify whether treatment with SSRIs is associated with poor outcomes, which would support temporary discontinuation of treatment. To identify if treatment with SSRIs is associated with increased risk of: (1) endoscopy-refractory bleeding, (2) rebleeding or (3) 30-day mortality due to peptic ulcer bleeding. A nationwide cohort study. Analyses were performed on prospectively collected data on consecutive patients admitted to hospital with peptic ulcer bleeding in Denmark in the period 2006-2014. Logistic regression analyses were used to investigate the association between treatment with SSRIs and outcome following adjustment for pre-defined confounders. Sensitivity and subgroup analyses were performed to evaluate the validity of the findings. A total of 14 343 patients were included. Following adjustment, treatment with SSRIs was not associated with increased risk of endoscopy-refractory bleeding (odds ratio [OR] [95% Confidence Interval (CI)]: 1.03 [0.79-1.33]), rebleeding (OR [95% CI]: 0.96 [0.83-1.11]) or 30-day mortality (OR [95% CI]: 1.01 [0.85-1.19]). These findings were supported by sensitivity and subgroup analyses. According to our data, treatment with SSRIs does not influence the risk of endoscopy-refractory bleeding, rebleeding or 30-day mortality in peptic ulcer bleeding. © 2017 John Wiley & Sons Ltd.
Evaluation of the Zeiss retinal vessel analyser
Polak, K.; Dorner, G.; Kiss, B.; Polska, E.; Findl, O.; Rainer, G.; Eichler, H.; Schmetterer, L.
2000-01-01
AIM: To investigate the reproducibility and sensitivity of the Zeiss retinal vessel analyser, a new method for the online determination of retinal vessel diameters in healthy subjects. METHODS: Two model drugs were administered, a peripheral vasoconstrictor (the α receptor agonist phenylephrine) and a peripheral vasodilator (the nitric oxide donor sodium nitroprusside), in stepwise increasing doses. Nine healthy young subjects were studied in a placebo controlled double masked three way crossover design. Subjects received intravenous infusions of either placebo or stepwise increasing doses of phenylephrine (0.5, 1, or 2 µg/kg/min) or sodium nitroprusside (0.5, 1, or 2 µg/kg/min). Retinal vessel diameters were measured with the new Zeiss retinal vessel analyser. Retinal leucocyte velocity, flow, and density were measured with the blue field entoptic technique. The reproducibility of measurements was assessed with coefficients of variation and intraclass correlation coefficients. RESULTS: Placebo and phenylephrine did not influence retinal haemodynamics, although the α receptor agonist significantly increased blood pressure. Sodium nitroprusside induced a significant increase in retinal venous and arterial diameters (p<0.001 each), leucocyte density (p=0.001), and leucocyte flow (p=0.024) despite lowering blood pressure to a significant degree. For venous and arterial vessel size measurements short term coefficients of variation were 1.3% and 2.6% and intraclass correlation coefficients were 0.98 and 0.96, respectively. The sensitivity was between 3% and 5% for retinal veins and 5% and 7% for retinal arteries. CONCLUSIONS: These data indicate that the Zeiss retinal vessel analyser is an accurate system for the assessment of retinal vessel diameters in healthy subjects. In addition, nitric oxide appears to have a strong influence on retinal vascular tone. PMID:11049956
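Short-term coefficients of variation like those reported above can be obtained from repeated measurements on the same subject; a minimal Python sketch, using hypothetical diameter readings (the study's actual measurements are not reproduced in the abstract):

```python
import statistics as st

def short_term_cv(readings):
    """Coefficient of variation (%) of repeated measurements for one subject."""
    return st.stdev(readings) / st.mean(readings) * 100

# hypothetical repeated venous diameter readings (arbitrary units)
cv = short_term_cv([148.0, 150.0, 151.0, 149.0])
```

The intraclass correlation coefficients also reported would additionally require the between-subject variance, which a single subject's readings cannot supply.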
Treatments for Metastatic Prostate Cancer (mPC): A Review of Costing Evidence.
Norum, Jan; Nieder, Carsten
2017-12-01
Prostate cancer (PC) is the most common cancer in Western countries. More than one third of PC patients develop metastatic disease, and the 5-year expected survival in distant disease is about 35%. During the last few years, new treatments have been launched for metastatic castrate-resistant prostate cancer (mCRPC). We aimed to review the current literature on health economic analyses of the treatment of metastatic prostate cancer (mPC), compare the studies, summarize the findings and make the results available to administrators and decision makers. A systematic literature search was done for economic evaluations (cost-minimization, cost-effectiveness, cost-utility, cost-of-illness, cost-of-drug, and cost-benefit analyses). We employed the PubMed® search engine and searched for publications published between 2012 and 2016. The terms used were "prostate cancer", "metastatic" and "cost". An initial screening of all headlines was performed, selected abstracts were analysed, and finally the full papers were investigated. Study characteristics, treatment and comparator, country, type of evaluation, perspective, year of value, time horizon, efficacy data, discount rate, total costs and sensitivity analysis were analysed. The quality was assessed using the Quality of Health Economic Studies (QHES) instrument. A total of 227 publications were detected and screened, 58 selected for full-text assessment and 31 included in the final analyses. Despite the significant international literature on the treatment of mCRPC, only 15 studies focused on cost-effectiveness analysis (CEA). Medical treatment constituted two thirds of the selected studies. Significant costs in the treatment of mCRPC were disclosed. In the pre-docetaxel setting, both abiraterone acetate (AA) and enzalutamide were concluded to be beyond accepted cost per quality-adjusted life-year limits. In the docetaxel-refractory setting, most studies concluded that enzalutamide was cost-effective and superior to AA.
In most studies, cabazitaxel was not recommended, because of high cost. Looking at bone-targeting drugs, generic zoledronic acid (ZA) was recommended. External beam radiotherapy (EBRT) was analysed in three studies, and single fraction radiotherapy was concluded to be cost saving. Radium-223 was documented as beneficial, but costly. The quality of the studies was generally good, but sensitivity analyses, discounting and the measurement of health outcomes were present in less than two thirds of the selected studies. The treatment of mCRPC was associated with significant cost. In the post-docetaxel setting, single fraction radiotherapy and enzalutamide were considered cost-effective in most studies. Generic ZA was the recommended bone-targeting therapy.
Chen, Chong-Cheng; Chen, Yi; Liu, Xia; Wen, Yue; Ma, Deng-Yan; Huang, Yue-Yang; Pu, Li; Diao, Yong-Shu; Yang, Kun
2016-01-01
The impacts of nurse-led disease management programs on the quality of life for patients with chronic kidney disease have not been extensively studied. Furthermore, results of the existing related studies are inconsistent. The focus of this meta-analysis was to evaluate the efficacy of nurse-led disease management programs in improving the quality of life for patients with chronic kidney disease. A literature survey was performed to identify the eligible studies from PubMed, Current Nursing and Allied Health Literature, and the Cochrane Central Register of Controlled Trials with predefined terms. The outcome measured was quality of life. This meta-analysis was conducted in line with recommendations from the preferred reporting items for systematic reviews and meta-analyses. Eight studies comprising a total of 1520 patients were included in this meta-analysis, with 766 patients assigned to the nurse-led disease management program. Nurse-led disease management improved the quality of life in terms of symptoms, sleep, staff encouragement, pain, general health perception, energy/fatigue, overall health and mental component summary when evaluated 6 weeks after the beginning of intervention. When evaluated 12 weeks later, the quality of life in terms of symptoms, sleep, staff encouragement, energy/fatigue, and physical component summary was improved. Stratified by the modality of dialysis, similar results of pooled analyses were observed for patients with peritoneal dialysis or hemodialysis, compared with the overall analyses. The results of sensitivity analyses were the same as the primary analyses. The symmetric funnel plot suggested that the possibility of potential publication bias was relatively low. Nurse-led disease management programs seem effective in improving some parameters of quality of life for patients with chronic kidney disease.
However, the seemingly promising results should be cautiously interpreted and generalized and still need to be confirmed through well-designed large-scale prospective randomized controlled trials.
Economic evaluation of DNA ploidy analysis vs liquid-based cytology for cervical screening.
Nghiem, V T; Davies, K R; Beck, J R; Follen, M; MacAulay, C; Guillaud, M; Cantor, S B
2015-06-09
DNA ploidy analysis involves automated quantification of chromosomal aneuploidy, a potential marker of progression toward cervical carcinoma. We evaluated the cost-effectiveness of this method for cervical screening, comparing five ploidy strategies (using different numbers of aneuploid cells as cut points) with the liquid-based Papanicolaou smear and no screening. A state-transition Markov model simulated the natural history of HPV infection and possible progression into cervical neoplasia in a cohort of 12-year-old females. The analysis evaluated cost in 2012 US$ and effectiveness in quality-adjusted life-years (QALYs) from a health-system perspective over a lifetime horizon in the US setting. We calculated incremental cost-effectiveness ratios (ICERs) to determine the best strategy. The robustness of optimal choices was examined in deterministic and probabilistic sensitivity analyses. In the base-case analysis, the ploidy 4 cell strategy was cost-effective, yielding an increase of 0.032 QALY and an ICER of $18,264/QALY compared to no screening. For most scenarios in the deterministic sensitivity analysis, the ploidy 4 cell strategy was the only cost-effective strategy. Cost-effectiveness acceptability curves showed that this strategy was more likely to be cost-effective than the Papanicolaou smear. Compared to the liquid-based Papanicolaou smear, screening with a DNA ploidy strategy appeared less costly and comparably effective.
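The ICER reported above is simply the incremental cost divided by the incremental effectiveness of one strategy over another. A minimal sketch of the arithmetic, with hypothetical cost and QALY inputs chosen only to illustrate the calculation (the model's underlying cost totals are not given in the abstract):

```python
def icer(cost_new, effect_new, cost_ref, effect_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_ref) / (effect_new - effect_ref)

# hypothetical values: a 0.032-QALY gain at $584.45 extra lifetime cost
ratio = icer(cost_new=850.45, effect_new=24.532, cost_ref=266.0, effect_ref=24.5)
```

A strategy is then deemed cost-effective when this ratio falls below the decision maker's willingness-to-pay threshold.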
Kleinsorge, F; Smetanay, K; Rom, J; Hörmansdörfer, C; Hörmannsdörfer, C; Scharf, A; Schmidt, P
2010-12-01
In 2008, 2351 first trimester screenings were evaluated using a newly developed internet database ( http://www.firsttrimester.net ) to assess the risk for the presence of Down's syndrome. All data were evaluated both by the conventional first trimester screening according to Nicolaides (FTS), based on the previous JOY software, and by the advanced first trimester screening (AFS). After receiving feedback on the karyotype, the rates of correct positives, correct negatives, false positives and false negatives, as well as the sensitivity and specificity, were calculated and compared. Overall, 255 cases were investigated which were analysed by both methods. These included 2 cases of Down's syndrome and one case of trisomy 18. The FTS and the AFS both had a sensitivity of 100%. The specificity was 88.5% for the FTS and 93.0% for the AFS. As already shown in former studies, the higher specificity of the AFS is a result of a reduction in the false positive rate (28 to 17 cases). Because the AFS maintains a detection rate of 100%, the rate of further invasive diagnostics in pregnant women is decreased, with 39% fewer women testing positive. © Georg Thieme Verlag KG Stuttgart · New York.
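Sensitivity and specificity as compared above follow directly from confusion-matrix counts; a minimal Python sketch with hypothetical counts (illustrative only, not the study's exact tallies):

```python
def sens_spec(tp, fp, tn, fn):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical counts: 3 aneuploidies, all detected, with 17 false positives
se, sp = sens_spec(tp=3, fp=17, tn=235, fn=0)
```

With every affected case detected, sensitivity stays at 100% regardless of the false positive count, while specificity improves as false positives drop.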
SIPSim: A Modeling Toolkit to Predict Accuracy and Aid Design of DNA-SIP Experiments.
Youngblut, Nicholas D; Barnett, Samuel E; Buckley, Daniel H
2018-01-01
DNA Stable isotope probing (DNA-SIP) is a powerful method that links identity to function within microbial communities. The combination of DNA-SIP with multiplexed high throughput DNA sequencing enables simultaneous mapping of in situ assimilation dynamics for thousands of microbial taxonomic units. Hence, high throughput sequencing enabled SIP has enormous potential to reveal patterns of carbon and nitrogen exchange within microbial food webs. There are several different methods for analyzing DNA-SIP data and despite the power of SIP experiments, it remains difficult to comprehensively evaluate method accuracy across a wide range of experimental parameters. We have developed a toolset (SIPSim) that simulates DNA-SIP data, and we use this toolset to systematically evaluate different methods for analyzing DNA-SIP data. Specifically, we employ SIPSim to evaluate the effects that key experimental parameters (e.g., level of isotopic enrichment, number of labeled taxa, relative abundance of labeled taxa, community richness, community evenness, and beta-diversity) have on the specificity, sensitivity, and balanced accuracy (defined as the product of specificity and sensitivity) of DNA-SIP analyses. Furthermore, SIPSim can predict analytical accuracy and power as a function of experimental design and community characteristics, and thus should be of great use in the design and interpretation of DNA-SIP experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pace, J.V. III; Bartine, D.E.; Mynatt, F.R.
1976-01-01
Two-dimensional neutron and secondary gamma-ray transport calculations and cross-section sensitivity analyses have been performed to determine the effects of varying source heights and cross sections on calculated doses. The air-over-ground calculations demonstrate the existence of an optimal height of burst for a specific ground range and indicate under what conditions they are conservative with respect to infinite air calculations. The air-over-seawater calculations showed the importance of hydrogen and chlorine in gamma production. Additional sensitivity analyses indicated the importance of water in the ground, the amount of reduction in ground thickness for calculational purposes, and the effect of the degree of Legendre angular expansion of the scattering cross sections (P_l) on the calculated dose.
Evaluation of the divided attention condition during functional analyses.
Fahmie, Tara A; Iwata, Brian A; Harper, Jill M; Querim, Angie C
2013-01-01
A common condition included in most functional analyses (FAs) is the attention condition, in which the therapist ignores the client by engaging in a solitary activity (antecedent event) but delivers attention to the client contingent on problem behavior (consequent event). The divided attention condition is similar, except that the antecedent event consists of the therapist conversing with an adult confederate. We compared the typical and divided attention conditions to determine whether behavior in general (Study 1) and problem behavior in particular (Study 2) were more sensitive to one of the test conditions. Results showed that the divided attention condition resulted in faster acquisition or more efficient FA results for 2 of 9 subjects, suggesting that the divided attention condition could be considered a preferred condition when resources are available. © Society for the Experimental Analysis of Behavior.
Value of shared preclinical safety studies - The eTOX database.
Briggs, Katharine; Barber, Chris; Cases, Montserrat; Marc, Philippe; Steger-Hartmann, Thomas
2015-01-01
A first analysis of a database of shared preclinical safety data for 1214 small molecule drugs and drug candidates extracted from 3970 reports donated by thirteen pharmaceutical companies for the eTOX project (www.etoxproject.eu) is presented. Species, duration of exposure and administration route data were analysed to assess if large enough subsets of homogenous data are available for building in silico predictive models. Prevalence of treatment related effects for the different types of findings recorded were analysed. The eTOX ontology was used to determine the most common treatment-related clinical chemistry and histopathology findings reported in the database. The data were then mined to evaluate sensitivity of established in vivo biomarkers for liver toxicity risk assessment. The value of the database to inform other drug development projects during early drug development is illustrated by a case study.
Pi, Shan; Cao, Rong; Qiang, Jin Wei; Guo, Yan Hui
2018-01-01
Background: Diffusion-weighted imaging (DWI) and quantitative apparent diffusion coefficient (ADC) values are widely used in the differential diagnosis of ovarian tumors. Purpose: To assess the diagnostic performance of quantitative ADC values in ovarian tumors. Material and Methods: PubMed, Embase, the Cochrane Library, and local databases were searched for studies assessing ovarian tumors using quantitative ADC values. We quantitatively analyzed the diagnostic performances for two clinical problems: benign vs. malignant tumors and borderline vs. malignant tumors. We evaluated diagnostic performances by the pooled sensitivity and specificity values and by summary receiver operating characteristic (SROC) curves. Subgroup analyses were used to analyze study heterogeneity. Results: From the 742 studies identified in the search results, 16 studies met our inclusion criteria. A total of ten studies evaluated malignant vs. benign ovarian tumors and six studies assessed malignant vs. borderline ovarian tumors. Regarding the diagnostic accuracy of quantitative ADC values for distinguishing between malignant and benign ovarian tumors, the pooled sensitivity and specificity values were 0.91 and 0.91, respectively. The area under the SROC curve (AUC) was 0.96. For differentiating borderline from malignant tumors, the pooled sensitivity and specificity values were 0.89 and 0.79, and the AUC was 0.91. The methodological quality of the included studies was moderate. Conclusion: Quantitative ADC values could serve as useful preoperative markers for predicting the nature of ovarian tumors. Nevertheless, prospective trials focused on standardized imaging parameters are needed to evaluate the clinical value of quantitative ADC values in ovarian tumors.
Evaluation of IOTA Simple Ultrasound Rules to Distinguish Benign and Malignant Ovarian Tumours
Kaur, Amarjit; Mohi, Jaswinder Kaur; Sibia, Preet Kanwal; Kaur, Navkiran
2017-01-01
Introduction: IOTA stands for the International Ovarian Tumour Analysis group. Ovarian cancer is one of the most common cancers in women and is diagnosed at a later stage in the majority of cases. The limiting factor for early diagnosis is the lack of standardized terms and procedures in gynaecological sonography. Introduction of the IOTA rules has provided some consistency in defining morphological features of ovarian masses through a standardized examination technique. Aim: To evaluate the efficacy of IOTA simple ultrasound rules in distinguishing benign and malignant ovarian tumours and to establish their use as a tool in the early diagnosis of ovarian malignancy. Materials and Methods: A hospital based case control prospective study was conducted. Patients with suspected ovarian pathology were evaluated using the IOTA ultrasound rules and designated as benign or malignant. Findings were correlated with histopathological findings. Collected data were statistically analysed using the chi-square test and the kappa statistic. Results: Out of the initial 55 patients, 50 patients who underwent surgery were included in the final analysis. IOTA simple rules were applicable in 45 out of these 50 patients (90%). The sensitivity for the detection of malignancy in cases where IOTA simple rules were applicable was 91.66% and the specificity was 84.84%. Accuracy was 86.66%. Classifying inconclusive cases as malignant, the sensitivity and specificity were 93% and 80%, respectively. High level of agreement was found between USG and histopathological diagnosis, with a kappa value of 0.323. Conclusion: IOTA simple ultrasound rules were highly sensitive and specific in predicting ovarian malignancy preoperatively, while being reproducible and easy to train and use. PMID:28969237
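The kappa statistic used above measures agreement between two classifications beyond what chance would produce. A minimal sketch of Cohen's kappa for a 2x2 agreement table, with hypothetical counts (the study's full agreement table is not given in the abstract):

```python
def cohens_kappa(a, b, c, d):
    """Cohen's kappa for a 2x2 agreement table:
    a = both raters positive, b = only rater 1 positive,
    c = only rater 2 positive, d = both raters negative."""
    n = a + b + c + d
    po = (a + d) / n                                     # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

# hypothetical counts
k = cohens_kappa(a=20, b=5, c=10, d=15)
```

Kappa ranges from 1 (perfect agreement) down through 0 (chance-level agreement), which is why it can be much lower than raw percentage agreement.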
Jia, Qing; Brown, Michael J; Clifford, Leanne; Wilson, Gregory A; Truty, Mark J; Stubbs, James R; Schroeder, Darrell R; Hanson, Andrew C; Gajic, Ognjen; Kor, Daryl J
2016-03-01
Perioperative haemorrhage negatively affects patient outcomes and results in substantial consumption of health-care resources. Plasma transfusions are often administered to address abnormal preoperative coagulation tests, with the hope to mitigate bleeding complications. We aimed to assess the associations between preoperative plasma transfusion and bleeding complications in patients with elevated international normalised ratio (INR) undergoing non-cardiac surgery. We did an observational study in a consecutive sample of adult patients undergoing non-cardiac surgery with preoperative INR greater than or equal to 1·5. The exposure of interest was transfusion of preoperative plasma for elevated INR. The primary outcome was WHO grade 3 bleeding in the early perioperative period (from entry into the operating room until 24 h following exit from operating room). Hypotheses were tested with univariate and propensity-matched analyses. We did multiple sensitivity analyses to further evaluate the robustness of study findings. Between Jan 1, 2008, and Dec 31, 2011, we identified 1234 (8·4%) of 14 743 patients who had an INR of 1·5 or above and were included in this investigation. Of 1234 study participants, 139 (11%) received a preoperative plasma transfusion. WHO grade 3 bleeding occurred in 73 (53%) of 139 patients who received preoperative plasma compared with 350 (32%) of 1095 patients who did not (odds ratio [OR] 2·35, 95% CI 1·65-3·36; p<0·0001). Among the propensity-matched cohort, 65 (52%) of 125 plasma recipients had WHO grade 3 bleeding compared with 97 (40%) of 242 of those who did not receive preoperative plasma (OR 1·75, 95% CI 1·09-2·81; p=0·021). Results from multiple sensitivity analyses were qualitatively similar. Preoperative plasma transfusion for elevated international normalised ratios was associated with an increased frequency of perioperative bleeding complications. 
Findings were robust in the sensitivity analyses, suggesting that more conservative management of abnormal preoperative international normalised ratios is warranted. Funding: Mayo Clinic, National Institutes of Health. Copyright © 2016 Elsevier Ltd. All rights reserved.
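The unadjusted odds ratio and 95% CI reported for the unmatched comparison can be recovered from the published counts using the simple Woolf (log-normal) approximation; a sketch of that calculation (the study itself may have used a different variance estimator, so this is illustrative):

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf 95% CI for a 2x2 table:
    a = exposed with event, b = exposed without,
    c = unexposed with event, d = unexposed without."""
    orr = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return orr, exp(log(orr) - z * se), exp(log(orr) + z * se)

# counts reported above: 73 of 139 plasma recipients bled vs 350 of 1095 non-recipients
orr, lo, hi = odds_ratio_ci(73, 139 - 73, 350, 1095 - 350)
```

Rounded to two decimals this reproduces the reported OR 2.35 (95% CI 1.65-3.36).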
A critical comparison of ten disposable cup LCAs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harst, Eugenie van der, E-mail: eugenie.vanderharst@wur.nl; Potting, José, E-mail: jose.potting@wur.nl; Environmental Strategies Research
Disposable cups can be made from conventional petro-plastics, bioplastics, or paperboard (coated with petro-plastics or bioplastics). This study compared ten life cycle assessment (LCA) studies of disposable cups with the aim to evaluate the robustness of their results. The selected studies have only one impact category in common, namely climate change with global warming potential (GWP) as its category indicator. Quantitative GWP results of the studies were closely examined. GWPs within and across each study show none of the cup materials to be consistently better than the others. Comparison of the absolute GWPs (after correction for the cup volume) also shows no consistently better or worse cup material. An evaluation of the methodological choices and the data sets used in the studies revealed their influence on the GWP. The differences in GWP can be attributed to a multitude of factors, i.e., cup material and weight, production processes, waste processes, allocation options, and data used. These factors basically represent different types of uncertainty. Sensitivity and scenario analyses provided only the influence of one factor at a time. A systematic and simultaneous use of sensitivity and scenario analyses could, in further research, produce more robust outcomes. -- Highlights: • Conflicting results from life cycle assessment (LCA) on disposable cups • GWP results of LCAs did not point to a best or worst cup material. • Differences in GWP results are due to methodological choices and data sets used. • Standardized LCA: transparency of LCA studies, but still different in approaches.
Ruiz-Ramos, Jesus; Frasquet, Juan; Romá, Eva; Poveda-Andres, Jose Luis; Salavert-Leti, Miguel; Castellanos, Alvaro; Ramirez, Paula
2017-06-01
To evaluate the cost-effectiveness of antimicrobial stewardship (AS) program implementation focused on critical care units, based on assumptions for the Spanish setting. A decision model comparing costs and outcomes of sepsis, community-acquired pneumonia, and nosocomial infections (including catheter-related bacteremia, urinary tract infection, and ventilator-associated pneumonia) in critical care units with or without an AS was designed. Model variables and costs, along with their distributions, were obtained from the literature. The study was performed from the Spanish National Health System (NHS) perspective, including only direct costs. The incremental cost-effectiveness ratio (ICER) was analysed regarding the ability of the program to reduce multi-drug resistant bacteria. Uncertainty in ICERs was evaluated with probabilistic sensitivity analyses. In the short term, implementing an AS reduces the consumption of antimicrobials with a net benefit of €71,738. In the long term, the maintenance of the program involves an additional cost to the system of €107,569. Cost per avoided resistance was €7,342, and cost per life-year gained (LYG) was €9,788. Results from the probabilistic sensitivity analysis showed that there was a more than 90% likelihood that an AS would be cost-effective at a threshold of €8,000 per LYG. Limitations include the wide variability of economic results obtained from the implementation of this type of AS program and the limited information on their impact on patient outcomes and on resistance avoided. Implementing an AS focusing on critical care patients is a long-term cost-effective tool. Implementation costs are amortized by reducing antimicrobial consumption to prevent infection by multidrug-resistant pathogens.
Bertheloot, Jessica; Wu, Qiongli; Cournède, Paul-Henry; Andrieu, Bruno
2011-10-01
Simulating nitrogen economy in crop plants requires formalizing the interactions between soil nitrogen availability, root nitrogen acquisition, distribution between vegetative organs and remobilization towards grains. This study evaluates and analyses the functional-structural and mechanistic model of nitrogen economy, NEMA (Nitrogen Economy Model within plant Architecture), developed for winter wheat (Triticum aestivum) after flowering. NEMA was calibrated for field plants under three nitrogen fertilization treatments at flowering. Model behaviour was investigated and sensitivity to parameter values was analysed. Nitrogen content of all photosynthetic organs and in particular nitrogen vertical distribution along the stem and remobilization patterns in response to fertilization were simulated accurately by the model, from Rubisco turnover modulated by light intercepted by the organ and a mobile nitrogen pool. This pool proved to be a reliable indicator of plant nitrogen status, allowing efficient regulation of nitrogen acquisition by roots, remobilization from vegetative organs and accumulation in grains in response to nitrogen treatments. In our simulations, root capacity to import carbon, rather than carbon availability, limited nitrogen acquisition and ultimately nitrogen accumulation in grains, while Rubisco turnover intensity mostly affected dry matter accumulation in grains. NEMA enabled interpretation of several key patterns usually observed in field conditions and the identification of plausible processes limiting for grain yield, protein content and root nitrogen acquisition that could be targets for plant breeding; however, further understanding requires more mechanistic formalization of carbon metabolism. Its strong physiological basis and its realistic behaviour support its use to gain insights into nitrogen economy after flowering.
Sathe, Prachee; Maddani, Sagar; Kulkarni, Shilpa; Munshi, Nita
2017-10-01
Ventilator associated pneumonia (VAP) is one of the most serious nosocomial infections in Intensive Care Unit (ICU). The aim of this study was to evaluate a new approach to spare the carbapenems for the management of patients diagnosed with VAP due to Acinetobacter baumannii (A. baumannii). This retrospective study was conducted on VAP patients presenting for treatment at tertiary care centre between May 2014 and March 2016. The case sheets of patients who have been treated for VAP with meropenem, antibiotic adjuvant entity (AAE) and colistin were analysed. Out of 113 patients analysed, 24 (21.3%) patients were having VAP due to MDR A. baumannii. Microbial sensitivity has shown that 87.5% of patients were sensitive to AAE and colistin whereas all of them were resistant to meropenem, imipenem and gentamycin. The mean treatment durations were 12.4±2.1, 13.2±2.4 and 14.3±2.1days for AAE, meropenem+colistin and AAE+colistin treatment groups. In AAE susceptible patients, the mean treatment duration and cost could be reduced by 23-24% and 43-53% if AAE is used empirically. In AAE-resistant patients, the mean treatment duration and cost could be reduced by 21% and 26% if AAE+colistin regime is used empirically instead of meropenem followed by AAE+colistin. Clinical assessment with microbial eradication and pharmaco-economic evaluation clearly shows benefits in using AAE empirically in the management of A. baumannii infected VAP cases. Copyright © 2017 Elsevier Inc. All rights reserved.
Takemura, Hiroyuki; Ai, Tomohiko; Kimura, Konobu; Nagasaka, Kaori; Takahashi, Toshihiro; Tsuchiya, Koji; Yang, Haeun; Konishi, Aya; Uchihashi, Kinya; Horii, Takashi; Tabe, Yoko; Ohsaka, Akimichi
2018-01-01
The XN series automated hematology analyzer has been equipped with a body fluid (BF) mode to count and differentiate leukocytes in BF samples including cerebrospinal fluid (CSF). However, its diagnostic accuracy is not reliable for CSF samples with low cell concentration at the border between normal and pathologic level. To overcome this limitation, a new flow cytometry-based technology, termed "high sensitive analysis (hsA) mode," has been developed. In addition, the XN series analyzer has been equipped with the automated digital cell imaging analyzer DI-60 to classify cell morphology including normal leukocytes differential and abnormal malignant cells detection. Using various BF samples, we evaluated the performance of the XN-hsA mode and DI-60 compared to manual microscopic examination. The reproducibility of the XN-hsA mode showed good results in samples with low cell densities (coefficient of variation; % CV: 7.8% for 6 cells/μL). The linearity of the XN-hsA mode was established up to 938 cells/μL. The cell number obtained using the XN-hsA mode correlated highly with the corresponding microscopic examination. Good correlation was also observed between the DI-60 analyses and manual microscopic classification for all leukocyte types, except monocytes. In conclusion, the combined use of cell counting with the XN-hsA mode and automated morphological analyses using the DI-60 mode is potentially useful for the automated analysis of BF cells.
Veldhuis, Anouk; Brouwer-Middelesch, Henriëtte; Marceau, Alexis; Madouasse, Aurélien; Van der Stede, Yves; Fourichon, Christine; Welby, Sarah; Wever, Paul; van Schaik, Gerdien
2016-02-01
This study aimed to evaluate the use of routinely collected reproductive and milk production data for the early detection of emerging vector-borne diseases in cattle in the Netherlands and the Flanders region of Belgium (i.e., the northern part of Belgium). Prospective space-time cluster analyses on residuals from a model on milk production were carried out to detect clusters of reduced milk yield. A CUSUM algorithm was used to detect temporal aberrations in model residuals of reproductive performance models on two indicators of gestation length. The Bluetongue serotype-8 (BTV-8) epidemics of 2006 and 2007 and the Schmallenberg virus (SBV) epidemic of 2011 were used as case studies to evaluate the sensitivity and timeliness of these methods. The methods investigated in this study did not result in a more timely detection of BTV-8 and SBV in the Netherlands and BTV-8 in Belgium given the surveillance systems in place when these viruses emerged. This could be due to (i) the large geographical units used in the analyses (country, region and province level), and (ii) the high level of sensitivity of the surveillance systems in place when these viruses emerged. Nevertheless, it might be worthwhile to use a syndromic surveillance system based on non-specific animal health data in real-time alongside regular surveillance, to increase the sense of urgency and to provide valuable quantitative information for decision makers in the initial phase of an emerging disease outbreak. Copyright © 2015 Elsevier B.V. All rights reserved.
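The CUSUM step used to flag temporal aberrations in model residuals can be sketched as follows; the allowance k and decision limit h below are generic illustrative defaults, not the values used in the study.

```python
def cusum_upper(residuals, k=0.5, h=5.0):
    """One-sided upper CUSUM on standardized model residuals.
    Accumulates excess above the allowance k and returns the index of the
    first alarm (cumulative sum exceeds the decision limit h), or None
    if no aberration is flagged."""
    s = 0.0
    for i, r in enumerate(residuals):
        s = max(0.0, s + r - k)  # reset to zero when residuals stay small
        if s > h:
            return i
    return None
```

A sustained shift in the residuals (e.g. a drop in reproductive performance) accumulates quickly and triggers an alarm, while small random fluctuations keep the sum at zero.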
Cost-effectiveness of a smokeless tobacco control mass media campaign in India.
Murukutla, Nandita; Yan, Hongjin; Wang, Shuo; Negi, Nalin Singh; Kotov, Alexey; Mullin, Sandra; Goodchild, Mark
2017-08-10
Tobacco control mass media campaigns are cost-effective in reducing tobacco consumption in high-income countries, but similar evidence from low-income countries is limited. An evaluation of a 2009 smokeless tobacco control mass media campaign in India provided an opportunity to test its cost-effectiveness. Campaign evaluation data from a nationally representative household survey of 2898 smokeless tobacco users were compared with campaign costs in a standard cost-effectiveness methodology. Costs and effects of the Surgeon campaign were compared with the status quo to calculate the cost per campaign-attributable benefit, including quit attempts, permanent quits and tobacco-related deaths averted. Sensitivity analyses at varied CIs and tobacco-related mortality risk were conducted. The Surgeon campaign was found to be highly cost-effective. It successfully generated 17 259 148 additional quit attempts, 431 479 permanent quits and 120 814 deaths averted. The cost per benefit was US$0.06 per quit attempt, US$2.6 per permanent quit and US$9.2 per death averted. The campaign continued to be cost-effective in sensitivity analyses. This study suggests that tobacco control mass media campaigns can be cost-effective and economically justified in low-income and middle-income countries. It holds significant policy implications, calling for sustained investment in evidence-based mass media campaigns as part of a comprehensive tobacco control strategy. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Evaluations of alcohol consequences moderate social anxiety risk for problematic drinking.
Nitka, Danit; O'Connor, Roisin M
2017-02-01
The link between social anxiety (SA) and problematic drinking is complex; this seems predominantly true among young adults. Individuals high on SA are thought to be particularly sensitive to the negative effects of alcohol, which should deter them from drinking. Yet, some evidence suggests that those high on SA continue to drink despite experiencing negative alcohol-related consequences (NACs) (Morris, Stewart, & Ham, 2005). Although researchers have traditionally assumed that NACs are perceived as aversive, emerging evidence suggests they are not categorically viewed as negative by undergraduates. The study goal was to test whether evaluations of NACs moderate the effect of SA on problematic drinking. It was hypothesized that high SA would predict elevated alcohol use and number of NACs experienced, but only for those who evaluate NACs as less negative. Undergraduate drinkers (N=130, 80 women) completed self-reports of social anxiety, NAC evaluations (ratings of how 'bad' experiencing each alcohol-related consequence would be), alcohol use, and NACs experienced. Regression analyses revealed that NAC evaluations moderated the effect of SA on the number of NACs experienced, but not the effect of SA on weekly alcohol use. Simple slopes analyses showed that high SA was associated with elevated NACs experienced for those with weak negative NAC evaluations, controlling for alcohol use. These findings help explain the mixed SA-problematic drinking literature by identifying perceptions of NACs as an important moderator of SA risk for experiencing NACs. Moreover, clinical interventions aimed at reducing SA risk for undergraduate problematic drinking may benefit from targeting NAC evaluations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Inhomogeneous Forcing and Transient Climate Sensitivity
NASA Technical Reports Server (NTRS)
Shindell, Drew T.
2014-01-01
Understanding climate sensitivity is critical to projecting climate change in response to a given forcing scenario. Recent analyses have suggested that transient climate sensitivity is at the low end of the present model range, taking into account the reduced warming rates during the past 10-15 years, during which forcing has increased markedly. In contrast, comparisons of modelled feedback processes with observations indicate that the most realistic models have higher sensitivities. Here I analyse results from recent climate modelling intercomparison projects to demonstrate that transient climate sensitivity to historical aerosols and ozone is substantially greater than the transient climate sensitivity to CO2. This enhanced sensitivity is primarily caused by more of the forcing being located at Northern Hemisphere middle to high latitudes, where it triggers more rapid land responses and stronger feedbacks. I find that accounting for this enhancement largely reconciles the two sets of results, and I conclude that the lowest end of the range of transient climate response to CO2 in present models and assessments (less than 1.3 °C) is very unlikely.
Suzana, Shirly; Ninan, Marilyn M; Gowri, Mahasampath; Venkatesh, Krishnan; Rupali, Priscilla; Michael, Joy S
2016-03-01
The Xpert MTB/Rif, with a detection limit of 131 CFU/ml, plays a valuable role in the diagnosis of extrapulmonary tuberculosis, both drug-susceptible and drug-resistant. This study evaluated the Xpert MTB/Rif for this purpose at a tertiary care centre in south India, assessing it against both culture and a composite gold standard (CGS). We tested consecutive samples from patients suspected of extrapulmonary tuberculosis with Xpert MTB/Rif and evaluated its sensitivity and specificity against solid and/or liquid culture and the CGS. An individual analysis of different sample types (tissue biopsies, fluids, pus, lymph node biopsies and CSF), given an adequate sample size, against both culture and CGS, was also performed. In total, 494 samples were analysed against culture. Compared to culture, the sensitivity of Xpert MTB/Rif was 89% (95% CI 0.81-0.94) and its specificity was 74% (95% CI 0.70-0.78). When Xpert MTB/Rif was compared to the CGS, pooled sensitivity was 62% (95% CI 0.56-0.67) and specificity was 100% (95% CI 0.91-1.00). This assay performs better than the currently available conventional laboratory methods. The rapidity with which results are obtained is an added advantage, and its integration into a routine diagnostic protocol must be considered. © 2015 John Wiley & Sons Ltd.
Abdalla, G; Fawzi Matuk, R; Venugopal, V; Verde, F; Magnuson, T H; Schweitzer, M A; Steele, K E
2015-08-01
To search the literature for further evidence for the use of magnetic resonance venography (MRV) in the detection of suspected deep vein thrombosis (DVT) and to re-evaluate the accuracy of MRV for this indication. PubMed, EMBASE, Scopus, Cochrane, and Web of Science were searched. Study quality and the risk of bias were evaluated using the QUADAS 2. A random effects meta-analysis, including subgroup and sensitivity analyses, was performed. The search resulted in 23 observational studies, all from academic centres. Sixteen articles were included in the meta-analysis. The summary estimates for MRV as a diagnostic non-invasive tool revealed a sensitivity of 93% (95% confidence interval [CI]: 89% to 95%) and specificity of 96% (95% CI: 94% to 97%). The heterogeneity of the studies was high. Inconsistency (I²) for sensitivity and specificity was 80.7% and 77.9%, respectively. Further studies investigating the use of MRV in the detection of suspected DVT did not offer further evidence to support the replacement of ultrasound with MRV as the first-line investigation. However, MRV may offer an alternative tool in the detection/diagnosis of DVT for patients in whom ultrasound is inadequate or not feasible (such as obese patients). Copyright © 2015 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
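Random-effects pooling of per-study (logit-transformed) sensitivities along DerSimonian-Laird lines can be sketched as below; this is the generic textbook procedure, not the authors' exact analysis.

```python
import math

def dersimonian_laird(estimates, variances):
    """DerSimonian-Laird random-effects pooling of per-study estimates
    (e.g. logit-transformed sensitivities) with within-study variances.
    Returns (pooled_estimate, tau2), where tau2 is the estimated
    between-study variance."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    return pooled, tau2

def pooled_sensitivity(logit_sens, variances):
    """Back-transform the pooled logit to a proportion."""
    mu, _ = dersimonian_laird(logit_sens, variances)
    return 1.0 / (1.0 + math.exp(-mu))
```

The inconsistency statistic I² quoted above derives from the same Q: I² = max(0, (Q − df) / Q).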
Evaluation of the accuracy of the EasyTest™ malaria Pf/Pan Ag, a rapid diagnostic test, in Uganda.
Chong, Chom-Kyu; Cho, Pyo Yun; Na, Byoung-Kuk; Ahn, Seong Kyu; Kim, Jin Su; Lee, Jin-Soo; Lee, Sung-Keun; Han, Eun-Taek; Kim, Hak-Yong; Park, Yun-Kyu; Cha, Seok Ho; Kim, Tong-Soo
2014-10-01
In recent years, rapid diagnostic tests (RDTs) have been widely used for malaria detection, primarily because of their simple operation, fast results, and straightforward interpretation. The Asan EasyTest™ Malaria Pf/Pan Ag is one of the most commonly used malaria RDTs in several countries, including Korea and India. In this study, we tested the diagnostic performance of this RDT in Uganda to evaluate its usefulness for field diagnosis of malaria in this country. Microscopic and PCR analyses, and the Asan EasyTest™ Malaria Pf/Pan Ag rapid diagnostic test, were performed on blood samples from 185 individuals with suspected malaria in several villages in Uganda. Compared to the microscopic analysis, the sensitivity of the RDT to detect malaria infection was 95.8% and 83.3% for Plasmodium falciparum and non-P. falciparum, respectively. Although the diagnostic sensitivity of the RDT decreased when parasitemia was ≤500 parasites/µl, it showed 96.8% sensitivity (98.4% for P. falciparum and 93.8% for non-P. falciparum) in blood samples with parasitemia ≥100 parasites/µl. The specificity of the RDT was 97.3% for P. falciparum and 97.3% for non-P. falciparum. These results collectively suggest that the accuracy of the Asan EasyTest™ Malaria Pf/Pan Ag makes it an effective point-of-care diagnostic tool for malaria in Uganda.
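The accuracy figures reported for the RDT follow the standard 2×2 definitions, which a minimal helper makes explicit (the counts used in the test below are illustrative, not the study's data):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV of an index test (e.g. an RDT)
    against a reference standard such as microscopy, from 2x2 table counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Note that predictive values, unlike sensitivity and specificity, depend on the prevalence in the sampled population.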
Economic evaluation of floseal compared to nasal packing for the management of anterior epistaxis.
Le, Andre; Thavorn, Kednapa; Lasso, Andrea; Kilty, Shaun J
2018-01-04
To evaluate the cost-effectiveness of Floseal, a topically applied hemostatic agent, and nasal packing for the management of epistaxis in Canada. Outcomes research; a cost-utility analysis. We developed a Markov model to compare the costs and health outcomes of Floseal with nasal packing over a lifetime horizon from the perspective of a publicly funded healthcare system. A cycle length of 1 year was used. The efficacy of Floseal and packing was taken from the published literature. Unit costs were gathered from a hospital case costing system, whereas physician fees were extracted from the Ontario Schedule of Benefits for Physician Services. Results were expressed as an incremental cost per quality-adjusted life year (QALY) gained. A series of one-way sensitivity and probabilistic sensitivity analyses were performed. From the perspective of a publicly funded healthcare system, the Floseal treatment strategy was associated with higher costs ($2,067) and greater QALYs (0.27) than nasal packing. Our findings were highly sensitive to discount rates, the cost of Floseal, and the cost of nasal packing. The probabilistic sensitivity analysis suggested that the probability that Floseal treatment is cost-effective reached 99% if the willingness-to-pay threshold was greater than $120,000 per QALY gained. Prior studies have demonstrated Floseal to be an effective treatment for anterior epistaxis. In the Canadian healthcare system, Floseal treatment appears to be a cost-effective treatment option compared to nasal packing for anterior epistaxis. 2c Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
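The cohort Markov model described here can be sketched generically: propagate state occupancy through a transition matrix each cycle, accumulating discounted costs and QALYs. The states, probabilities and values in the test are toy numbers, not the study's inputs.

```python
def markov_cost_utility(trans, cost, utility, start, cycles, disc=0.03):
    """Cohort Markov model. trans[i][j] is the per-cycle probability of
    moving from state i to state j; cost/utility are per-cycle values per
    state; start is the initial state occupancy. Returns total discounted
    (cost, QALYs) over the given number of yearly cycles."""
    probs = list(start)
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t  # discount factor for cycle t
        total_cost += d * sum(p * c for p, c in zip(probs, cost))
        total_qaly += d * sum(p * u for p, u in zip(probs, utility))
        # advance the cohort one cycle
        probs = [sum(probs[i] * trans[i][j] for i in range(len(probs)))
                 for j in range(len(probs))]
    return total_cost, total_qaly
```

Running the model once per strategy and differencing the outputs gives the incremental cost per QALY gained.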
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
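The error-propagation idea can be illustrated with a toy two-step rate model; the barrier heights, temperature scale and error bounds below are arbitrary stand-ins for the DFT setting, chosen only to show how bounded energy errors blow up into orders of magnitude in the TOF.

```python
import math
import random

def log10_tof(de1, de2, kT=0.05):
    """Toy turnover model: two sequential steps with error-shifted activation
    barriers (in eV); the slower step controls the overall rate."""
    r1 = math.exp(-(0.8 + de1) / kT)
    r2 = math.exp(-(0.7 + de2) / kT)
    return math.log10(min(r1, r2))

def propagate(n=5000, bound=0.2, seed=0):
    """Sample barrier errors uniformly in [-bound, bound] (a maximum-entropy
    choice given only bounds) and report the spread (max - min) of log10(TOF),
    i.e. the uncertainty in orders of magnitude."""
    rng = random.Random(seed)
    vals = [log10_tof(rng.uniform(-bound, bound), rng.uniform(-bound, bound))
            for _ in range(n)]
    return max(vals) - min(vals)
```

Even with modest ±0.2 eV bounds, the exponential dependence yields a spread of several orders of magnitude, mirroring the behaviour discussed above; a real analysis would replace the random sampling with adaptive sparse-grid quadrature.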
Haubro, M; Stougaard, C; Torfing, T; Overgaard, S
2015-08-01
To estimate the sensitivity and specificity of CT and MRI examinations in patients with fractures of the proximal femur, and to determine the interobserver agreement of the modalities among a senior consulting radiologist, a resident in radiology and a resident in orthopaedic surgery. 67 patients (27 males, 40 females, mean age 80.5) seen in the emergency room with hip pain after a fall, inability to stand and a primary X-ray without fracture were evaluated with both CT and MRI. The images were analysed by a senior consulting musculoskeletal radiologist, a resident in radiology and a resident in orthopaedic surgery. Sensitivity and specificity were estimated with MRI as the gold standard. The kappa value was used to assess the level of agreement for both MRI and CT findings. 15 fractures of the proximal femur were found (7 intertrochanteric, 3 femoral neck and 5 fractures of the greater trochanter). Two fractures were not identified by CT and four changed fracture location. Among those, three patients underwent surgery. Sensitivity of CT was 0.87; 95% CI [0.60; 0.98]. Kappa values for interobserver agreement for CT were 0.46; 95% CI [0.23; 0.76] and 0.67; 95% CI [0.42; 0.90]; for MRI, 0.67; 95% CI [0.43; 0.91] and 0.69; 95% CI [0.45; 0.92]. MRI was observed to have a higher diagnostic accuracy than CT in detecting occult fractures of the hip. Interobserver analysis showed high kappa values, corresponding to substantial agreement, for both CT and MRI. Copyright © 2015 Elsevier Ltd. All rights reserved.
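Interobserver agreement of the kind quoted above is typically Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal two-rater version (the ratings in the test are illustrative):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' categorical calls on the same cases
    (e.g. fracture / no fracture on the same images). Undefined when
    chance agreement is exactly 1 (both raters use a single category)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    cats = set(a) | set(b)
    po = sum(1 for x, y in zip(a, b) if x == y) / n      # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)
```

Values of 0.41-0.60 are conventionally read as moderate agreement and 0.61-0.80 as substantial, which is the scale implied by the abstract's interpretation.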
Prevalence of cold sensitivity in patients with hand pathology.
Novak, Christine B; McCabe, Steven J
2015-06-01
The purpose of this study was to evaluate the prevalence of cold sensitivity in patients with hand- and wrist-related diagnoses. We included English-speaking adults who were more than 1 month past a hand injury or the onset of symptoms. Patients were asked whether exposure to cold air or water provoked cold-related symptoms and to rank symptom severity (scale 0-10). Statistical analyses evaluated the relationships between cold sensitivity and independent variables (age, gender, history of trauma, and time from injury/symptoms). There were 197 patients (mean age 49 ± 16 years): 98 trauma and 99 non-trauma cases. Cold-induced symptoms were reported by 34%, with 10% reporting severe symptoms. Exposure to cold air was the most common trigger; the mean severity score was 6.7 ± 2.2. Those with traumatic injuries reported significantly more cold-induced symptoms than those with non-trauma diagnoses (p = .04). Using backward linear regression, the significant predictors of cold symptom severity were trauma (p = .004) and time since onset (p = .003). Including only the trauma patients in the regression model, the significant predictor was time since injury (p = .005). Cold-induced symptoms are reported by more than 30% of patients with hand-related diagnoses, and exposure to cold air was the most commonly reported trigger. The significant predictors of cold-induced symptoms are traumatic injuries and longer time from injury. This study provides evidence of the common problem of cold sensitivity in patients with hand pathology. Prognostic Level II.
Xu, Hongyan; Li, Caixia; Suklai, Pacharaporn; Zeng, Qinghua; Chong, Raymond; Gong, Zhiyuan
2018-02-01
It has been well documented that there are species differences in sensitivity to dioxin-like compounds (DLCs) among mammals and birds. However, this issue is still unclear in fish. This study aimed at evaluating the differential sensitivities to DLCs in fish larvae. Here, larvae of Tg(cyp1a:gfp) medaka and Tg(cyp1a:gfp) zebrafish were tested with 2,3,7,8-tetrachlorodibenzodioxin (TCDD), polychlorinated biphenyl 126 (PCB 126) and 2,3,4,7,8-pentachlorodibenzofuran (PeCDF). Comparative analyses were performed on induction of GFP fluorescence, expression of endogenous cyp1a mRNA and EROD activity between the two species after exposure to these chemicals. We found that PCB 126 and PeCDF exposure at high concentrations induced strong GFP expression in multiple organs (liver, head kidney and gut) in both medaka and zebrafish larvae. Moreover, the expression of endogenous cyp1a mRNA was significantly elevated in zebrafish larvae exposed to TCDD, PCB 126 and PeCDF at different concentrations. Likewise, almost all the exposure conditions caused prominent elevation of EROD activity in zebrafish larvae, whereas EROD activities were only slightly elevated in medaka larvae exposed to 1 nM and 0.5 nM TCDD as well as to 1.5 nM and 15 nM PeCDF, but not in medaka larvae exposed to PCB 126. Taken together, zebrafish proved to be more sensitive than medaka to PCB 126 and PeCDF in this study. The findings suggest species-specific sensitivity to DLCs in fish and will facilitate the choice of a sensitive and reliable fish model to evaluate the risk of exposure to dioxins and DLCs. Copyright © 2017 Elsevier Ltd. All rights reserved.
Blázquez-Pérez, Antonio; San Miguel, Ramón; Mar, Javier
2013-10-01
Chronic hepatitis C is the leading cause of chronic liver disease, representing a significant burden in terms of morbidity, mortality and costs. A new scenario of therapy for hepatitis C virus (HCV) genotype 1 infection is being established with the approval of two effective HCV protease inhibitors (PIs) in combination with the standard of care (SOC), peginterferon and ribavirin. Our objective was to estimate the cost effectiveness of combination therapy with new PIs (boceprevir and telaprevir) plus peginterferon and ribavirin versus SOC in treatment-naive patients with HCV genotype 1 according to data obtained from clinical trials (CTs). A Markov model simulating chronic HCV progression was used to estimate disease treatment costs and effects over patients' lifetimes, in the Spanish national public healthcare system. The target population was treatment-naive patients with chronic HCV genotype 1, demographic characteristics for whom were obtained from the published pivotal CTs SPRINT and ADVANCE. Three options were analysed for each PI based on results from the two CTs: universal triple therapy, interleukin (IL)-28B-guided therapy and dual therapy with peginterferon and ribavirin. A univariate sensitivity analysis was performed to evaluate the uncertainty of certain parameters: age at start of treatment, transition probabilities, drug costs, CT efficacy results and a higher hazard ratio for all-cause mortality for patients with chronic HCV. Probabilistic sensitivity analyses were also carried out. Incremental cost-effectiveness ratios (ICERs), expressed in year-2012 euros per quality-adjusted life-year (QALY) gained, were used as outcome measures. 
According to the base-case analysis, using dual therapy as the comparator, the alternative IL28B-guided therapy presents a more favorable ICER (€18,079/QALY for boceprevir and €25,914/QALY for telaprevir) than the universal triple therapy option (€27,594/QALY for boceprevir and €33,751/QALY for telaprevir), with an ICER clearly below the efficiency threshold for medical interventions in the Spanish setting. Sensitivity analysis showed that age at the beginning of treatment was an important factor that influenced the ICER. A potential reduction in PI costs would also clearly improve the ICER, and transition probabilities influenced the results, but to a lesser extent. Probabilistic sensitivity analyses showed that 95 % of the simulations presented an ICER below €40,000/QALY. Post hoc estimations of sustained virological responses of the IL28B-guided therapeutic option represented a limitation of the study. The therapeutic options analysed for the base-case cohort can be considered cost-effective interventions for the Spanish healthcare framework. Sensitivity analysis estimated an acceptability threshold of the IL28B-guided strategy of patients younger than 60 years.
ERIC Educational Resources Information Center
Anthony, Jason L.; Lonigan, Christopher J.; Burgess, Stephen R.; Driscoll, Kimberly; Phillips, Beth M.; Cantor, Brenlee G.
2002-01-01
This study examined relations among sensitivity to words, syllables, rhymes, and phonemes in older and younger preschoolers. Confirmatory factor analyses found that a one-factor model best explained the data from both groups of children. Only variance common to all phonological sensitivity skills was related to print knowledge and rudimentary…
Using economic evaluations to make formulary coverage decisions. So much for guidelines.
Anis, A H; Gagnon, Y
2000-07-01
It is mandatory for drug manufacturers requesting formulary inclusion under the British Columbia (BC) provincial drug plan to submit a pharmacoeconomic analysis according to published guidelines. These submissions are reviewed by the Pharmacoeconomic Initiative (PI) of BC. To assess the compliance of submitted studies with specific criteria outlined in the guidelines, to assess the methodological quality of individual submissions, and to demonstrate the importance of submitting guidelines-compliant pharmacoeconomic analyses. All submissions between January 1996 and April 1999 assessed by the PI of BC were included. Submissions were reviewed according to a checklist to establish compliance with respect to choice of comparator drug, study perspective, sensitivity analysis, analytical horizon and discounting. Submissions were examined for association between analytical technique and author, and between source of submission and compliance. Association between compliance and recommendation for approval was also examined. 95 applications were reviewed. Seven submitted no analyses. There were 25 cost-comparison/consequence, 14 cost-effectiveness, 11 cost-minimisation, 9 cost-utility/benefit and 29 budget-impact analyses. 65 of these 88 submissions failed to comply with guidelines. Of these, 45% used an inappropriate comparator drug, 61% lacked a sensitivity analysis, 73% used a third-party payer and excluded a societal perspective, 66% did not provide a long term evaluation and 25% did not specify any time horizon. 80% of noncompliant studies were cost-comparison/consequence or budget-impact analyses (p < 0.001, Fisher's Exact). Of 25 cost-comparison/consequence and 29 budget-impact analyses, 19 (76%) and 24 (83%), respectively, were industry-conducted, whereas cost-effectiveness (11 of 14) and cost-utility/benefit (6 of 9) analyses were mostly subcontracted to private consultants or academics (p < 0.001, Fisher's Exact). 
74% of all submissions (compliant and noncompliant) were not recommended by the PI for listing as a provincial drug plan benefit, 16% received approval for restricted benefit and 9% were recommended as full benefit. 80% of the noncompliant submissions were not recommended (p = 0.06, Fisher's Exact test). Moreover, a strong association between type of analysis and type of recommendation was found (p = 0.03, Fisher's Exact test). Cost-comparison/consequence and budget-impact analyses were less likely to be recommended. IMPLICATIONS OF FINDINGS: Our findings show poor compliance with guidelines, especially among industry-conducted studies. Possible explanations are lack of expertise in pharmacoeconomics and/or scepticism regarding the importance of guidelines and submission quality in decision making. As corroborated by the strong associations between type of recommendation and compliance, and between type of recommendation and type of analysis, these 2 characteristics have a significant impact on decision making.
BEATBOX v1.0: Background Error Analysis Testbed with Box Models
NASA Astrophysics Data System (ADS)
Knote, Christoph; Barré, Jérôme; Eckl, Max
2018-02-01
The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
NASA Astrophysics Data System (ADS)
Lash, E. Lara; Schmisseur, John
2017-11-01
Pressure-sensitive paint has been used to evaluate the unsteady dynamics of transitional and turbulent shock wave-boundary layer interactions generated by a vertical cylinder on a flat plate in a Mach 2 freestream. The resulting shock structure consists of an inviscid bow shock that bifurcates into a separation shock and trailing shock. The primary features of interest are the separation shock and an upstream influence shock that is intermittently present in transitional boundary layer interactions, but not observed in turbulent interactions. The power spectral densities, frequency peaks, and normalized wall pressures are analyzed as the incoming boundary layer state changes from transitional to fully turbulent, comparing both centerline and outboard regions of the interaction. The present study compares the scales and frequencies of the dynamics of the separation shock structure in different boundary layer regimes. Synchronized high-speed Schlieren imaging provides quantitative statistical analyses as well as qualitative comparisons to the fast-response pressure sensitive paint measurements. Materials based on research supported by the U.S. Office of Naval Research under Award Number N00014-15-1-2269.
Zokaei, Maryam; Abedi, Abdol-Samad; Kamankesh, Marzieh; Shojaee-Aliababadi, Saeedeh; Mohammadi, Abdorreza
2017-11-01
In this research, for the first time, we successfully developed ultrasonic-assisted extraction and dispersive liquid-liquid microextraction combined with gas chromatography-mass spectrometry as a new, fast and highly sensitive method for determining acrylamide in potato chip samples. Xanthydrol was used as a derivatization reagent, and the parameters affecting the derivatization and microextraction steps were studied and optimized. Under optimum conditions, the calibration curves showed high linearity (R² > 0.9993) for acrylamide in the range of 2–500 ng mL⁻¹. The relative standard deviation (RSD) for seven analyses was 6.8%. The limit of detection (LOD) and limit of quantification (LOQ) were 0.6 ng g⁻¹ and 2 ng g⁻¹, respectively. The UAE-DLLME-GC-MS method demonstrated high sensitivity and good linearity, recovery, and enrichment factor. The performance of the new method was evaluated for the determination of acrylamide in various types of chip samples, and satisfactory results were obtained.
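Figures of merit like these come from a linear calibration fit. A hedged sketch using generic ICH-style formulas (LOD = 3.3σ/S, LOQ = 10σ/S, with σ the residual standard deviation and S the slope); the calibration points below are fabricated and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration data: concentration (ng/mL) vs. detector response.
conc = np.array([2, 10, 50, 100, 250, 500], dtype=float)
resp = np.array([4.1, 20.3, 99.8, 201.0, 499.2, 1000.5])

slope, intercept = np.polyfit(conc, resp, 1)
pred = slope * conc + intercept

# Coefficient of determination for the linear fit.
ss_res = np.sum((resp - pred) ** 2)
ss_tot = np.sum((resp - resp.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# ICH-style detection/quantification limits from the residual standard deviation.
sigma = np.sqrt(ss_res / (len(conc) - 2))
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

print(round(r2, 4), round(lod, 2), round(loq, 2))
```

Note the published LOD/LOQ are expressed per gram of sample, so an extraction/preconcentration factor would enter before the comparison; the sketch stops at the instrumental calibration.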
Otsuka, Ayano; Takaesu, Yoshikazu; Sato, Mitsuhiko; Masuya, Jiro; Ichiki, Masahiko; Kusumi, Ichiro; Inoue, Takeshi
2017-01-01
Recent studies have suggested that multiple factors interact with the onset and prognosis of major depressive disorders. In this study, we investigated how child abuse, affective temperaments, and interpersonal sensitivity are interrelated, and how they affect depressive symptoms in the general adult population. A total of 415 volunteers from the general adult population completed the Patient Health Questionnaire-9, the Temperament Evaluation of Memphis, Pisa, Paris, and San Diego-Autoquestionnaire version, the Child Abuse and Trauma Scale, and the Interpersonal Sensitivity Measure, which are all self-administered questionnaires. Data were subjected to structural equation modeling (Mplus), and single and multiple regression analyses. The effect of child abuse on depressive symptoms was mediated by interpersonal sensitivity and 4 affective temperaments, including depressive, cyclothymic, anxious, and irritable temperaments. In addition, the effect of these temperaments on depressive symptoms was mediated by interpersonal sensitivity, indicating the indirect enhancement of depressive symptoms. In contrast to these 4 temperaments, the hyperthymic temperament did not mediate the effect of child abuse on depressive symptoms; its effect was not mediated by interpersonal sensitivity. However, a greater hyperthymic temperament predicted decreased depressive symptoms and interpersonal sensitivity, independent of any mediation effect. Because this is a cross-sectional study, long-term prospective studies are necessary to confirm its findings. Therefore, recall bias should be considered when interpreting the results. As the subjects were adults from the general population, the results may not be generalizable towards all patients with major depression. This study suggests that child abuse and affective temperaments affect depressive symptoms partly through interpersonal sensitivity. 
Interpersonal sensitivity may have a major role in forming the link between abuse, affective temperament, and depression.
Cost-effectiveness of prucalopride in the treatment of chronic constipation in the Netherlands
Nuijten, Mark J. C.; Dubois, Dominique J.; Joseph, Alain; Annemans, Lieven
2015-01-01
Objective: To assess the cost-effectiveness of prucalopride vs. continued laxative treatment for chronic constipation in patients in the Netherlands in whom laxatives have failed to provide adequate relief. Methods: A Markov model was developed to estimate the cost-effectiveness of prucalopride in patients with chronic constipation receiving standard laxative treatment from the perspective of Dutch payers in 2011. Data sources included published prucalopride clinical trials, published Dutch price/tariff lists, and national population statistics. The model simulated the clinical and economic outcomes associated with prucalopride vs. standard treatment and had a cycle length of 1 month and a follow-up time of 1 year. Response to treatment was defined as the proportion of patients who achieved “normal bowel function”. One-way and probabilistic sensitivity analyses were conducted to test the robustness of the base case. Results: In the base case analysis, the cost of prucalopride relative to continued laxative treatment was €9015 per quality-adjusted life-year (QALY). Extensive sensitivity and scenario analyses confirmed that the base case cost-effectiveness estimate was robust. One-way sensitivity analyses showed that the model was most sensitive to the response to prucalopride; incremental cost-effectiveness ratios ranged from €6475 to €15,380 per QALY. Probabilistic sensitivity analyses indicated that there is a greater than 80% probability that prucalopride would be cost-effective compared with continued standard treatment, assuming a willingness-to-pay threshold of €20,000 per QALY from a Dutch societal perspective. A scenario analysis was performed for women only, which resulted in a cost-effectiveness ratio of €7773 per QALY. Conclusion: Prucalopride was cost-effective in a Dutch patient population, as well as in a women-only subgroup, who had chronic constipation and obtained inadequate relief from laxatives. PMID:25926794
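A Markov cohort model of the kind described iterates state occupancies over monthly cycles and accumulates costs and QALYs per arm, from which the incremental cost-effectiveness ratio (ICER) follows. The sketch below is illustrative only: every transition probability, cost, and utility is invented and none comes from the published model.

```python
# Hypothetical two-state (responder / non-responder) monthly Markov model.
cycles = 12                                              # 1-year horizon
p_response = {"prucalopride": 0.25, "laxative": 0.10}    # per-cycle response prob.
p_relapse = 0.05                                         # responders relapsing per cycle
cost_cycle = {"prucalopride": 90.0, "laxative": 15.0}    # drug cost per cycle (EUR)
utility = {"response": 0.85 / 12, "no_response": 0.70 / 12}  # QALYs per month

def run_arm(treatment):
    responders, non_responders = 0.0, 1.0
    cost = qaly = 0.0
    for _ in range(cycles):
        new_resp = non_responders * p_response[treatment]
        relapsed = responders * p_relapse
        responders += new_resp - relapsed
        non_responders += relapsed - new_resp
        cost += cost_cycle[treatment]
        qaly += responders * utility["response"] + non_responders * utility["no_response"]
    return cost, qaly

c_p, q_p = run_arm("prucalopride")
c_l, q_l = run_arm("laxative")
icer = (c_p - c_l) / (q_p - q_l)   # incremental cost per QALY gained
print(round(icer))
```

The one-way sensitivity analyses in the abstract amount to re-running such a model while varying one input (e.g. `p_response["prucalopride"]`) across its plausible range and recording how the ICER moves.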
Analytical characteristics of a continuum-source tungsten coil atomic absorption spectrometer.
Rust, Jennifer A; Nóbrega, Joaquim A; Calloway, Clifton P; Jones, Bradley T
2005-08-01
A continuum-source tungsten coil electrothermal atomic absorption spectrometer has been assembled, evaluated, and employed in four different applications. The instrument consists of a xenon arc lamp light source, a tungsten coil atomizer, a Czerny-Turner high resolution monochromator, and a linear photodiode array detector. This instrument provides simultaneous multi-element analyses across a 4 nm spectral window with a resolution of 0.024 nm. Such a device might be useful in many different types of analyses. To demonstrate this broad appeal, four very different applications have been evaluated. First, the temperature of the gas phase was measured during the atomization cycle of the tungsten coil, using tin as a thermometric element. Secondly, a summation approach for two absorption lines for aluminum falling within the same spectral window (305.5-309.5 nm) was evaluated. This approach improves the sensitivity without requiring any additional preconcentration steps. The third application describes a background subtraction technique, as applied to the analysis of an oil emulsion sample. Finally, interference effects caused by Na on the atomization of Pb were studied. The simultaneous measurement of Pb and Na suggests that the negative interference arises at least partially from competition between Pb and Na atoms for H2 in the gas phase.
Loganathan, Rajprasad; Bilgen, Mehmet; Al-Hafez, Baraa; Alenezy, Mohammed D; Smirnova, Irina V
2006-04-04
Diabetes is a major risk factor for cardiovascular disease. In particular, type 1 diabetes compromises the cardiac function of individuals at a relatively early age due to the protracted course of abnormal glucose homeostasis. The functional abnormalities of diabetic myocardium have been attributed to the pathological changes of diabetic cardiomyopathy. In this study, we used high field magnetic resonance imaging (MRI) to evaluate the left ventricular functional characteristics of streptozotocin treated diabetic Sprague-Dawley rats (8 weeks disease duration) in comparison with age/sex matched controls. Our analyses of EKG gated cardiac MRI scans of the left ventricle showed a 28% decrease in the end-diastolic volume and 10% increase in the end-systolic volume of diabetic hearts compared to controls. Mean stroke volume and ejection fraction in diabetic rats were decreased (48% and 28%, respectively) compared to controls. Further, dV/dt changes were suggestive of phase sensitive differences in left ventricular kinetics across the cardiac cycle between diabetic and control rats. Thus, the MRI analyses of diabetic left ventricle suggest impairment of diastolic and systolic hemodynamics in this rat model of diabetic cardiomyopathy. Our studies also show that in vivo MRI could be used in the evaluation of cardiac dysfunction in this rat model of type 1 diabetes.
Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used.
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.
McCurry, Matthew R.; Clausen, Phillip D.; McHenry, Colin R.
2013-01-01
Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny in regards to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be ‘reasonable’ are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. 
Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results. PMID:24255817
Performance of vegetation indices from Landsat time series in deforestation monitoring
NASA Astrophysics Data System (ADS)
Schultz, Michael; Clevers, Jan G. P. W.; Carter, Sarah; Verbesselt, Jan; Avitabile, Valerio; Quang, Hien Vu; Herold, Martin
2016-10-01
The performance of Landsat time series (LTS) of eight vegetation indices (VIs) was assessed for monitoring deforestation across the tropics. Three sites were selected based on differing remote sensing observation frequencies, deforestation drivers and environmental factors. The LTS of each VI was analysed using the Breaks For Additive Season and Trend (BFAST) Monitor method to identify deforestation. A robust reference database was used to evaluate the performance regarding spatial accuracy, sensitivity to observation frequency and combined use of multiple VIs. The canopy cover sensitive Normalized Difference Fraction Index (NDFI) was the most accurate. Among those tested, wetness related VIs (Normalized Difference Moisture Index (NDMI) and the Tasselled Cap wetness (TCw)) were spatially more accurate than greenness related VIs (Normalized Difference Vegetation Index (NDVI) and Tasselled Cap greenness (TCg)). When VIs were fused on feature level, spatial accuracy was improved and overestimation of change reduced. NDVI and NDFI produced the most robust results when observation frequency varies.
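The greenness- and wetness-related indices compared above are all normalized differences of two Landsat bands. A minimal sketch (the reflectance values are hypothetical; NDFI, which requires spectral-unmixing fractions rather than raw bands, is omitted):

```python
def normalized_difference(a, b):
    """Generic normalized-difference index: (a - b) / (a + b)."""
    return (a - b) / (a + b)

# Hypothetical Landsat surface reflectances for one forested pixel.
red, nir, swir1 = 0.06, 0.45, 0.20

ndvi = normalized_difference(nir, red)    # greenness-related (NDVI)
ndmi = normalized_difference(nir, swir1)  # wetness-related (NDMI)
print(round(ndvi, 3), round(ndmi, 3))     # → 0.765 0.385
```

In a BFAST Monitor workflow, a time series of such index values per pixel is what gets tested for structural breaks indicating deforestation.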
NASA Technical Reports Server (NTRS)
Burns, Lee; Merry, Carl; Decker, Ryan; Harrington, Brian
2008-01-01
The 2006 Cape Canaveral Air Force Station (CCAFS) Range Reference Atmosphere (RRA) is a statistical model summarizing the wind and thermodynamic atmospheric variability from the surface to 70 km. Launches of the National Aeronautics and Space Administration's (NASA) Space Shuttle from Kennedy Space Center utilize CCAFS RRA data to evaluate environmental constraints on various aspects of the vehicle during ascent. An update to the CCAFS RRA was recently completed. As part of the update, a validation study of the 2006 version was conducted, as well as a comparison of the 2006 version to the existing 1983 CCAFS RRA database. Assessments of the Space Shuttle vehicle ascent profile characteristics were performed to determine the impacts of the updated model on vehicle performance. Details on the model updates and the vehicle sensitivity analyses with the updated model are presented.
Additional EIPC Study Analysis: Interim Report on High Priority Topics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hadley, Stanton W
Between 2010 and 2012 the Eastern Interconnection Planning Collaborative (EIPC) conducted a major long-term resource and transmission study of the Eastern Interconnection (EI). With guidance from a Stakeholder Steering Committee (SSC) that included representatives from the Eastern Interconnection States Planning Council (EISPC) among others, the project was conducted in two phases. Phase 1 involved a long-term capacity expansion analysis that involved creation of eight major futures plus 72 sensitivities. Three scenarios were selected for more extensive transmission-focused evaluation in Phase 2. Five power flow analyses, nine production cost model runs (including six sensitivities), and three capital cost estimations were developed during this second phase. The results from Phase 1 and 2 provided a wealth of data that could be examined further to address energy-related questions. A list of 13 topics was developed for further analysis; this paper discusses the first five.
Persson, Roger; Høgh, Annie; Grynderup, Matias Brødsgaard; Willert, Morten Vejs; Gullander, Maria; Hansen, Åse Marie; Kolstad, Henrik Albert; Mors, Ole; Mikkelsen, Eva Gemzøe; Kristensen, Ann Suhl; Kaerlev, Linda; Rugulies, Reiner; Bonde, Jens Peter Ellekilde
2016-09-01
To examine whether a shift in work-related bullying status, from being non-bullied to being bullied or vice versa, was associated with changes in reporting of personality characteristics. Data on bullying and personality (neuroticism, extraversion, and sense of coherence) were collected in three waves approximately 2 years apart (N = 4947). Using a within-subjects design, personality change scores that followed altered bullying status were evaluated with one-sample t tests. Sensitivity analyses targeted depressive symptoms. Shifts from non-bullied to frequently bullied were associated with increased neuroticism or decreased sense of coherence manageability scores. Shifts from bullied to non-bullied were associated with decreasing neuroticism and increasing extraversion scores, or increasing sense of coherence meaningfulness and comprehensibility scores. Excluding depressive cases had minor effects. Bullying seems to some extent to affect personality scale scores, which thus seem sensitive to environmental and social circumstances.
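The within-subjects evaluation described, one-sample t tests on personality change scores, can be sketched as follows; the change scores below are fabricated for illustration and are not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical neuroticism change scores for participants whose bullying
# status shifted from non-bullied to frequently bullied (positive = increase).
change = np.array([0.5, 1.2, -0.3, 0.8, 1.5, 0.9, 0.2, 1.1, 0.7, 1.4])

# Within-subjects test: is the mean change score different from zero?
t, p = stats.ttest_1samp(change, popmean=0.0)
print(round(float(t), 2), round(float(p), 4))
```

The sensitivity analyses in the abstract correspond to repeating such tests after excluding participants who screened positive for depressive symptoms.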
NASA Astrophysics Data System (ADS)
Wagener, Thorsten; Pianosi, Francesca
2016-04-01
Sensitivity Analysis (SA) investigates how the variation in the output of a numerical model can be attributed to variations of its input factors. SA is increasingly being used in earth and environmental modelling for a variety of purposes, including uncertainty assessment, model calibration and diagnostic evaluation, dominant control analysis and robust decision-making. Here we provide some practical advice regarding best practice in SA and discuss important open questions based on a detailed recent review of the existing body of work in SA. Open questions relate to the consideration of input factor interactions, methods for factor mapping and the formal inclusion of discrete factors in SA (for example for model structure comparison). We will analyse these questions using relevant examples and discuss possible ways forward. We aim at stimulating the discussion within the community of SA developers and users regarding the setting of good practices and on defining priorities for future research.
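One widely used way to attribute output variance to input factors, including their interactions, is variance-based (Sobol) sensitivity analysis. A self-contained sketch using the Saltelli first-order estimator on the standard Ishigami test function; the sample size and the test function are illustrative choices, not taken from the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def ishigami(x, a=7.0, b=0.1):
    # Standard SA benchmark with known strong (x1, x2) and weak (x3) inputs.
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1]) ** 2 + b * x[:, 2] ** 4 * np.sin(x[:, 0])

d, n = 3, 20000
A = rng.uniform(-np.pi, np.pi, (n, d))   # two independent sample matrices
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))

# Saltelli estimator of first-order Sobol indices: for each factor i,
# replace column i of A with that of B and correlate the outputs.
S1 = []
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]
    S1.append(np.mean(fB * (ishigami(AB) - fA)) / var)

print([round(s, 2) for s in S1])
```

For the Ishigami function the analytical first-order indices are roughly 0.31, 0.44, and 0, so the estimate correctly ranks x2 above x1 and flags x3 as influential only through interactions (its total-order index is nonzero), which is exactly the interaction question the abstract raises.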
Techno-economic assessment of pellets produced from steam pretreated biomass feedstock
Shahrukh, Hassan; Oyedun, Adetoyese Olajire; Kumar, Amit; ...
2016-03-10
Minimum production cost and optimum plant size are determined for pellet plants for three types of biomass feedstock: forest residue, agricultural residue, and energy crops. The life cycle cost from harvesting to the delivery of the pellets to the co-firing facility is evaluated. The cost varies from 95 to 105 t⁻¹ for regular pellets and 146–156 t⁻¹ for steam pretreated pellets. The difference in the cost of producing regular and steam pretreated pellets per unit energy is in the range of 2–3 GJ⁻¹. The economic optimum plant size (i.e., the size at which pellet production cost is minimum) is found to be 190 kt for regular pellet production and 250 kt for steam pretreated pellets. Furthermore, sensitivity and uncertainty analyses were carried out to identify sensitive parameters and effects of model error.
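An economic optimum plant size exists because unit capital cost falls with scale while unit biomass transport cost rises with the collection radius. A toy sketch of that trade-off (all cost coefficients, the scale exponent, and the size range are invented, not the paper's data):

```python
import numpy as np

# Hypothetical per-tonne cost components versus plant capacity (kt/yr):
# capital cost falls with size (scale factor ~0.7), transport cost rises
# with size because the collection radius grows roughly as sqrt(area).
sizes = np.linspace(50, 500, 451)

capital = 40.0 * (sizes / 100.0) ** (0.7 - 1.0)   # economies of scale
transport = 20.0 * np.sqrt(sizes / 100.0)         # growing draw radius
fixed_ops = 30.0                                  # size-independent operations

total = capital + transport + fixed_ops
opt = sizes[np.argmin(total)]
print(round(float(opt)), round(float(total.min()), 1))
```

With these made-up coefficients the minimum falls in the interior of the range, mirroring how the study locates 190 kt and 250 kt optima for the two pellet types.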
Micropatterned comet assay enables high throughput and sensitive DNA damage quantification
Ge, Jing; Chow, Danielle N.; Fessler, Jessica L.; Weingeist, David M.; Wood, David K.; Engelward, Bevin P.
2015-01-01
The single cell gel electrophoresis assay, also known as the comet assay, is a versatile method for measuring many classes of DNA damage, including base damage, abasic sites, single strand breaks and double strand breaks. However, limited throughput and difficulties with reproducibility have restricted its utility, particularly for clinical and epidemiological studies. To address these limitations, we created a microarray comet assay. The use of a micrometer scale array of cells increases the number of analysable comets per square centimetre and enables automated imaging and analysis. In addition, the platform is compatible with standard 24- and 96-well plate formats. Here, we have assessed the consistency and sensitivity of the microarray comet assay. We showed that the linear detection range for H2O2-induced DNA damage in human lymphoblastoid cells is between 30 and 100 μM, and that within this range, the inter-sample coefficient of variation was between 5 and 10%. Importantly, only 20 comets were required to detect a statistically significant induction of DNA damage for doses within the linear range. We also evaluated sample-to-sample and experiment-to-experiment variation and found that for both conditions, the coefficient of variation was lower than what has been reported for the traditional comet assay. Finally, we also show that the assay can be performed using a 4× objective (rather than the standard 10× objective for the traditional assay). This adjustment, combined with the microarray format, makes it possible to capture more than 50 analysable comets in a single image, which can then be automatically analysed using in-house software. Overall, throughput is increased more than 100-fold compared to the traditional assay. Together, the results presented here demonstrate key advances in comet assay technology that improve the throughput, sensitivity, and robustness, thus enabling larger scale clinical and epidemiological studies. PMID:25527723
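The inter-sample variability statistic quoted above is a percent coefficient of variation across replicates. A minimal sketch with fabricated replicate values (not the study's measurements):

```python
import numpy as np

# Hypothetical % tail-DNA means from replicate micropatterned wells
# at one H2O2 dose within the assay's 30-100 uM linear range.
replicates = np.array([32.1, 30.8, 33.5, 31.2, 30.4])

# Percent coefficient of variation: sample SD relative to the mean.
cv = replicates.std(ddof=1) / replicates.mean() * 100.0
print(round(float(cv), 1))   # → 3.9
```

A value in this range would sit below the 5-10% band reported for the microarray format, and well below typical figures for the traditional assay.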
Micropatterned comet assay enables high throughput and sensitive DNA damage quantification.
Ge, Jing; Chow, Danielle N; Fessler, Jessica L; Weingeist, David M; Wood, David K; Engelward, Bevin P
2015-01-01
The single cell gel electrophoresis assay, also known as the comet assay, is a versatile method for measuring many classes of DNA damage, including base damage, abasic sites, single strand breaks and double strand breaks. However, limited throughput and difficulties with reproducibility have restricted its utility, particularly for clinical and epidemiological studies. To address these limitations, we created a microarray comet assay. The use of a micrometer scale array of cells increases the number of analysable comets per square centimetre and enables automated imaging and analysis. In addition, the platform is compatible with standard 24- and 96-well plate formats. Here, we have assessed the consistency and sensitivity of the microarray comet assay. We showed that the linear detection range for H2O2-induced DNA damage in human lymphoblastoid cells is between 30 and 100 μM, and that within this range, the inter-sample coefficient of variation was between 5 and 10%. Importantly, only 20 comets were required to detect a statistically significant induction of DNA damage for doses within the linear range. We also evaluated sample-to-sample and experiment-to-experiment variation and found that for both conditions, the coefficient of variation was lower than what has been reported for the traditional comet assay. Finally, we also show that the assay can be performed using a 4× objective (rather than the standard 10× objective for the traditional assay). This adjustment, combined with the microarray format, makes it possible to capture more than 50 analysable comets in a single image, which can then be automatically analysed using in-house software. Overall, throughput is increased more than 100-fold compared to the traditional assay. Together, the results presented here demonstrate key advances in comet assay technology that improve the throughput, sensitivity, and robustness, thus enabling larger scale clinical and epidemiological studies.
Tan, Chongqing; Peng, Liubao; Zeng, Xiaohui; Li, Jianhe; Wan, Xiaomin; Chen, Gannong; Yi, Lidan; Luo, Xia; Zhao, Ziying
2013-01-01
First-line postoperative adjuvant chemotherapies with S-1 and with capecitabine and oxaliplatin (XELOX) were first recommended for resectable gastric cancer patients in the 2010 and 2011 Chinese NCCN Clinical Practice Guidelines in Oncology: Gastric Cancer; however, their economic impact in China is unknown. The aim of this study was to compare the cost-effectiveness of adjuvant chemotherapy with XELOX, with S-1, and no treatment after a gastrectomy with extended (D2) lymph-node dissection among patients with stage II-IIIB gastric cancer. A Markov model, based on data from two clinical phase III trials, was developed to analyse the cost-effectiveness of patients in the XELOX group, the S-1 group and the surgery only (SO) group. Costs were estimated from the perspective of the Chinese healthcare system, and utilities were assumed on the basis of previously published reports. Costs, quality-adjusted life-years (QALYs) and incremental cost-effectiveness ratios (ICERs) were calculated over a lifetime horizon. One-way and probabilistic sensitivity analyses were performed. For the base case, XELOX had the lowest total cost ($44,568) and cost-effectiveness ratio ($7,360/QALY). The scenario analyses showed that SO was dominated by XELOX and that the ICER of S-1 was $58,843/QALY compared with XELOX. The one-way sensitivity analysis showed that the most influential parameter was the utility of disease-free survival. The probabilistic sensitivity analysis predicted a 75.8% likelihood that the ICER for XELOX would be less than $13,527 compared with S-1. When the willingness-to-pay threshold exceeded $38,000, the likelihood of cost-effectiveness achieved by the S-1 group was greater than 50%. Our results suggest that for patients in China with resectable disease, first-line adjuvant chemotherapy with XELOX after a D2 gastrectomy is the best option compared with S-1 and SO. In addition, S-1 might be a better choice at a higher willingness-to-pay threshold.
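The 75.8% figure is the kind of output a probabilistic sensitivity analysis produces: the share of Monte Carlo draws in which one strategy is cost-effective at a given willingness-to-pay threshold, conveniently evaluated via net monetary benefit. A hedged sketch (the incremental cost and QALY distributions below are invented, not the published model's):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 10000

# Hypothetical joint uncertainty for XELOX vs. S-1 (illustrative only;
# the published Markov model is far more detailed).
d_cost = rng.normal(-5000.0, 2000.0, n)   # incremental cost ($), XELOX cheaper on average
d_qaly = rng.normal(0.10, 0.15, n)        # incremental QALYs, small and uncertain

wtp = 13527.0                             # willingness-to-pay threshold ($/QALY)
net_benefit = wtp * d_qaly - d_cost       # incremental net monetary benefit
prob_ce = np.mean(net_benefit > 0)        # probability XELOX is cost-effective
print(round(float(prob_ce), 2))
```

Sweeping `wtp` over a grid and plotting `prob_ce` against it yields the cost-effectiveness acceptability curve, which is how statements like "greater than 50% above $38,000" are read off.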
Mimoz, O; Karim, A; Mazoit, J X; Edouard, A; Leprince, S; Nordmann, P
2000-11-01
We evaluated prospectively the use of Gram staining of protected pulmonary specimens to allow the early diagnosis of ventilator-associated pneumonia (VAP), compared with the use of 60 bronchoscopic protected specimen brushes (PSB) and 126 blinded plugged telescopic catheters (PTC) obtained from 134 patients. Gram stains were from Cytospin slides; they were studied for the presence of microorganisms in 10 and 50 fields by two independent observers and classified according to their Gram stain morphology. Quantitative cultures were performed after serial dilution and plating on appropriate culture medium. A final diagnosis of VAP, based on a culture of ≥ 10³ c.f.u. ml⁻¹, was established after 81 (44%) samplings. When 10 fields were analysed, a strong relationship was found between the presence of bacteria on Gram staining and the final diagnosis of VAP (for PSB and PTC respectively: sensitivity 74 and 81%, specificity 94 and 100%, positive predictive value 91 and 100%, negative predictive value 82 and 88%). The correlation was weaker when we compared the morphology of microorganisms observed on Gram staining with that of bacteria obtained from quantitative cultures (for PSB and PTC respectively: sensitivity 54 and 69%, specificity 86 and 89%, positive predictive value 72 and 78%, negative predictive value 74 and 84%). Increasing the number of fields read to 50 was associated with a slight decrease in the specificity and positive predictive value of Gram staining, but with a small increase in its sensitivity and negative predictive value. The results obtained by the two observers were similar for both numbers of fields analysed. Gram staining of protected pulmonary specimens performed on 10 fields predicted the presence of VAP and partially identified (using Gram stain morphology) the microorganisms growing at significant concentrations, and could help in the early choice of the treatment of VAP.
Increasing the number of fields read or having the Gram stain analysed by two independent individuals did not improve the results.
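Sensitivity, specificity, and the predictive values follow directly from a 2×2 confusion matrix. The counts below are back-calculated illustratively so that the metrics come out close to the reported 10-field PSB values (74/94/91/82%); they are not the study's raw data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among diseased
        "specificity": tn / (tn + fp),   # true negatives among non-diseased
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts consistent with 81 VAP diagnoses among 186 samplings.
m = diagnostic_metrics(tp=60, fp=6, fn=21, tn=99)
print({k: round(v, 2) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the 44% VAP prevalence in this cohort and would shift in a population with a different pre-test probability.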
Anonychuk, Andrea M; Tricco, Andrea C; Bauch, Chris T; Pham, Ba'; Gilca, Vladimir; Duval, Bernard; John-Baptiste, Ava; Woo, Gloria; Krahn, Murray
2008-01-01
Hepatitis A vaccines have been available for more than a decade. Because the burden of hepatitis A virus has fallen in developed countries, the appropriate role of vaccination programmes, especially universal vaccination strategies, remains unclear. Cost-effectiveness analysis is a useful method of relating the costs of vaccination to its benefits, and may inform policy. This article systematically reviews the evidence on the cost effectiveness of hepatitis A vaccination in varying populations, and explores the effects of methodological quality and key modelling issues on the cost-effectiveness ratios. Cost-effectiveness/cost-utility studies of hepatitis A vaccine were identified via a series of literature searches (MEDLINE, EMBASE, HSTAR and SSCI). Citations and full-text articles were reviewed independently by two reviewers. Reference searching, author searches and expert consultation ensured literature saturation. Incremental cost-effectiveness ratios (ICERs) were abstracted for base-case analyses, converted to $US (year 2005 values), and categorised to reflect various levels of cost effectiveness. Quality of reporting, methodological issues and key modelling issues were assessed using frameworks published in the literature. Thirty-one cost-effectiveness studies (including 12 cost-utility analyses) were included from full-text article review (n = 58) and citation screening (n = 570). These studies evaluated universal mass vaccination (n = 14), targeted vaccination (n = 17) and vaccination of susceptibles (i.e. individuals initially screened for antibody and, if susceptible, vaccinated) [n = 13]. For universal vaccination, 50% of the ICERs were <$US20 000 per QALY or life-year gained. Analyses evaluating vaccination in children, particularly in high incidence areas, produced the most attractive ICERs.
For targeted vaccination, cost effectiveness was highly dependent on the risk of infection. Incidence, vaccine cost and discount rate were the most influential parameters in sensitivity analyses. Overall, analyses that evaluated the combined hepatitis A/hepatitis B vaccine, adjusted incidence for under-reporting, included societal costs, and came from studies of higher methodological quality tended to have more attractive cost-effectiveness ratios. Methodological quality varied across studies. Major methodological flaws included inappropriate model type, comparator, incidence estimate and inclusion/exclusion of costs.
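The ICER underlying these comparisons is simply the incremental cost divided by the incremental health gain. A minimal sketch in Python, using the review's <$US20 000-per-QALY category as an illustrative threshold (the function names and the dominance handling are assumptions, not from the review):

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    if delta_qaly <= 0:
        raise ValueError("intervention gains no QALYs; ICER is undefined/dominated")
    return delta_cost / delta_qaly

def categorize(icer_usd, threshold=20_000):
    """Label an ICER ($US, year 2005 values) against an illustrative threshold."""
    return "attractive" if icer_usd < threshold else "less attractive"

# Hypothetical programme: $US400,000 extra cost, 25 QALYs gained
ratio = icer(400_000, 25)  # 16,000 $US per QALY
label = categorize(ratio)
```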
Hypnotic Tactile Anesthesia: Psychophysical and Signal-Detection Analyses
Tataryn, Douglas J.; Kihlstrom, John F.
2017-01-01
Two experiments that studied the effects of hypnotic suggestions on tactile sensitivity are reported. Experiment 1 found that suggestions for anesthesia, as measured by both traditional psychophysical methods and signal detection procedures, were linearly related to hypnotizability. Experiment 2 employed the same methodologies in an application of the real-simulator paradigm to examine the effects of suggestions for both anesthesia and hyperesthesia. Significant effects of hypnotic suggestion on both sensitivity and bias were found in the anesthesia condition but not in the hyperesthesia condition. A new bias parameter, C′, indicated that much of the bias found in the initial analyses was artifactual, a function of changes in sensitivity across conditions. There were no behavioral differences between reals and simulators in any of the conditions, though analyses of postexperimental interviews suggested the 2 groups had very different phenomenal experiences. PMID:28230465
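The standard signal-detection indices can be computed from hit and false-alarm rates. A minimal sketch (the paper's C′ is a sensitivity-corrected bias parameter; only the conventional d′ and criterion c are shown here, as an illustration):

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Signal-detection sensitivity d' and response criterion c.

    d' = z(H) - z(F); c = -(z(H) + z(F)) / 2, where z is the inverse
    standard-normal CDF. Rates must lie strictly between 0 and 1.
    """
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    c = -(z(hit_rate) + z(fa_rate)) / 2.0
    return d_prime, c

# Symmetric example: 84% hits, 16% false alarms -> d' ~ 2, no bias
d, c = sdt_indices(0.84, 0.16)
```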
Automated haematology analysis to diagnose malaria
2010-01-01
For more than a decade, flow cytometry-based automated haematology analysers have been studied for malaria diagnosis. Although current haematology analysers are not specifically designed to detect malaria-related abnormalities, most studies have found sensitivities that comply with WHO malaria-diagnostic guidelines, i.e. ≥ 95% in samples with > 100 parasites/μl. Establishing a correct and early malaria diagnosis is a prerequisite for adequate treatment and for minimizing adverse outcomes. Expert light microscopy remains the 'gold standard' for malaria diagnosis in most clinical settings. However, it requires an explicit request from clinicians and has variable accuracy. Malaria diagnosis with flow cytometry-based haematology analysers could become an important adjuvant diagnostic tool in the routine laboratory work-up of febrile patients in, or returning from, malaria-endemic regions. The haematology analysers studied so far for malaria diagnosis are the Cell-Dyn®, Coulter® GEN·S and LH 750, and the Sysmex XE-2100® analysers. For Cell-Dyn analysers, abnormal depolarization events, mainly in the lobularity/granularity and other scatter-plots, and various reticulocyte abnormalities have shown overall sensitivities and specificities of 49% to 97% and 61% to 100%, respectively. For the Coulter analysers, a 'malaria factor' using the monocyte and lymphocyte size standard deviations obtained by impedance detection has shown overall sensitivities and specificities of 82% to 98% and 72% to 94%, respectively. For the XE-2100, abnormal patterns in the DIFF, WBC/BASO and RET-EXT scatter-plots, pseudoeosinophilia and other abnormal haematological variables have been described, and multivariate diagnostic models have been designed, with overall sensitivities and specificities of 86% to 97% and 81% to 98%, respectively. The accuracy of malaria diagnosis may vary according to species, parasite load, immunity and the clinical context in which the method is applied.
Future developments in new haematology analysers such as considerably simplified, robust and inexpensive devices for malaria detection fitted with an automatically generated alert could improve the detection capacity of these instruments and potentially expand their clinical utility in malaria diagnosis. PMID:21118557
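The sensitivity and specificity ranges quoted above derive from 2×2 confusion tables. A minimal sketch, including a check against the WHO ≥ 95% sensitivity criterion mentioned in the abstract (the counts are invented for illustration):

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from the four cells of a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

def meets_who_criterion(sensitivity, threshold=0.95):
    """WHO guideline cited above: >= 95% sensitivity in samples
    with > 100 parasites/microlitre."""
    return sensitivity >= threshold

# Hypothetical counts: 97/100 infections flagged, 90/100 negatives cleared
sens, spec = diagnostic_performance(tp=97, fp=10, tn=90, fn=3)
```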
Pulverer, Walter; Hofner, Manuela; Preusser, Matthias; Dirnberger, Elisabeth; Hainfellner, Johannes A; Weinhaeusel, Andreas
2014-01-01
MGMT promoter methylation is associated with favorable prognosis and chemosensitivity in glioblastoma multiforme (GBM), especially in elderly patients. We aimed to develop a simple methylation-sensitive restriction enzyme (MSRE)-based quantitative PCR (qPCR) assay allowing the quantification of MGMT promoter methylation. DNA was extracted from non-neoplastic brain (n = 24) and GBM samples (n = 20) under 3 different sample conservation conditions (-80 °C; formalin-fixed and paraffin-embedded (FFPE); RCL2-fixed). We evaluated the suitability of each fixation method with respect to the MSRE-coupled qPCR methylation analyses. Methylation data were validated by MALDI-TOF. qPCR was used for evaluation of alternative tissue conservation procedures. DNA from FFPE tissue failed reliable testing; DNA from both RCL2-fixed and fresh-frozen tissues performed equally well and was further used for validation of the quantitative MGMT methylation assay (limit of detection (LOD): 19.58 pg), using each individual's undigested sample DNA for calibration. MGMT methylation analysis in non-neoplastic brain identified a background methylation of 0.10 ± 0.11%, which we used for defining a cut-off of 0.32% for patient stratification. Of the GBM patients, 9 were MGMT methylation-positive (range: 0.56 - 91.95%) and 11 tested negative. MALDI-TOF measurements resulted in a concordant classification of 94% of GBM samples in comparison to qPCR. The presented methodology allows quantitative MGMT promoter methylation analyses. An amount of 200 ng DNA is sufficient for triplicate analyses including control reactions and individual calibration curves, thus excluding any DNA quality-derived bias. The combination of RCL2-fixation and quantitative methylation analyses improves pathological routine examination when histological and molecular analyses on limited amounts of tumor samples are necessary for patient stratification.
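The quantification step can be illustrated with the usual MSRE-qPCR arithmetic: only methylated templates survive digestion, so the methylated fraction follows from the Ct shift between digested and undigested (calibration) aliquots, and a stratification cut-off can be set from background methylation. A sketch assuming the conventional 2^-ΔCt formula and a mean-plus-two-SD cut-off rule; these conventions are assumptions for illustration, not taken verbatim from the paper:

```python
from statistics import mean, stdev

def methylation_percent(ct_digested, ct_undigested):
    """Percent methylation from MSRE-qPCR, assuming 100% PCR efficiency:
    the surviving (methylated) template fraction is 2^-(Ct_dig - Ct_undig)."""
    return 100.0 * 2.0 ** (-(ct_digested - ct_undigested))

def stratification_cutoff(background_percents, k=2.0):
    """Cut-off as background mean + k standard deviations
    (a common, here hypothetical, rule)."""
    return mean(background_percents) + k * stdev(background_percents)

# A one-cycle Ct shift halves the surviving template
half_methylated = methylation_percent(31.0, 30.0)  # 50.0
```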
Transportation systems analyses. Volume 2: Technical/programmatics
NASA Astrophysics Data System (ADS)
1993-05-01
The principal objective of this study is to accomplish a systems engineering assessment of the nation's space transportation infrastructure. This analysis addresses the elements necessary to perform man delivery and return, cargo transfer, cargo delivery, payload servicing, and the exploration of the Moon and Mars. Specific elements analyzed include, but are not limited to, the Space Exploration Initiative (SEI), the National Launch System (NLS), the current expendable launch vehicle (ELV) fleet, ground facilities, the Space Station Freedom (SSF), and other civil, military and commercial payloads. The performance of this study entails maintaining a broad perspective on the large number of transportation elements that could potentially comprise the U.S. space infrastructure over the next several decades. To perform this systems evaluation, top-level trade studies are conducted to enhance our understanding of the relationships between elements of the infrastructure. This broad 'infrastructure-level perspective' permits the identification of preferred infrastructures. Sensitivity analyses are performed to assure the credibility and usefulness of study results. This report documents the three principal transportation systems analyses (TSA) efforts during the period 7 November 1992 - 6 May 1993. The analyses are as follows: Mixed-Fleet (STS/ELV) strategies for SSF resupply; Transportation Systems Data Book - overview; and Operations Cost Model - overview/introduction.
Why tropical forest lizards are vulnerable to climate warming
Huey, Raymond B.; Deutsch, Curtis A.; Tewksbury, Joshua J.; Vitt, Laurie J.; Hertz, Paul E.; Álvarez Pérez, Héctor J.; Garland, Theodore
2009-01-01
Biological impacts of climate warming are predicted to increase with latitude, paralleling increases in warming. However, the magnitude of impacts depends not only on the degree of warming but also on the number of species at risk, their physiological sensitivity to warming and their options for behavioural and physiological compensation. Lizards are useful for evaluating risks of warming because their thermal biology is well studied. We conducted macrophysiological analyses of diurnal lizards from diverse latitudes plus focal species analyses of Puerto Rican Anolis and Sphaerodactyus. Although tropical lowland lizards live in environments that are warm all year, macrophysiological analyses indicate that some tropical lineages (thermoconformers that live in forests) are active at low body temperature and are intolerant of warm temperatures. Focal species analyses show that some tropical forest lizards were already experiencing stressful body temperatures in summer when studied several decades ago. Simulations suggest that warming will not only further depress their physiological performance in summer, but will also enable warm-adapted, open-habitat competitors and predators to invade forests. Forest lizards are key components of tropical ecosystems, but appear vulnerable to the cascading physiological and ecological effects of climate warming, even though rates of tropical warming may be relatively low. PMID:19324762
Schulte-Braucks, Julia; Baethge, Anja; Dormann, Christian; Vahle-Hinz, Tim
2018-04-23
We proposed that effects of illegitimate tasks, which comprise unreasonable and unnecessary tasks, on self-esteem and counterproductive work behavior (CWB) are enhanced among employees who are highly sensitive to injustice. CWB was further proposed to be a moderating coping strategy, which restores justice and buffers the detrimental effects of illegitimate tasks on self-esteem. In this study, 241 employees participated in a diary study over five workdays and a follow-up questionnaire one week later. Daily effects were determined in multilevel analyses: Unreasonable tasks decreased self-esteem and increased CWB the same day, especially among employees high in trait justice sensitivity. Unnecessary tasks related only to more CWB the same day, regardless of one's justice sensitivity. Weekly effects were determined in cross-lagged panel analyses: Unreasonable and unnecessary tasks increased CWB, and justice sensitivity moderated the effect of unreasonable tasks on CWB and of unnecessary tasks on self-esteem. Moderating effects of CWB were split: In daily analyses, CWB buffered the negative effects of illegitimate tasks. In weekly analyses, CWB enhanced the negative effects of illegitimate tasks. Overall, illegitimate tasks affected CWB rather than self-esteem, with more consistent effects for unreasonable than for unnecessary tasks. Thus, we confirm illegitimate tasks as a relevant work stressor, with issues of injustice being central to this concept and personality having an influence on what is perceived as (il)legitimate.
Loong, Bronwyn; Zaslavsky, Alan M; He, Yulei; Harrington, David P
2013-10-30
Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents' identities and sensitive attributes by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by the Cancer Care Outcomes Research and Surveillance (CanCORS) project, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the USA. We review inferential methods for partially synthetic data and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality.
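Inference from partially synthetic data is typically done with Reiter's combining rules: average the m per-dataset point estimates, and inflate the average within-dataset variance by the between-dataset variance divided by m. A minimal sketch, assuming these standard rules are the ones applied:

```python
from statistics import mean, variance

def combine_partial_synthesis(estimates, variances):
    """Combining rules for partially synthetic data (Reiter-style):

    point estimate  q_bar = mean of the m estimates
    total variance  T     = u_bar + b / m
    where u_bar is the mean within-dataset variance and b is the
    sample variance of the estimates across synthetic datasets.
    """
    m = len(estimates)
    q_bar = mean(estimates)
    u_bar = mean(variances)
    b = variance(estimates)  # between-synthesis variance
    T = u_bar + b / m
    return q_bar, T

# Three synthetic datasets with illustrative estimates and variances
q, T = combine_partial_synthesis([1.0, 2.0, 3.0], [0.5, 0.5, 0.5])
```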
NASA Astrophysics Data System (ADS)
Babaveisi, Vahid; Paydar, Mohammad Mahdi; Safaei, Abdul Sattar
2018-07-01
This study discusses solution methodologies for a closed-loop supply chain (CLSC) network that includes the collection of used products as well as the distribution of new products. This supply chain is presented as representative of the class of problems that can be solved by the proposed meta-heuristic algorithms. A mathematical model is designed for a CLSC with three objective functions: maximizing profit, minimizing total risk, and minimizing product shortages. Since three objective functions are considered, a multi-objective solution methodology is advantageous. Several approaches were therefore studied: an NSGA-II algorithm is first applied, and its results are then validated using MOSA and MOPSO algorithms. Priority-based encoding, used in all the algorithms, is the core of the solution computations. To compare the performance of the meta-heuristics, random numerical instances are evaluated on four criteria: mean ideal distance, spread of non-dominated solutions, number of Pareto solutions, and CPU time. To enhance the performance of the algorithms, the Taguchi method is used for parameter tuning. Finally, sensitivity analyses of the tuned parameters are performed and the computational results are presented.
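Of the four comparison criteria, mean ideal distance is the most directly arithmetic: the average Euclidean distance of the obtained Pareto solutions from the ideal objective point (lower is better). A minimal sketch, assuming the conventional definition:

```python
import math

def mean_ideal_distance(front, ideal):
    """Mean Euclidean distance of a Pareto front's solutions from the
    ideal point in objective space; lower values indicate a front
    closer to the ideal."""
    if not front:
        raise ValueError("empty Pareto front")
    return sum(math.dist(point, ideal) for point in front) / len(front)

# Two illustrative bi-objective Pareto solutions, ideal point at the origin
mid = mean_ideal_distance([(3.0, 4.0), (6.0, 8.0)], (0.0, 0.0))  # 7.5
```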
Yan, Shi; Jin, YinZhe; Oh, YongSeok; Choi, YoungJun
2016-06-01
The aim of this study was to assess the effect of exercise on depression in university students. A systematic literature search was conducted in PubMed, EMBASE and the Cochrane Library from their inception through December 10, 2014 to identify relevant articles. Heterogeneity across studies was examined with Cochran's Q statistic and the I2 statistic. Standardized mean differences (SMD) and 95% confidence intervals (CI) were pooled to evaluate the effect of exercise on depression. Sensitivity and subgroup analyses were then performed, and publication bias was assessed by drawing a funnel plot. A total of 352 participants (154 cases and 182 controls) from eight trials were included. The pooled result showed a significant alleviation of depression after exercise (SMD = -0.50, 95% CI: -0.97 to -0.03, P = 0.04) with significant heterogeneity (P = 0.003, I2 = 67%). Sensitivity analyses showed that the pooled result may be unstable. Subgroup analysis indicated that sample size may be a source of heterogeneity. No publication bias was observed. Exercise may be an effective therapy for depression in university students; however, further clinical studies with rigorous designs and large samples focused on this population are warranted.
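The Q and I2 statistics used above follow directly from the per-study effects and their variances. A minimal sketch of inverse-variance pooling and the heterogeneity computation (the input values here are illustrative, not the study's data):

```python
def heterogeneity(effects, variances):
    """Inverse-variance pooled effect, Cochran's Q, and I^2 (in percent).

    Q sums the weighted squared deviations of study effects from the
    pooled effect; I^2 = max(0, (Q - df) / Q) * 100 expresses the share
    of total variation attributable to between-study heterogeneity.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, q, i2

# Two equally precise studies with different effects
pooled, q, i2 = heterogeneity([0.0, 2.0], [1.0, 1.0])
```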